* [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17
@ 2017-08-18 14:02 Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 01/13] amd-xgbe: Set the MDIO mode for 10000Base-T configuration Tom Lendacky
                   ` (13 more replies)
  0 siblings, 14 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:02 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The following updates are included in this driver update series:

- Set the MDIO mode to clause 45 for the 10GBase-T configuration
- Set the MII control width to 8 bits for speeds less than 1Gbps
- Fix an issue related to module removal while the devices are up
- Fix ethtool statistics related to packet counting of TSO packets
- Add support for device renaming
- Add additional dynamic debug output for the PCS window calculation
- Optimize reading of DMA channel interrupt enablement register
- Add additional dynamic debug output about the hardware features
- Add per queue Tx and Rx ethtool statistics
- Add a macro to clear ethtool_link_ksettings modes
- Convert the driver to use the ethtool_link_ksettings
- Add support for VXLAN offload capabilities
- Add additional ethtool statistics related to VXLAN

This patch series is based on net-next.

---

Changes since v1:
- Remove added debugfs support for reading SFP eeprom and phydev
  registers


Tom Lendacky (13):
      amd-xgbe: Set the MDIO mode for 10000Base-T configuration
      amd-xgbe: Set the MII control width for the MAC interface
      amd-xgbe: Be sure driver shuts down cleanly on module removal
      amd-xgbe: Update TSO packet statistics accuracy
      amd-xgbe: Add support to handle device renaming
      amd-xgbe: Add additional dynamic debug messages
      amd-xgbe: Optimize DMA channel interrupt enablement
      amd-xgbe: Add hardware features debug output
      amd-xgbe: Add per queue Tx and Rx statistics
      net: ethtool: Add macro to clear a link mode setting
      amd-xgbe: Convert to using the new link mode settings
      amd-xgbe: Add support for VXLAN offload capabilities
      amd-xgbe: Add additional ethtool statistics


 drivers/net/ethernet/amd/xgbe/xgbe-common.h  |   25 +
 drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c |   25 +
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c     |  198 ++++++++---
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c     |  487 ++++++++++++++++++++++++++
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c |   86 +++--
 drivers/net/ethernet/amd/xgbe/xgbe-main.c    |   97 +++--
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c    |   81 +++-
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c     |    4 
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v1.c  |   54 ++-
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c  |  352 ++++++++++---------
 drivers/net/ethernet/amd/xgbe/xgbe.h         |   91 ++++-
 include/linux/ethtool.h                      |   11 +
 12 files changed, 1158 insertions(+), 353 deletions(-)

-- 
Tom Lendacky

* [PATCH net-next v2 01/13] amd-xgbe: Set the MDIO mode for 10000Base-T configuration
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
@ 2017-08-18 14:02 ` Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 02/13] amd-xgbe: Set the MII control width for the MAC interface Tom Lendacky
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:02 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Currently the MDIO mode is set to none for the 10000Base-T configuration,
which is incorrect.  The MDIO mode for this configuration should be
clause 45, the MDIO access method used to manage 10GBase-T PHYs.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index 04b5c14..81c45fa 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -2921,7 +2921,7 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 			phy_data->start_mode = XGBE_MODE_KR;
 		}
 
-		phy_data->phydev_mode = XGBE_MDIO_MODE_NONE;
+		phy_data->phydev_mode = XGBE_MDIO_MODE_CL45;
 		break;
 
 	/* 10GBase-R support */

* [PATCH net-next v2 02/13] amd-xgbe: Set the MII control width for the MAC interface
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 01/13] amd-xgbe: Set the MDIO mode for 10000Base-T configuration Tom Lendacky
@ 2017-08-18 14:02 ` Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 03/13] amd-xgbe: Be sure driver shuts down cleanly on module removal Tom Lendacky
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:02 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

When running in SGMII mode at speeds below 1000Mbps, the auto-negotiation
control register must set the MII control width for the MAC interface
to 8 bits.  By default the width is 4 bits.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-common.h |    1 +
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c   |    2 ++
 2 files changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index 9795419..d07edf9 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -1339,6 +1339,7 @@
 #define XGBE_AN_CL37_PCS_MODE_BASEX	0x00
 #define XGBE_AN_CL37_PCS_MODE_SGMII	0x04
 #define XGBE_AN_CL37_TX_CONFIG_MASK	0x08
+#define XGBE_AN_CL37_MII_CTRL_8BIT	0x0100
 
 /* Bit setting and getting macros
  *  The get macro will extract the current bit field value from within
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 8068491..2222bbf8 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -982,6 +982,8 @@ static void xgbe_an37_init(struct xgbe_prv_data *pdata)
 		break;
 	}
 
+	reg |= XGBE_AN_CL37_MII_CTRL_8BIT;
+
 	XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_VEND2_AN_CTRL, reg);
 
 	netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n",

* [PATCH net-next v2 03/13] amd-xgbe: Be sure driver shuts down cleanly on module removal
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 01/13] amd-xgbe: Set the MDIO mode for 10000Base-T configuration Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 02/13] amd-xgbe: Set the MII control width for the MAC interface Tom Lendacky
@ 2017-08-18 14:02 ` Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 04/13] amd-xgbe: Update TSO packet statistics accuracy Tom Lendacky
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:02 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

When the driver is unloaded while the devices are still up, it can
sometimes issue errors.  This is a timing issue caused by the double
invocation of some routines.  The phy_exit() call needs to run after the
network device has been closed and unregistered from the system.  Also,
phy_exit() does not need to invoke phy_stop() since that will be called
as part of closing the device, so remove that call.
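
With this change the teardown order becomes: unregister_netdev() (which
closes the device and stops the PHY), then phy_exit(), then flushing and
destroying the workqueues.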

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-main.c |    4 ++--
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c |    2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 500147d..53a425c 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -458,6 +458,8 @@ void xgbe_deconfig_netdev(struct xgbe_prv_data *pdata)
 	if (IS_REACHABLE(CONFIG_PTP_1588_CLOCK))
 		xgbe_ptp_unregister(pdata);
 
+	unregister_netdev(netdev);
+
 	pdata->phy_if.phy_exit(pdata);
 
 	flush_workqueue(pdata->an_workqueue);
@@ -465,8 +467,6 @@ void xgbe_deconfig_netdev(struct xgbe_prv_data *pdata)
 
 	flush_workqueue(pdata->dev_workqueue);
 	destroy_workqueue(pdata->dev_workqueue);
-
-	unregister_netdev(netdev);
 }
 
 static int __init xgbe_mod_init(void)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 2222bbf8..2409202 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -1533,8 +1533,6 @@ static int xgbe_phy_best_advertised_speed(struct xgbe_prv_data *pdata)
 
 static void xgbe_phy_exit(struct xgbe_prv_data *pdata)
 {
-	xgbe_phy_stop(pdata);
-
 	pdata->phy_if.phy_impl.exit(pdata);
 }
 

* [PATCH net-next v2 04/13] amd-xgbe: Update TSO packet statistics accuracy
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (2 preceding siblings ...)
  2017-08-18 14:02 ` [PATCH net-next v2 03/13] amd-xgbe: Be sure driver shuts down cleanly on module removal Tom Lendacky
@ 2017-08-18 14:02 ` Tom Lendacky
  2017-08-18 14:02 ` [PATCH net-next v2 05/13] amd-xgbe: Add support to handle device renaming Tom Lendacky
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:02 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

When transmitting a TSO packet, the driver increments the TSO packet
statistic by only one rather than by the total number of packets that
were sent.  Update the driver to record the total number of packets that
resulted from the TSO transmit.
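
For example, a TSO transmit that the hardware segments into 45 wire
packets previously incremented tx_tso_packets by one; it now adds 45.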

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 06f953e..bb60507 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -1740,7 +1740,7 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TCPHDRLEN,
 				  packet->tcp_header_len / 4);
 
-		pdata->ext_stats.tx_tso_packets++;
+		pdata->ext_stats.tx_tso_packets += packet->tx_packets;
 	} else {
 		/* Enable CRC and Pad Insertion */
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CPC, 0);

* [PATCH net-next v2 05/13] amd-xgbe: Add support to handle device renaming
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (3 preceding siblings ...)
  2017-08-18 14:02 ` [PATCH net-next v2 04/13] amd-xgbe: Update TSO packet statistics accuracy Tom Lendacky
@ 2017-08-18 14:02 ` Tom Lendacky
  2017-08-18 14:03 ` [PATCH net-next v2 06/13] amd-xgbe: Add additional dynamic debug messages Tom Lendacky
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:02 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Many of the names used by the driver are based upon the name of the device
found during device probe.  Move the formatting of the names into the
device open function so that any renaming that occurs before the device is
brought up will be accounted for.  This also means moving the creation of
some named workqueues into the device open path.

Add support to register for net events so that if a device is renamed
the corresponding debugfs directory can be renamed.
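
For example, if udev renames the interface from eth0 to enp1s0 before it
is first brought up, the PCS/ECC/I2C names and workqueues are now created
at open time using the current name, and the new netdev notifier renames
the debugfs directory (created as amd-xgbe-<ifname>) when a
NETDEV_CHANGENAME event arrives.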

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c |   25 +++++++++
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c     |   44 +++++++++++++++-
 drivers/net/ethernet/amd/xgbe/xgbe-main.c    |   72 +++++++++++---------------
 drivers/net/ethernet/amd/xgbe/xgbe.h         |    5 +-
 4 files changed, 100 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c b/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
index 7546b66..7d128be 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
@@ -527,3 +527,28 @@ void xgbe_debugfs_exit(struct xgbe_prv_data *pdata)
 	debugfs_remove_recursive(pdata->xgbe_debugfs);
 	pdata->xgbe_debugfs = NULL;
 }
+
+void xgbe_debugfs_rename(struct xgbe_prv_data *pdata)
+{
+	struct dentry *pfile;
+	char *buf;
+
+	if (!pdata->xgbe_debugfs)
+		return;
+
+	buf = kasprintf(GFP_KERNEL, "amd-xgbe-%s", pdata->netdev->name);
+	if (!buf)
+		return;
+
+	if (!strcmp(pdata->xgbe_debugfs->d_name.name, buf))
+		goto out;
+
+	pfile = debugfs_rename(pdata->xgbe_debugfs->d_parent,
+			       pdata->xgbe_debugfs,
+			       pdata->xgbe_debugfs->d_parent, buf);
+	if (!pfile)
+		netdev_err(pdata->netdev, "debugfs_rename failed\n");
+
+out:
+	kfree(buf);
+}
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index 2fd9b80..d6d29d8 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -887,7 +887,7 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
 		     (unsigned long)pdata);
 
 	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
-			       netdev->name, pdata);
+			       netdev_name(netdev), pdata);
 	if (ret) {
 		netdev_alert(netdev, "error requesting irq %d\n",
 			     pdata->dev_irq);
@@ -1589,16 +1589,42 @@ static int xgbe_open(struct net_device *netdev)
 
 	DBGPR("-->xgbe_open\n");
 
+	/* Create the various names based on netdev name */
+	snprintf(pdata->an_name, sizeof(pdata->an_name) - 1, "%s-pcs",
+		 netdev_name(netdev));
+
+	snprintf(pdata->ecc_name, sizeof(pdata->ecc_name) - 1, "%s-ecc",
+		 netdev_name(netdev));
+
+	snprintf(pdata->i2c_name, sizeof(pdata->i2c_name) - 1, "%s-i2c",
+		 netdev_name(netdev));
+
+	/* Create workqueues */
+	pdata->dev_workqueue =
+		create_singlethread_workqueue(netdev_name(netdev));
+	if (!pdata->dev_workqueue) {
+		netdev_err(netdev, "device workqueue creation failed\n");
+		return -ENOMEM;
+	}
+
+	pdata->an_workqueue =
+		create_singlethread_workqueue(pdata->an_name);
+	if (!pdata->an_workqueue) {
+		netdev_err(netdev, "phy workqueue creation failed\n");
+		ret = -ENOMEM;
+		goto err_dev_wq;
+	}
+
 	/* Reset the phy settings */
 	ret = xgbe_phy_reset(pdata);
 	if (ret)
-		return ret;
+		goto err_an_wq;
 
 	/* Enable the clocks */
 	ret = clk_prepare_enable(pdata->sysclk);
 	if (ret) {
 		netdev_alert(netdev, "dma clk_prepare_enable failed\n");
-		return ret;
+		goto err_an_wq;
 	}
 
 	ret = clk_prepare_enable(pdata->ptpclk);
@@ -1651,6 +1677,12 @@ static int xgbe_open(struct net_device *netdev)
 err_sysclk:
 	clk_disable_unprepare(pdata->sysclk);
 
+err_an_wq:
+	destroy_workqueue(pdata->an_workqueue);
+
+err_dev_wq:
+	destroy_workqueue(pdata->dev_workqueue);
+
 	return ret;
 }
 
@@ -1674,6 +1706,12 @@ static int xgbe_close(struct net_device *netdev)
 	clk_disable_unprepare(pdata->ptpclk);
 	clk_disable_unprepare(pdata->sysclk);
 
+	flush_workqueue(pdata->an_workqueue);
+	destroy_workqueue(pdata->an_workqueue);
+
+	flush_workqueue(pdata->dev_workqueue);
+	destroy_workqueue(pdata->dev_workqueue);
+
 	set_bit(XGBE_DOWN, &pdata->dev_state);
 
 	DBGPR("<--xgbe_close\n");
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 53a425c..c5ff385 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -120,6 +120,7 @@
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>
 #include <linux/io.h>
+#include <linux/notifier.h>
 
 #include "xgbe.h"
 #include "xgbe-common.h"
@@ -399,35 +400,6 @@ int xgbe_config_netdev(struct xgbe_prv_data *pdata)
 		return ret;
 	}
 
-	/* Create the PHY/ANEG name based on netdev name */
-	snprintf(pdata->an_name, sizeof(pdata->an_name) - 1, "%s-pcs",
-		 netdev_name(netdev));
-
-	/* Create the ECC name based on netdev name */
-	snprintf(pdata->ecc_name, sizeof(pdata->ecc_name) - 1, "%s-ecc",
-		 netdev_name(netdev));
-
-	/* Create the I2C name based on netdev name */
-	snprintf(pdata->i2c_name, sizeof(pdata->i2c_name) - 1, "%s-i2c",
-		 netdev_name(netdev));
-
-	/* Create workqueues */
-	pdata->dev_workqueue =
-		create_singlethread_workqueue(netdev_name(netdev));
-	if (!pdata->dev_workqueue) {
-		netdev_err(netdev, "device workqueue creation failed\n");
-		ret = -ENOMEM;
-		goto err_netdev;
-	}
-
-	pdata->an_workqueue =
-		create_singlethread_workqueue(pdata->an_name);
-	if (!pdata->an_workqueue) {
-		netdev_err(netdev, "phy workqueue creation failed\n");
-		ret = -ENOMEM;
-		goto err_wq;
-	}
-
 	if (IS_REACHABLE(CONFIG_PTP_1588_CLOCK))
 		xgbe_ptp_register(pdata);
 
@@ -439,14 +411,6 @@ int xgbe_config_netdev(struct xgbe_prv_data *pdata)
 		  pdata->rx_ring_count);
 
 	return 0;
-
-err_wq:
-	destroy_workqueue(pdata->dev_workqueue);
-
-err_netdev:
-	unregister_netdev(netdev);
-
-	return ret;
 }
 
 void xgbe_deconfig_netdev(struct xgbe_prv_data *pdata)
@@ -461,18 +425,42 @@ void xgbe_deconfig_netdev(struct xgbe_prv_data *pdata)
 	unregister_netdev(netdev);
 
 	pdata->phy_if.phy_exit(pdata);
+}
 
-	flush_workqueue(pdata->an_workqueue);
-	destroy_workqueue(pdata->an_workqueue);
+static int xgbe_netdev_event(struct notifier_block *nb, unsigned long event,
+			     void *data)
+{
+	struct net_device *netdev = netdev_notifier_info_to_dev(data);
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
 
-	flush_workqueue(pdata->dev_workqueue);
-	destroy_workqueue(pdata->dev_workqueue);
+	if (netdev->netdev_ops != xgbe_get_netdev_ops())
+		goto out;
+
+	switch (event) {
+	case NETDEV_CHANGENAME:
+		xgbe_debugfs_rename(pdata);
+		break;
+
+	default:
+		break;
+	}
+
+out:
+	return NOTIFY_DONE;
 }
 
+static struct notifier_block xgbe_netdev_notifier = {
+	.notifier_call = xgbe_netdev_event,
+};
+
 static int __init xgbe_mod_init(void)
 {
 	int ret;
 
+	ret = register_netdevice_notifier(&xgbe_netdev_notifier);
+	if (ret)
+		return ret;
+
 	ret = xgbe_platform_init();
 	if (ret)
 		return ret;
@@ -489,6 +477,8 @@ static void __exit xgbe_mod_exit(void)
 	xgbe_pci_exit();
 
 	xgbe_platform_exit();
+
+	unregister_netdevice_notifier(&xgbe_netdev_notifier);
 }
 
 module_init(xgbe_mod_init);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index e9282c9..9a80f20 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -130,6 +130,7 @@
 #include <linux/completion.h>
 #include <linux/cpumask.h>
 #include <linux/interrupt.h>
+#include <linux/dcache.h>
 
 #define XGBE_DRV_NAME		"amd-xgbe"
 #define XGBE_DRV_VERSION	"1.0.3"
@@ -1172,7 +1173,6 @@ struct xgbe_prv_data {
 	struct tasklet_struct tasklet_i2c;
 	struct tasklet_struct tasklet_an;
 
-#ifdef CONFIG_DEBUG_FS
 	struct dentry *xgbe_debugfs;
 
 	unsigned int debugfs_xgmac_reg;
@@ -1183,7 +1183,6 @@ struct xgbe_prv_data {
 	unsigned int debugfs_xprop_reg;
 
 	unsigned int debugfs_xi2c_reg;
-#endif
 };
 
 /* Function prototypes*/
@@ -1232,9 +1231,11 @@ void xgbe_dump_rx_desc(struct xgbe_prv_data *, struct xgbe_ring *,
 #ifdef CONFIG_DEBUG_FS
 void xgbe_debugfs_init(struct xgbe_prv_data *);
 void xgbe_debugfs_exit(struct xgbe_prv_data *);
+void xgbe_debugfs_rename(struct xgbe_prv_data *pdata);
 #else
 static inline void xgbe_debugfs_init(struct xgbe_prv_data *pdata) {}
 static inline void xgbe_debugfs_exit(struct xgbe_prv_data *pdata) {}
+static inline void xgbe_debugfs_rename(struct xgbe_prv_data *pdata) {}
 #endif /* CONFIG_DEBUG_FS */
 
 /* NOTE: Uncomment for function trace log messages in KERNEL LOG */

* [PATCH net-next v2 06/13] amd-xgbe: Add additional dynamic debug messages
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (4 preceding siblings ...)
  2017-08-18 14:02 ` [PATCH net-next v2 05/13] amd-xgbe: Add support to handle device renaming Tom Lendacky
@ 2017-08-18 14:03 ` Tom Lendacky
  2017-08-18 14:03 ` [PATCH net-next v2 07/13] amd-xgbe: Optimize DMA channel interrupt enablement Tom Lendacky
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:03 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add some additional dynamic debug messages to the driver. The new messages
will provide additional information about the PCS window calculation.
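
These messages are emitted with dev_dbg() under netif_msg_probe(), so
(assuming CONFIG_DYNAMIC_DEBUG is enabled and NETIF_MSG_PROBE is set in
the driver's msg_enable) they can be enabled at runtime by writing, for
example, "file xgbe-pci.c +p" to /sys/kernel/debug/dynamic_debug/control.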

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
index 1e56ad7..3e5833c 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
@@ -292,6 +292,10 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	pdata->xpcs_window_size = 1 << (pdata->xpcs_window_size + 7);
 	pdata->xpcs_window_mask = pdata->xpcs_window_size - 1;
 	if (netif_msg_probe(pdata)) {
+		dev_dbg(dev, "xpcs window def  = %#010x\n",
+			pdata->xpcs_window_def_reg);
+		dev_dbg(dev, "xpcs window sel  = %#010x\n",
+			pdata->xpcs_window_sel_reg);
 		dev_dbg(dev, "xpcs window      = %#010x\n",
 			pdata->xpcs_window);
 		dev_dbg(dev, "xpcs window size = %#010x\n",

* [PATCH net-next v2 07/13] amd-xgbe: Optimize DMA channel interrupt enablement
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (5 preceding siblings ...)
  2017-08-18 14:03 ` [PATCH net-next v2 06/13] amd-xgbe: Add additional dynamic debug messages Tom Lendacky
@ 2017-08-18 14:03 ` Tom Lendacky
  2017-08-18 14:03 ` [PATCH net-next v2 08/13] amd-xgbe: Add hardware features debug output Tom Lendacky
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:03 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Currently whenever the driver needs to enable or disable interrupts for
a DMA channel it reads the interrupt enable register (IER), updates the
value and then writes the new value back to the IER. Since the hardware
does not change the IER, software can track this value and eliminate the
need to read it each time.

Add the IER value to the channel related data structure and use that as
the base for enabling and disabling interrupts, thus removing the need
for the MMIO read.
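
In outline, the change turns the previous read-modify-write into an
update of the cached value (a simplified sketch of the pattern; the
actual patch below covers all of the interrupt types):

	/* before: read the IER, modify it, write it back */
	dma_ch_ier = XGMAC_DMA_IOREAD(channel, DMA_CH_IER);
	XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);

	/* after: modify the cached copy and write it out, no MMIO read */
	XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TIE, 1);
	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, channel->curr_ier);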

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c |   77 ++++++++++++++----------------
 drivers/net/ethernet/amd/xgbe/xgbe.h     |    4 +-
 2 files changed, 37 insertions(+), 44 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index bb60507..75a479c 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -605,7 +605,6 @@ static void xgbe_config_flow_control(struct xgbe_prv_data *pdata)
 static void xgbe_enable_dma_interrupts(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_channel *channel;
-	unsigned int dma_ch_isr, dma_ch_ier;
 	unsigned int i;
 
 	/* Set the interrupt mode if supported */
@@ -617,20 +616,20 @@ static void xgbe_enable_dma_interrupts(struct xgbe_prv_data *pdata)
 		channel = pdata->channel[i];
 
 		/* Clear all the interrupts which are set */
-		dma_ch_isr = XGMAC_DMA_IOREAD(channel, DMA_CH_SR);
-		XGMAC_DMA_IOWRITE(channel, DMA_CH_SR, dma_ch_isr);
+		XGMAC_DMA_IOWRITE(channel, DMA_CH_SR,
+				  XGMAC_DMA_IOREAD(channel, DMA_CH_SR));
 
 		/* Clear all interrupt enable bits */
-		dma_ch_ier = 0;
+		channel->curr_ier = 0;
 
 		/* Enable following interrupts
 		 *   NIE  - Normal Interrupt Summary Enable
 		 *   AIE  - Abnormal Interrupt Summary Enable
 		 *   FBEE - Fatal Bus Error Enable
 		 */
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, NIE, 1);
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, AIE, 1);
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, FBEE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, NIE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, AIE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, FBEE, 1);
 
 		if (channel->tx_ring) {
 			/* Enable the following Tx interrupts
@@ -639,7 +638,8 @@ static void xgbe_enable_dma_interrupts(struct xgbe_prv_data *pdata)
 			 *          mode)
 			 */
 			if (!pdata->per_channel_irq || pdata->channel_irq_mode)
-				XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
+				XGMAC_SET_BITS(channel->curr_ier,
+					       DMA_CH_IER, TIE, 1);
 		}
 		if (channel->rx_ring) {
 			/* Enable following Rx interrupts
@@ -648,12 +648,13 @@ static void xgbe_enable_dma_interrupts(struct xgbe_prv_data *pdata)
 			 *          per channel interrupts in edge triggered
 			 *          mode)
 			 */
-			XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RBUE, 1);
+			XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RBUE, 1);
 			if (!pdata->per_channel_irq || pdata->channel_irq_mode)
-				XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 1);
+				XGMAC_SET_BITS(channel->curr_ier,
+					       DMA_CH_IER, RIE, 1);
 		}
 
-		XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);
+		XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, channel->curr_ier);
 	}
 }
 
@@ -1964,44 +1965,40 @@ static int xgbe_is_last_desc(struct xgbe_ring_desc *rdesc)
 static int xgbe_enable_int(struct xgbe_channel *channel,
 			   enum xgbe_int int_id)
 {
-	unsigned int dma_ch_ier;
-
-	dma_ch_ier = XGMAC_DMA_IOREAD(channel, DMA_CH_IER);
-
 	switch (int_id) {
 	case XGMAC_INT_DMA_CH_SR_TI:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TIE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_TPS:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TXSE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TXSE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_TBU:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TBUE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TBUE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_RI:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RIE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_RBU:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RBUE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RBUE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_RPS:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RSE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RSE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_TI_RI:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TIE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RIE, 1);
 		break;
 	case XGMAC_INT_DMA_CH_SR_FBE:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, FBEE, 1);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, FBEE, 1);
 		break;
 	case XGMAC_INT_DMA_ALL:
-		dma_ch_ier |= channel->saved_ier;
+		channel->curr_ier |= channel->saved_ier;
 		break;
 	default:
 		return -1;
 	}
 
-	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, channel->curr_ier);
 
 	return 0;
 }
@@ -2009,45 +2006,41 @@ static int xgbe_enable_int(struct xgbe_channel *channel,
 static int xgbe_disable_int(struct xgbe_channel *channel,
 			    enum xgbe_int int_id)
 {
-	unsigned int dma_ch_ier;
-
-	dma_ch_ier = XGMAC_DMA_IOREAD(channel, DMA_CH_IER);
-
 	switch (int_id) {
 	case XGMAC_INT_DMA_CH_SR_TI:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TIE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_TPS:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TXSE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TXSE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_TBU:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TBUE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TBUE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_RI:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RIE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_RBU:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RBUE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RBUE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_RPS:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RSE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RSE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_TI_RI:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 0);
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, TIE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, RIE, 0);
 		break;
 	case XGMAC_INT_DMA_CH_SR_FBE:
-		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, FBEE, 0);
+		XGMAC_SET_BITS(channel->curr_ier, DMA_CH_IER, FBEE, 0);
 		break;
 	case XGMAC_INT_DMA_ALL:
-		channel->saved_ier = dma_ch_ier & XGBE_DMA_INTERRUPT_MASK;
-		dma_ch_ier &= ~XGBE_DMA_INTERRUPT_MASK;
+		channel->saved_ier = channel->curr_ier;
+		channel->curr_ier = 0;
 		break;
 	default:
 		return -1;
 	}
 
-	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, channel->curr_ier);
 
 	return 0;
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 9a80f20..58bb455 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -182,8 +182,6 @@
 #define XGBE_IRQ_MODE_EDGE	0
 #define XGBE_IRQ_MODE_LEVEL	1
 
-#define XGBE_DMA_INTERRUPT_MASK	0x31c7
-
 #define XGMAC_MIN_PACKET	60
 #define XGMAC_STD_PACKET_MTU	1500
 #define XGMAC_MAX_STD_PACKET	1518
@@ -462,6 +460,8 @@ struct xgbe_channel {
 	/* Netdev related settings */
 	struct napi_struct napi;
 
+	/* Per channel interrupt enablement tracker */
+	unsigned int curr_ier;
 	unsigned int saved_ier;
 
 	unsigned int tx_timer_active;

* [PATCH net-next v2 08/13] amd-xgbe: Add hardware features debug output
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (6 preceding siblings ...)
  2017-08-18 14:03 ` [PATCH net-next v2 07/13] amd-xgbe: Optimize DMA channel interrupt enablement Tom Lendacky
@ 2017-08-18 14:03 ` Tom Lendacky
  2017-08-18 14:03 ` [PATCH net-next v2 09/13] amd-xgbe: Add per queue Tx and Rx statistics Tom Lendacky
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:03 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Use the dynamic debug support to output information about the hardware
features reported by the device.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c |   78 +++++++++++++++++++++++++++++-
 1 file changed, 75 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index d6d29d8..7498bb8 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -732,8 +732,6 @@ void xgbe_get_all_hw_features(struct xgbe_prv_data *pdata)
 	unsigned int mac_hfr0, mac_hfr1, mac_hfr2;
 	struct xgbe_hw_features *hw_feat = &pdata->hw_feat;
 
-	DBGPR("-->xgbe_get_all_hw_features\n");
-
 	mac_hfr0 = XGMAC_IOREAD(pdata, MAC_HWF0R);
 	mac_hfr1 = XGMAC_IOREAD(pdata, MAC_HWF1R);
 	mac_hfr2 = XGMAC_IOREAD(pdata, MAC_HWF2R);
@@ -828,7 +826,81 @@ void xgbe_get_all_hw_features(struct xgbe_prv_data *pdata)
 	hw_feat->rx_fifo_size = 1 << (hw_feat->rx_fifo_size + 7);
 	hw_feat->tx_fifo_size = 1 << (hw_feat->tx_fifo_size + 7);
 
-	DBGPR("<--xgbe_get_all_hw_features\n");
+	if (netif_msg_probe(pdata)) {
+		dev_dbg(pdata->dev, "Hardware features:\n");
+
+		/* Hardware feature register 0 */
+		dev_dbg(pdata->dev, "  1GbE support              : %s\n",
+			hw_feat->gmii ? "yes" : "no");
+		dev_dbg(pdata->dev, "  VLAN hash filter          : %s\n",
+			hw_feat->vlhash ? "yes" : "no");
+		dev_dbg(pdata->dev, "  MDIO interface            : %s\n",
+			hw_feat->sma ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Wake-up packet support    : %s\n",
+			hw_feat->rwk ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Magic packet support      : %s\n",
+			hw_feat->mgk ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Management counters       : %s\n",
+			hw_feat->mmc ? "yes" : "no");
+		dev_dbg(pdata->dev, "  ARP offload               : %s\n",
+			hw_feat->aoe ? "yes" : "no");
+		dev_dbg(pdata->dev, "  IEEE 1588-2008 Timestamp  : %s\n",
+			hw_feat->ts ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Energy Efficient Ethernet : %s\n",
+			hw_feat->eee ? "yes" : "no");
+		dev_dbg(pdata->dev, "  TX checksum offload       : %s\n",
+			hw_feat->tx_coe ? "yes" : "no");
+		dev_dbg(pdata->dev, "  RX checksum offload       : %s\n",
+			hw_feat->rx_coe ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Additional MAC addresses  : %u\n",
+			hw_feat->addn_mac);
+		dev_dbg(pdata->dev, "  Timestamp source          : %s\n",
+			(hw_feat->ts_src == 1) ? "internal" :
+			(hw_feat->ts_src == 2) ? "external" :
+			(hw_feat->ts_src == 3) ? "internal/external" : "n/a");
+		dev_dbg(pdata->dev, "  SA/VLAN insertion         : %s\n",
+			hw_feat->sa_vlan_ins ? "yes" : "no");
+
+		/* Hardware feature register 1 */
+		dev_dbg(pdata->dev, "  RX fifo size              : %u\n",
+			hw_feat->rx_fifo_size);
+		dev_dbg(pdata->dev, "  TX fifo size              : %u\n",
+			hw_feat->tx_fifo_size);
+		dev_dbg(pdata->dev, "  IEEE 1588 high word       : %s\n",
+			hw_feat->adv_ts_hi ? "yes" : "no");
+		dev_dbg(pdata->dev, "  DMA width                 : %u\n",
+			hw_feat->dma_width);
+		dev_dbg(pdata->dev, "  Data Center Bridging      : %s\n",
+			hw_feat->dcb ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Split header              : %s\n",
+			hw_feat->sph ? "yes" : "no");
+		dev_dbg(pdata->dev, "  TCP Segmentation Offload  : %s\n",
+			hw_feat->tso ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Debug memory interface    : %s\n",
+			hw_feat->dma_debug ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Receive Side Scaling      : %s\n",
+			hw_feat->rss ? "yes" : "no");
+		dev_dbg(pdata->dev, "  Traffic Class count       : %u\n",
+			hw_feat->tc_cnt);
+		dev_dbg(pdata->dev, "  Hash table size           : %u\n",
+			hw_feat->hash_table_size);
+		dev_dbg(pdata->dev, "  L3/L4 Filters             : %u\n",
+			hw_feat->l3l4_filter_num);
+
+		/* Hardware feature register 2 */
+		dev_dbg(pdata->dev, "  RX queue count            : %u\n",
+			hw_feat->rx_q_cnt);
+		dev_dbg(pdata->dev, "  TX queue count            : %u\n",
+			hw_feat->tx_q_cnt);
+		dev_dbg(pdata->dev, "  RX DMA channel count      : %u\n",
+			hw_feat->rx_ch_cnt);
+		dev_dbg(pdata->dev, "  TX DMA channel count      : %u\n",
+			hw_feat->tx_ch_cnt);
+		dev_dbg(pdata->dev, "  PPS outputs               : %u\n",
+			hw_feat->pps_out_num);
+		dev_dbg(pdata->dev, "  Auxiliary snapshot inputs : %u\n",
+			hw_feat->aux_snap_num);
+	}
 }
 
 static void xgbe_napi_enable(struct xgbe_prv_data *pdata, unsigned int add)

* [PATCH net-next v2 09/13] amd-xgbe: Add per queue Tx and Rx statistics
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (7 preceding siblings ...)
  2017-08-18 14:03 ` [PATCH net-next v2 08/13] amd-xgbe: Add hardware features debug output Tom Lendacky
@ 2017-08-18 14:03 ` Tom Lendacky
  2017-08-18 14:03 ` [PATCH net-next v2 10/13] net: ethtool: Add macro to clear a link mode setting Tom Lendacky
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:03 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add per queue Tx and Rx packet and byte counts.
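
For example, with 4 Tx and 4 Rx rings this adds 16 counters to the
"ethtool -S" output, named txq_0_packets, txq_0_bytes, ... rxq_3_packets,
rxq_3_bytes, and the ETH_SS_STATS count grows to
XGBE_STATS_COUNT + (4 * 2) + (4 * 2).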

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c     |   23 ++++++++++++++++-------
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c |   26 +++++++++++++++++++++++++-
 drivers/net/ethernet/amd/xgbe/xgbe.h         |    5 +++++
 3 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 75a479c..a978408 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -1609,6 +1609,7 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 	struct xgbe_ring_data *rdata;
 	struct xgbe_ring_desc *rdesc;
 	struct xgbe_packet_data *packet = &ring->packet_data;
+	unsigned int tx_packets, tx_bytes;
 	unsigned int csum, tso, vlan;
 	unsigned int tso_context, vlan_context;
 	unsigned int tx_set_ic;
@@ -1618,6 +1619,9 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 
 	DBGPR("-->xgbe_dev_xmit\n");
 
+	tx_packets = packet->tx_packets;
+	tx_bytes = packet->tx_bytes;
+
 	csum = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
 			      CSUM_ENABLE);
 	tso = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
@@ -1645,13 +1649,12 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 	 *     - Addition of Tx frame count to the frame count since the
 	 *       last interrupt was set does not exceed the frame count setting
 	 */
-	ring->coalesce_count += packet->tx_packets;
+	ring->coalesce_count += tx_packets;
 	if (!pdata->tx_frames)
 		tx_set_ic = 0;
-	else if (packet->tx_packets > pdata->tx_frames)
+	else if (tx_packets > pdata->tx_frames)
 		tx_set_ic = 1;
-	else if ((ring->coalesce_count % pdata->tx_frames) <
-		 packet->tx_packets)
+	else if ((ring->coalesce_count % pdata->tx_frames) < tx_packets)
 		tx_set_ic = 1;
 	else
 		tx_set_ic = 0;
@@ -1741,7 +1744,7 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TCPHDRLEN,
 				  packet->tcp_header_len / 4);
 
-		pdata->ext_stats.tx_tso_packets += packet->tx_packets;
+		pdata->ext_stats.tx_tso_packets += tx_packets;
 	} else {
 		/* Enable CRC and Pad Insertion */
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CPC, 0);
@@ -1789,8 +1792,11 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 		XGMAC_SET_BITS_LE(rdesc->desc2, TX_NORMAL_DESC2, IC, 1);
 
 	/* Save the Tx info to report back during cleanup */
-	rdata->tx.packets = packet->tx_packets;
-	rdata->tx.bytes = packet->tx_bytes;
+	rdata->tx.packets = tx_packets;
+	rdata->tx.bytes = tx_bytes;
+
+	pdata->ext_stats.txq_packets[channel->queue_index] += tx_packets;
+	pdata->ext_stats.txq_bytes[channel->queue_index] += tx_bytes;
 
 	/* In case the Tx DMA engine is running, make sure everything
 	 * is written to the descriptor(s) before setting the OWN bit
@@ -1944,6 +1950,9 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 				       FRAME, 1);
 	}
 
+	pdata->ext_stats.rxq_packets[channel->queue_index]++;
+	pdata->ext_stats.rxq_bytes[channel->queue_index] += rdata->rx.len;
+
 	DBGPR("<--xgbe_dev_read: %s - descriptor=%u (cur=%d)\n", channel->name,
 	      ring->cur & (ring->rdesc_count - 1), ring->cur);
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index 67a2e52..f80b186 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -186,6 +186,7 @@ struct xgbe_stats {
 
 static void xgbe_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
 {
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
 	int i;
 
 	switch (stringset) {
@@ -195,6 +196,18 @@ static void xgbe_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
 			       ETH_GSTRING_LEN);
 			data += ETH_GSTRING_LEN;
 		}
+		for (i = 0; i < pdata->tx_ring_count; i++) {
+			sprintf(data, "txq_%u_packets", i);
+			data += ETH_GSTRING_LEN;
+			sprintf(data, "txq_%u_bytes", i);
+			data += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < pdata->rx_ring_count; i++) {
+			sprintf(data, "rxq_%u_packets", i);
+			data += ETH_GSTRING_LEN;
+			sprintf(data, "rxq_%u_bytes", i);
+			data += ETH_GSTRING_LEN;
+		}
 		break;
 	}
 }
@@ -211,15 +224,26 @@ static void xgbe_get_ethtool_stats(struct net_device *netdev,
 		stat = (u8 *)pdata + xgbe_gstring_stats[i].stat_offset;
 		*data++ = *(u64 *)stat;
 	}
+	for (i = 0; i < pdata->tx_ring_count; i++) {
+		*data++ = pdata->ext_stats.txq_packets[i];
+		*data++ = pdata->ext_stats.txq_bytes[i];
+	}
+	for (i = 0; i < pdata->rx_ring_count; i++) {
+		*data++ = pdata->ext_stats.rxq_packets[i];
+		*data++ = pdata->ext_stats.rxq_bytes[i];
+	}
 }
 
 static int xgbe_get_sset_count(struct net_device *netdev, int stringset)
 {
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
 	int ret;
 
 	switch (stringset) {
 	case ETH_SS_STATS:
-		ret = XGBE_STATS_COUNT;
+		ret = XGBE_STATS_COUNT +
+		      (pdata->tx_ring_count * 2) +
+		      (pdata->rx_ring_count * 2);
 		break;
 
 	default:
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 58bb455..0e93155 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -668,6 +668,11 @@ struct xgbe_ext_stats {
 	u64 tx_tso_packets;
 	u64 rx_split_header_packets;
 	u64 rx_buffer_unavailable;
+
+	u64 txq_packets[XGBE_MAX_DMA_CHANNELS];
+	u64 txq_bytes[XGBE_MAX_DMA_CHANNELS];
+	u64 rxq_packets[XGBE_MAX_DMA_CHANNELS];
+	u64 rxq_bytes[XGBE_MAX_DMA_CHANNELS];
 };
 
 struct xgbe_hw_if {

* [PATCH net-next v2 10/13] net: ethtool: Add macro to clear a link mode setting
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (8 preceding siblings ...)
  2017-08-18 14:03 ` [PATCH net-next v2 09/13] amd-xgbe: Add per queue Tx and Rx statistics Tom Lendacky
@ 2017-08-18 14:03 ` Tom Lendacky
  2017-08-18 14:03 ` [PATCH net-next v2 11/13] amd-xgbe: Convert to using the new link mode settings Tom Lendacky
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:03 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

There are currently macros to set and test an ETHTOOL_LINK_MODE_ setting,
but not to clear one. Add a macro to clear an ETHTOOL_LINK_MODE_ setting.
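
A minimal usage sketch (hypothetical driver code; only the _del_ macro
is new here, the _add_ and _test_ macros already exist in ethtool.h):

	struct ethtool_link_ksettings *lks = &priv->link_ksettings;

	/* advertise 10GBase-KR, then withdraw it again */
	ethtool_link_ksettings_add_link_mode(lks, advertising, 10000baseKR_Full);
	ethtool_link_ksettings_del_link_mode(lks, advertising, 10000baseKR_Full);

	/* the advertising bit is now clear */
	WARN_ON(ethtool_link_ksettings_test_link_mode(lks, advertising,
						      10000baseKR_Full));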

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 include/linux/ethtool.h |   11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
index afdbb70..4587a4c 100644
--- a/include/linux/ethtool.h
+++ b/include/linux/ethtool.h
@@ -137,6 +137,17 @@ struct ethtool_link_ksettings {
 	__set_bit(ETHTOOL_LINK_MODE_ ## mode ## _BIT, (ptr)->link_modes.name)
 
 /**
+ * ethtool_link_ksettings_del_link_mode - clear bit in link_ksettings
+ * link mode mask
+ *   @ptr : pointer to struct ethtool_link_ksettings
+ *   @name : one of supported/advertising/lp_advertising
+ *   @mode : one of the ETHTOOL_LINK_MODE_*_BIT
+ * (not atomic, no bound checking)
+ */
+#define ethtool_link_ksettings_del_link_mode(ptr, name, mode)		\
+	__clear_bit(ETHTOOL_LINK_MODE_ ## mode ## _BIT, (ptr)->link_modes.name)
+
+/**
  * ethtool_link_ksettings_test_link_mode - test bit in ksettings link mode mask
  *   @ptr : pointer to struct ethtool_link_ksettings
  *   @name : one of supported/advertising/lp_advertising

* [PATCH net-next v2 11/13] amd-xgbe: Convert to using the new link mode settings
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (9 preceding siblings ...)
  2017-08-18 14:03 ` [PATCH net-next v2 10/13] net: ethtool: Add macro to clear a link mode setting Tom Lendacky
@ 2017-08-18 14:03 ` Tom Lendacky
  2017-08-18 14:04 ` [PATCH net-next v2 12/13] amd-xgbe: Add support for VXLAN offload capabilities Tom Lendacky
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:03 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Convert from using the old u32 supported, advertising, etc. link settings
to the new link mode settings, which are not limited to 32 bits and so
support link mode bit positions beyond 31.
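
The shape of the conversion, for illustration (XGBE_ADV()/XGBE_SET_ADV()
and friends are wrapper macros introduced by this patch; their
definitions live in the xgbe.h changes of this patch):

	/* before: legacy u32 bitmask, limited to 32 link modes */
	pdata->phy.advertising |= ADVERTISED_10000baseKR_Full;
	if (pdata->phy.advertising & ADVERTISED_1000baseKX_Full)
		do_something();

	/* after: ethtool_link_ksettings bitmaps via the wrapper macros */
	XGBE_SET_ADV(lks, 10000baseKR_Full);
	if (XGBE_ADV(lks, 1000baseKX_Full))
		do_something();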

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c |   56 ++--
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c    |   77 +++---
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v1.c  |   54 ++--
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c  |  350 ++++++++++++++------------
 drivers/net/ethernet/amd/xgbe/xgbe.h         |   50 +++-
 5 files changed, 345 insertions(+), 242 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index f80b186..cea25ac 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -267,6 +267,7 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
 			       struct ethtool_pauseparam *pause)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	int ret = 0;
 
 	if (pause->autoneg && (pdata->phy.autoneg != AUTONEG_ENABLE)) {
@@ -279,16 +280,21 @@ static int xgbe_set_pauseparam(struct net_device *netdev,
 	pdata->phy.tx_pause = pause->tx_pause;
 	pdata->phy.rx_pause = pause->rx_pause;
 
-	pdata->phy.advertising &= ~ADVERTISED_Pause;
-	pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
+	XGBE_CLR_ADV(lks, Pause);
+	XGBE_CLR_ADV(lks, Asym_Pause);
 
 	if (pause->rx_pause) {
-		pdata->phy.advertising |= ADVERTISED_Pause;
-		pdata->phy.advertising |= ADVERTISED_Asym_Pause;
+		XGBE_SET_ADV(lks, Pause);
+		XGBE_SET_ADV(lks, Asym_Pause);
 	}
 
-	if (pause->tx_pause)
-		pdata->phy.advertising ^= ADVERTISED_Asym_Pause;
+	if (pause->tx_pause) {
+		/* Equivalent to XOR of Asym_Pause */
+		if (XGBE_ADV(lks, Asym_Pause))
+			XGBE_CLR_ADV(lks, Asym_Pause);
+		else
+			XGBE_SET_ADV(lks, Asym_Pause);
+	}
 
 	if (netif_running(netdev))
 		ret = pdata->phy_if.phy_config_aneg(pdata);
@@ -300,22 +306,20 @@ static int xgbe_get_link_ksettings(struct net_device *netdev,
 				   struct ethtool_link_ksettings *cmd)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 
 	cmd->base.phy_address = pdata->phy.address;
 
-	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
-						pdata->phy.supported);
-	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising,
-						pdata->phy.advertising);
-	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.lp_advertising,
-						pdata->phy.lp_advertising);
-
 	cmd->base.autoneg = pdata->phy.autoneg;
 	cmd->base.speed = pdata->phy.speed;
 	cmd->base.duplex = pdata->phy.duplex;
 
 	cmd->base.port = PORT_NONE;
 
+	XGBE_LM_COPY(cmd, supported, lks, supported);
+	XGBE_LM_COPY(cmd, advertising, lks, advertising);
+	XGBE_LM_COPY(cmd, lp_advertising, lks, lp_advertising);
+
 	return 0;
 }
 
@@ -323,7 +327,8 @@ static int xgbe_set_link_ksettings(struct net_device *netdev,
 				   const struct ethtool_link_ksettings *cmd)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	u32 advertising;
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
 	u32 speed;
 	int ret;
 
@@ -355,15 +360,17 @@ static int xgbe_set_link_ksettings(struct net_device *netdev,
 		}
 	}
 
-	ethtool_convert_link_mode_to_legacy_u32(&advertising,
-						cmd->link_modes.advertising);
-
 	netif_dbg(pdata, link, netdev,
-		  "requested advertisement %#x, phy supported %#x\n",
-		  advertising, pdata->phy.supported);
+		  "requested advertisement 0x%*pb, phy supported 0x%*pb\n",
+		  __ETHTOOL_LINK_MODE_MASK_NBITS, cmd->link_modes.advertising,
+		  __ETHTOOL_LINK_MODE_MASK_NBITS, lks->link_modes.supported);
+
+	bitmap_and(advertising,
+		   cmd->link_modes.advertising, lks->link_modes.supported,
+		   __ETHTOOL_LINK_MODE_MASK_NBITS);
 
-	advertising &= pdata->phy.supported;
-	if ((cmd->base.autoneg == AUTONEG_ENABLE) && !advertising) {
+	if ((cmd->base.autoneg == AUTONEG_ENABLE) &&
+	    bitmap_empty(advertising, __ETHTOOL_LINK_MODE_MASK_NBITS)) {
 		netdev_err(netdev,
 			   "unsupported requested advertisement\n");
 		return -EINVAL;
@@ -373,12 +380,13 @@ static int xgbe_set_link_ksettings(struct net_device *netdev,
 	pdata->phy.autoneg = cmd->base.autoneg;
 	pdata->phy.speed = speed;
 	pdata->phy.duplex = cmd->base.duplex;
-	pdata->phy.advertising = advertising;
+	bitmap_copy(lks->link_modes.advertising, advertising,
+		    __ETHTOOL_LINK_MODE_MASK_NBITS);
 
 	if (cmd->base.autoneg == AUTONEG_ENABLE)
-		pdata->phy.advertising |= ADVERTISED_Autoneg;
+		XGBE_SET_ADV(lks, Autoneg);
 	else
-		pdata->phy.advertising &= ~ADVERTISED_Autoneg;
+		XGBE_CLR_ADV(lks, Autoneg);
 
 	if (netif_running(netdev))
 		ret = pdata->phy_if.phy_config_aneg(pdata);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 2409202..072b9f6 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -615,12 +615,14 @@ static enum xgbe_an xgbe_an73_page_received(struct xgbe_prv_data *pdata)
 
 static enum xgbe_an xgbe_an73_incompat_link(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
 	/* Be sure we aren't looping trying to negotiate */
 	if (xgbe_in_kr_mode(pdata)) {
 		pdata->kr_state = XGBE_RX_ERROR;
 
-		if (!(pdata->phy.advertising & ADVERTISED_1000baseKX_Full) &&
-		    !(pdata->phy.advertising & ADVERTISED_2500baseX_Full))
+		if (!XGBE_ADV(lks, 1000baseKX_Full) &&
+		    !XGBE_ADV(lks, 2500baseX_Full))
 			return XGBE_AN_NO_LINK;
 
 		if (pdata->kx_state != XGBE_RX_BPA)
@@ -628,7 +630,7 @@ static enum xgbe_an xgbe_an73_incompat_link(struct xgbe_prv_data *pdata)
 	} else {
 		pdata->kx_state = XGBE_RX_ERROR;
 
-		if (!(pdata->phy.advertising & ADVERTISED_10000baseKR_Full))
+		if (!XGBE_ADV(lks, 10000baseKR_Full))
 			return XGBE_AN_NO_LINK;
 
 		if (pdata->kr_state != XGBE_RX_BPA)
@@ -944,18 +946,19 @@ static void xgbe_an_state_machine(struct work_struct *work)
 
 static void xgbe_an37_init(struct xgbe_prv_data *pdata)
 {
-	unsigned int advertising, reg;
+	struct ethtool_link_ksettings lks;
+	unsigned int reg;
 
-	advertising = pdata->phy_if.phy_impl.an_advertising(pdata);
+	pdata->phy_if.phy_impl.an_advertising(pdata, &lks);
 
 	/* Set up Advertisement register */
 	reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_VEND2_AN_ADVERTISE);
-	if (advertising & ADVERTISED_Pause)
+	if (XGBE_ADV(&lks, Pause))
 		reg |= 0x100;
 	else
 		reg &= ~0x100;
 
-	if (advertising & ADVERTISED_Asym_Pause)
+	if (XGBE_ADV(&lks, Asym_Pause))
 		reg |= 0x80;
 	else
 		reg &= ~0x80;
@@ -992,13 +995,14 @@ static void xgbe_an37_init(struct xgbe_prv_data *pdata)
 
 static void xgbe_an73_init(struct xgbe_prv_data *pdata)
 {
-	unsigned int advertising, reg;
+	struct ethtool_link_ksettings lks;
+	unsigned int reg;
 
-	advertising = pdata->phy_if.phy_impl.an_advertising(pdata);
+	pdata->phy_if.phy_impl.an_advertising(pdata, &lks);
 
 	/* Set up Advertisement register 3 first */
 	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
-	if (advertising & ADVERTISED_10000baseR_FEC)
+	if (XGBE_ADV(&lks, 10000baseR_FEC))
 		reg |= 0xc000;
 	else
 		reg &= ~0xc000;
@@ -1007,13 +1011,13 @@ static void xgbe_an73_init(struct xgbe_prv_data *pdata)
 
 	/* Set up Advertisement register 2 next */
 	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
-	if (advertising & ADVERTISED_10000baseKR_Full)
+	if (XGBE_ADV(&lks, 10000baseKR_Full))
 		reg |= 0x80;
 	else
 		reg &= ~0x80;
 
-	if ((advertising & ADVERTISED_1000baseKX_Full) ||
-	    (advertising & ADVERTISED_2500baseX_Full))
+	if (XGBE_ADV(&lks, 1000baseKX_Full) ||
+	    XGBE_ADV(&lks, 2500baseX_Full))
 		reg |= 0x20;
 	else
 		reg &= ~0x20;
@@ -1022,12 +1026,12 @@ static void xgbe_an73_init(struct xgbe_prv_data *pdata)
 
 	/* Set up Advertisement register 1 last */
 	reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
-	if (advertising & ADVERTISED_Pause)
+	if (XGBE_ADV(&lks, Pause))
 		reg |= 0x400;
 	else
 		reg &= ~0x400;
 
-	if (advertising & ADVERTISED_Asym_Pause)
+	if (XGBE_ADV(&lks, Asym_Pause))
 		reg |= 0x800;
 	else
 		reg &= ~0x800;
@@ -1283,9 +1287,10 @@ static enum xgbe_mode xgbe_phy_status_aneg(struct xgbe_prv_data *pdata)
 
 static void xgbe_phy_status_result(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	enum xgbe_mode mode;
 
-	pdata->phy.lp_advertising = 0;
+	XGBE_ZERO_LP_ADV(lks);
 
 	if ((pdata->phy.autoneg != AUTONEG_ENABLE) || pdata->parallel_detect)
 		mode = xgbe_cur_mode(pdata);
@@ -1515,17 +1520,21 @@ static void xgbe_dump_phy_registers(struct xgbe_prv_data *pdata)
 
 static int xgbe_phy_best_advertised_speed(struct xgbe_prv_data *pdata)
 {
-	if (pdata->phy.advertising & ADVERTISED_10000baseKR_Full)
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
+	if (XGBE_ADV(lks, 10000baseKR_Full))
 		return SPEED_10000;
-	else if (pdata->phy.advertising & ADVERTISED_10000baseT_Full)
+	else if (XGBE_ADV(lks, 10000baseT_Full))
 		return SPEED_10000;
-	else if (pdata->phy.advertising & ADVERTISED_2500baseX_Full)
+	else if (XGBE_ADV(lks, 2500baseX_Full))
 		return SPEED_2500;
-	else if (pdata->phy.advertising & ADVERTISED_1000baseKX_Full)
+	else if (XGBE_ADV(lks, 2500baseT_Full))
+		return SPEED_2500;
+	else if (XGBE_ADV(lks, 1000baseKX_Full))
 		return SPEED_1000;
-	else if (pdata->phy.advertising & ADVERTISED_1000baseT_Full)
+	else if (XGBE_ADV(lks, 1000baseT_Full))
 		return SPEED_1000;
-	else if (pdata->phy.advertising & ADVERTISED_100baseT_Full)
+	else if (XGBE_ADV(lks, 100baseT_Full))
 		return SPEED_100;
 
 	return SPEED_UNKNOWN;
@@ -1538,6 +1547,7 @@ static void xgbe_phy_exit(struct xgbe_prv_data *pdata)
 
 static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	int ret;
 
 	mutex_init(&pdata->an_mutex);
@@ -1555,11 +1565,13 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 	ret = pdata->phy_if.phy_impl.init(pdata);
 	if (ret)
 		return ret;
-	pdata->phy.advertising = pdata->phy.supported;
+
+	/* Copy supported link modes to advertising link modes */
+	XGBE_LM_COPY(lks, advertising, lks, supported);
 
 	pdata->phy.address = 0;
 
-	if (pdata->phy.advertising & ADVERTISED_Autoneg) {
+	if (XGBE_ADV(lks, Autoneg)) {
 		pdata->phy.autoneg = AUTONEG_ENABLE;
 		pdata->phy.speed = SPEED_UNKNOWN;
 		pdata->phy.duplex = DUPLEX_UNKNOWN;
@@ -1576,16 +1588,21 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 	pdata->phy.rx_pause = pdata->rx_pause;
 
 	/* Fix up Flow Control advertising */
-	pdata->phy.advertising &= ~ADVERTISED_Pause;
-	pdata->phy.advertising &= ~ADVERTISED_Asym_Pause;
+	XGBE_CLR_ADV(lks, Pause);
+	XGBE_CLR_ADV(lks, Asym_Pause);
 
 	if (pdata->rx_pause) {
-		pdata->phy.advertising |= ADVERTISED_Pause;
-		pdata->phy.advertising |= ADVERTISED_Asym_Pause;
+		XGBE_SET_ADV(lks, Pause);
+		XGBE_SET_ADV(lks, Asym_Pause);
 	}
 
-	if (pdata->tx_pause)
-		pdata->phy.advertising ^= ADVERTISED_Asym_Pause;
+	if (pdata->tx_pause) {
+		/* Equivalent to XOR of Asym_Pause */
+		if (XGBE_ADV(lks, Asym_Pause))
+			XGBE_CLR_ADV(lks, Asym_Pause);
+		else
+			XGBE_SET_ADV(lks, Asym_Pause);
+	}
 
 	if (netif_msg_drv(pdata))
 		xgbe_dump_phy_registers(pdata);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v1.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v1.c
index c75edca..d16eae4 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v1.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v1.c
@@ -231,20 +231,21 @@ static void xgbe_phy_kr_training_post(struct xgbe_prv_data *pdata)
 
 static enum xgbe_mode xgbe_phy_an_outcome(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 	enum xgbe_mode mode;
 	unsigned int ad_reg, lp_reg;
 
-	pdata->phy.lp_advertising |= ADVERTISED_Autoneg;
-	pdata->phy.lp_advertising |= ADVERTISED_Backplane;
+	XGBE_SET_LP_ADV(lks, Autoneg);
+	XGBE_SET_LP_ADV(lks, Backplane);
 
 	/* Compare Advertisement and Link Partner register 1 */
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA);
 	if (lp_reg & 0x400)
-		pdata->phy.lp_advertising |= ADVERTISED_Pause;
+		XGBE_SET_LP_ADV(lks, Pause);
 	if (lp_reg & 0x800)
-		pdata->phy.lp_advertising |= ADVERTISED_Asym_Pause;
+		XGBE_SET_LP_ADV(lks, Asym_Pause);
 
 	if (pdata->phy.pause_autoneg) {
 		/* Set flow control based on auto-negotiation result */
@@ -266,12 +267,12 @@ static enum xgbe_mode xgbe_phy_an_outcome(struct xgbe_prv_data *pdata)
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 1);
 	if (lp_reg & 0x80)
-		pdata->phy.lp_advertising |= ADVERTISED_10000baseKR_Full;
+		XGBE_SET_LP_ADV(lks, 10000baseKR_Full);
 	if (lp_reg & 0x20) {
 		if (phy_data->speed_set == XGBE_SPEEDSET_2500_10000)
-			pdata->phy.lp_advertising |= ADVERTISED_2500baseX_Full;
+			XGBE_SET_LP_ADV(lks, 2500baseX_Full);
 		else
-			pdata->phy.lp_advertising |= ADVERTISED_1000baseKX_Full;
+			XGBE_SET_LP_ADV(lks, 1000baseKX_Full);
 	}
 
 	ad_reg &= lp_reg;
@@ -290,14 +291,17 @@ static enum xgbe_mode xgbe_phy_an_outcome(struct xgbe_prv_data *pdata)
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 2);
 	if (lp_reg & 0xc000)
-		pdata->phy.lp_advertising |= ADVERTISED_10000baseR_FEC;
+		XGBE_SET_LP_ADV(lks, 10000baseR_FEC);
 
 	return mode;
 }
 
-static unsigned int xgbe_phy_an_advertising(struct xgbe_prv_data *pdata)
+static void xgbe_phy_an_advertising(struct xgbe_prv_data *pdata,
+				    struct ethtool_link_ksettings *dlks)
 {
-	return pdata->phy.advertising;
+	struct ethtool_link_ksettings *slks = &pdata->phy.lks;
+
+	XGBE_LM_COPY(dlks, advertising, slks, advertising);
 }
 
 static int xgbe_phy_an_config(struct xgbe_prv_data *pdata)
@@ -565,11 +569,10 @@ static void xgbe_phy_set_mode(struct xgbe_prv_data *pdata, enum xgbe_mode mode)
 }
 
 static bool xgbe_phy_check_mode(struct xgbe_prv_data *pdata,
-				enum xgbe_mode mode, u32 advert)
+				enum xgbe_mode mode, bool advert)
 {
 	if (pdata->phy.autoneg == AUTONEG_ENABLE) {
-		if (pdata->phy.advertising & advert)
-			return true;
+		return advert;
 	} else {
 		enum xgbe_mode cur_mode;
 
@@ -583,16 +586,18 @@ static bool xgbe_phy_check_mode(struct xgbe_prv_data *pdata,
 
 static bool xgbe_phy_use_mode(struct xgbe_prv_data *pdata, enum xgbe_mode mode)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
 	switch (mode) {
 	case XGBE_MODE_KX_1000:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_1000baseKX_Full);
+					   XGBE_ADV(lks, 1000baseKX_Full));
 	case XGBE_MODE_KX_2500:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_2500baseX_Full);
+					   XGBE_ADV(lks, 2500baseX_Full));
 	case XGBE_MODE_KR:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_10000baseKR_Full);
+					   XGBE_ADV(lks, 10000baseKR_Full));
 	default:
 		return false;
 	}
@@ -672,6 +677,7 @@ static void xgbe_phy_exit(struct xgbe_prv_data *pdata)
 
 static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data;
 	int ret;
 
@@ -790,21 +796,23 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 	}
 
 	/* Initialize supported features */
-	pdata->phy.supported = SUPPORTED_Autoneg;
-	pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-	pdata->phy.supported |= SUPPORTED_Backplane;
-	pdata->phy.supported |= SUPPORTED_10000baseKR_Full;
+	XGBE_ZERO_SUP(lks);
+	XGBE_SET_SUP(lks, Autoneg);
+	XGBE_SET_SUP(lks, Pause);
+	XGBE_SET_SUP(lks, Asym_Pause);
+	XGBE_SET_SUP(lks, Backplane);
+	XGBE_SET_SUP(lks, 10000baseKR_Full);
 	switch (phy_data->speed_set) {
 	case XGBE_SPEEDSET_1000_10000:
-		pdata->phy.supported |= SUPPORTED_1000baseKX_Full;
+		XGBE_SET_SUP(lks, 1000baseKX_Full);
 		break;
 	case XGBE_SPEEDSET_2500_10000:
-		pdata->phy.supported |= SUPPORTED_2500baseX_Full;
+		XGBE_SET_SUP(lks, 2500baseX_Full);
 		break;
 	}
 
 	if (pdata->fec_ability & MDIO_PMA_10GBR_FECABLE_ABLE)
-		pdata->phy.supported |= SUPPORTED_10000baseR_FEC;
+		XGBE_SET_SUP(lks, 10000baseR_FEC);
 
 	pdata->phy_data = phy_data;
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index 81c45fa..3304a29 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -709,18 +709,13 @@ static int xgbe_phy_mii_read(struct mii_bus *mii, int addr, int reg)
 
 static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 
 	if (!phy_data->sfp_mod_absent && !phy_data->sfp_changed)
 		return;
 
-	pdata->phy.supported &= ~SUPPORTED_Autoneg;
-	pdata->phy.supported &= ~(SUPPORTED_Pause | SUPPORTED_Asym_Pause);
-	pdata->phy.supported &= ~SUPPORTED_TP;
-	pdata->phy.supported &= ~SUPPORTED_FIBRE;
-	pdata->phy.supported &= ~SUPPORTED_100baseT_Full;
-	pdata->phy.supported &= ~SUPPORTED_1000baseT_Full;
-	pdata->phy.supported &= ~SUPPORTED_10000baseT_Full;
+	XGBE_ZERO_SUP(lks);
 
 	if (phy_data->sfp_mod_absent) {
 		pdata->phy.speed = SPEED_UNKNOWN;
@@ -728,18 +723,13 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 		pdata->phy.autoneg = AUTONEG_ENABLE;
 		pdata->phy.pause_autoneg = AUTONEG_ENABLE;
 
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_TP;
-		pdata->phy.supported |= SUPPORTED_FIBRE;
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
-			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, TP);
+		XGBE_SET_SUP(lks, FIBRE);
 
-		pdata->phy.advertising = pdata->phy.supported;
+		XGBE_LM_COPY(lks, advertising, lks, supported);
 
 		return;
 	}
@@ -753,8 +743,18 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 		pdata->phy.duplex = DUPLEX_UNKNOWN;
 		pdata->phy.autoneg = AUTONEG_ENABLE;
 		pdata->phy.pause_autoneg = AUTONEG_ENABLE;
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		if (phy_data->sfp_base == XGBE_SFP_BASE_1000_T) {
+			if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
+				XGBE_SET_SUP(lks, 100baseT_Full);
+			if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
+				XGBE_SET_SUP(lks, 1000baseT_Full);
+		} else {
+			if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
+				XGBE_SET_SUP(lks, 1000baseX_Full);
+		}
 		break;
 	case XGBE_SFP_BASE_10000_SR:
 	case XGBE_SFP_BASE_10000_LR:
@@ -765,6 +765,27 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 		pdata->phy.duplex = DUPLEX_FULL;
 		pdata->phy.autoneg = AUTONEG_DISABLE;
 		pdata->phy.pause_autoneg = AUTONEG_DISABLE;
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) {
+			switch (phy_data->sfp_base) {
+			case XGBE_SFP_BASE_10000_SR:
+				XGBE_SET_SUP(lks, 10000baseSR_Full);
+				break;
+			case XGBE_SFP_BASE_10000_LR:
+				XGBE_SET_SUP(lks, 10000baseLR_Full);
+				break;
+			case XGBE_SFP_BASE_10000_LRM:
+				XGBE_SET_SUP(lks, 10000baseLRM_Full);
+				break;
+			case XGBE_SFP_BASE_10000_ER:
+				XGBE_SET_SUP(lks, 10000baseER_Full);
+				break;
+			case XGBE_SFP_BASE_10000_CR:
+				XGBE_SET_SUP(lks, 10000baseCR_Full);
+				break;
+			default:
+				break;
+			}
+		}
 		break;
 	default:
 		pdata->phy.speed = SPEED_UNKNOWN;
@@ -778,38 +799,14 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 	case XGBE_SFP_BASE_1000_T:
 	case XGBE_SFP_BASE_1000_CX:
 	case XGBE_SFP_BASE_10000_CR:
-		pdata->phy.supported |= SUPPORTED_TP;
+		XGBE_SET_SUP(lks, TP);
 		break;
 	default:
-		pdata->phy.supported |= SUPPORTED_FIBRE;
-	}
-
-	switch (phy_data->sfp_speed) {
-	case XGBE_SFP_SPEED_100_1000:
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
-		break;
-	case XGBE_SFP_SPEED_1000:
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+		XGBE_SET_SUP(lks, FIBRE);
 		break;
-	case XGBE_SFP_SPEED_10000:
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
-			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
-		break;
-	default:
-		/* Choose the fastest supported speed */
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
-			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
-		else if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
-		else if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
 	}
 
-	pdata->phy.advertising = pdata->phy.supported;
+	XGBE_LM_COPY(lks, advertising, lks, supported);
 }
 
 static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
@@ -886,8 +883,10 @@ static void xgbe_phy_external_phy_quirks(struct xgbe_prv_data *pdata)
 
 static int xgbe_phy_find_phy_device(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 	struct phy_device *phydev;
+	u32 advertising;
 	int ret;
 
 	/* If we already have a PHY, just return */
@@ -943,7 +942,10 @@ static int xgbe_phy_find_phy_device(struct xgbe_prv_data *pdata)
 	phy_data->phydev = phydev;
 
 	xgbe_phy_external_phy_quirks(pdata);
-	phydev->advertising &= pdata->phy.advertising;
+
+	ethtool_convert_link_mode_to_legacy_u32(&advertising,
+						lks->link_modes.advertising);
+	phydev->advertising &= advertising;
 
 	phy_start_aneg(phy_data->phydev);
 
@@ -1277,6 +1279,7 @@ static void xgbe_phy_sfp_detect(struct xgbe_prv_data *pdata)
 
 static void xgbe_phy_phydev_flowctrl(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 	u16 lcl_adv = 0, rmt_adv = 0;
 	u8 fc;
@@ -1293,11 +1296,11 @@ static void xgbe_phy_phydev_flowctrl(struct xgbe_prv_data *pdata)
 		lcl_adv |= ADVERTISE_PAUSE_ASYM;
 
 	if (phy_data->phydev->pause) {
-		pdata->phy.lp_advertising |= ADVERTISED_Pause;
+		XGBE_SET_LP_ADV(lks, Pause);
 		rmt_adv |= LPA_PAUSE_CAP;
 	}
 	if (phy_data->phydev->asym_pause) {
-		pdata->phy.lp_advertising |= ADVERTISED_Asym_Pause;
+		XGBE_SET_LP_ADV(lks, Asym_Pause);
 		rmt_adv |= LPA_PAUSE_ASYM;
 	}
 
@@ -1310,10 +1313,11 @@ static void xgbe_phy_phydev_flowctrl(struct xgbe_prv_data *pdata)
 
 static enum xgbe_mode xgbe_phy_an37_sgmii_outcome(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	enum xgbe_mode mode;
 
-	pdata->phy.lp_advertising |= ADVERTISED_Autoneg;
-	pdata->phy.lp_advertising |= ADVERTISED_TP;
+	XGBE_SET_LP_ADV(lks, Autoneg);
+	XGBE_SET_LP_ADV(lks, TP);
 
 	/* Use external PHY to determine flow control */
 	if (pdata->phy.pause_autoneg)
@@ -1322,21 +1326,21 @@ static enum xgbe_mode xgbe_phy_an37_sgmii_outcome(struct xgbe_prv_data *pdata)
 	switch (pdata->an_status & XGBE_SGMII_AN_LINK_SPEED) {
 	case XGBE_SGMII_AN_LINK_SPEED_100:
 		if (pdata->an_status & XGBE_SGMII_AN_LINK_DUPLEX) {
-			pdata->phy.lp_advertising |= ADVERTISED_100baseT_Full;
+			XGBE_SET_LP_ADV(lks, 100baseT_Full);
 			mode = XGBE_MODE_SGMII_100;
 		} else {
 			/* Half-duplex not supported */
-			pdata->phy.lp_advertising |= ADVERTISED_100baseT_Half;
+			XGBE_SET_LP_ADV(lks, 100baseT_Half);
 			mode = XGBE_MODE_UNKNOWN;
 		}
 		break;
 	case XGBE_SGMII_AN_LINK_SPEED_1000:
 		if (pdata->an_status & XGBE_SGMII_AN_LINK_DUPLEX) {
-			pdata->phy.lp_advertising |= ADVERTISED_1000baseT_Full;
+			XGBE_SET_LP_ADV(lks, 1000baseT_Full);
 			mode = XGBE_MODE_SGMII_1000;
 		} else {
 			/* Half-duplex not supported */
-			pdata->phy.lp_advertising |= ADVERTISED_1000baseT_Half;
+			XGBE_SET_LP_ADV(lks, 1000baseT_Half);
 			mode = XGBE_MODE_UNKNOWN;
 		}
 		break;
@@ -1349,19 +1353,20 @@ static enum xgbe_mode xgbe_phy_an37_sgmii_outcome(struct xgbe_prv_data *pdata)
 
 static enum xgbe_mode xgbe_phy_an37_outcome(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	enum xgbe_mode mode;
 	unsigned int ad_reg, lp_reg;
 
-	pdata->phy.lp_advertising |= ADVERTISED_Autoneg;
-	pdata->phy.lp_advertising |= ADVERTISED_FIBRE;
+	XGBE_SET_LP_ADV(lks, Autoneg);
+	XGBE_SET_LP_ADV(lks, FIBRE);
 
 	/* Compare Advertisement and Link Partner register */
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_VEND2_AN_ADVERTISE);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_VEND2_AN_LP_ABILITY);
 	if (lp_reg & 0x100)
-		pdata->phy.lp_advertising |= ADVERTISED_Pause;
+		XGBE_SET_LP_ADV(lks, Pause);
 	if (lp_reg & 0x80)
-		pdata->phy.lp_advertising |= ADVERTISED_Asym_Pause;
+		XGBE_SET_LP_ADV(lks, Asym_Pause);
 
 	if (pdata->phy.pause_autoneg) {
 		/* Set flow control based on auto-negotiation result */
@@ -1379,10 +1384,8 @@ static enum xgbe_mode xgbe_phy_an37_outcome(struct xgbe_prv_data *pdata)
 		}
 	}
 
-	if (lp_reg & 0x40)
-		pdata->phy.lp_advertising |= ADVERTISED_1000baseT_Half;
 	if (lp_reg & 0x20)
-		pdata->phy.lp_advertising |= ADVERTISED_1000baseT_Full;
+		XGBE_SET_LP_ADV(lks, 1000baseX_Full);
 
 	/* Half duplex is not supported */
 	ad_reg &= lp_reg;
@@ -1393,12 +1396,13 @@ static enum xgbe_mode xgbe_phy_an37_outcome(struct xgbe_prv_data *pdata)
 
 static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 	enum xgbe_mode mode;
 	unsigned int ad_reg, lp_reg;
 
-	pdata->phy.lp_advertising |= ADVERTISED_Autoneg;
-	pdata->phy.lp_advertising |= ADVERTISED_Backplane;
+	XGBE_SET_LP_ADV(lks, Autoneg);
+	XGBE_SET_LP_ADV(lks, Backplane);
 
 	/* Use external PHY to determine flow control */
 	if (pdata->phy.pause_autoneg)
@@ -1408,9 +1412,9 @@ static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata)
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 1);
 	if (lp_reg & 0x80)
-		pdata->phy.lp_advertising |= ADVERTISED_10000baseKR_Full;
+		XGBE_SET_LP_ADV(lks, 10000baseKR_Full);
 	if (lp_reg & 0x20)
-		pdata->phy.lp_advertising |= ADVERTISED_1000baseKX_Full;
+		XGBE_SET_LP_ADV(lks, 1000baseKX_Full);
 
 	ad_reg &= lp_reg;
 	if (ad_reg & 0x80) {
@@ -1463,26 +1467,27 @@ static enum xgbe_mode xgbe_phy_an73_redrv_outcome(struct xgbe_prv_data *pdata)
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 2);
 	if (lp_reg & 0xc000)
-		pdata->phy.lp_advertising |= ADVERTISED_10000baseR_FEC;
+		XGBE_SET_LP_ADV(lks, 10000baseR_FEC);
 
 	return mode;
 }
 
 static enum xgbe_mode xgbe_phy_an73_outcome(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	enum xgbe_mode mode;
 	unsigned int ad_reg, lp_reg;
 
-	pdata->phy.lp_advertising |= ADVERTISED_Autoneg;
-	pdata->phy.lp_advertising |= ADVERTISED_Backplane;
+	XGBE_SET_LP_ADV(lks, Autoneg);
+	XGBE_SET_LP_ADV(lks, Backplane);
 
 	/* Compare Advertisement and Link Partner register 1 */
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA);
 	if (lp_reg & 0x400)
-		pdata->phy.lp_advertising |= ADVERTISED_Pause;
+		XGBE_SET_LP_ADV(lks, Pause);
 	if (lp_reg & 0x800)
-		pdata->phy.lp_advertising |= ADVERTISED_Asym_Pause;
+		XGBE_SET_LP_ADV(lks, Asym_Pause);
 
 	if (pdata->phy.pause_autoneg) {
 		/* Set flow control based on auto-negotiation result */
@@ -1504,9 +1509,9 @@ static enum xgbe_mode xgbe_phy_an73_outcome(struct xgbe_prv_data *pdata)
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 1);
 	if (lp_reg & 0x80)
-		pdata->phy.lp_advertising |= ADVERTISED_10000baseKR_Full;
+		XGBE_SET_LP_ADV(lks, 10000baseKR_Full);
 	if (lp_reg & 0x20)
-		pdata->phy.lp_advertising |= ADVERTISED_1000baseKX_Full;
+		XGBE_SET_LP_ADV(lks, 1000baseKX_Full);
 
 	ad_reg &= lp_reg;
 	if (ad_reg & 0x80)
@@ -1520,7 +1525,7 @@ static enum xgbe_mode xgbe_phy_an73_outcome(struct xgbe_prv_data *pdata)
 	ad_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
 	lp_reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_LPA + 2);
 	if (lp_reg & 0xc000)
-		pdata->phy.lp_advertising |= ADVERTISED_10000baseR_FEC;
+		XGBE_SET_LP_ADV(lks, 10000baseR_FEC);
 
 	return mode;
 }
@@ -1541,41 +1546,43 @@ static enum xgbe_mode xgbe_phy_an_outcome(struct xgbe_prv_data *pdata)
 	}
 }
 
-static unsigned int xgbe_phy_an_advertising(struct xgbe_prv_data *pdata)
+static void xgbe_phy_an_advertising(struct xgbe_prv_data *pdata,
+				    struct ethtool_link_ksettings *dlks)
 {
+	struct ethtool_link_ksettings *slks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int advertising;
+
+	XGBE_LM_COPY(dlks, advertising, slks, advertising);
 
 	/* Without a re-driver, just return current advertising */
 	if (!phy_data->redrv)
-		return pdata->phy.advertising;
+		return;
 
 	/* With the KR re-driver we need to advertise a single speed */
-	advertising = pdata->phy.advertising;
-	advertising &= ~ADVERTISED_1000baseKX_Full;
-	advertising &= ~ADVERTISED_10000baseKR_Full;
+	XGBE_CLR_ADV(dlks, 1000baseKX_Full);
+	XGBE_CLR_ADV(dlks, 10000baseKR_Full);
 
 	switch (phy_data->port_mode) {
 	case XGBE_PORT_MODE_BACKPLANE:
-		advertising |= ADVERTISED_10000baseKR_Full;
+		XGBE_SET_ADV(dlks, 10000baseKR_Full);
 		break;
 	case XGBE_PORT_MODE_BACKPLANE_2500:
-		advertising |= ADVERTISED_1000baseKX_Full;
+		XGBE_SET_ADV(dlks, 1000baseKX_Full);
 		break;
 	case XGBE_PORT_MODE_1000BASE_T:
 	case XGBE_PORT_MODE_1000BASE_X:
 	case XGBE_PORT_MODE_NBASE_T:
-		advertising |= ADVERTISED_1000baseKX_Full;
+		XGBE_SET_ADV(dlks, 1000baseKX_Full);
 		break;
 	case XGBE_PORT_MODE_10GBASE_T:
 		if (phy_data->phydev &&
 		    (phy_data->phydev->speed == SPEED_10000))
-			advertising |= ADVERTISED_10000baseKR_Full;
+			XGBE_SET_ADV(dlks, 10000baseKR_Full);
 		else
-			advertising |= ADVERTISED_1000baseKX_Full;
+			XGBE_SET_ADV(dlks, 1000baseKX_Full);
 		break;
 	case XGBE_PORT_MODE_10GBASE_R:
-		advertising |= ADVERTISED_10000baseKR_Full;
+		XGBE_SET_ADV(dlks, 10000baseKR_Full);
 		break;
 	case XGBE_PORT_MODE_SFP:
 		switch (phy_data->sfp_base) {
@@ -1583,24 +1590,24 @@ static unsigned int xgbe_phy_an_advertising(struct xgbe_prv_data *pdata)
 		case XGBE_SFP_BASE_1000_SX:
 		case XGBE_SFP_BASE_1000_LX:
 		case XGBE_SFP_BASE_1000_CX:
-			advertising |= ADVERTISED_1000baseKX_Full;
+			XGBE_SET_ADV(dlks, 1000baseKX_Full);
 			break;
 		default:
-			advertising |= ADVERTISED_10000baseKR_Full;
+			XGBE_SET_ADV(dlks, 10000baseKR_Full);
 			break;
 		}
 		break;
 	default:
-		advertising |= ADVERTISED_10000baseKR_Full;
+		XGBE_SET_ADV(dlks, 10000baseKR_Full);
 		break;
 	}
-
-	return advertising;
 }
 
 static int xgbe_phy_an_config(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
+	u32 advertising;
 	int ret;
 
 	ret = xgbe_phy_find_phy_device(pdata);
@@ -1610,9 +1617,12 @@ static int xgbe_phy_an_config(struct xgbe_prv_data *pdata)
 	if (!phy_data->phydev)
 		return 0;
 
+	ethtool_convert_link_mode_to_legacy_u32(&advertising,
+						lks->link_modes.advertising);
+
 	phy_data->phydev->autoneg = pdata->phy.autoneg;
 	phy_data->phydev->advertising = phy_data->phydev->supported &
-					pdata->phy.advertising;
+					advertising;
 
 	if (pdata->phy.autoneg != AUTONEG_ENABLE) {
 		phy_data->phydev->speed = pdata->phy.speed;
@@ -2073,11 +2083,10 @@ static void xgbe_phy_set_mode(struct xgbe_prv_data *pdata, enum xgbe_mode mode)
 }
 
 static bool xgbe_phy_check_mode(struct xgbe_prv_data *pdata,
-				enum xgbe_mode mode, u32 advert)
+				enum xgbe_mode mode, bool advert)
 {
 	if (pdata->phy.autoneg == AUTONEG_ENABLE) {
-		if (pdata->phy.advertising & advert)
-			return true;
+		return advert;
 	} else {
 		enum xgbe_mode cur_mode;
 
@@ -2092,13 +2101,15 @@ static bool xgbe_phy_check_mode(struct xgbe_prv_data *pdata,
 static bool xgbe_phy_use_basex_mode(struct xgbe_prv_data *pdata,
 				    enum xgbe_mode mode)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
 	switch (mode) {
 	case XGBE_MODE_X:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_1000baseT_Full);
+					   XGBE_ADV(lks, 1000baseX_Full));
 	case XGBE_MODE_KR:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_10000baseT_Full);
+					   XGBE_ADV(lks, 10000baseKR_Full));
 	default:
 		return false;
 	}
@@ -2107,19 +2118,21 @@ static bool xgbe_phy_use_basex_mode(struct xgbe_prv_data *pdata,
 static bool xgbe_phy_use_baset_mode(struct xgbe_prv_data *pdata,
 				    enum xgbe_mode mode)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
 	switch (mode) {
 	case XGBE_MODE_SGMII_100:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_100baseT_Full);
+					   XGBE_ADV(lks, 100baseT_Full));
 	case XGBE_MODE_SGMII_1000:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_1000baseT_Full);
+					   XGBE_ADV(lks, 1000baseT_Full));
 	case XGBE_MODE_KX_2500:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_2500baseX_Full);
+					   XGBE_ADV(lks, 2500baseT_Full));
 	case XGBE_MODE_KR:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_10000baseT_Full);
+					   XGBE_ADV(lks, 10000baseT_Full));
 	default:
 		return false;
 	}
@@ -2128,6 +2141,7 @@ static bool xgbe_phy_use_baset_mode(struct xgbe_prv_data *pdata,
 static bool xgbe_phy_use_sfp_mode(struct xgbe_prv_data *pdata,
 				  enum xgbe_mode mode)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 
 	switch (mode) {
@@ -2135,22 +2149,26 @@ static bool xgbe_phy_use_sfp_mode(struct xgbe_prv_data *pdata,
 		if (phy_data->sfp_base == XGBE_SFP_BASE_1000_T)
 			return false;
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_1000baseT_Full);
+					   XGBE_ADV(lks, 1000baseX_Full));
 	case XGBE_MODE_SGMII_100:
 		if (phy_data->sfp_base != XGBE_SFP_BASE_1000_T)
 			return false;
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_100baseT_Full);
+					   XGBE_ADV(lks, 100baseT_Full));
 	case XGBE_MODE_SGMII_1000:
 		if (phy_data->sfp_base != XGBE_SFP_BASE_1000_T)
 			return false;
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_1000baseT_Full);
+					   XGBE_ADV(lks, 1000baseT_Full));
 	case XGBE_MODE_SFI:
 		if (phy_data->sfp_mod_absent)
 			return true;
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_10000baseT_Full);
+					   XGBE_ADV(lks, 10000baseSR_Full)  ||
+					   XGBE_ADV(lks, 10000baseLR_Full)  ||
+					   XGBE_ADV(lks, 10000baseLRM_Full) ||
+					   XGBE_ADV(lks, 10000baseER_Full)  ||
+					   XGBE_ADV(lks, 10000baseCR_Full));
 	default:
 		return false;
 	}
@@ -2159,10 +2177,12 @@ static bool xgbe_phy_use_sfp_mode(struct xgbe_prv_data *pdata,
 static bool xgbe_phy_use_bp_2500_mode(struct xgbe_prv_data *pdata,
 				      enum xgbe_mode mode)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
 	switch (mode) {
 	case XGBE_MODE_KX_2500:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_2500baseX_Full);
+					   XGBE_ADV(lks, 2500baseX_Full));
 	default:
 		return false;
 	}
@@ -2171,13 +2191,15 @@ static bool xgbe_phy_use_bp_2500_mode(struct xgbe_prv_data *pdata,
 static bool xgbe_phy_use_bp_mode(struct xgbe_prv_data *pdata,
 				 enum xgbe_mode mode)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
+
 	switch (mode) {
 	case XGBE_MODE_KX_1000:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_1000baseKX_Full);
+					   XGBE_ADV(lks, 1000baseKX_Full));
 	case XGBE_MODE_KR:
 		return xgbe_phy_check_mode(pdata, mode,
-					   ADVERTISED_10000baseKR_Full);
+					   XGBE_ADV(lks, 10000baseKR_Full));
 	default:
 		return false;
 	}
@@ -2744,6 +2766,7 @@ static void xgbe_phy_exit(struct xgbe_prv_data *pdata)
 
 static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 {
+	struct ethtool_link_ksettings *lks = &pdata->phy.lks;
 	struct xgbe_phy_data *phy_data;
 	struct mii_bus *mii;
 	unsigned int reg;
@@ -2823,32 +2846,33 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 	phy_data->cur_mode = XGBE_MODE_UNKNOWN;
 
 	/* Initialize supported features */
-	pdata->phy.supported = 0;
+	XGBE_ZERO_SUP(lks);
 
 	switch (phy_data->port_mode) {
 	/* Backplane support */
 	case XGBE_PORT_MODE_BACKPLANE:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_Backplane;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, Backplane);
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) {
-			pdata->phy.supported |= SUPPORTED_1000baseKX_Full;
+			XGBE_SET_SUP(lks, 1000baseKX_Full);
 			phy_data->start_mode = XGBE_MODE_KX_1000;
 		}
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) {
-			pdata->phy.supported |= SUPPORTED_10000baseKR_Full;
+			XGBE_SET_SUP(lks, 10000baseKR_Full);
 			if (pdata->fec_ability & MDIO_PMA_10GBR_FECABLE_ABLE)
-				pdata->phy.supported |=
-					SUPPORTED_10000baseR_FEC;
+				XGBE_SET_SUP(lks, 10000baseR_FEC);
 			phy_data->start_mode = XGBE_MODE_KR;
 		}
 
 		phy_data->phydev_mode = XGBE_MDIO_MODE_NONE;
 		break;
 	case XGBE_PORT_MODE_BACKPLANE_2500:
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_Backplane;
-		pdata->phy.supported |= SUPPORTED_2500baseX_Full;
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, Backplane);
+		XGBE_SET_SUP(lks, 2500baseX_Full);
 		phy_data->start_mode = XGBE_MODE_KX_2500;
 
 		phy_data->phydev_mode = XGBE_MDIO_MODE_NONE;
@@ -2856,15 +2880,16 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	/* MDIO 1GBase-T support */
 	case XGBE_PORT_MODE_1000BASE_T:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_TP;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, TP);
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
+			XGBE_SET_SUP(lks, 100baseT_Full);
 			phy_data->start_mode = XGBE_MODE_SGMII_100;
 		}
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) {
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+			XGBE_SET_SUP(lks, 1000baseT_Full);
 			phy_data->start_mode = XGBE_MODE_SGMII_1000;
 		}
 
@@ -2873,10 +2898,11 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	/* MDIO Base-X support */
 	case XGBE_PORT_MODE_1000BASE_X:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_FIBRE;
-		pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, FIBRE);
+		XGBE_SET_SUP(lks, 1000baseX_Full);
 		phy_data->start_mode = XGBE_MODE_X;
 
 		phy_data->phydev_mode = XGBE_MDIO_MODE_CL22;
@@ -2884,19 +2910,20 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	/* MDIO NBase-T support */
 	case XGBE_PORT_MODE_NBASE_T:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_TP;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, TP);
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
+			XGBE_SET_SUP(lks, 100baseT_Full);
 			phy_data->start_mode = XGBE_MODE_SGMII_100;
 		}
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) {
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+			XGBE_SET_SUP(lks, 1000baseT_Full);
 			phy_data->start_mode = XGBE_MODE_SGMII_1000;
 		}
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_2500) {
-			pdata->phy.supported |= SUPPORTED_2500baseX_Full;
+			XGBE_SET_SUP(lks, 2500baseT_Full);
 			phy_data->start_mode = XGBE_MODE_KX_2500;
 		}
 
@@ -2905,19 +2932,20 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	/* 10GBase-T support */
 	case XGBE_PORT_MODE_10GBASE_T:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_TP;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, TP);
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
+			XGBE_SET_SUP(lks, 100baseT_Full);
 			phy_data->start_mode = XGBE_MODE_SGMII_100;
 		}
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) {
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+			XGBE_SET_SUP(lks, 1000baseT_Full);
 			phy_data->start_mode = XGBE_MODE_SGMII_1000;
 		}
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) {
-			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
+			XGBE_SET_SUP(lks, 10000baseT_Full);
 			phy_data->start_mode = XGBE_MODE_KR;
 		}
 
@@ -2926,12 +2954,16 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	/* 10GBase-R support */
 	case XGBE_PORT_MODE_10GBASE_R:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_TP;
-		pdata->phy.supported |= SUPPORTED_10000baseT_Full;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, FIBRE);
+		XGBE_SET_SUP(lks, 10000baseSR_Full);
+		XGBE_SET_SUP(lks, 10000baseLR_Full);
+		XGBE_SET_SUP(lks, 10000baseLRM_Full);
+		XGBE_SET_SUP(lks, 10000baseER_Full);
 		if (pdata->fec_ability & MDIO_PMA_10GBR_FECABLE_ABLE)
-			pdata->phy.supported |= SUPPORTED_10000baseR_FEC;
+			XGBE_SET_SUP(lks, 10000baseR_FEC);
 		phy_data->start_mode = XGBE_MODE_SFI;
 
 		phy_data->phydev_mode = XGBE_MDIO_MODE_NONE;
@@ -2939,22 +2971,17 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 
 	/* SFP support */
 	case XGBE_PORT_MODE_SFP:
-		pdata->phy.supported |= SUPPORTED_Autoneg;
-		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
-		pdata->phy.supported |= SUPPORTED_TP;
-		pdata->phy.supported |= SUPPORTED_FIBRE;
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100) {
-			pdata->phy.supported |= SUPPORTED_100baseT_Full;
+		XGBE_SET_SUP(lks, Autoneg);
+		XGBE_SET_SUP(lks, Pause);
+		XGBE_SET_SUP(lks, Asym_Pause);
+		XGBE_SET_SUP(lks, TP);
+		XGBE_SET_SUP(lks, FIBRE);
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
 			phy_data->start_mode = XGBE_MODE_SGMII_100;
-		}
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) {
-			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
 			phy_data->start_mode = XGBE_MODE_SGMII_1000;
-		}
-		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) {
-			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
 			phy_data->start_mode = XGBE_MODE_SFI;
-		}
 
 		phy_data->phydev_mode = XGBE_MDIO_MODE_CL22;
 
@@ -2965,8 +2992,9 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 	}
 
 	if (netif_msg_probe(pdata))
-		dev_dbg(pdata->dev, "phy supported=%#x\n",
-			pdata->phy.supported);
+		dev_dbg(pdata->dev, "phy supported=0x%*pb\n",
+			__ETHTOOL_LINK_MODE_MASK_NBITS,
+			lks->link_modes.supported);
 
 	if ((phy_data->conn_type & XGBE_CONN_TYPE_MDIO) &&
 	    (phy_data->phydev_mode != XGBE_MDIO_MODE_NONE)) {
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 0e93155..48a46a7 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -131,6 +131,7 @@
 #include <linux/cpumask.h>
 #include <linux/interrupt.h>
 #include <linux/dcache.h>
+#include <linux/ethtool.h>
 
 #define XGBE_DRV_NAME		"amd-xgbe"
 #define XGBE_DRV_VERSION	"1.0.3"
@@ -296,6 +297,48 @@
 /* MDIO port types */
 #define XGMAC_MAX_C22_PORT		3
 
+/* Link mode bit operations */
+#define XGBE_ZERO_SUP(_ls)		\
+	ethtool_link_ksettings_zero_link_mode((_ls), supported)
+
+#define XGBE_SET_SUP(_ls, _mode)	\
+	ethtool_link_ksettings_add_link_mode((_ls), supported, _mode)
+
+#define XGBE_CLR_SUP(_ls, _mode)	\
+	ethtool_link_ksettings_del_link_mode((_ls), supported, _mode)
+
+#define XGBE_IS_SUP(_ls, _mode)	\
+	ethtool_link_ksettings_test_link_mode((_ls), supported, _mode)
+
+#define XGBE_ZERO_ADV(_ls)		\
+	ethtool_link_ksettings_zero_link_mode((_ls), advertising)
+
+#define XGBE_SET_ADV(_ls, _mode)	\
+	ethtool_link_ksettings_add_link_mode((_ls), advertising, _mode)
+
+#define XGBE_CLR_ADV(_ls, _mode)	\
+	ethtool_link_ksettings_del_link_mode((_ls), advertising, _mode)
+
+#define XGBE_ADV(_ls, _mode)		\
+	ethtool_link_ksettings_test_link_mode((_ls), advertising, _mode)
+
+#define XGBE_ZERO_LP_ADV(_ls)		\
+	ethtool_link_ksettings_zero_link_mode((_ls), lp_advertising)
+
+#define XGBE_SET_LP_ADV(_ls, _mode)	\
+	ethtool_link_ksettings_add_link_mode((_ls), lp_advertising, _mode)
+
+#define XGBE_CLR_LP_ADV(_ls, _mode)	\
+	ethtool_link_ksettings_del_link_mode((_ls), lp_advertising, _mode)
+
+#define XGBE_LP_ADV(_ls, _mode)		\
+	ethtool_link_ksettings_test_link_mode((_ls), lp_advertising, _mode)
+
+#define XGBE_LM_COPY(_dst, _dname, _src, _sname)	\
+	bitmap_copy((_dst)->link_modes._dname,		\
+		    (_src)->link_modes._sname,		\
+		    __ETHTOOL_LINK_MODE_MASK_NBITS)
+
 struct xgbe_prv_data;
 
 struct xgbe_packet_data {
@@ -563,9 +606,7 @@ enum xgbe_mdio_mode {
 };
 
 struct xgbe_phy {
-	u32 supported;
-	u32 advertising;
-	u32 lp_advertising;
+	struct ethtool_link_ksettings lks;
 
 	int address;
 
@@ -817,7 +858,8 @@ struct xgbe_phy_impl_if {
 	int (*an_config)(struct xgbe_prv_data *);
 
 	/* Set/override auto-negotiation advertisement settings */
-	unsigned int (*an_advertising)(struct xgbe_prv_data *);
+	void (*an_advertising)(struct xgbe_prv_data *,
+			       struct ethtool_link_ksettings *);
 
 	/* Process results of auto-negotiation */
 	enum xgbe_mode (*an_outcome)(struct xgbe_prv_data *);

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v2 12/13] amd-xgbe: Add support for VXLAN offload capabilities
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (10 preceding siblings ...)
  2017-08-18 14:03 ` [PATCH net-next v2 11/13] amd-xgbe: Convert to using the new link mode settings Tom Lendacky
@ 2017-08-18 14:04 ` Tom Lendacky
  2017-08-18 14:04 ` [PATCH net-next v2 13/13] amd-xgbe: Add additional ethtool statistics Tom Lendacky
  2017-08-18 23:31 ` [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 David Miller
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:04 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The hardware is capable of checksum offload (both Tx and Rx) and TSO
for VXLAN packets. Add the support required to enable these
capabilities.

The hardware can only support a single VXLAN port for offload. If more
than one VXLAN port is added then the offload capabilities have to be
disabled and can no longer be advertised.
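
The single-port handling comes down to simple bookkeeping: track every
port offered by the stack, program a single port into the hardware, and
stop advertising the offloads once the ports can no longer be tracked.
Below is a minimal standalone C sketch of that bookkeeping; the names,
types and list handling are illustrative only and are not taken from
this patch.

/*
 * Minimal sketch of the port bookkeeping described above; illustrative
 * names and types only, not driver code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct vxlan_port {
	uint16_t port;
	struct vxlan_port *next;
};

struct dev_state {
	struct vxlan_port *ports;	/* every port offered by the stack */
	uint16_t hw_port;		/* the one port the hardware offloads */
	bool offloads_enabled;
};

static void vxlan_port_add(struct dev_state *dev, uint16_t port)
{
	struct vxlan_port *p = malloc(sizeof(*p));

	if (!p) {
		/* Ports can no longer be tracked: stop advertising offloads */
		dev->offloads_enabled = false;
		return;
	}

	p->port = port;
	p->next = dev->ports;
	dev->ports = p;

	/* Only the first offered port can be offloaded by the hardware */
	if (!dev->hw_port) {
		dev->hw_port = port;
		dev->offloads_enabled = true;
	}
}

int main(void)
{
	struct dev_state dev = { 0 };
	struct vxlan_port *p;

	vxlan_port_add(&dev, 4789);	/* IANA VXLAN port: offloaded */
	vxlan_port_add(&dev, 8472);	/* second port: tracked, not offloaded */

	printf("hw port=%u offloads=%s\n", (unsigned int)dev.hw_port,
	       dev.offloads_enabled ? "on" : "off");

	while (dev.ports) {
		p = dev.ports;
		dev.ports = p->next;
		free(p);
	}

	return 0;
}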

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-common.h |   24 ++
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c    |   92 +++++++
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c    |  365 +++++++++++++++++++++++++++
 drivers/net/ethernet/amd/xgbe/xgbe-main.c   |   23 ++
 drivers/net/ethernet/amd/xgbe/xgbe.h        |   22 ++
 5 files changed, 520 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index d07edf9..9431330 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -298,6 +298,7 @@
 #define MAC_RWKPFR			0x00c4
 #define MAC_LPICSR			0x00d0
 #define MAC_LPITCR			0x00d4
+#define MAC_TIR				0x00e0
 #define MAC_VR				0x0110
 #define MAC_DR				0x0114
 #define MAC_HWF0R			0x011c
@@ -364,6 +365,8 @@
 #define MAC_HWF0R_TXCOESEL_WIDTH	1
 #define MAC_HWF0R_VLHASH_INDEX		4
 #define MAC_HWF0R_VLHASH_WIDTH		1
+#define MAC_HWF0R_VXN_INDEX		29
+#define MAC_HWF0R_VXN_WIDTH		1
 #define MAC_HWF1R_ADDR64_INDEX		14
 #define MAC_HWF1R_ADDR64_WIDTH		2
 #define MAC_HWF1R_ADVTHWORD_INDEX	13
@@ -448,6 +451,8 @@
 #define MAC_PFR_PR_WIDTH		1
 #define MAC_PFR_VTFE_INDEX		16
 #define MAC_PFR_VTFE_WIDTH		1
+#define MAC_PFR_VUCC_INDEX		22
+#define MAC_PFR_VUCC_WIDTH		1
 #define MAC_PMTCSR_MGKPKTEN_INDEX	1
 #define MAC_PMTCSR_MGKPKTEN_WIDTH	1
 #define MAC_PMTCSR_PWRDWN_INDEX		0
@@ -510,6 +515,12 @@
 #define MAC_TCR_SS_WIDTH		2
 #define MAC_TCR_TE_INDEX		0
 #define MAC_TCR_TE_WIDTH		1
+#define MAC_TCR_VNE_INDEX		24
+#define MAC_TCR_VNE_WIDTH		1
+#define MAC_TCR_VNM_INDEX		25
+#define MAC_TCR_VNM_WIDTH		1
+#define MAC_TIR_TNID_INDEX		0
+#define MAC_TIR_TNID_WIDTH		16
 #define MAC_TSCR_AV8021ASMEN_INDEX	28
 #define MAC_TSCR_AV8021ASMEN_WIDTH	1
 #define MAC_TSCR_SNAPTYPSEL_INDEX	16
@@ -1153,11 +1164,17 @@
 #define RX_PACKET_ATTRIBUTES_RSS_HASH_WIDTH	1
 #define RX_PACKET_ATTRIBUTES_FIRST_INDEX	7
 #define RX_PACKET_ATTRIBUTES_FIRST_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_TNP_INDEX		8
+#define RX_PACKET_ATTRIBUTES_TNP_WIDTH		1
+#define RX_PACKET_ATTRIBUTES_TNPCSUM_DONE_INDEX	9
+#define RX_PACKET_ATTRIBUTES_TNPCSUM_DONE_WIDTH	1
 
 #define RX_NORMAL_DESC0_OVT_INDEX		0
 #define RX_NORMAL_DESC0_OVT_WIDTH		16
 #define RX_NORMAL_DESC2_HL_INDEX		0
 #define RX_NORMAL_DESC2_HL_WIDTH		10
+#define RX_NORMAL_DESC2_TNP_INDEX		11
+#define RX_NORMAL_DESC2_TNP_WIDTH		1
 #define RX_NORMAL_DESC3_CDA_INDEX		27
 #define RX_NORMAL_DESC3_CDA_WIDTH		1
 #define RX_NORMAL_DESC3_CTXT_INDEX		30
@@ -1184,9 +1201,11 @@
 #define RX_DESC3_L34T_IPV4_TCP			1
 #define RX_DESC3_L34T_IPV4_UDP			2
 #define RX_DESC3_L34T_IPV4_ICMP			3
+#define RX_DESC3_L34T_IPV4_UNKNOWN		7
 #define RX_DESC3_L34T_IPV6_TCP			9
 #define RX_DESC3_L34T_IPV6_UDP			10
 #define RX_DESC3_L34T_IPV6_ICMP			11
+#define RX_DESC3_L34T_IPV6_UNKNOWN		15
 
 #define RX_CONTEXT_DESC3_TSA_INDEX		4
 #define RX_CONTEXT_DESC3_TSA_WIDTH		1
@@ -1201,6 +1220,8 @@
 #define TX_PACKET_ATTRIBUTES_VLAN_CTAG_WIDTH	1
 #define TX_PACKET_ATTRIBUTES_PTP_INDEX		3
 #define TX_PACKET_ATTRIBUTES_PTP_WIDTH		1
+#define TX_PACKET_ATTRIBUTES_VXLAN_INDEX	4
+#define TX_PACKET_ATTRIBUTES_VXLAN_WIDTH	1
 
 #define TX_CONTEXT_DESC2_MSS_INDEX		0
 #define TX_CONTEXT_DESC2_MSS_WIDTH		15
@@ -1241,8 +1262,11 @@
 #define TX_NORMAL_DESC3_TCPPL_WIDTH		18
 #define TX_NORMAL_DESC3_TSE_INDEX		18
 #define TX_NORMAL_DESC3_TSE_WIDTH		1
+#define TX_NORMAL_DESC3_VNP_INDEX		23
+#define TX_NORMAL_DESC3_VNP_WIDTH		3
 
 #define TX_NORMAL_DESC2_VLAN_INSERT		0x2
+#define TX_NORMAL_DESC3_VXLAN_PACKET		0x3
 
 /* MDIO undefined or vendor specific registers */
 #ifndef MDIO_PMA_10GBR_PMD_CTRL
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index a978408..1bf671e 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -479,6 +479,50 @@ static bool xgbe_is_pfc_queue(struct xgbe_prv_data *pdata,
 	return false;
 }
 
+static void xgbe_set_vxlan_id(struct xgbe_prv_data *pdata)
+{
+	/* Program the VXLAN port */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TIR, TNID, pdata->vxlan_port);
+
+	netif_dbg(pdata, drv, pdata->netdev, "VXLAN tunnel id set to %hx\n",
+		  pdata->vxlan_port);
+}
+
+static void xgbe_enable_vxlan(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->hw_feat.vxn)
+		return;
+
+	/* Program the VXLAN port */
+	xgbe_set_vxlan_id(pdata);
+
+	/* Allow for IPv6/UDP zero-checksum VXLAN packets */
+	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, VUCC, 1);
+
+	/* Enable VXLAN tunneling mode */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, VNM, 0);
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, VNE, 1);
+
+	netif_dbg(pdata, drv, pdata->netdev, "VXLAN acceleration enabled\n");
+}
+
+static void xgbe_disable_vxlan(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->hw_feat.vxn)
+		return;
+
+	/* Disable tunneling mode */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, VNE, 0);
+
+	/* Clear IPv6/UDP zero-checksum VXLAN packets setting */
+	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, VUCC, 0);
+
+	/* Clear the VXLAN port */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TIR, TNID, 0);
+
+	netif_dbg(pdata, drv, pdata->netdev, "VXLAN acceleration disabled\n");
+}
+
 static int xgbe_disable_tx_flow_control(struct xgbe_prv_data *pdata)
 {
 	unsigned int max_q_count, q_count;
@@ -1610,7 +1654,7 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 	struct xgbe_ring_desc *rdesc;
 	struct xgbe_packet_data *packet = &ring->packet_data;
 	unsigned int tx_packets, tx_bytes;
-	unsigned int csum, tso, vlan;
+	unsigned int csum, tso, vlan, vxlan;
 	unsigned int tso_context, vlan_context;
 	unsigned int tx_set_ic;
 	int start_index = ring->cur;
@@ -1628,6 +1672,8 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 			     TSO_ENABLE);
 	vlan = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
 			      VLAN_CTAG);
+	vxlan = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       VXLAN);
 
 	if (tso && (packet->mss != ring->tx.cur_mss))
 		tso_context = 1;
@@ -1759,6 +1805,10 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 				  packet->length);
 	}
 
+	if (vxlan)
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, VNP,
+				  TX_NORMAL_DESC3_VXLAN_PACKET);
+
 	for (i = cur_index - start_index + 1; i < packet->rdesc_count; i++) {
 		cur_index++;
 		rdata = XGBE_GET_DESC_DATA(ring, cur_index);
@@ -1920,9 +1970,27 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 	rdata->rx.len = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, PL);
 
 	/* Set checksum done indicator as appropriate */
-	if (netdev->features & NETIF_F_RXCSUM)
+	if (netdev->features & NETIF_F_RXCSUM) {
 		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 			       CSUM_DONE, 1);
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       TNPCSUM_DONE, 1);
+	}
+
+	/* Set the tunneled packet indicator */
+	if (XGMAC_GET_BITS_LE(rdesc->desc2, RX_NORMAL_DESC2, TNP)) {
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       TNP, 1);
+
+		l34t = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, L34T);
+		switch (l34t) {
+		case RX_DESC3_L34T_IPV4_UNKNOWN:
+		case RX_DESC3_L34T_IPV6_UNKNOWN:
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       TNPCSUM_DONE, 0);
+			break;
+		}
+	}
 
 	/* Check for errors (only valid in last descriptor) */
 	err = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ES);
@@ -1942,12 +2010,23 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 				  packet->vlan_ctag);
 		}
 	} else {
-		if ((etlt == 0x05) || (etlt == 0x06))
+		unsigned int tnp = XGMAC_GET_BITS(packet->attributes,
+						  RX_PACKET_ATTRIBUTES, TNP);
+
+		if ((etlt == 0x05) || (etlt == 0x06)) {
 			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 				       CSUM_DONE, 0);
-		else
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       TNPCSUM_DONE, 0);
+		} else if (tnp && ((etlt == 0x09) || (etlt == 0x0a))) {
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       CSUM_DONE, 0);
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       TNPCSUM_DONE, 0);
+		} else {
 			XGMAC_SET_BITS(packet->errors, RX_PACKET_ERRORS,
 				       FRAME, 1);
+		}
 	}
 
 	pdata->ext_stats.rxq_packets[channel->queue_index]++;
@@ -3536,5 +3615,10 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
 	hw_if->disable_ecc_ded = xgbe_disable_ecc_ded;
 	hw_if->disable_ecc_sec = xgbe_disable_ecc_sec;
 
+	/* For VXLAN */
+	hw_if->enable_vxlan = xgbe_enable_vxlan;
+	hw_if->disable_vxlan = xgbe_disable_vxlan;
+	hw_if->set_vxlan_id = xgbe_set_vxlan_id;
+
 	DBGPR("<--xgbe_init_function_ptrs\n");
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index 7498bb8..608693d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -124,6 +124,7 @@
 #include <linux/if_ether.h>
 #include <linux/net_tstamp.h>
 #include <linux/phy.h>
+#include <net/vxlan.h>
 
 #include "xgbe.h"
 #include "xgbe-common.h"
@@ -756,6 +757,7 @@ void xgbe_get_all_hw_features(struct xgbe_prv_data *pdata)
 					      ADDMACADRSEL);
 	hw_feat->ts_src      = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TSSTSSEL);
 	hw_feat->sa_vlan_ins = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, SAVLANINS);
+	hw_feat->vxn         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, VXN);
 
 	/* Hardware feature register 1 */
 	hw_feat->rx_fifo_size  = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R,
@@ -860,6 +862,8 @@ void xgbe_get_all_hw_features(struct xgbe_prv_data *pdata)
 			(hw_feat->ts_src == 3) ? "internal/external" : "n/a");
 		dev_dbg(pdata->dev, "  SA/VLAN insertion         : %s\n",
 			hw_feat->sa_vlan_ins ? "yes" : "no");
+		dev_dbg(pdata->dev, "  VXLAN/NVGRE support       : %s\n",
+			hw_feat->vxn ? "yes" : "no");
 
 		/* Hardware feature register 1 */
 		dev_dbg(pdata->dev, "  RX fifo size              : %u\n",
@@ -903,6 +907,116 @@ void xgbe_get_all_hw_features(struct xgbe_prv_data *pdata)
 	}
 }
 
+static void xgbe_disable_vxlan_offloads(struct xgbe_prv_data *pdata)
+{
+	struct net_device *netdev = pdata->netdev;
+
+	if (!pdata->vxlan_offloads_set)
+		return;
+
+	netdev_info(netdev, "disabling VXLAN offloads\n");
+
+	netdev->hw_enc_features &= ~(NETIF_F_SG |
+				     NETIF_F_IP_CSUM |
+				     NETIF_F_IPV6_CSUM |
+				     NETIF_F_RXCSUM |
+				     NETIF_F_TSO |
+				     NETIF_F_TSO6 |
+				     NETIF_F_GRO |
+				     NETIF_F_GSO_UDP_TUNNEL |
+				     NETIF_F_GSO_UDP_TUNNEL_CSUM);
+
+	netdev->features &= ~(NETIF_F_GSO_UDP_TUNNEL |
+			      NETIF_F_GSO_UDP_TUNNEL_CSUM);
+
+	pdata->vxlan_offloads_set = 0;
+}
+
+static void xgbe_disable_vxlan_hw(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->vxlan_port_set)
+		return;
+
+	pdata->hw_if.disable_vxlan(pdata);
+
+	pdata->vxlan_port_set = 0;
+	pdata->vxlan_port = 0;
+}
+
+static void xgbe_disable_vxlan_accel(struct xgbe_prv_data *pdata)
+{
+	xgbe_disable_vxlan_offloads(pdata);
+
+	xgbe_disable_vxlan_hw(pdata);
+}
+
+static void xgbe_enable_vxlan_offloads(struct xgbe_prv_data *pdata)
+{
+	struct net_device *netdev = pdata->netdev;
+
+	if (pdata->vxlan_offloads_set)
+		return;
+
+	netdev_info(netdev, "enabling VXLAN offloads\n");
+
+	netdev->hw_enc_features |= NETIF_F_SG |
+				   NETIF_F_IP_CSUM |
+				   NETIF_F_IPV6_CSUM |
+				   NETIF_F_RXCSUM |
+				   NETIF_F_TSO |
+				   NETIF_F_TSO6 |
+				   NETIF_F_GRO |
+				   pdata->vxlan_features;
+
+	netdev->features |= pdata->vxlan_features;
+
+	pdata->vxlan_offloads_set = 1;
+}
+
+static void xgbe_enable_vxlan_hw(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_vxlan_data *vdata;
+
+	if (pdata->vxlan_port_set)
+		return;
+
+	if (list_empty(&pdata->vxlan_ports))
+		return;
+
+	vdata = list_first_entry(&pdata->vxlan_ports,
+				 struct xgbe_vxlan_data, list);
+
+	pdata->vxlan_port_set = 1;
+	pdata->vxlan_port = be16_to_cpu(vdata->port);
+
+	pdata->hw_if.enable_vxlan(pdata);
+}
+
+static void xgbe_enable_vxlan_accel(struct xgbe_prv_data *pdata)
+{
+	/* VXLAN acceleration desired? */
+	if (!pdata->vxlan_features)
+		return;
+
+	/* VXLAN acceleration possible? */
+	if (pdata->vxlan_force_disable)
+		return;
+
+	xgbe_enable_vxlan_hw(pdata);
+
+	xgbe_enable_vxlan_offloads(pdata);
+}
+
+static void xgbe_reset_vxlan_accel(struct xgbe_prv_data *pdata)
+{
+	xgbe_disable_vxlan_hw(pdata);
+
+	if (pdata->vxlan_features)
+		xgbe_enable_vxlan_offloads(pdata);
+
+	pdata->vxlan_force_disable = 0;
+}
+
 static void xgbe_napi_enable(struct xgbe_prv_data *pdata, unsigned int add)
 {
 	struct xgbe_channel *channel;
@@ -1226,6 +1340,8 @@ static int xgbe_start(struct xgbe_prv_data *pdata)
 	hw_if->enable_tx(pdata);
 	hw_if->enable_rx(pdata);
 
+	udp_tunnel_get_rx_info(netdev);
+
 	netif_tx_start_all_queues(netdev);
 
 	xgbe_start_timers(pdata);
@@ -1267,6 +1383,8 @@ static void xgbe_stop(struct xgbe_prv_data *pdata)
 	xgbe_stop_timers(pdata);
 	flush_workqueue(pdata->dev_workqueue);
 
+	xgbe_reset_vxlan_accel(pdata);
+
 	hw_if->disable_tx(pdata);
 	hw_if->disable_rx(pdata);
 
@@ -1555,10 +1673,18 @@ static int xgbe_prep_tso(struct sk_buff *skb, struct xgbe_packet_data *packet)
 	if (ret)
 		return ret;
 
-	packet->header_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
-	packet->tcp_header_len = tcp_hdrlen(skb);
+	if (XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES, VXLAN)) {
+		packet->header_len = skb_inner_transport_offset(skb) +
+				     inner_tcp_hdrlen(skb);
+		packet->tcp_header_len = inner_tcp_hdrlen(skb);
+	} else {
+		packet->header_len = skb_transport_offset(skb) +
+				     tcp_hdrlen(skb);
+		packet->tcp_header_len = tcp_hdrlen(skb);
+	}
 	packet->tcp_payload_len = skb->len - packet->header_len;
 	packet->mss = skb_shinfo(skb)->gso_size;
+
 	DBGPR("  packet->header_len=%u\n", packet->header_len);
 	DBGPR("  packet->tcp_header_len=%u, packet->tcp_payload_len=%u\n",
 	      packet->tcp_header_len, packet->tcp_payload_len);
@@ -1573,6 +1699,49 @@ static int xgbe_prep_tso(struct sk_buff *skb, struct xgbe_packet_data *packet)
 	return 0;
 }
 
+static bool xgbe_is_vxlan(struct xgbe_prv_data *pdata, struct sk_buff *skb)
+{
+	struct xgbe_vxlan_data *vdata;
+
+	if (pdata->vxlan_force_disable)
+		return false;
+
+	if (!skb->encapsulation)
+		return false;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return false;
+
+	switch (skb->protocol) {
+	case htons(ETH_P_IP):
+		if (ip_hdr(skb)->protocol != IPPROTO_UDP)
+			return false;
+		break;
+
+	case htons(ETH_P_IPV6):
+		if (ipv6_hdr(skb)->nexthdr != IPPROTO_UDP)
+			return false;
+		break;
+
+	default:
+		return false;
+	}
+
+	/* See if we have the UDP port in our list */
+	list_for_each_entry(vdata, &pdata->vxlan_ports, list) {
+		if ((skb->protocol == htons(ETH_P_IP)) &&
+		    (vdata->sa_family == AF_INET) &&
+		    (vdata->port == udp_hdr(skb)->dest))
+			return true;
+		else if ((skb->protocol == htons(ETH_P_IPV6)) &&
+			 (vdata->sa_family == AF_INET6) &&
+			 (vdata->port == udp_hdr(skb)->dest))
+			return true;
+	}
+
+	return false;
+}
+
 static int xgbe_is_tso(struct sk_buff *skb)
 {
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
@@ -1621,6 +1790,10 @@ static void xgbe_packet_info(struct xgbe_prv_data *pdata,
 		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
 			       CSUM_ENABLE, 1);
 
+	if (xgbe_is_vxlan(pdata, skb))
+		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       VXLAN, 1);
+
 	if (skb_vlan_tag_present(skb)) {
 		/* VLAN requires an extra descriptor if tag is different */
 		if (skb_vlan_tag_get(skb) != ring->tx.cur_vlan_ctag)
@@ -2050,18 +2223,83 @@ static int xgbe_setup_tc(struct net_device *netdev, enum tc_setup_type type,
 	return 0;
 }
 
+static netdev_features_t xgbe_fix_features(struct net_device *netdev,
+					   netdev_features_t features)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	netdev_features_t vxlan_base, vxlan_mask;
+
+	vxlan_base = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_RX_UDP_TUNNEL_PORT;
+	vxlan_mask = vxlan_base | NETIF_F_GSO_UDP_TUNNEL_CSUM;
+
+	pdata->vxlan_features = features & vxlan_mask;
+
+	/* Only fix VXLAN-related features */
+	if (!pdata->vxlan_features)
+		return features;
+
+	/* If VXLAN isn't supported then clear any features:
+	 *   This is needed because NETIF_F_RX_UDP_TUNNEL_PORT gets
+	 *   automatically set if ndo_udp_tunnel_add is set.
+	 */
+	if (!pdata->hw_feat.vxn)
+		return features & ~vxlan_mask;
+
+	/* VXLAN CSUM requires VXLAN base */
+	if ((features & NETIF_F_GSO_UDP_TUNNEL_CSUM) &&
+	    !(features & NETIF_F_GSO_UDP_TUNNEL)) {
+		netdev_notice(netdev,
+			      "forcing tx udp tunnel support\n");
+		features |= NETIF_F_GSO_UDP_TUNNEL;
+	}
+
+	/* Can't do one without doing the other */
+	if ((features & vxlan_base) != vxlan_base) {
+		netdev_notice(netdev,
+			      "forcing both tx and rx udp tunnel support\n");
+		features |= vxlan_base;
+	}
+
+	if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
+		if (!(features & NETIF_F_GSO_UDP_TUNNEL_CSUM)) {
+			netdev_notice(netdev,
+				      "forcing tx udp tunnel checksumming on\n");
+			features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+		}
+	} else {
+		if (features & NETIF_F_GSO_UDP_TUNNEL_CSUM) {
+			netdev_notice(netdev,
+				      "forcing tx udp tunnel checksumming off\n");
+			features &= ~NETIF_F_GSO_UDP_TUNNEL_CSUM;
+		}
+	}
+
+	pdata->vxlan_features = features & vxlan_mask;
+
+	/* Adjust UDP Tunnel based on current state */
+	if (pdata->vxlan_force_disable) {
+		netdev_notice(netdev,
+			      "VXLAN acceleration disabled, turning off udp tunnel features\n");
+		features &= ~vxlan_mask;
+	}
+
+	return features;
+}
+
 static int xgbe_set_features(struct net_device *netdev,
 			     netdev_features_t features)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	netdev_features_t rxhash, rxcsum, rxvlan, rxvlan_filter;
+	netdev_features_t udp_tunnel;
 	int ret = 0;
 
 	rxhash = pdata->netdev_features & NETIF_F_RXHASH;
 	rxcsum = pdata->netdev_features & NETIF_F_RXCSUM;
 	rxvlan = pdata->netdev_features & NETIF_F_HW_VLAN_CTAG_RX;
 	rxvlan_filter = pdata->netdev_features & NETIF_F_HW_VLAN_CTAG_FILTER;
+	udp_tunnel = pdata->netdev_features & NETIF_F_GSO_UDP_TUNNEL;
 
 	if ((features & NETIF_F_RXHASH) && !rxhash)
 		ret = hw_if->enable_rss(pdata);
@@ -2085,6 +2323,11 @@ static int xgbe_set_features(struct net_device *netdev,
 	else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && rxvlan_filter)
 		hw_if->disable_rx_vlan_filtering(pdata);
 
+	if ((features & NETIF_F_GSO_UDP_TUNNEL) && !udp_tunnel)
+		xgbe_enable_vxlan_accel(pdata);
+	else if (!(features & NETIF_F_GSO_UDP_TUNNEL) && udp_tunnel)
+		xgbe_disable_vxlan_accel(pdata);
+
 	pdata->netdev_features = features;
 
 	DBGPR("<--xgbe_set_features\n");
@@ -2092,6 +2335,111 @@ static int xgbe_set_features(struct net_device *netdev,
 	return 0;
 }
 
+static void xgbe_udp_tunnel_add(struct net_device *netdev,
+				struct udp_tunnel_info *ti)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_vxlan_data *vdata;
+
+	if (!pdata->hw_feat.vxn)
+		return;
+
+	if (ti->type != UDP_TUNNEL_TYPE_VXLAN)
+		return;
+
+	pdata->vxlan_port_count++;
+
+	netif_dbg(pdata, drv, netdev,
+		  "adding VXLAN tunnel, family=%hx/port=%hx\n",
+		  ti->sa_family, be16_to_cpu(ti->port));
+
+	if (pdata->vxlan_force_disable)
+		return;
+
+	vdata = kzalloc(sizeof(*vdata), GFP_ATOMIC);
+	if (!vdata) {
+		/* Can no longer properly track VXLAN ports */
+		pdata->vxlan_force_disable = 1;
+		netif_dbg(pdata, drv, netdev,
+			  "internal error, disabling VXLAN accelerations\n");
+
+		xgbe_disable_vxlan_accel(pdata);
+
+		return;
+	}
+	vdata->sa_family = ti->sa_family;
+	vdata->port = ti->port;
+
+	list_add_tail(&vdata->list, &pdata->vxlan_ports);
+
+	/* First port added? */
+	if (pdata->vxlan_port_count == 1) {
+		xgbe_enable_vxlan_accel(pdata);
+
+		return;
+	}
+}
+
+static void xgbe_udp_tunnel_del(struct net_device *netdev,
+				struct udp_tunnel_info *ti)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_vxlan_data *vdata;
+
+	if (!pdata->hw_feat.vxn)
+		return;
+
+	if (ti->type != UDP_TUNNEL_TYPE_VXLAN)
+		return;
+
+	netif_dbg(pdata, drv, netdev,
+		  "deleting VXLAN tunnel, family=%hx/port=%hx\n",
+		  ti->sa_family, be16_to_cpu(ti->port));
+
+	/* Don't need safe version since loop terminates with deletion */
+	list_for_each_entry(vdata, &pdata->vxlan_ports, list) {
+		if (vdata->sa_family != ti->sa_family)
+			continue;
+
+		if (vdata->port != ti->port)
+			continue;
+
+		list_del(&vdata->list);
+		kfree(vdata);
+
+		break;
+	}
+
+	pdata->vxlan_port_count--;
+	if (!pdata->vxlan_port_count) {
+		xgbe_reset_vxlan_accel(pdata);
+
+		return;
+	}
+
+	if (pdata->vxlan_force_disable)
+		return;
+
+	/* See if VXLAN tunnel id needs to be changed */
+	vdata = list_first_entry(&pdata->vxlan_ports,
+				 struct xgbe_vxlan_data, list);
+	if (pdata->vxlan_port == be16_to_cpu(vdata->port))
+		return;
+
+	pdata->vxlan_port = be16_to_cpu(vdata->port);
+	pdata->hw_if.set_vxlan_id(pdata);
+}
+
+static netdev_features_t xgbe_features_check(struct sk_buff *skb,
+					     struct net_device *netdev,
+					     netdev_features_t features)
+{
+	features = vlan_features_check(skb, features);
+	features = vxlan_features_check(skb, features);
+
+	return features;
+}
+
 static const struct net_device_ops xgbe_netdev_ops = {
 	.ndo_open		= xgbe_open,
 	.ndo_stop		= xgbe_close,
@@ -2109,7 +2457,11 @@ static int xgbe_set_features(struct net_device *netdev,
 	.ndo_poll_controller	= xgbe_poll_controller,
 #endif
 	.ndo_setup_tc		= xgbe_setup_tc,
+	.ndo_fix_features	= xgbe_fix_features,
 	.ndo_set_features	= xgbe_set_features,
+	.ndo_udp_tunnel_add	= xgbe_udp_tunnel_add,
+	.ndo_udp_tunnel_del	= xgbe_udp_tunnel_del,
+	.ndo_features_check	= xgbe_features_check,
 };
 
 const struct net_device_ops *xgbe_get_netdev_ops(void)
@@ -2422,6 +2774,15 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
 		if (XGMAC_GET_BITS(packet->attributes,
+				   RX_PACKET_ATTRIBUTES, TNP)) {
+			skb->encapsulation = 1;
+
+			if (XGMAC_GET_BITS(packet->attributes,
+					   RX_PACKET_ATTRIBUTES, TNPCSUM_DONE))
+				skb->csum_level = 1;
+		}
+
+		if (XGMAC_GET_BITS(packet->attributes,
 				   RX_PACKET_ATTRIBUTES, VLAN_CTAG))
 			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
 					       packet->vlan_ctag);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index c5ff385..d91fa59 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -193,6 +193,7 @@ struct xgbe_prv_data *xgbe_alloc_pdata(struct device *dev)
 	mutex_init(&pdata->i2c_mutex);
 	init_completion(&pdata->i2c_complete);
 	init_completion(&pdata->mdio_complete);
+	INIT_LIST_HEAD(&pdata->vxlan_ports);
 
 	pdata->msg_enable = netif_msg_init(debug, default_msg_level);
 
@@ -374,6 +375,28 @@ int xgbe_config_netdev(struct xgbe_prv_data *pdata)
 	if (pdata->hw_feat.rss)
 		netdev->hw_features |= NETIF_F_RXHASH;
 
+	if (pdata->hw_feat.vxn) {
+		netdev->hw_enc_features = NETIF_F_SG |
+					  NETIF_F_IP_CSUM |
+					  NETIF_F_IPV6_CSUM |
+					  NETIF_F_RXCSUM |
+					  NETIF_F_TSO |
+					  NETIF_F_TSO6 |
+					  NETIF_F_GRO |
+					  NETIF_F_GSO_UDP_TUNNEL |
+					  NETIF_F_GSO_UDP_TUNNEL_CSUM |
+					  NETIF_F_RX_UDP_TUNNEL_PORT;
+
+		netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
+				       NETIF_F_GSO_UDP_TUNNEL_CSUM |
+				       NETIF_F_RX_UDP_TUNNEL_PORT;
+
+		pdata->vxlan_offloads_set = 1;
+		pdata->vxlan_features = NETIF_F_GSO_UDP_TUNNEL |
+					NETIF_F_GSO_UDP_TUNNEL_CSUM |
+					NETIF_F_RX_UDP_TUNNEL_PORT;
+	}
+
 	netdev->vlan_features |= NETIF_F_SG |
 				 NETIF_F_IP_CSUM |
 				 NETIF_F_IPV6_CSUM |
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 48a46a7..db155fe 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -132,6 +132,7 @@
 #include <linux/interrupt.h>
 #include <linux/dcache.h>
 #include <linux/ethtool.h>
+#include <linux/list.h>
 
 #define XGBE_DRV_NAME		"amd-xgbe"
 #define XGBE_DRV_VERSION	"1.0.3"
@@ -817,6 +818,11 @@ struct xgbe_hw_if {
 	/* For ECC */
 	void (*disable_ecc_ded)(struct xgbe_prv_data *);
 	void (*disable_ecc_sec)(struct xgbe_prv_data *, enum xgbe_ecc_sec);
+
+	/* For VXLAN */
+	void (*enable_vxlan)(struct xgbe_prv_data *);
+	void (*disable_vxlan)(struct xgbe_prv_data *);
+	void (*set_vxlan_id)(struct xgbe_prv_data *);
 };
 
 /* This structure represents implementation specific routines for an
@@ -941,6 +947,7 @@ struct xgbe_hw_features {
 	unsigned int addn_mac;		/* Additional MAC Addresses */
 	unsigned int ts_src;		/* Timestamp Source */
 	unsigned int sa_vlan_ins;	/* Source Address or VLAN Insertion */
+	unsigned int vxn;		/* VXLAN/NVGRE */
 
 	/* HW Feature Register1 */
 	unsigned int rx_fifo_size;	/* MTL Receive FIFO Size */
@@ -979,6 +986,12 @@ struct xgbe_version_data {
 	unsigned int rx_desc_prefetch;
 };
 
+struct xgbe_vxlan_data {
+	struct list_head list;
+	sa_family_t sa_family;
+	__be16 port;
+};
+
 struct xgbe_prv_data {
 	struct net_device *netdev;
 	struct pci_dev *pcidev;
@@ -1120,6 +1133,15 @@ struct xgbe_prv_data {
 	u32 rss_table[XGBE_RSS_MAX_TABLE_SIZE];
 	u32 rss_options;
 
+	/* VXLAN settings */
+	unsigned int vxlan_port_set;
+	unsigned int vxlan_offloads_set;
+	unsigned int vxlan_force_disable;
+	unsigned int vxlan_port_count;
+	struct list_head vxlan_ports;
+	u16 vxlan_port;
+	netdev_features_t vxlan_features;
+
 	/* Netdev related settings */
 	unsigned char mac_addr[ETH_ALEN];
 	netdev_features_t netdev_features;

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v2 13/13] amd-xgbe: Add additional ethtool statistics
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (11 preceding siblings ...)
  2017-08-18 14:04 ` [PATCH net-next v2 12/13] amd-xgbe: Add support for VXLAN offload capabilities Tom Lendacky
@ 2017-08-18 14:04 ` Tom Lendacky
  2017-08-18 23:31 ` [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 David Miller
  13 siblings, 0 replies; 15+ messages in thread
From: Tom Lendacky @ 2017-08-18 14:04 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add additional statistics for tracking VXLAN packets and checksum
errors.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c     |    8 +++++++-
 drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c |    4 ++++
 drivers/net/ethernet/amd/xgbe/xgbe.h         |    5 +++++
 3 files changed, 16 insertions(+), 1 deletion(-)
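
For context, the counters added here are exported through the driver's
ethtool strings table (the XGMAC_EXT_STAT entries in the hunk below; the
macro's definition is not part of this patch). The stand-alone user-space
sketch that follows is only an illustration of that name-to-offset
pattern, assuming simplified types and invented helper names rather than
the driver's actual definitions:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Invented stand-ins for the driver's extended statistics fields. */
struct ext_stats {
    uint64_t tx_vxlan_packets;
    uint64_t rx_vxlan_packets;
    uint64_t rx_csum_errors;
    uint64_t rx_vxlan_csum_errors;
};

/* One table entry: user-visible name plus the offset of its counter. */
struct stat_entry {
    const char *name;
    size_t offset;
};

#define EXT_STAT(_name, _field) \
    { _name, offsetof(struct ext_stats, _field) }

static const struct stat_entry stat_table[] = {
    EXT_STAT("tx_vxlan_packets", tx_vxlan_packets),
    EXT_STAT("rx_vxlan_packets", rx_vxlan_packets),
    EXT_STAT("rx_csum_errors", rx_csum_errors),
    EXT_STAT("rx_vxlan_csum_errors", rx_vxlan_csum_errors),
};

int main(void)
{
    struct ext_stats stats = { .rx_vxlan_packets = 42 };
    size_t i;

    /* Walk the table generically, as an ethtool stats dump would. */
    for (i = 0; i < sizeof(stat_table) / sizeof(stat_table[0]); i++) {
        uint64_t val;

        memcpy(&val, (const char *)&stats + stat_table[i].offset,
               sizeof(val));
        printf("%s: %llu\n", stat_table[i].name,
               (unsigned long long)val);
    }

    return 0;
}

On a running system the new counters simply show up as extra rows in
'ethtool -S <interface>' output under the names listed above.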

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 1bf671e..671203d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -1805,10 +1805,13 @@ static void xgbe_dev_xmit(struct xgbe_channel *channel)
 				  packet->length);
 	}
 
-	if (vxlan)
+	if (vxlan) {
 		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, VNP,
 				  TX_NORMAL_DESC3_VXLAN_PACKET);
 
+		pdata->ext_stats.tx_vxlan_packets += packet->tx_packets;
+	}
+
 	for (i = cur_index - start_index + 1; i < packet->rdesc_count; i++) {
 		cur_index++;
 		rdata = XGBE_GET_DESC_DATA(ring, cur_index);
@@ -1981,6 +1984,7 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 	if (XGMAC_GET_BITS_LE(rdesc->desc2, RX_NORMAL_DESC2, TNP)) {
 		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 			       TNP, 1);
+		pdata->ext_stats.rx_vxlan_packets++;
 
 		l34t = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, L34T);
 		switch (l34t) {
@@ -2018,11 +2022,13 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
 				       CSUM_DONE, 0);
 			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 				       TNPCSUM_DONE, 0);
+			pdata->ext_stats.rx_csum_errors++;
 		} else if (tnp && ((etlt == 0x09) || (etlt == 0x0a))) {
 			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 				       CSUM_DONE, 0);
 			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
 				       TNPCSUM_DONE, 0);
+			pdata->ext_stats.rx_vxlan_csum_errors++;
 		} else {
 			XGMAC_SET_BITS(packet->errors, RX_PACKET_ERRORS,
 				       FRAME, 1);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index cea25ac..ff397bb 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -146,6 +146,7 @@ struct xgbe_stats {
 	XGMAC_MMC_STAT("tx_broadcast_packets", txbroadcastframes_gb),
 	XGMAC_MMC_STAT("tx_multicast_packets", txmulticastframes_gb),
 	XGMAC_MMC_STAT("tx_vlan_packets", txvlanframes_g),
+	XGMAC_EXT_STAT("tx_vxlan_packets", tx_vxlan_packets),
 	XGMAC_EXT_STAT("tx_tso_packets", tx_tso_packets),
 	XGMAC_MMC_STAT("tx_64_byte_packets", tx64octets_gb),
 	XGMAC_MMC_STAT("tx_65_to_127_byte_packets", tx65to127octets_gb),
@@ -162,6 +163,7 @@ struct xgbe_stats {
 	XGMAC_MMC_STAT("rx_broadcast_packets", rxbroadcastframes_g),
 	XGMAC_MMC_STAT("rx_multicast_packets", rxmulticastframes_g),
 	XGMAC_MMC_STAT("rx_vlan_packets", rxvlanframes_gb),
+	XGMAC_EXT_STAT("rx_vxlan_packets", rx_vxlan_packets),
 	XGMAC_MMC_STAT("rx_64_byte_packets", rx64octets_gb),
 	XGMAC_MMC_STAT("rx_65_to_127_byte_packets", rx65to127octets_gb),
 	XGMAC_MMC_STAT("rx_128_to_255_byte_packets", rx128to255octets_gb),
@@ -177,6 +179,8 @@ struct xgbe_stats {
 	XGMAC_MMC_STAT("rx_out_of_range_errors", rxoutofrangetype),
 	XGMAC_MMC_STAT("rx_fifo_overflow_errors", rxfifooverflow),
 	XGMAC_MMC_STAT("rx_watchdog_errors", rxwatchdogerror),
+	XGMAC_EXT_STAT("rx_csum_errors", rx_csum_errors),
+	XGMAC_EXT_STAT("rx_vxlan_csum_errors", rx_vxlan_csum_errors),
 	XGMAC_MMC_STAT("rx_pause_frames", rxpauseframes),
 	XGMAC_EXT_STAT("rx_split_header_packets", rx_split_header_packets),
 	XGMAC_EXT_STAT("rx_buffer_unavailable", rx_buffer_unavailable),
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index db155fe..ad102c8 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -715,6 +715,11 @@ struct xgbe_ext_stats {
 	u64 txq_bytes[XGBE_MAX_DMA_CHANNELS];
 	u64 rxq_packets[XGBE_MAX_DMA_CHANNELS];
 	u64 rxq_bytes[XGBE_MAX_DMA_CHANNELS];
+
+	u64 tx_vxlan_packets;
+	u64 rx_vxlan_packets;
+	u64 rx_csum_errors;
+	u64 rx_vxlan_csum_errors;
 };
 
 struct xgbe_hw_if {

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17
  2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
                   ` (12 preceding siblings ...)
  2017-08-18 14:04 ` [PATCH net-next v2 13/13] amd-xgbe: Add additional ethtool statistics Tom Lendacky
@ 2017-08-18 23:31 ` David Miller
  13 siblings, 0 replies; 15+ messages in thread
From: David Miller @ 2017-08-18 23:31 UTC (permalink / raw)
  To: thomas.lendacky; +Cc: netdev

From: Tom Lendacky <thomas.lendacky@amd.com>
Date: Fri, 18 Aug 2017 09:02:09 -0500

> The following updates are included in this driver update series:
> 
> - Set the MDIO mode to clause 45 for the 10GBase-T configuration
> - Set the MII control width to 8-bits for speeds less than 1Gbps
> - Fix an issue related to module removal when the devices are up
> - Fix ethtool statistics related to packet counting of TSO packets
> - Add support for device renaming
> - Add additional dynamic debug output for the PCS window calculation
> - Optimize reading of DMA channel interrupt enablement register
> - Add additional dynamic debug output about the hardware features
> - Add per queue Tx and Rx ethtool statistics
> - Add a macro to clear ethtool_link_ksettings modes
> - Convert the driver to use the ethtool_link_ksettings
> - Add support for VXLAN offload capabilities
> - Add additional ethtool statistics related to VXLAN
> 
> This patch series is based on net-next.

Series applied, thanks.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2017-08-18 23:31 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-18 14:02 [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 Tom Lendacky
2017-08-18 14:02 ` [PATCH net-next v2 01/13] amd-xgbe: Set the MDIO mode for 10000Base-T configuration Tom Lendacky
2017-08-18 14:02 ` [PATCH net-next v2 02/13] amd-xgbe: Set the MII control width for the MAC interface Tom Lendacky
2017-08-18 14:02 ` [PATCH net-next v2 03/13] amd-xgbe: Be sure driver shuts down cleanly on module removal Tom Lendacky
2017-08-18 14:02 ` [PATCH net-next v2 04/13] amd-xgbe: Update TSO packet statistics accuracy Tom Lendacky
2017-08-18 14:02 ` [PATCH net-next v2 05/13] amd-xgbe: Add support to handle device renaming Tom Lendacky
2017-08-18 14:03 ` [PATCH net-next v2 06/13] amd-xgbe: Add additional dynamic debug messages Tom Lendacky
2017-08-18 14:03 ` [PATCH net-next v2 07/13] amd-xgbe: Optimize DMA channel interrupt enablement Tom Lendacky
2017-08-18 14:03 ` [PATCH net-next v2 08/13] amd-xgbe: Add hardware features debug output Tom Lendacky
2017-08-18 14:03 ` [PATCH net-next v2 09/13] amd-xgbe: Add per queue Tx and Rx statistics Tom Lendacky
2017-08-18 14:03 ` [PATCH net-next v2 10/13] net: ethtool: Add macro to clear a link mode setting Tom Lendacky
2017-08-18 14:03 ` [PATCH net-next v2 11/13] amd-xgbe: Convert to using the new link mode settings Tom Lendacky
2017-08-18 14:04 ` [PATCH net-next v2 12/13] amd-xgbe: Add support for VXLAN offload capabilities Tom Lendacky
2017-08-18 14:04 ` [PATCH net-next v2 13/13] amd-xgbe: Add additional ethtool statistics Tom Lendacky
2017-08-18 23:31 ` [PATCH net-next v2 00/13] amd-xgbe: AMD XGBE driver updates 2017-08-17 David Miller
