netdev.vger.kernel.org archive mirror

* [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28
@ 2017-06-28 18:41 Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 01/14] amd-xgbe: Simplify mailbox interface rate change code Tom Lendacky
                   ` (14 more replies)
  0 siblings, 15 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:41 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The following updates and fixes are included in this driver update series:

- Simplify mailbox interface code
- Fix SFP supported and advertising settings
- Fix PTP initialization register usage
- Ensure a timestamp skb is present before using it
- Add a timeout to timestamp register updates
- Handle return code from software reset function
- Some fixes for handling 2.5Gbps rates
- Limit I2C error messages
- Fix non-DMA interrupt handling through tasklet usage
- Add NUMA affinity support for memory allocations
- Add NUMA affinity support for interrupts
- Prepare for more fine-grained cache coherency controls
- Simplify setting the DMA burst length programming
- Performance improvements

This patch series is based on net-next.

---

Tom Lendacky (14):
      amd-xgbe: Simplify mailbox interface rate change code
      amd-xgbe: Fix SFP PHY supported/advertised settings
      amd-xgbe: Use the proper register during PTP initialization
      amd-xgbe: Add a check for an skb in the timestamp path
      amd-xgbe: Prevent looping forever if timestamp update fails
      amd-xgbe: Handle return code from software reset function
      amd-xgbe: Fixes for working with PHYs that support 2.5GbE
      amd-xgbe: Limit the I2C error messages that are output
      amd-xgbe: Re-issue interrupt if interrupt status not cleared
      amd-xgbe: Add NUMA affinity support for memory allocations
      amd-xgbe: Add NUMA affinity support for IRQ hints
      amd-xgbe: Prepare for more fine grained cache coherency controls
      amd-xgbe: Simplify the burst length settings
      amd-xgbe: Adjust register settings to improve performance


 drivers/net/ethernet/amd/xgbe/xgbe-common.h   |   53 ++---
 drivers/net/ethernet/amd/xgbe/xgbe-desc.c     |   94 +++++++---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c      |  244 ++++++++++---------------
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c      |  245 ++++++++++++++++---------
 drivers/net/ethernet/amd/xgbe/xgbe-i2c.c      |   30 +++
 drivers/net/ethernet/amd/xgbe/xgbe-main.c     |   14 +
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c     |   33 +++
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c      |   14 +
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c   |  240 +++++++++---------------
 drivers/net/ethernet/amd/xgbe/xgbe-platform.c |   10 -
 drivers/net/ethernet/amd/xgbe/xgbe-ptp.c      |    2 
 drivers/net/ethernet/amd/xgbe/xgbe.h          |   56 +++---
 12 files changed, 547 insertions(+), 488 deletions(-)

-- 
Tom Lendacky

* [PATCH net-next v1 01/14] amd-xgbe: Simplify mailbox interface rate change code
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
@ 2017-06-28 18:41 ` Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 02/14] amd-xgbe: Fix SFP PHY supported/advertised settings Tom Lendacky
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:41 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Simplify and centralize the mailbox command rate change interface by
having a single function perform the writes to the mailbox registers
to issue the request.
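
With the helper in place, each mode function collapses to a single call.
As a condensed illustration (not part of the patch; command/sub-command
values taken from the diff below):

	xgbe_phy_perform_ratechange(pdata, 5, 0);	/* receiver reset cycle */
	xgbe_phy_perform_ratechange(pdata, 0, 0);	/* power off */
	xgbe_phy_perform_ratechange(pdata, 3, 0);	/* 10G/SFI, active cable */
	xgbe_phy_perform_ratechange(pdata, 4, 0);	/* 10G/KR */
	xgbe_phy_perform_ratechange(pdata, 1, 2);	/* 1G/SGMII */
	xgbe_phy_perform_ratechange(pdata, 2, 0);	/* 2.5G/KX */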

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c |  155 +++++----------------------
 1 file changed, 29 insertions(+), 126 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index e707c49..0429840 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -1694,19 +1694,25 @@ static void xgbe_phy_set_redrv_mode(struct xgbe_prv_data *pdata)
 	xgbe_phy_put_comm_ownership(pdata);
 }
 
-static void xgbe_phy_start_ratechange(struct xgbe_prv_data *pdata)
+static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+					unsigned int cmd, unsigned int sub_cmd)
 {
-	if (!XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS))
-		return;
+	unsigned int s0 = 0;
+	unsigned int wait;
 
 	/* Log if a previous command did not complete */
-	netif_dbg(pdata, link, pdata->netdev,
-		  "firmware mailbox not ready for command\n");
-}
+	if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS))
+		netif_dbg(pdata, link, pdata->netdev,
+			  "firmware mailbox not ready for command\n");
 
-static void xgbe_phy_complete_ratechange(struct xgbe_prv_data *pdata)
-{
-	unsigned int wait;
+	/* Construct the command */
+	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, cmd);
+	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, sub_cmd);
+
+	/* Issue the command */
+	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
+	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
+	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
 
 	/* Wait for command to complete */
 	wait = XGBE_RATECHANGE_COUNT;
@@ -1723,21 +1729,8 @@ static void xgbe_phy_complete_ratechange(struct xgbe_prv_data *pdata)
 
 static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
 {
-	unsigned int s0;
-
-	xgbe_phy_start_ratechange(pdata);
-
 	/* Receiver Reset Cycle */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 5);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 0);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	xgbe_phy_perform_ratechange(pdata, 5, 0);
 
 	netif_dbg(pdata, link, pdata->netdev, "receiver reset complete\n");
 }
@@ -1746,14 +1739,8 @@ static void xgbe_phy_power_off(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 
-	xgbe_phy_start_ratechange(pdata);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, 0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	/* Power off */
+	xgbe_phy_perform_ratechange(pdata, 0, 0);
 
 	phy_data->cur_mode = XGBE_MODE_UNKNOWN;
 
@@ -1763,33 +1750,21 @@ static void xgbe_phy_power_off(struct xgbe_prv_data *pdata)
 static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
 	/* 10G/SFI */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 3);
 	if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) {
-		XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 0);
+		xgbe_phy_perform_ratechange(pdata, 3, 0);
 	} else {
 		if (phy_data->sfp_cable_len <= 1)
-			XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 1);
+			xgbe_phy_perform_ratechange(pdata, 3, 1);
 		else if (phy_data->sfp_cable_len <= 3)
-			XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 2);
+			xgbe_phy_perform_ratechange(pdata, 3, 2);
 		else
-			XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 3);
+			xgbe_phy_perform_ratechange(pdata, 3, 3);
 	}
 
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
-
 	phy_data->cur_mode = XGBE_MODE_SFI;
 
 	netif_dbg(pdata, link, pdata->netdev, "10GbE SFI mode set\n");
@@ -1798,23 +1773,11 @@ static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata)
 static void xgbe_phy_x_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
 	/* 1G/X */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 1);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 3);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	xgbe_phy_perform_ratechange(pdata, 1, 3);
 
 	phy_data->cur_mode = XGBE_MODE_X;
 
@@ -1824,23 +1787,11 @@ static void xgbe_phy_x_mode(struct xgbe_prv_data *pdata)
 static void xgbe_phy_sgmii_1000_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
 	/* 1G/SGMII */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 1);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 2);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	xgbe_phy_perform_ratechange(pdata, 1, 2);
 
 	phy_data->cur_mode = XGBE_MODE_SGMII_1000;
 
@@ -1850,23 +1801,11 @@ static void xgbe_phy_sgmii_1000_mode(struct xgbe_prv_data *pdata)
 static void xgbe_phy_sgmii_100_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
-	/* 1G/SGMII */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 1);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 1);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	/* 100M/SGMII */
+	xgbe_phy_perform_ratechange(pdata, 1, 1);
 
 	phy_data->cur_mode = XGBE_MODE_SGMII_100;
 
@@ -1876,23 +1815,11 @@ static void xgbe_phy_sgmii_100_mode(struct xgbe_prv_data *pdata)
 static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
 	/* 10G/KR */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 4);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 0);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	xgbe_phy_perform_ratechange(pdata, 4, 0);
 
 	phy_data->cur_mode = XGBE_MODE_KR;
 
@@ -1902,23 +1829,11 @@ static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata)
 static void xgbe_phy_kx_2500_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
 	/* 2.5G/KX */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 2);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 0);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	xgbe_phy_perform_ratechange(pdata, 2, 0);
 
 	phy_data->cur_mode = XGBE_MODE_KX_2500;
 
@@ -1928,23 +1843,11 @@ static void xgbe_phy_kx_2500_mode(struct xgbe_prv_data *pdata)
 static void xgbe_phy_kx_1000_mode(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int s0;
 
 	xgbe_phy_set_redrv_mode(pdata);
 
-	xgbe_phy_start_ratechange(pdata);
-
 	/* 1G/KX */
-	s0 = 0;
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, 1);
-	XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, SUB_COMMAND, 3);
-
-	/* Call FW to make the change */
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_0, s0);
-	XP_IOWRITE(pdata, XP_DRIVER_SCRATCH_1, 0);
-	XP_IOWRITE_BITS(pdata, XP_DRIVER_INT_REQ, REQUEST, 1);
-
-	xgbe_phy_complete_ratechange(pdata);
+	xgbe_phy_perform_ratechange(pdata, 1, 3);
 
 	phy_data->cur_mode = XGBE_MODE_KX_1000;
 

* [PATCH net-next v1 02/14] amd-xgbe: Fix SFP PHY supported/advertised settings
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 01/14] amd-xgbe: Simplify mailbox interface rate change code Tom Lendacky
@ 2017-06-28 18:41 ` Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 03/14] amd-xgbe: Use the proper register during PTP initialization Tom Lendacky
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:41 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

When using SFPs, the supported and advertised settings should be initially
based on the SFP that has been detected.  The code currently indicates the
overall support of the device as opposed to what the SFP is capable of.
Update the code so that the supported link modes, auto-negotiation, etc.
are based on the installed SFP.
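
The resulting logic is a clear-then-rebuild pattern: the relevant
SUPPORTED_* bits are cleared up front, only the modes the detected SFP
(or the port, when no module is present) can handle are set again, and
the advertising mask is copied from the supported mask at the end.
Roughly sketched (condensed from the diff below):

	pdata->phy.supported &= ~(SUPPORTED_Autoneg | SUPPORTED_TP |
				  SUPPORTED_FIBRE | ...);

	switch (phy_data->sfp_base) {
	case XGBE_SFP_BASE_1000_T:
		/* 1G modules: autoneg, pause, etc. */
		pdata->phy.supported |= SUPPORTED_Autoneg;
		break;
	/* ... */
	}

	pdata->phy.advertising = pdata->phy.supported;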

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c |   69 ++++++++++++++++++---------
 1 file changed, 47 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index 0429840..756e116 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -711,23 +711,39 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 
+	if (!phy_data->sfp_mod_absent && !phy_data->sfp_changed)
+		return;
+
+	pdata->phy.supported &= ~SUPPORTED_Autoneg;
+	pdata->phy.supported &= ~(SUPPORTED_Pause | SUPPORTED_Asym_Pause);
+	pdata->phy.supported &= ~SUPPORTED_TP;
+	pdata->phy.supported &= ~SUPPORTED_FIBRE;
+	pdata->phy.supported &= ~SUPPORTED_100baseT_Full;
+	pdata->phy.supported &= ~SUPPORTED_1000baseT_Full;
+	pdata->phy.supported &= ~SUPPORTED_10000baseT_Full;
+
 	if (phy_data->sfp_mod_absent) {
 		pdata->phy.speed = SPEED_UNKNOWN;
 		pdata->phy.duplex = DUPLEX_UNKNOWN;
 		pdata->phy.autoneg = AUTONEG_ENABLE;
+		pdata->phy.pause_autoneg = AUTONEG_ENABLE;
+
+		pdata->phy.supported |= SUPPORTED_Autoneg;
+		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+		pdata->phy.supported |= SUPPORTED_TP;
+		pdata->phy.supported |= SUPPORTED_FIBRE;
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
+			pdata->phy.supported |= SUPPORTED_100baseT_Full;
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
+			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
+		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
+			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
+
 		pdata->phy.advertising = pdata->phy.supported;
 
 		return;
 	}
 
-	pdata->phy.advertising &= ~ADVERTISED_Autoneg;
-	pdata->phy.advertising &= ~ADVERTISED_TP;
-	pdata->phy.advertising &= ~ADVERTISED_FIBRE;
-	pdata->phy.advertising &= ~ADVERTISED_100baseT_Full;
-	pdata->phy.advertising &= ~ADVERTISED_1000baseT_Full;
-	pdata->phy.advertising &= ~ADVERTISED_10000baseT_Full;
-	pdata->phy.advertising &= ~ADVERTISED_10000baseR_FEC;
-
 	switch (phy_data->sfp_base) {
 	case XGBE_SFP_BASE_1000_T:
 	case XGBE_SFP_BASE_1000_SX:
@@ -736,17 +752,25 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 		pdata->phy.speed = SPEED_UNKNOWN;
 		pdata->phy.duplex = DUPLEX_UNKNOWN;
 		pdata->phy.autoneg = AUTONEG_ENABLE;
-		pdata->phy.advertising |= ADVERTISED_Autoneg;
+		pdata->phy.pause_autoneg = AUTONEG_ENABLE;
+		pdata->phy.supported |= SUPPORTED_Autoneg;
+		pdata->phy.supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
 		break;
 	case XGBE_SFP_BASE_10000_SR:
 	case XGBE_SFP_BASE_10000_LR:
 	case XGBE_SFP_BASE_10000_LRM:
 	case XGBE_SFP_BASE_10000_ER:
 	case XGBE_SFP_BASE_10000_CR:
-	default:
 		pdata->phy.speed = SPEED_10000;
 		pdata->phy.duplex = DUPLEX_FULL;
 		pdata->phy.autoneg = AUTONEG_DISABLE;
+		pdata->phy.pause_autoneg = AUTONEG_DISABLE;
+		break;
+	default:
+		pdata->phy.speed = SPEED_UNKNOWN;
+		pdata->phy.duplex = DUPLEX_UNKNOWN;
+		pdata->phy.autoneg = AUTONEG_DISABLE;
+		pdata->phy.pause_autoneg = AUTONEG_DISABLE;
 		break;
 	}
 
@@ -754,36 +778,38 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
 	case XGBE_SFP_BASE_1000_T:
 	case XGBE_SFP_BASE_1000_CX:
 	case XGBE_SFP_BASE_10000_CR:
-		pdata->phy.advertising |= ADVERTISED_TP;
+		pdata->phy.supported |= SUPPORTED_TP;
 		break;
 	default:
-		pdata->phy.advertising |= ADVERTISED_FIBRE;
+		pdata->phy.supported |= SUPPORTED_FIBRE;
 	}
 
 	switch (phy_data->sfp_speed) {
 	case XGBE_SFP_SPEED_100_1000:
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
-			pdata->phy.advertising |= ADVERTISED_100baseT_Full;
+			pdata->phy.supported |= SUPPORTED_100baseT_Full;
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.advertising |= ADVERTISED_1000baseT_Full;
+			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
 		break;
 	case XGBE_SFP_SPEED_1000:
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.advertising |= ADVERTISED_1000baseT_Full;
+			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
 		break;
 	case XGBE_SFP_SPEED_10000:
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
-			pdata->phy.advertising |= ADVERTISED_10000baseT_Full;
+			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
 		break;
 	default:
 		/* Choose the fastest supported speed */
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)
-			pdata->phy.advertising |= ADVERTISED_10000baseT_Full;
+			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
 		else if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000)
-			pdata->phy.advertising |= ADVERTISED_1000baseT_Full;
+			pdata->phy.supported |= SUPPORTED_1000baseT_Full;
 		else if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_100)
-			pdata->phy.advertising |= ADVERTISED_100baseT_Full;
+			pdata->phy.supported |= SUPPORTED_100baseT_Full;
 	}
+
+	pdata->phy.advertising = pdata->phy.supported;
 }
 
 static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
@@ -2113,6 +2139,8 @@ static bool xgbe_phy_use_sfp_mode(struct xgbe_prv_data *pdata,
 		return xgbe_phy_check_mode(pdata, mode,
 					   ADVERTISED_1000baseT_Full);
 	case XGBE_MODE_SFI:
+		if (phy_data->sfp_mod_absent)
+			return true;
 		return xgbe_phy_check_mode(pdata, mode,
 					   ADVERTISED_10000baseT_Full);
 	default:
@@ -2916,9 +2944,6 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 		if (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000) {
 			pdata->phy.supported |= SUPPORTED_10000baseT_Full;
 			phy_data->start_mode = XGBE_MODE_SFI;
-			if (pdata->fec_ability & MDIO_PMA_10GBR_FECABLE_ABLE)
-				pdata->phy.supported |=
-					SUPPORTED_10000baseR_FEC;
 		}
 
 		phy_data->phydev_mode = XGBE_MDIO_MODE_CL22;

* [PATCH net-next v1 03/14] amd-xgbe: Use the proper register during PTP initialization
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 01/14] amd-xgbe: Simplify mailbox interface rate change code Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 02/14] amd-xgbe: Fix SFP PHY supported/advertised settings Tom Lendacky
@ 2017-06-28 18:41 ` Tom Lendacky
  2017-06-28 18:41 ` [PATCH net-next v1 04/14] amd-xgbe: Add a check for an skb in the timestamp path Tom Lendacky
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:41 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

During PTP initialization, the Timestamp Control register should be
cleared, not the Tx Configuration register.  While this typo causes the
wrong register to be cleared, the default value of each register and the
fact that the Tx Configuration register is programmed afterwards mean it
does not result in a bug, hence it is only being fixed in net-next.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-ptp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c b/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c
index a533a6c..d06d260 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c
@@ -267,7 +267,7 @@ void xgbe_ptp_register(struct xgbe_prv_data *pdata)
 			 ktime_to_ns(ktime_get_real()));
 
 	/* Disable all timestamping to start */
-	XGMAC_IOWRITE(pdata, MAC_TCR, 0);
+	XGMAC_IOWRITE(pdata, MAC_TSCR, 0);
 	pdata->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
 	pdata->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
 }

* [PATCH net-next v1 04/14] amd-xgbe: Add a check for an skb in the timestamp path
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (2 preceding siblings ...)
  2017-06-28 18:41 ` [PATCH net-next v1 03/14] amd-xgbe: Use the proper register during PTP initialization Tom Lendacky
@ 2017-06-28 18:41 ` Tom Lendacky
  2017-06-28 18:42 ` [PATCH net-next v1 05/14] amd-xgbe: Prevent looping forever if timestamp update fails Tom Lendacky
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:41 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Spurious Tx timestamp interrupts can cause an oops in the Tx timestamp
processing function if the Tx timestamp skb is NULL. Add a check to
ensure a Tx timestamp skb is present before attempting to use it.
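
The check is done with the timestamp lock held so that testing the skb
pointer and later clearing it cannot race with a new timestamp request.
A condensed sketch of the flow (see the diff below):

	spin_lock_irqsave(&pdata->tstamp_lock, flags);
	if (!pdata->tx_tstamp_skb)
		goto unlock;		/* spurious interrupt, nothing pending */

	/* ... deliver the timestamp, then free the skb ... */
	dev_kfree_skb_any(pdata->tx_tstamp_skb);
	pdata->tx_tstamp_skb = NULL;

unlock:
	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);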

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index a934bd5..2068510 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -1212,6 +1212,10 @@ static void xgbe_tx_tstamp(struct work_struct *work)
 	u64 nsec;
 	unsigned long flags;
 
+	spin_lock_irqsave(&pdata->tstamp_lock, flags);
+	if (!pdata->tx_tstamp_skb)
+		goto unlock;
+
 	if (pdata->tx_tstamp) {
 		nsec = timecounter_cyc2time(&pdata->tstamp_tc,
 					    pdata->tx_tstamp);
@@ -1223,8 +1227,9 @@ static void xgbe_tx_tstamp(struct work_struct *work)
 
 	dev_kfree_skb_any(pdata->tx_tstamp_skb);
 
-	spin_lock_irqsave(&pdata->tstamp_lock, flags);
 	pdata->tx_tstamp_skb = NULL;
+
+unlock:
 	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
 }
 

* [PATCH net-next v1 05/14] amd-xgbe: Prevent looping forever if timestamp update fails
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (3 preceding siblings ...)
  2017-06-28 18:41 ` [PATCH net-next v1 04/14] amd-xgbe: Add a check for an skb in the timestamp path Tom Lendacky
@ 2017-06-28 18:42 ` Tom Lendacky
  2017-06-28 18:42 ` [PATCH net-next v1 06/14] amd-xgbe: Handle return code from software reset function Tom Lendacky
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:42 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Just to be on the safe side, should an update of the timestamp registers
not complete, issue a warning rather than looping forever waiting for the
update to finish.
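
The fix uses the usual bounded-poll pattern: count down a fixed number of
iterations with a short delay and warn instead of spinning forever.
Condensed from the diff below:

	unsigned int count = 10000;

	XGMAC_IOWRITE_BITS(pdata, MAC_TSCR, TSINIT, 1);

	while (--count && XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSINIT))
		udelay(5);

	if (!count)
		netdev_err(pdata->netdev, "timed out initializing timestamp\n");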

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c |   15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 24a687c..3ad4036 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -1497,26 +1497,37 @@ static void xgbe_rx_desc_init(struct xgbe_channel *channel)
 static void xgbe_update_tstamp_addend(struct xgbe_prv_data *pdata,
 				      unsigned int addend)
 {
+	unsigned int count = 10000;
+
 	/* Set the addend register value and tell the device */
 	XGMAC_IOWRITE(pdata, MAC_TSAR, addend);
 	XGMAC_IOWRITE_BITS(pdata, MAC_TSCR, TSADDREG, 1);
 
 	/* Wait for addend update to complete */
-	while (XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSADDREG))
+	while (--count && XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSADDREG))
 		udelay(5);
+
+	if (!count)
+		netdev_err(pdata->netdev,
+			   "timed out updating timestamp addend register\n");
 }
 
 static void xgbe_set_tstamp_time(struct xgbe_prv_data *pdata, unsigned int sec,
 				 unsigned int nsec)
 {
+	unsigned int count = 10000;
+
 	/* Set the time values and tell the device */
 	XGMAC_IOWRITE(pdata, MAC_STSUR, sec);
 	XGMAC_IOWRITE(pdata, MAC_STNUR, nsec);
 	XGMAC_IOWRITE_BITS(pdata, MAC_TSCR, TSINIT, 1);
 
 	/* Wait for time update to complete */
-	while (XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSINIT))
+	while (--count && XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSINIT))
 		udelay(5);
+
+	if (!count)
+		netdev_err(pdata->netdev, "timed out initializing timestamp\n");
 }
 
 static u64 xgbe_get_tstamp_time(struct xgbe_prv_data *pdata)

* [PATCH net-next v1 06/14] amd-xgbe: Handle return code from software reset function
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (4 preceding siblings ...)
  2017-06-28 18:42 ` [PATCH net-next v1 05/14] amd-xgbe: Prevent looping forever if timestamp update fails Tom Lendacky
@ 2017-06-28 18:42 ` Tom Lendacky
  2017-06-28 18:42 ` [PATCH net-next v1 07/14] amd-xgbe: Fixes for working with PHYs that support 2.5GbE Tom Lendacky
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:42 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The function that performs a software reset of the hardware already
provides a return code.  During driver probe, check this return code and
exit with an error if the software reset fails.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-main.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 17ac8f9..982368b 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -277,7 +277,11 @@ int xgbe_config_netdev(struct xgbe_prv_data *pdata)
 	pdata->desc_ded_period = jiffies;
 
 	/* Issue software reset to device */
-	pdata->hw_if.exit(pdata);
+	ret = pdata->hw_if.exit(pdata);
+	if (ret) {
+		dev_err(dev, "software reset failed\n");
+		return ret;
+	}
 
 	/* Set default configuration data */
 	xgbe_default_config(pdata);

* [PATCH net-next v1 07/14] amd-xgbe: Fixes for working with PHYs that support 2.5GbE
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (5 preceding siblings ...)
  2017-06-28 18:42 ` [PATCH net-next v1 06/14] amd-xgbe: Handle return code from software reset function Tom Lendacky
@ 2017-06-28 18:42 ` Tom Lendacky
  2017-06-28 18:42 ` [PATCH net-next v1 08/14] amd-xgbe: Limit the I2C error messages that are output Tom Lendacky
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:42 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

The driver is missing some functionality when operating in the mode that
supports 2.5GbE.  Fix the driver to fully recognize and support this speed.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index 756e116..b8be62e 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -1966,6 +1966,8 @@ static enum xgbe_mode xgbe_phy_get_baset_mode(struct xgbe_phy_data *phy_data,
 		return XGBE_MODE_SGMII_100;
 	case SPEED_1000:
 		return XGBE_MODE_SGMII_1000;
+	case SPEED_2500:
+		return XGBE_MODE_KX_2500;
 	case SPEED_10000:
 		return XGBE_MODE_KR;
 	default:
@@ -2109,6 +2111,9 @@ static bool xgbe_phy_use_baset_mode(struct xgbe_prv_data *pdata,
 	case XGBE_MODE_SGMII_1000:
 		return xgbe_phy_check_mode(pdata, mode,
 					   ADVERTISED_1000baseT_Full);
+	case XGBE_MODE_KX_2500:
+		return xgbe_phy_check_mode(pdata, mode,
+					   ADVERTISED_2500baseX_Full);
 	case XGBE_MODE_KR:
 		return xgbe_phy_check_mode(pdata, mode,
 					   ADVERTISED_10000baseT_Full);
@@ -2218,6 +2223,8 @@ static bool xgbe_phy_valid_speed_baset_mode(struct xgbe_phy_data *phy_data,
 	case SPEED_100:
 	case SPEED_1000:
 		return true;
+	case SPEED_2500:
+		return (phy_data->port_mode == XGBE_PORT_MODE_NBASE_T);
 	case SPEED_10000:
 		return (phy_data->port_mode == XGBE_PORT_MODE_10GBASE_T);
 	default:

* [PATCH net-next v1 08/14] amd-xgbe: Limit the I2C error messages that are output
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (6 preceding siblings ...)
  2017-06-28 18:42 ` [PATCH net-next v1 07/14] amd-xgbe: Fixes for working with PHYs that support 2.5GbE Tom Lendacky
@ 2017-06-28 18:42 ` Tom Lendacky
  2017-06-28 18:42 ` [PATCH net-next v1 09/14] amd-xgbe: Re-issue interrupt if interrupt status not cleared Tom Lendacky
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:42 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

When I2C communication fails, it tends to always fail. Rather than
continuously issuing an error message (once per second in most cases),
change the messages to be issued just once.
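
The messages are switched from netdev_err() to dev_err_once(), which
prints only the first occurrence for the device; the netdev name is
folded into the format string so the interface is still identifiable.
For example (from the diff below):

	dev_err_once(pdata->dev, "%s: I2C error reading SFP EEPROM\n",
		     netdev_name(pdata->netdev));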

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
index b8be62e..04b5c14 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
@@ -1121,7 +1121,8 @@ static int xgbe_phy_sfp_read_eeprom(struct xgbe_prv_data *pdata)
 
 	ret = xgbe_phy_sfp_get_mux(pdata);
 	if (ret) {
-		netdev_err(pdata->netdev, "I2C error setting SFP MUX\n");
+		dev_err_once(pdata->dev, "%s: I2C error setting SFP MUX\n",
+			     netdev_name(pdata->netdev));
 		return ret;
 	}
 
@@ -1131,7 +1132,8 @@ static int xgbe_phy_sfp_read_eeprom(struct xgbe_prv_data *pdata)
 				&eeprom_addr, sizeof(eeprom_addr),
 				&sfp_eeprom, sizeof(sfp_eeprom));
 	if (ret) {
-		netdev_err(pdata->netdev, "I2C error reading SFP EEPROM\n");
+		dev_err_once(pdata->dev, "%s: I2C error reading SFP EEPROM\n",
+			     netdev_name(pdata->netdev));
 		goto put;
 	}
 
@@ -1190,7 +1192,8 @@ static void xgbe_phy_sfp_signals(struct xgbe_prv_data *pdata)
 				&gpio_reg, sizeof(gpio_reg),
 				gpio_ports, sizeof(gpio_ports));
 	if (ret) {
-		netdev_err(pdata->netdev, "I2C error reading SFP GPIOs\n");
+		dev_err_once(pdata->dev, "%s: I2C error reading SFP GPIOs\n",
+			     netdev_name(pdata->netdev));
 		return;
 	}
 

* [PATCH net-next v1 09/14] amd-xgbe: Re-issue interrupt if interrupt status not cleared
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (7 preceding siblings ...)
  2017-06-28 18:42 ` [PATCH net-next v1 08/14] amd-xgbe: Limit the I2C error messages that are output Tom Lendacky
@ 2017-06-28 18:42 ` Tom Lendacky
  2017-06-28 18:42 ` [PATCH net-next v1 10/14] amd-xgbe: Add NUMA affinity support for memory allocations Tom Lendacky
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:42 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Some of the device interrupts should function as level interrupts. For
some hardware configurations this requires setting control bits so that,
if the interrupt status has not been cleared, the interrupt is
reissued.

Additionally, when using MSI or MSI-X interrupts, run the interrupt
service routine as a tasklet so that the re-issuance of the interrupt
is handled properly.
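
The same pattern is applied to the device, ECC, I2C and AN interrupts:
the hard IRQ handler only schedules a tasklet when MSI/MSI-X is in use,
and the (shared) handler body re-arms the interrupt through the new
XP_INT_REISSUE_EN register when re-issue is supported. A rough sketch
(condensed from the diff below):

	static irqreturn_t xgbe_isr(int irq, void *data)
	{
		struct xgbe_prv_data *pdata = data;

		if (pdata->isr_as_tasklet)		/* MSI/MSI-X */
			tasklet_schedule(&pdata->tasklet_dev);
		else					/* legacy interrupt */
			xgbe_isr_task((unsigned long)pdata);

		return IRQ_HANDLED;
	}

	/* at the end of the handler body */
	if (pdata->vdata->irq_reissue_support)
		XP_IOWRITE(pdata, XP_INT_REISSUE_EN, reissue_mask);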

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-common.h |    1 +
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c    |   53 +++++++++++++++++++++++----
 drivers/net/ethernet/amd/xgbe/xgbe-i2c.c    |   30 +++++++++++++--
 drivers/net/ethernet/amd/xgbe/xgbe-mdio.c   |   33 +++++++++++++++--
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c    |    4 ++
 drivers/net/ethernet/amd/xgbe/xgbe.h        |   11 +++++-
 6 files changed, 115 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index 127adbe..e7b6804 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -959,6 +959,7 @@
 #define XP_DRIVER_INT_RO		0x0064
 #define XP_DRIVER_SCRATCH_0		0x0068
 #define XP_DRIVER_SCRATCH_1		0x006c
+#define XP_INT_REISSUE_EN		0x0074
 #define XP_INT_EN			0x0078
 #define XP_I2C_MUTEX			0x0080
 #define XP_MDIO_MUTEX			0x0084
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index 2068510..ff6d204 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -382,9 +382,9 @@ static bool xgbe_ecc_ded(struct xgbe_prv_data *pdata, unsigned long *period,
 	return false;
 }
 
-static irqreturn_t xgbe_ecc_isr(int irq, void *data)
+static void xgbe_ecc_isr_task(unsigned long data)
 {
-	struct xgbe_prv_data *pdata = data;
+	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
 	unsigned int ecc_isr;
 	bool stop = false;
 
@@ -435,12 +435,26 @@ static irqreturn_t xgbe_ecc_isr(int irq, void *data)
 	/* Clear all ECC interrupts */
 	XP_IOWRITE(pdata, XP_ECC_ISR, ecc_isr);
 
-	return IRQ_HANDLED;
+	/* Reissue interrupt if status is not clear */
+	if (pdata->vdata->irq_reissue_support)
+		XP_IOWRITE(pdata, XP_INT_REISSUE_EN, 1 << 1);
 }
 
-static irqreturn_t xgbe_isr(int irq, void *data)
+static irqreturn_t xgbe_ecc_isr(int irq, void *data)
 {
 	struct xgbe_prv_data *pdata = data;
+
+	if (pdata->isr_as_tasklet)
+		tasklet_schedule(&pdata->tasklet_ecc);
+	else
+		xgbe_ecc_isr_task((unsigned long)pdata);
+
+	return IRQ_HANDLED;
+}
+
+static void xgbe_isr_task(unsigned long data)
+{
+	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct xgbe_channel *channel;
 	unsigned int dma_isr, dma_ch_isr;
@@ -543,15 +557,36 @@ static irqreturn_t xgbe_isr(int irq, void *data)
 isr_done:
 	/* If there is not a separate AN irq, handle it here */
 	if (pdata->dev_irq == pdata->an_irq)
-		pdata->phy_if.an_isr(irq, pdata);
+		pdata->phy_if.an_isr(pdata);
 
 	/* If there is not a separate ECC irq, handle it here */
 	if (pdata->vdata->ecc_support && (pdata->dev_irq == pdata->ecc_irq))
-		xgbe_ecc_isr(irq, pdata);
+		xgbe_ecc_isr_task((unsigned long)pdata);
 
 	/* If there is not a separate I2C irq, handle it here */
 	if (pdata->vdata->i2c_support && (pdata->dev_irq == pdata->i2c_irq))
-		pdata->i2c_if.i2c_isr(irq, pdata);
+		pdata->i2c_if.i2c_isr(pdata);
+
+	/* Reissue interrupt if status is not clear */
+	if (pdata->vdata->irq_reissue_support) {
+		unsigned int reissue_mask;
+
+		reissue_mask = 1 << 0;
+		if (!pdata->per_channel_irq)
+			reissue_mask |= 0xffff << 4;
+
+		XP_IOWRITE(pdata, XP_INT_REISSUE_EN, reissue_mask);
+	}
+}
+
+static irqreturn_t xgbe_isr(int irq, void *data)
+{
+	struct xgbe_prv_data *pdata = data;
+
+	if (pdata->isr_as_tasklet)
+		tasklet_schedule(&pdata->tasklet_dev);
+	else
+		xgbe_isr_task((unsigned long)pdata);
 
 	return IRQ_HANDLED;
 }
@@ -826,6 +861,10 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
 	unsigned int i;
 	int ret;
 
+	tasklet_init(&pdata->tasklet_dev, xgbe_isr_task, (unsigned long)pdata);
+	tasklet_init(&pdata->tasklet_ecc, xgbe_ecc_isr_task,
+		     (unsigned long)pdata);
+
 	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
 			       netdev->name, pdata);
 	if (ret) {
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
index 417bdb5..4d9062d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
@@ -274,13 +274,16 @@ static void xgbe_i2c_clear_isr_interrupts(struct xgbe_prv_data *pdata,
 		XI2C_IOREAD(pdata, IC_CLR_STOP_DET);
 }
 
-static irqreturn_t xgbe_i2c_isr(int irq, void *data)
+static void xgbe_i2c_isr_task(unsigned long data)
 {
 	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
 	struct xgbe_i2c_op_state *state = &pdata->i2c.op_state;
 	unsigned int isr;
 
 	isr = XI2C_IOREAD(pdata, IC_RAW_INTR_STAT);
+	if (!isr)
+		goto reissue_check;
+
 	netif_dbg(pdata, intr, pdata->netdev,
 		  "I2C interrupt received: status=%#010x\n", isr);
 
@@ -308,6 +311,21 @@ static irqreturn_t xgbe_i2c_isr(int irq, void *data)
 	if (state->ret || XI2C_GET_BITS(isr, IC_RAW_INTR_STAT, STOP_DET))
 		complete(&pdata->i2c_complete);
 
+reissue_check:
+	/* Reissue interrupt if status is not clear */
+	if (pdata->vdata->irq_reissue_support)
+		XP_IOWRITE(pdata, XP_INT_REISSUE_EN, 1 << 2);
+}
+
+static irqreturn_t xgbe_i2c_isr(int irq, void *data)
+{
+	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
+
+	if (pdata->isr_as_tasklet)
+		tasklet_schedule(&pdata->tasklet_i2c);
+	else
+		xgbe_i2c_isr_task((unsigned long)pdata);
+
 	return IRQ_HANDLED;
 }
 
@@ -349,12 +367,11 @@ static void xgbe_i2c_set_target(struct xgbe_prv_data *pdata, unsigned int addr)
 	XI2C_IOWRITE(pdata, IC_TAR, addr);
 }
 
-static irqreturn_t xgbe_i2c_combined_isr(int irq, struct xgbe_prv_data *pdata)
+static irqreturn_t xgbe_i2c_combined_isr(struct xgbe_prv_data *pdata)
 {
-	if (!XI2C_IOREAD(pdata, IC_RAW_INTR_STAT))
-		return IRQ_HANDLED;
+	xgbe_i2c_isr_task((unsigned long)pdata);
 
-	return xgbe_i2c_isr(irq, pdata);
+	return IRQ_HANDLED;
 }
 
 static int xgbe_i2c_xfer(struct xgbe_prv_data *pdata, struct xgbe_i2c_op *op)
@@ -445,6 +462,9 @@ static int xgbe_i2c_start(struct xgbe_prv_data *pdata)
 
 	/* If we have a separate I2C irq, enable it */
 	if (pdata->dev_irq != pdata->i2c_irq) {
+		tasklet_init(&pdata->tasklet_i2c, xgbe_i2c_isr_task,
+			     (unsigned long)pdata);
+
 		ret = devm_request_irq(pdata->dev, pdata->i2c_irq,
 				       xgbe_i2c_isr, 0, pdata->i2c_name,
 				       pdata);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index b672d92..8068491 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -665,6 +665,10 @@ static void xgbe_an37_isr(struct xgbe_prv_data *pdata)
 	} else {
 		/* Enable AN interrupts */
 		xgbe_an37_enable_interrupts(pdata);
+
+		/* Reissue interrupt if status is not clear */
+		if (pdata->vdata->irq_reissue_support)
+			XP_IOWRITE(pdata, XP_INT_REISSUE_EN, 1 << 3);
 	}
 }
 
@@ -684,10 +688,14 @@ static void xgbe_an73_isr(struct xgbe_prv_data *pdata)
 	} else {
 		/* Enable AN interrupts */
 		xgbe_an73_enable_interrupts(pdata);
+
+		/* Reissue interrupt if status is not clear */
+		if (pdata->vdata->irq_reissue_support)
+			XP_IOWRITE(pdata, XP_INT_REISSUE_EN, 1 << 3);
 	}
 }
 
-static irqreturn_t xgbe_an_isr(int irq, void *data)
+static void xgbe_an_isr_task(unsigned long data)
 {
 	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
 
@@ -705,13 +713,25 @@ static irqreturn_t xgbe_an_isr(int irq, void *data)
 	default:
 		break;
 	}
+}
+
+static irqreturn_t xgbe_an_isr(int irq, void *data)
+{
+	struct xgbe_prv_data *pdata = (struct xgbe_prv_data *)data;
+
+	if (pdata->isr_as_tasklet)
+		tasklet_schedule(&pdata->tasklet_an);
+	else
+		xgbe_an_isr_task((unsigned long)pdata);
 
 	return IRQ_HANDLED;
 }
 
-static irqreturn_t xgbe_an_combined_isr(int irq, struct xgbe_prv_data *pdata)
+static irqreturn_t xgbe_an_combined_isr(struct xgbe_prv_data *pdata)
 {
-	return xgbe_an_isr(irq, pdata);
+	xgbe_an_isr_task((unsigned long)pdata);
+
+	return IRQ_HANDLED;
 }
 
 static void xgbe_an_irq_work(struct work_struct *work)
@@ -915,6 +935,10 @@ static void xgbe_an_state_machine(struct work_struct *work)
 		break;
 	}
 
+	/* Reissue interrupt if status is not clear */
+	if (pdata->vdata->irq_reissue_support)
+		XP_IOWRITE(pdata, XP_INT_REISSUE_EN, 1 << 3);
+
 	mutex_unlock(&pdata->an_mutex);
 }
 
@@ -1379,6 +1403,9 @@ static int xgbe_phy_start(struct xgbe_prv_data *pdata)
 
 	/* If we have a separate AN irq, enable it */
 	if (pdata->dev_irq != pdata->an_irq) {
+		tasklet_init(&pdata->tasklet_an, xgbe_an_isr_task,
+			     (unsigned long)pdata);
+
 		ret = devm_request_irq(pdata->dev, pdata->an_irq,
 				       xgbe_an_isr, 0, pdata->an_name,
 				       pdata);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
index 38392a52..f0c2e88 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
@@ -139,6 +139,7 @@ static int xgbe_config_multi_msi(struct xgbe_prv_data *pdata)
 		return ret;
 	}
 
+	pdata->isr_as_tasklet = 1;
 	pdata->irq_count = ret;
 
 	pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
@@ -175,6 +176,7 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata)
 		return ret;
 	}
 
+	pdata->isr_as_tasklet = pdata->pcidev->msi_enabled ? 1 : 0;
 	pdata->irq_count = 1;
 	pdata->channel_irq_count = 1;
 
@@ -445,6 +447,7 @@ static int xgbe_pci_resume(struct pci_dev *pdev)
 	.tx_tstamp_workaround		= 1,
 	.ecc_support			= 1,
 	.i2c_support			= 1,
+	.irq_reissue_support		= 1,
 };
 
 static const struct xgbe_version_data xgbe_v2b = {
@@ -456,6 +459,7 @@ static int xgbe_pci_resume(struct pci_dev *pdev)
 	.tx_tstamp_workaround		= 1,
 	.ecc_support			= 1,
 	.i2c_support			= 1,
+	.irq_reissue_support		= 1,
 };
 
 static const struct pci_device_id xgbe_pci_table[] = {
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index f9a2463..2834961 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -837,7 +837,7 @@ struct xgbe_phy_if {
 	bool (*phy_valid_speed)(struct xgbe_prv_data *, int);
 
 	/* For single interrupt support */
-	irqreturn_t (*an_isr)(int, struct xgbe_prv_data *);
+	irqreturn_t (*an_isr)(struct xgbe_prv_data *);
 
 	/* PHY implementation specific services */
 	struct xgbe_phy_impl_if phy_impl;
@@ -855,7 +855,7 @@ struct xgbe_i2c_if {
 	int (*i2c_xfer)(struct xgbe_prv_data *, struct xgbe_i2c_op *);
 
 	/* For single interrupt support */
-	irqreturn_t (*i2c_isr)(int, struct xgbe_prv_data *);
+	irqreturn_t (*i2c_isr)(struct xgbe_prv_data *);
 };
 
 struct xgbe_desc_if {
@@ -924,6 +924,7 @@ struct xgbe_version_data {
 	unsigned int tx_tstamp_workaround;
 	unsigned int ecc_support;
 	unsigned int i2c_support;
+	unsigned int irq_reissue_support;
 };
 
 struct xgbe_prv_data {
@@ -1159,6 +1160,12 @@ struct xgbe_prv_data {
 
 	unsigned int lpm_ctrl;		/* CTRL1 for resume */
 
+	unsigned int isr_as_tasklet;
+	struct tasklet_struct tasklet_dev;
+	struct tasklet_struct tasklet_ecc;
+	struct tasklet_struct tasklet_i2c;
+	struct tasklet_struct tasklet_an;
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *xgbe_debugfs;
 

* [PATCH net-next v1 10/14] amd-xgbe: Add NUMA affinity support for memory allocations
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2017-06-28 Tom Lendacky
                   ` (8 preceding siblings ...)
  2017-06-28 18:42 ` [PATCH net-next v1 09/14] amd-xgbe: Re-issue interrupt if interrupt status not cleared Tom Lendacky
@ 2017-06-28 18:42 ` Tom Lendacky
  2017-06-28 18:43 ` [PATCH net-next v1 11/14] amd-xgbe: Add NUMA affinity support for IRQ hints Tom Lendacky
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:42 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add support to perform memory allocations on the node of the device. The
original allocation of the ring structure and Tx/Rx queues allocated all
of the memory at once and then carved it up for each channel and queue.
To best ensure that we get as much memory from the NUMA node as we can,
break the channel and ring allocations into individual allocations.
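
The allocation helpers try the device's NUMA node first and quietly fall
back to an unconstrained allocation if node-local memory is unavailable,
e.g. (from xgbe_alloc_node() in the diff below):

	static void *xgbe_alloc_node(size_t size, int node)
	{
		void *mem;

		/* Prefer memory from the device's node ... */
		mem = kzalloc_node(size, GFP_KERNEL, node);

		/* ... but fall back to any node rather than fail */
		if (!mem)
			mem = kzalloc(size, GFP_KERNEL);

		return mem;
	}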

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-desc.c |   94 ++++++++++-----
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c  |  135 +++++++++-------------
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c  |  177 ++++++++++++++++-------------
 drivers/net/ethernet/amd/xgbe/xgbe.h      |    5 +
 4 files changed, 217 insertions(+), 194 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
index 0a98c36..45d9230 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
@@ -176,8 +176,8 @@ static void xgbe_free_ring_resources(struct xgbe_prv_data *pdata)
 
 	DBGPR("-->xgbe_free_ring_resources\n");
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		xgbe_free_ring(pdata, channel->tx_ring);
 		xgbe_free_ring(pdata, channel->rx_ring);
 	}
@@ -185,34 +185,60 @@ static void xgbe_free_ring_resources(struct xgbe_prv_data *pdata)
 	DBGPR("<--xgbe_free_ring_resources\n");
 }
 
+static void *xgbe_alloc_node(size_t size, int node)
+{
+	void *mem;
+
+	mem = kzalloc_node(size, GFP_KERNEL, node);
+	if (!mem)
+		mem = kzalloc(size, GFP_KERNEL);
+
+	return mem;
+}
+
+static void *xgbe_dma_alloc_node(struct device *dev, size_t size,
+				 dma_addr_t *dma, int node)
+{
+	void *mem;
+	int cur_node = dev_to_node(dev);
+
+	set_dev_node(dev, node);
+	mem = dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
+	set_dev_node(dev, cur_node);
+
+	if (!mem)
+		mem = dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
+
+	return mem;
+}
+
 static int xgbe_init_ring(struct xgbe_prv_data *pdata,
 			  struct xgbe_ring *ring, unsigned int rdesc_count)
 {
-	DBGPR("-->xgbe_init_ring\n");
+	size_t size;
 
 	if (!ring)
 		return 0;
 
 	/* Descriptors */
+	size = rdesc_count * sizeof(struct xgbe_ring_desc);
+
 	ring->rdesc_count = rdesc_count;
-	ring->rdesc = dma_alloc_coherent(pdata->dev,
-					 (sizeof(struct xgbe_ring_desc) *
-					  rdesc_count), &ring->rdesc_dma,
-					 GFP_KERNEL);
+	ring->rdesc = xgbe_dma_alloc_node(pdata->dev, size, &ring->rdesc_dma,
+					  ring->node);
 	if (!ring->rdesc)
 		return -ENOMEM;
 
 	/* Descriptor information */
-	ring->rdata = kcalloc(rdesc_count, sizeof(struct xgbe_ring_data),
-			      GFP_KERNEL);
+	size = rdesc_count * sizeof(struct xgbe_ring_data);
+
+	ring->rdata = xgbe_alloc_node(size, ring->node);
 	if (!ring->rdata)
 		return -ENOMEM;
 
 	netif_dbg(pdata, drv, pdata->netdev,
-		  "rdesc=%p, rdesc_dma=%pad, rdata=%p\n",
-		  ring->rdesc, &ring->rdesc_dma, ring->rdata);
-
-	DBGPR("<--xgbe_init_ring\n");
+		  "rdesc=%p, rdesc_dma=%pad, rdata=%p, node=%d\n",
+		  ring->rdesc, &ring->rdesc_dma, ring->rdata, ring->node);
 
 	return 0;
 }
@@ -223,10 +249,8 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
 	unsigned int i;
 	int ret;
 
-	DBGPR("-->xgbe_alloc_ring_resources\n");
-
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		netif_dbg(pdata, drv, pdata->netdev, "%s - Tx ring:\n",
 			  channel->name);
 
@@ -250,8 +274,6 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
 		}
 	}
 
-	DBGPR("<--xgbe_alloc_ring_resources\n");
-
 	return 0;
 
 err_ring:
@@ -261,21 +283,33 @@ static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
 }
 
 static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
-			    struct xgbe_page_alloc *pa, gfp_t gfp, int order)
+			    struct xgbe_page_alloc *pa, int alloc_order,
+			    int node)
 {
 	struct page *pages = NULL;
 	dma_addr_t pages_dma;
-	int ret;
+	gfp_t gfp;
+	int order, ret;
+
+again:
+	order = alloc_order;
 
 	/* Try to obtain pages, decreasing order if necessary */
-	gfp |= __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
+	gfp = GFP_ATOMIC | __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
 	while (order >= 0) {
-		pages = alloc_pages(gfp, order);
+		pages = alloc_pages_node(node, gfp, order);
 		if (pages)
 			break;
 
 		order--;
 	}
+
+	/* If we couldn't get local pages, try getting from anywhere */
+	if (!pages && (node != NUMA_NO_NODE)) {
+		node = NUMA_NO_NODE;
+		goto again;
+	}
+
 	if (!pages)
 		return -ENOMEM;
 
@@ -327,14 +361,14 @@ static int xgbe_map_rx_buffer(struct xgbe_prv_data *pdata,
 	int ret;
 
 	if (!ring->rx_hdr_pa.pages) {
-		ret = xgbe_alloc_pages(pdata, &ring->rx_hdr_pa, GFP_ATOMIC, 0);
+		ret = xgbe_alloc_pages(pdata, &ring->rx_hdr_pa, 0, ring->node);
 		if (ret)
 			return ret;
 	}
 
 	if (!ring->rx_buf_pa.pages) {
-		ret = xgbe_alloc_pages(pdata, &ring->rx_buf_pa, GFP_ATOMIC,
-				       PAGE_ALLOC_COSTLY_ORDER);
+		ret = xgbe_alloc_pages(pdata, &ring->rx_buf_pa,
+				       PAGE_ALLOC_COSTLY_ORDER, ring->node);
 		if (ret)
 			return ret;
 	}
@@ -362,8 +396,8 @@ static void xgbe_wrapper_tx_descriptor_init(struct xgbe_prv_data *pdata)
 
 	DBGPR("-->xgbe_wrapper_tx_descriptor_init\n");
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		ring = channel->tx_ring;
 		if (!ring)
 			break;
@@ -403,8 +437,8 @@ static void xgbe_wrapper_rx_descriptor_init(struct xgbe_prv_data *pdata)
 
 	DBGPR("-->xgbe_wrapper_rx_descriptor_init\n");
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		ring = channel->rx_ring;
 		if (!ring)
 			break;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 3ad4036..b05393f 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -176,12 +176,10 @@ static unsigned int xgbe_riwt_to_usec(struct xgbe_prv_data *pdata,
 
 static int xgbe_config_pblx8(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++)
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_CR, PBLX8,
+	for (i = 0; i < pdata->channel_count; i++)
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, PBLX8,
 				       pdata->pblx8);
 
 	return 0;
@@ -189,20 +187,18 @@ static int xgbe_config_pblx8(struct xgbe_prv_data *pdata)
 
 static int xgbe_get_tx_pbl_val(struct xgbe_prv_data *pdata)
 {
-	return XGMAC_DMA_IOREAD_BITS(pdata->channel, DMA_CH_TCR, PBL);
+	return XGMAC_DMA_IOREAD_BITS(pdata->channel[0], DMA_CH_TCR, PBL);
 }
 
 static int xgbe_config_tx_pbl_val(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, PBL,
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, PBL,
 				       pdata->tx_pbl);
 	}
 
@@ -211,20 +207,18 @@ static int xgbe_config_tx_pbl_val(struct xgbe_prv_data *pdata)
 
 static int xgbe_get_rx_pbl_val(struct xgbe_prv_data *pdata)
 {
-	return XGMAC_DMA_IOREAD_BITS(pdata->channel, DMA_CH_RCR, PBL);
+	return XGMAC_DMA_IOREAD_BITS(pdata->channel[0], DMA_CH_RCR, PBL);
 }
 
 static int xgbe_config_rx_pbl_val(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, PBL,
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, PBL,
 				       pdata->rx_pbl);
 	}
 
@@ -233,15 +227,13 @@ static int xgbe_config_rx_pbl_val(struct xgbe_prv_data *pdata)
 
 static int xgbe_config_osp_mode(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, OSP,
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, OSP,
 				       pdata->tx_osp_mode);
 	}
 
@@ -292,15 +284,13 @@ static int xgbe_config_tx_threshold(struct xgbe_prv_data *pdata,
 
 static int xgbe_config_rx_coalesce(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RIWT, RWT,
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RIWT, RWT,
 				       pdata->rx_riwt);
 	}
 
@@ -314,44 +304,38 @@ static int xgbe_config_tx_coalesce(struct xgbe_prv_data *pdata)
 
 static void xgbe_config_rx_buffer_size(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, RBSZ,
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, RBSZ,
 				       pdata->rx_buf_size);
 	}
 }
 
 static void xgbe_config_tso_mode(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, TSE, 1);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, TSE, 1);
 	}
 }
 
 static void xgbe_config_sph_mode(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_CR, SPH, 1);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, SPH, 1);
 	}
 
 	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, HDSMS, XGBE_SPH_HDSMS_SIZE);
@@ -651,8 +635,9 @@ static void xgbe_enable_dma_interrupts(struct xgbe_prv_data *pdata)
 		XGMAC_IOWRITE_BITS(pdata, DMA_MR, INTM,
 				   pdata->channel_irq_mode);
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
+
 		/* Clear all the interrupts which are set */
 		dma_ch_isr = XGMAC_DMA_IOREAD(channel, DMA_CH_SR);
 		XGMAC_DMA_IOWRITE(channel, DMA_CH_SR, dma_ch_isr);
@@ -3213,16 +3198,14 @@ static void xgbe_prepare_tx_stop(struct xgbe_prv_data *pdata,
 
 static void xgbe_enable_tx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Enable each Tx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 1);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, ST, 1);
 	}
 
 	/* Enable each Tx queue */
@@ -3236,7 +3219,6 @@ static void xgbe_enable_tx(struct xgbe_prv_data *pdata)
 
 static void xgbe_disable_tx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Prepare for Tx DMA channel stop */
@@ -3251,12 +3233,11 @@ static void xgbe_disable_tx(struct xgbe_prv_data *pdata)
 		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN, 0);
 
 	/* Disable each Tx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 0);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, ST, 0);
 	}
 }
 
@@ -3288,16 +3269,14 @@ static void xgbe_prepare_rx_stop(struct xgbe_prv_data *pdata,
 
 static void xgbe_enable_rx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int reg_val, i;
 
 	/* Enable each Rx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 1);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, SR, 1);
 	}
 
 	/* Enable each Rx queue */
@@ -3315,7 +3294,6 @@ static void xgbe_enable_rx(struct xgbe_prv_data *pdata)
 
 static void xgbe_disable_rx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Disable MAC Rx */
@@ -3332,27 +3310,24 @@ static void xgbe_disable_rx(struct xgbe_prv_data *pdata)
 	XGMAC_IOWRITE(pdata, MAC_RQC0R, 0);
 
 	/* Disable each Rx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 0);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, SR, 0);
 	}
 }
 
 static void xgbe_powerup_tx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Enable each Tx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 1);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, ST, 1);
 	}
 
 	/* Enable MAC Tx */
@@ -3361,7 +3336,6 @@ static void xgbe_powerup_tx(struct xgbe_prv_data *pdata)
 
 static void xgbe_powerdown_tx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Prepare for Tx DMA channel stop */
@@ -3372,42 +3346,37 @@ static void xgbe_powerdown_tx(struct xgbe_prv_data *pdata)
 	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 0);
 
 	/* Disable each Tx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->tx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->tx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 0);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, ST, 0);
 	}
 }
 
 static void xgbe_powerup_rx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Enable each Rx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 1);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, SR, 1);
 	}
 }
 
 static void xgbe_powerdown_rx(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
 	/* Disable each Rx DMA channel */
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		if (!channel->rx_ring)
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
 			break;
 
-		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 0);
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, SR, 0);
 	}
 }
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index ff6d204..43b84ff 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -158,81 +158,100 @@
 static int xgbe_all_poll(struct napi_struct *, int);
 static void xgbe_stop(struct xgbe_prv_data *);
 
-static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
+static void *xgbe_alloc_node(size_t size, int node)
 {
-	struct xgbe_channel *channel_mem, *channel;
-	struct xgbe_ring *tx_ring, *rx_ring;
-	unsigned int count, i;
-	int ret = -ENOMEM;
+	void *mem;
 
-	count = max_t(unsigned int, pdata->tx_ring_count, pdata->rx_ring_count);
+	mem = kzalloc_node(size, GFP_KERNEL, node);
+	if (!mem)
+		mem = kzalloc(size, GFP_KERNEL);
+
+	return mem;
+}
+
+static void xgbe_free_channels(struct xgbe_prv_data *pdata)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(pdata->channel); i++) {
+		if (!pdata->channel[i])
+			continue;
+
+		kfree(pdata->channel[i]->rx_ring);
+		kfree(pdata->channel[i]->tx_ring);
+		kfree(pdata->channel[i]);
+
+		pdata->channel[i] = NULL;
+	}
 
-	channel_mem = kcalloc(count, sizeof(struct xgbe_channel), GFP_KERNEL);
-	if (!channel_mem)
-		goto err_channel;
+	pdata->channel_count = 0;
+}
+
+static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	struct xgbe_ring *ring;
+	unsigned int count, i;
+	int node;
 
-	tx_ring = kcalloc(pdata->tx_ring_count, sizeof(struct xgbe_ring),
-			  GFP_KERNEL);
-	if (!tx_ring)
-		goto err_tx_ring;
+	node = dev_to_node(pdata->dev);
 
-	rx_ring = kcalloc(pdata->rx_ring_count, sizeof(struct xgbe_ring),
-			  GFP_KERNEL);
-	if (!rx_ring)
-		goto err_rx_ring;
+	count = max_t(unsigned int, pdata->tx_ring_count, pdata->rx_ring_count);
+	for (i = 0; i < count; i++) {
+		channel = xgbe_alloc_node(sizeof(*channel), node);
+		if (!channel)
+			goto err_mem;
+		pdata->channel[i] = channel;
 
-	for (i = 0, channel = channel_mem; i < count; i++, channel++) {
 		snprintf(channel->name, sizeof(channel->name), "channel-%u", i);
 		channel->pdata = pdata;
 		channel->queue_index = i;
 		channel->dma_regs = pdata->xgmac_regs + DMA_CH_BASE +
 				    (DMA_CH_INC * i);
+		channel->node = node;
 
 		if (pdata->per_channel_irq)
 			channel->dma_irq = pdata->channel_irq[i];
 
 		if (i < pdata->tx_ring_count) {
-			spin_lock_init(&tx_ring->lock);
-			channel->tx_ring = tx_ring++;
+			ring = xgbe_alloc_node(sizeof(*ring), node);
+			if (!ring)
+				goto err_mem;
+
+			spin_lock_init(&ring->lock);
+			ring->node = node;
+
+			channel->tx_ring = ring;
 		}
 
 		if (i < pdata->rx_ring_count) {
-			spin_lock_init(&rx_ring->lock);
-			channel->rx_ring = rx_ring++;
+			ring = xgbe_alloc_node(sizeof(*ring), node);
+			if (!ring)
+				goto err_mem;
+
+			spin_lock_init(&ring->lock);
+			ring->node = node;
+
+			channel->rx_ring = ring;
 		}
 
 		netif_dbg(pdata, drv, pdata->netdev,
+			  "%s: node=%d\n", channel->name, node);
+
+		netif_dbg(pdata, drv, pdata->netdev,
 			  "%s: dma_regs=%p, dma_irq=%d, tx=%p, rx=%p\n",
 			  channel->name, channel->dma_regs, channel->dma_irq,
 			  channel->tx_ring, channel->rx_ring);
 	}
 
-	pdata->channel = channel_mem;
 	pdata->channel_count = count;
 
 	return 0;
 
-err_rx_ring:
-	kfree(tx_ring);
-
-err_tx_ring:
-	kfree(channel_mem);
-
-err_channel:
-	return ret;
-}
-
-static void xgbe_free_channels(struct xgbe_prv_data *pdata)
-{
-	if (!pdata->channel)
-		return;
-
-	kfree(pdata->channel->rx_ring);
-	kfree(pdata->channel->tx_ring);
-	kfree(pdata->channel);
+err_mem:
+	xgbe_free_channels(pdata);
 
-	pdata->channel = NULL;
-	pdata->channel_count = 0;
+	return -ENOMEM;
 }
 
 static inline unsigned int xgbe_tx_avail_desc(struct xgbe_ring *ring)
@@ -301,12 +320,10 @@ static void xgbe_enable_rx_tx_int(struct xgbe_prv_data *pdata,
 
 static void xgbe_enable_rx_tx_ints(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++)
-		xgbe_enable_rx_tx_int(pdata, channel);
+	for (i = 0; i < pdata->channel_count; i++)
+		xgbe_enable_rx_tx_int(pdata, pdata->channel[i]);
 }
 
 static void xgbe_disable_rx_tx_int(struct xgbe_prv_data *pdata,
@@ -329,12 +346,10 @@ static void xgbe_disable_rx_tx_int(struct xgbe_prv_data *pdata,
 
 static void xgbe_disable_rx_tx_ints(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
 	unsigned int i;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++)
-		xgbe_disable_rx_tx_int(pdata, channel);
+	for (i = 0; i < pdata->channel_count; i++)
+		xgbe_disable_rx_tx_int(pdata, pdata->channel[i]);
 }
 
 static bool xgbe_ecc_sec(struct xgbe_prv_data *pdata, unsigned long *period,
@@ -475,7 +490,7 @@ static void xgbe_isr_task(unsigned long data)
 		if (!(dma_isr & (1 << i)))
 			continue;
 
-		channel = pdata->channel + i;
+		channel = pdata->channel[i];
 
 		dma_ch_isr = XGMAC_DMA_IOREAD(channel, DMA_CH_SR);
 		netif_dbg(pdata, intr, pdata->netdev, "DMA_CH%u_ISR=%#010x\n",
@@ -675,8 +690,8 @@ static void xgbe_init_timers(struct xgbe_prv_data *pdata)
 	setup_timer(&pdata->service_timer, xgbe_service_timer,
 		    (unsigned long)pdata);
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		if (!channel->tx_ring)
 			break;
 
@@ -697,8 +712,8 @@ static void xgbe_stop_timers(struct xgbe_prv_data *pdata)
 
 	del_timer_sync(&pdata->service_timer);
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		if (!channel->tx_ring)
 			break;
 
@@ -816,8 +831,8 @@ static void xgbe_napi_enable(struct xgbe_prv_data *pdata, unsigned int add)
 	unsigned int i;
 
 	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++) {
+		for (i = 0; i < pdata->channel_count; i++) {
+			channel = pdata->channel[i];
 			if (add)
 				netif_napi_add(pdata->netdev, &channel->napi,
 					       xgbe_one_poll, NAPI_POLL_WEIGHT);
@@ -839,8 +854,8 @@ static void xgbe_napi_disable(struct xgbe_prv_data *pdata, unsigned int del)
 	unsigned int i;
 
 	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++) {
+		for (i = 0; i < pdata->channel_count; i++) {
+			channel = pdata->channel[i];
 			napi_disable(&channel->napi);
 
 			if (del)
@@ -886,8 +901,8 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
 	if (!pdata->per_channel_irq)
 		return 0;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		snprintf(channel->dma_irq_name,
 			 sizeof(channel->dma_irq_name) - 1,
 			 "%s-TxRx-%u", netdev_name(netdev),
@@ -907,8 +922,11 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
 
 err_dma_irq:
 	/* Using an unsigned int, 'i' will go to UINT_MAX and exit */
-	for (i--, channel--; i < pdata->channel_count; i--, channel--)
+	for (i--; i < pdata->channel_count; i--) {
+		channel = pdata->channel[i];
+
 		devm_free_irq(pdata->dev, channel->dma_irq, channel);
+	}
 
 	if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq))
 		devm_free_irq(pdata->dev, pdata->ecc_irq, pdata);
@@ -932,9 +950,10 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
 	if (!pdata->per_channel_irq)
 		return;
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++)
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		devm_free_irq(pdata->dev, channel->dma_irq, channel);
+	}
 }
 
 void xgbe_init_tx_coalesce(struct xgbe_prv_data *pdata)
@@ -969,16 +988,14 @@ void xgbe_init_rx_coalesce(struct xgbe_prv_data *pdata)
 static void xgbe_free_tx_data(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_desc_if *desc_if = &pdata->desc_if;
-	struct xgbe_channel *channel;
 	struct xgbe_ring *ring;
 	struct xgbe_ring_data *rdata;
 	unsigned int i, j;
 
 	DBGPR("-->xgbe_free_tx_data\n");
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		ring = channel->tx_ring;
+	for (i = 0; i < pdata->channel_count; i++) {
+		ring = pdata->channel[i]->tx_ring;
 		if (!ring)
 			break;
 
@@ -994,16 +1011,14 @@ static void xgbe_free_tx_data(struct xgbe_prv_data *pdata)
 static void xgbe_free_rx_data(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_desc_if *desc_if = &pdata->desc_if;
-	struct xgbe_channel *channel;
 	struct xgbe_ring *ring;
 	struct xgbe_ring_data *rdata;
 	unsigned int i, j;
 
 	DBGPR("-->xgbe_free_rx_data\n");
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
-		ring = channel->rx_ring;
+	for (i = 0; i < pdata->channel_count; i++) {
+		ring = pdata->channel[i]->rx_ring;
 		if (!ring)
 			break;
 
@@ -1179,8 +1194,8 @@ static void xgbe_stop(struct xgbe_prv_data *pdata)
 
 	hw_if->exit(pdata);
 
-	channel = pdata->channel;
-	for (i = 0; i < pdata->channel_count; i++, channel++) {
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
 		if (!channel->tx_ring)
 			continue;
 
@@ -1667,7 +1682,7 @@ static int xgbe_xmit(struct sk_buff *skb, struct net_device *netdev)
 
 	DBGPR("-->xgbe_xmit: skb->len = %d\n", skb->len);
 
-	channel = pdata->channel + skb->queue_mapping;
+	channel = pdata->channel[skb->queue_mapping];
 	txq = netdev_get_tx_queue(netdev, channel->queue_index);
 	ring = channel->tx_ring;
 	packet = &ring->packet_data;
@@ -1877,9 +1892,10 @@ static void xgbe_poll_controller(struct net_device *netdev)
 	DBGPR("-->xgbe_poll_controller\n");
 
 	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++)
+		for (i = 0; i < pdata->channel_count; i++) {
+			channel = pdata->channel[i];
 			xgbe_dma_isr(channel->dma_irq, channel);
+		}
 	} else {
 		disable_irq(pdata->dev_irq);
 		xgbe_isr(pdata->dev_irq, pdata);
@@ -2372,8 +2388,9 @@ static int xgbe_all_poll(struct napi_struct *napi, int budget)
 	do {
 		last_processed = processed;
 
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++) {
+		for (i = 0; i < pdata->channel_count; i++) {
+			channel = pdata->channel[i];
+
 			/* Cleanup Tx ring first */
 			xgbe_tx_poll(channel);
 
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 2834961..ac3b558 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -412,6 +412,7 @@ struct xgbe_ring {
 	/* Page allocation for RX buffers */
 	struct xgbe_page_alloc rx_hdr_pa;
 	struct xgbe_page_alloc rx_buf_pa;
+	int node;
 
 	/* Ring index values
 	 *  cur   - Tx: index of descriptor to be used for current transfer
@@ -462,6 +463,8 @@ struct xgbe_channel {
 
 	struct xgbe_ring *tx_ring;
 	struct xgbe_ring *rx_ring;
+
+	int node;
 } ____cacheline_aligned;
 
 enum xgbe_state {
@@ -1012,7 +1015,7 @@ struct xgbe_prv_data {
 	struct timer_list service_timer;
 
 	/* Rings for Tx/Rx on a DMA channel */
-	struct xgbe_channel *channel;
+	struct xgbe_channel *channel[XGBE_MAX_DMA_CHANNELS];
 	unsigned int tx_max_channel_count;
 	unsigned int rx_max_channel_count;
 	unsigned int channel_count;

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v1 11/14] amd-xgbe: Add NUMA affinity support for IRQ hints
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 Tom Lendacky
                   ` (9 preceding siblings ...)
  2017-06-28 18:42 ` [PATCH net-next v1 10/14] amd-xgbe: Add NUMA affinity support for memory allocations Tom Lendacky
@ 2017-06-28 18:43 ` Tom Lendacky
  2017-06-28 18:43 ` [PATCH net-next v1 12/14] amd-xgbe: Prepare for more fine grained cache coherency controls Tom Lendacky
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:43 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

For IRQ affinity, set the affinity hints for the IRQs so that they are
initially placed on the processors corresponding to the NUMA node of the
device.
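
As a condensed sketch of the idea (not the full driver change, which is in
the diff below), the per-channel CPU and IRQ affinity selection amounts to
roughly the following, assuming a channel structure that carries a dma_irq
and an affinity_mask as added by this patch:

    /* Sketch: pick a CPU local to the device's NUMA node for channel i,
     * remember it in the channel's affinity mask, and hint the IRQ there.
     */
    unsigned int cpu = cpumask_local_spread(i, dev_to_node(pdata->dev));
    int node = cpu_to_node(cpu);

    cpumask_set_cpu(cpu, &channel->affinity_mask);
    channel->node = node;

    /* after devm_request_irq() succeeds for channel->dma_irq ... */
    irq_set_affinity_hint(channel->dma_irq, &channel->affinity_mask);

    /* ... and cleared again before the IRQ is freed */
    irq_set_affinity_hint(channel->dma_irq, NULL);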

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c |   18 +++++++++++++++---
 drivers/net/ethernet/amd/xgbe/xgbe.h     |    2 ++
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index 43b84ff..ecef3ee 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -192,12 +192,17 @@ static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
 	struct xgbe_channel *channel;
 	struct xgbe_ring *ring;
 	unsigned int count, i;
+	unsigned int cpu;
 	int node;
 
-	node = dev_to_node(pdata->dev);
-
 	count = max_t(unsigned int, pdata->tx_ring_count, pdata->rx_ring_count);
 	for (i = 0; i < count; i++) {
+		/* Attempt to use a CPU on the node the device is on */
+		cpu = cpumask_local_spread(i, dev_to_node(pdata->dev));
+
+		/* Set the allocation node based on the returned CPU */
+		node = cpu_to_node(cpu);
+
 		channel = xgbe_alloc_node(sizeof(*channel), node);
 		if (!channel)
 			goto err_mem;
@@ -209,6 +214,7 @@ static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
 		channel->dma_regs = pdata->xgmac_regs + DMA_CH_BASE +
 				    (DMA_CH_INC * i);
 		channel->node = node;
+		cpumask_set_cpu(cpu, &channel->affinity_mask);
 
 		if (pdata->per_channel_irq)
 			channel->dma_irq = pdata->channel_irq[i];
@@ -236,7 +242,7 @@ static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
 		}
 
 		netif_dbg(pdata, drv, pdata->netdev,
-			  "%s: node=%d\n", channel->name, node);
+			  "%s: cpu=%u, node=%d\n", channel->name, cpu, node);
 
 		netif_dbg(pdata, drv, pdata->netdev,
 			  "%s: dma_regs=%p, dma_irq=%d, tx=%p, rx=%p\n",
@@ -916,6 +922,9 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
 				     channel->dma_irq);
 			goto err_dma_irq;
 		}
+
+		irq_set_affinity_hint(channel->dma_irq,
+				      &channel->affinity_mask);
 	}
 
 	return 0;
@@ -925,6 +934,7 @@ static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
 	for (i--; i < pdata->channel_count; i--) {
 		channel = pdata->channel[i];
 
+		irq_set_affinity_hint(channel->dma_irq, NULL);
 		devm_free_irq(pdata->dev, channel->dma_irq, channel);
 	}
 
@@ -952,6 +962,8 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
 
 	for (i = 0; i < pdata->channel_count; i++) {
 		channel = pdata->channel[i];
+
+		irq_set_affinity_hint(channel->dma_irq, NULL);
 		devm_free_irq(pdata->dev, channel->dma_irq, channel);
 	}
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index ac3b558..7b50469 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -128,6 +128,7 @@
 #include <linux/net_tstamp.h>
 #include <net/dcbnl.h>
 #include <linux/completion.h>
+#include <linux/cpumask.h>
 
 #define XGBE_DRV_NAME		"amd-xgbe"
 #define XGBE_DRV_VERSION	"1.0.3"
@@ -465,6 +466,7 @@ struct xgbe_channel {
 	struct xgbe_ring *rx_ring;
 
 	int node;
+	cpumask_t affinity_mask;
 } ____cacheline_aligned;
 
 enum xgbe_state {

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v1 12/14] amd-xgbe: Prepare for more fine grained cache coherency controls
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 Tom Lendacky
                   ` (10 preceding siblings ...)
  2017-06-28 18:43 ` [PATCH net-next v1 11/14] amd-xgbe: Add NUMA affinity support for IRQ hints Tom Lendacky
@ 2017-06-28 18:43 ` Tom Lendacky
  2017-06-28 18:43 ` [PATCH net-next v1 13/14] amd-xgbe: Simplify the burst length settings Tom Lendacky
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:43 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

In preparation for setting fine-grained read and write DMA cache coherency
controls, allow specific values to be used to program the cache coherency
registers.
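
For reference, the new whole-register defaults pack exactly the per-field
values that were previously assembled at run time; a quick sanity check
against the old bit layout (illustrative only) looks like this:

    /* Old outer-sharable defines: AXDOMAIN = 0x2, ARCACHE = 0xb, AWCACHE = 0xf.
     *
     *   ARCR = DRC | DRD << 4 | TEC << 8  | TED << 12 | THC << 16 | THD << 20
     *        = 0xb | 0x2 << 4 | 0xb << 8  | 0x2 << 12 | 0xb << 16 | 0x2 << 20
     *        = 0x002b2b2b   (XGBE_DMA_OS_ARCR)
     *
     *   AWCR = DWC | DWD << 4 | RPC << 8  | RPD << 12 |
     *          RHC << 16 | RHD << 20 | TDC << 24 | TDD << 28
     *        = 0xf | 0x2 << 4 | 0xf << 8  | 0x2 << 12 |
     *          0xf << 16 | 0x2 << 20 | 0xf << 24 | 0x2 << 28
     *        = 0x2f2f2f2f   (XGBE_DMA_OS_AWCR)
     */

The same packing gives 0x00303030/0x30303030 for the system (non-coherent)
case, i.e. domain 0x3 with no caching.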

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-common.h   |   28 -------------------------
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c      |   23 ++-------------------
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c      |    5 ++--
 drivers/net/ethernet/amd/xgbe/xgbe-platform.c |   10 ++++-----
 drivers/net/ethernet/amd/xgbe/xgbe.h          |   15 +++++--------
 5 files changed, 14 insertions(+), 67 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index e7b6804..dc09883 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -127,34 +127,6 @@
 #define DMA_DSR1			0x3024
 
 /* DMA register entry bit positions and sizes */
-#define DMA_AXIARCR_DRC_INDEX		0
-#define DMA_AXIARCR_DRC_WIDTH		4
-#define DMA_AXIARCR_DRD_INDEX		4
-#define DMA_AXIARCR_DRD_WIDTH		2
-#define DMA_AXIARCR_TEC_INDEX		8
-#define DMA_AXIARCR_TEC_WIDTH		4
-#define DMA_AXIARCR_TED_INDEX		12
-#define DMA_AXIARCR_TED_WIDTH		2
-#define DMA_AXIARCR_THC_INDEX		16
-#define DMA_AXIARCR_THC_WIDTH		4
-#define DMA_AXIARCR_THD_INDEX		20
-#define DMA_AXIARCR_THD_WIDTH		2
-#define DMA_AXIAWCR_DWC_INDEX		0
-#define DMA_AXIAWCR_DWC_WIDTH		4
-#define DMA_AXIAWCR_DWD_INDEX		4
-#define DMA_AXIAWCR_DWD_WIDTH		2
-#define DMA_AXIAWCR_RPC_INDEX		8
-#define DMA_AXIAWCR_RPC_WIDTH		4
-#define DMA_AXIAWCR_RPD_INDEX		12
-#define DMA_AXIAWCR_RPD_WIDTH		2
-#define DMA_AXIAWCR_RHC_INDEX		16
-#define DMA_AXIAWCR_RHC_WIDTH		4
-#define DMA_AXIAWCR_RHD_INDEX		20
-#define DMA_AXIAWCR_RHD_WIDTH		2
-#define DMA_AXIAWCR_TDC_INDEX		24
-#define DMA_AXIAWCR_TDC_WIDTH		4
-#define DMA_AXIAWCR_TDD_INDEX		28
-#define DMA_AXIAWCR_TDD_WIDTH		2
 #define DMA_ISR_MACIS_INDEX		17
 #define DMA_ISR_MACIS_WIDTH		1
 #define DMA_ISR_MTLIS_INDEX		16
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index b05393f..98da249 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -2146,27 +2146,8 @@ static void xgbe_config_dma_bus(struct xgbe_prv_data *pdata)
 
 static void xgbe_config_dma_cache(struct xgbe_prv_data *pdata)
 {
-	unsigned int arcache, awcache;
-
-	arcache = 0;
-	XGMAC_SET_BITS(arcache, DMA_AXIARCR, DRC, pdata->arcache);
-	XGMAC_SET_BITS(arcache, DMA_AXIARCR, DRD, pdata->axdomain);
-	XGMAC_SET_BITS(arcache, DMA_AXIARCR, TEC, pdata->arcache);
-	XGMAC_SET_BITS(arcache, DMA_AXIARCR, TED, pdata->axdomain);
-	XGMAC_SET_BITS(arcache, DMA_AXIARCR, THC, pdata->arcache);
-	XGMAC_SET_BITS(arcache, DMA_AXIARCR, THD, pdata->axdomain);
-	XGMAC_IOWRITE(pdata, DMA_AXIARCR, arcache);
-
-	awcache = 0;
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, DWC, pdata->awcache);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, DWD, pdata->axdomain);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RPC, pdata->awcache);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RPD, pdata->axdomain);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RHC, pdata->awcache);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RHD, pdata->axdomain);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, TDC, pdata->awcache);
-	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, TDD, pdata->axdomain);
-	XGMAC_IOWRITE(pdata, DMA_AXIAWCR, awcache);
+	XGMAC_IOWRITE(pdata, DMA_AXIARCR, pdata->arcr);
+	XGMAC_IOWRITE(pdata, DMA_AXIAWCR, pdata->awcr);
 }
 
 static void xgbe_config_mtl_mode(struct xgbe_prv_data *pdata)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
index f0c2e88..1e73768 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
@@ -327,9 +327,8 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	/* Set the DMA coherency values */
 	pdata->coherent = 1;
-	pdata->axdomain = XGBE_DMA_OS_AXDOMAIN;
-	pdata->arcache = XGBE_DMA_OS_ARCACHE;
-	pdata->awcache = XGBE_DMA_OS_AWCACHE;
+	pdata->arcr = XGBE_DMA_OS_ARCR;
+	pdata->awcr = XGBE_DMA_OS_AWCR;
 
 	/* Set the maximum channels and queues */
 	reg = XP_IOREAD(pdata, XP_PROP_1);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-platform.c b/drivers/net/ethernet/amd/xgbe/xgbe-platform.c
index 84d4c51..d0f3dfb 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-platform.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-platform.c
@@ -448,13 +448,11 @@ static int xgbe_platform_probe(struct platform_device *pdev)
 	}
 	pdata->coherent = (attr == DEV_DMA_COHERENT);
 	if (pdata->coherent) {
-		pdata->axdomain = XGBE_DMA_OS_AXDOMAIN;
-		pdata->arcache = XGBE_DMA_OS_ARCACHE;
-		pdata->awcache = XGBE_DMA_OS_AWCACHE;
+		pdata->arcr = XGBE_DMA_OS_ARCR;
+		pdata->awcr = XGBE_DMA_OS_AWCR;
 	} else {
-		pdata->axdomain = XGBE_DMA_SYS_AXDOMAIN;
-		pdata->arcache = XGBE_DMA_SYS_ARCACHE;
-		pdata->awcache = XGBE_DMA_SYS_AWCACHE;
+		pdata->arcr = XGBE_DMA_SYS_ARCR;
+		pdata->awcr = XGBE_DMA_SYS_AWCR;
 	}
 
 	/* Set the maximum fifo amounts */
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 7b50469..46780aa 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -164,14 +164,12 @@
 #define XGBE_DMA_STOP_TIMEOUT	1
 
 /* DMA cache settings - Outer sharable, write-back, write-allocate */
-#define XGBE_DMA_OS_AXDOMAIN	0x2
-#define XGBE_DMA_OS_ARCACHE	0xb
-#define XGBE_DMA_OS_AWCACHE	0xf
+#define XGBE_DMA_OS_ARCR	0x002b2b2b
+#define XGBE_DMA_OS_AWCR	0x2f2f2f2f
 
 /* DMA cache settings - System, no caches used */
-#define XGBE_DMA_SYS_AXDOMAIN	0x3
-#define XGBE_DMA_SYS_ARCACHE	0x0
-#define XGBE_DMA_SYS_AWCACHE	0x0
+#define XGBE_DMA_SYS_ARCR	0x00303030
+#define XGBE_DMA_SYS_AWCR	0x30303030
 
 /* DMA channel interrupt modes */
 #define XGBE_IRQ_MODE_EDGE	0
@@ -1007,9 +1005,8 @@ struct xgbe_prv_data {
 
 	/* AXI DMA settings */
 	unsigned int coherent;
-	unsigned int axdomain;
-	unsigned int arcache;
-	unsigned int awcache;
+	unsigned int arcr;
+	unsigned int awcr;
 
 	/* Service routine support */
 	struct workqueue_struct *dev_workqueue;

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v1 13/14] amd-xgbe: Simplify the burst length settings
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 Tom Lendacky
                   ` (11 preceding siblings ...)
  2017-06-28 18:43 ` [PATCH net-next v1 12/14] amd-xgbe: Prepare for more fine grained cache coherency controls Tom Lendacky
@ 2017-06-28 18:43 ` Tom Lendacky
  2017-06-28 18:43 ` [PATCH net-next v1 14/14] amd-xgbe: Adjust register settings to improve performance Tom Lendacky
  2017-06-29 19:16 ` [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 David Miller
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:43 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Currently the driver hardcodes the PBLx8 setting. Remove the need to
specify the PBLx8 setting and instead calculate it automatically from the
specified PBL value. Since the PBLx8 setting applies to both Tx and Rx,
use the same PBL value for both of them.

Also, the driver currently uses only a single bit of the AXI master burst
length (BLEN) field. Change to using the full bit field range and set the
burst length based on the specified value.
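
A rough sketch of the resulting encoding (illustrative only, assumptions
noted in the comments):

    /* Sketch: PBL values above 32 are expressed via the PBLx8 mode, so
     * the register field is programmed with pbl / 8 and the PBLX8 bit is
     * set; smaller values are programmed directly.
     */
    unsigned int pblx8 = DMA_PBL_X8_DISABLE;
    unsigned int pbl = pdata->pbl;

    if (pbl > 32) {
        pblx8 = DMA_PBL_X8_ENABLE;
        pbl >>= 3;              /* e.g. the default of 128 becomes 16 */
    }

    /* The SBMR BLEN field has one bit per supported burst length,
     * starting at 4 beats, so a beat count maps onto the field by a
     * divide-by-4: 256 >> 2 = 0x40 selects the BLEN_256 bit,
     * 64 >> 2 = 0x10 the BLEN_64 bit, and so on.
     */
    XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, BLEN, pdata->blen >> 2);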

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-common.h |   11 ++++
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c    |   67 +++++++--------------------
 drivers/net/ethernet/amd/xgbe/xgbe-main.c   |    5 +-
 drivers/net/ethernet/amd/xgbe/xgbe.h        |   12 +----
 4 files changed, 31 insertions(+), 64 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index dc09883..6b5c72d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -137,12 +137,19 @@
 #define DMA_MR_SWR_WIDTH		1
 #define DMA_SBMR_EAME_INDEX		11
 #define DMA_SBMR_EAME_WIDTH		1
-#define DMA_SBMR_BLEN_256_INDEX		7
-#define DMA_SBMR_BLEN_256_WIDTH		1
+#define DMA_SBMR_BLEN_INDEX		1
+#define DMA_SBMR_BLEN_WIDTH		7
 #define DMA_SBMR_UNDEF_INDEX		0
 #define DMA_SBMR_UNDEF_WIDTH		1
 
 /* DMA register values */
+#define DMA_SBMR_BLEN_256		256
+#define DMA_SBMR_BLEN_128		128
+#define DMA_SBMR_BLEN_64		64
+#define DMA_SBMR_BLEN_32		32
+#define DMA_SBMR_BLEN_16		16
+#define DMA_SBMR_BLEN_8			8
+#define DMA_SBMR_BLEN_4			4
 #define DMA_DSR_RPS_WIDTH		4
 #define DMA_DSR_TPS_WIDTH		4
 #define DMA_DSR_Q_WIDTH			(DMA_DSR_RPS_WIDTH + DMA_DSR_TPS_WIDTH)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 98da249..a51ece5 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -174,52 +174,30 @@ static unsigned int xgbe_riwt_to_usec(struct xgbe_prv_data *pdata,
 	return ret;
 }
 
-static int xgbe_config_pblx8(struct xgbe_prv_data *pdata)
+static int xgbe_config_pbl_val(struct xgbe_prv_data *pdata)
 {
+	unsigned int pblx8, pbl;
 	unsigned int i;
 
-	for (i = 0; i < pdata->channel_count; i++)
-		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, PBLX8,
-				       pdata->pblx8);
-
-	return 0;
-}
-
-static int xgbe_get_tx_pbl_val(struct xgbe_prv_data *pdata)
-{
-	return XGMAC_DMA_IOREAD_BITS(pdata->channel[0], DMA_CH_TCR, PBL);
-}
-
-static int xgbe_config_tx_pbl_val(struct xgbe_prv_data *pdata)
-{
-	unsigned int i;
-
-	for (i = 0; i < pdata->channel_count; i++) {
-		if (!pdata->channel[i]->tx_ring)
-			break;
+	pblx8 = DMA_PBL_X8_DISABLE;
+	pbl = pdata->pbl;
 
-		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR, PBL,
-				       pdata->tx_pbl);
+	if (pdata->pbl > 32) {
+		pblx8 = DMA_PBL_X8_ENABLE;
+		pbl >>= 3;
 	}
 
-	return 0;
-}
-
-static int xgbe_get_rx_pbl_val(struct xgbe_prv_data *pdata)
-{
-	return XGMAC_DMA_IOREAD_BITS(pdata->channel[0], DMA_CH_RCR, PBL);
-}
-
-static int xgbe_config_rx_pbl_val(struct xgbe_prv_data *pdata)
-{
-	unsigned int i;
-
 	for (i = 0; i < pdata->channel_count; i++) {
-		if (!pdata->channel[i]->rx_ring)
-			break;
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, PBLX8,
+				       pblx8);
+
+		if (pdata->channel[i]->tx_ring)
+			XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_TCR,
+					       PBL, pbl);
 
-		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR, PBL,
-				       pdata->rx_pbl);
+		if (pdata->channel[i]->rx_ring)
+			XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_RCR,
+					       PBL, pbl);
 	}
 
 	return 0;
@@ -2141,7 +2119,7 @@ static void xgbe_config_dma_bus(struct xgbe_prv_data *pdata)
 
 	/* Set the System Bus mode */
 	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, UNDEF, 1);
-	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, BLEN_256, 1);
+	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, BLEN, pdata->blen >> 2);
 }
 
 static void xgbe_config_dma_cache(struct xgbe_prv_data *pdata)
@@ -3381,9 +3359,7 @@ static int xgbe_init(struct xgbe_prv_data *pdata)
 	xgbe_config_dma_bus(pdata);
 	xgbe_config_dma_cache(pdata);
 	xgbe_config_osp_mode(pdata);
-	xgbe_config_pblx8(pdata);
-	xgbe_config_tx_pbl_val(pdata);
-	xgbe_config_rx_pbl_val(pdata);
+	xgbe_config_pbl_val(pdata);
 	xgbe_config_rx_coalesce(pdata);
 	xgbe_config_tx_coalesce(pdata);
 	xgbe_config_rx_buffer_size(pdata);
@@ -3511,13 +3487,6 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
 	/* For TX DMA Operating on Second Frame config */
 	hw_if->config_osp_mode = xgbe_config_osp_mode;
 
-	/* For RX and TX PBL config */
-	hw_if->config_rx_pbl_val = xgbe_config_rx_pbl_val;
-	hw_if->get_rx_pbl_val = xgbe_get_rx_pbl_val;
-	hw_if->config_tx_pbl_val = xgbe_config_tx_pbl_val;
-	hw_if->get_tx_pbl_val = xgbe_get_tx_pbl_val;
-	hw_if->config_pblx8 = xgbe_config_pblx8;
-
 	/* For MMC statistics support */
 	hw_if->tx_mmc_int = xgbe_tx_mmc_int;
 	hw_if->rx_mmc_int = xgbe_rx_mmc_int;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 982368b..8eec9f5 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -140,14 +140,13 @@ static void xgbe_default_config(struct xgbe_prv_data *pdata)
 {
 	DBGPR("-->xgbe_default_config\n");
 
-	pdata->pblx8 = DMA_PBL_X8_ENABLE;
+	pdata->blen = DMA_SBMR_BLEN_256;
+	pdata->pbl = DMA_PBL_128;
 	pdata->tx_sf_mode = MTL_TSF_ENABLE;
 	pdata->tx_threshold = MTL_TX_THRESHOLD_64;
-	pdata->tx_pbl = DMA_PBL_16;
 	pdata->tx_osp_mode = DMA_OSP_ENABLE;
 	pdata->rx_sf_mode = MTL_RSF_DISABLE;
 	pdata->rx_threshold = MTL_RX_THRESHOLD_64;
-	pdata->rx_pbl = DMA_PBL_16;
 	pdata->pause_autoneg = 1;
 	pdata->tx_pause = 1;
 	pdata->rx_pause = 1;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 46780aa..4bf82eb 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -737,13 +737,6 @@ struct xgbe_hw_if {
 	/* For TX DMA Operate on Second Frame config */
 	int (*config_osp_mode)(struct xgbe_prv_data *);
 
-	/* For RX and TX PBL config */
-	int (*config_rx_pbl_val)(struct xgbe_prv_data *);
-	int (*get_rx_pbl_val)(struct xgbe_prv_data *);
-	int (*config_tx_pbl_val)(struct xgbe_prv_data *);
-	int (*get_tx_pbl_val)(struct xgbe_prv_data *);
-	int (*config_pblx8)(struct xgbe_prv_data *);
-
 	/* For MMC statistics */
 	void (*rx_mmc_int)(struct xgbe_prv_data *);
 	void (*tx_mmc_int)(struct xgbe_prv_data *);
@@ -1029,19 +1022,18 @@ struct xgbe_prv_data {
 	unsigned int rx_q_count;
 
 	/* Tx/Rx common settings */
-	unsigned int pblx8;
+	unsigned int blen;
+	unsigned int pbl;
 
 	/* Tx settings */
 	unsigned int tx_sf_mode;
 	unsigned int tx_threshold;
-	unsigned int tx_pbl;
 	unsigned int tx_osp_mode;
 	unsigned int tx_max_fifo_size;
 
 	/* Rx settings */
 	unsigned int rx_sf_mode;
 	unsigned int rx_threshold;
-	unsigned int rx_pbl;
 	unsigned int rx_max_fifo_size;
 
 	/* Tx coalescing settings */

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v1 14/14] amd-xgbe: Adjust register settings to improve performance
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 Tom Lendacky
                   ` (12 preceding siblings ...)
  2017-06-28 18:43 ` [PATCH net-next v1 13/14] amd-xgbe: Simplify the burst length settings Tom Lendacky
@ 2017-06-28 18:43 ` Tom Lendacky
  2017-06-29 19:16 ` [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 David Miller
  14 siblings, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2017-06-28 18:43 UTC (permalink / raw)
  To: netdev; +Cc: David Miller

Add support for changing some general performance settings, and apply
certain performance settings based on the specific device that is probed.

This includes:

- Setting the maximum read/write outstanding request limit
- Reducing the AXI interface burst length
- Selectively setting the Tx and Rx descriptor pre-fetch threshold
- Selectively setting additional cache coherency controls

Tested and verified on all versions of the hardware.
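
As a reading aid (not part of the patch), the new defaults translate
roughly as follows; the assumption here is that the SBMR OSR_LMT fields
encode "limit minus one", which is why a limit of 8 is written as 7:

    /* Annotated defaults (sketch only):
     *   blen         = 64  ->  BLEN field = 64 >> 2 = 0x10 (64-beat bursts)
     *   aal          = 1   ->  AAL set (assumed: address-aligned beats)
     *   rd_osr_limit = 8   ->  RD_OSR_LMT = 8 - 1 = 7
     *   wr_osr_limit = 8   ->  WR_OSR_LMT = 8 - 1 = 7
     *
     * The PCI v2a/v2b devices additionally request a Tx/Rx descriptor
     * prefetch threshold of 5 via DMA_TXEDMACR/DMA_RXEDMACR and use the
     * new PCI-specific ARCR/AWCR/AWARCR cache coherency values.
     */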

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-common.h |   13 +++++++++++++
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c    |   26 +++++++++++++++++++++++---
 drivers/net/ethernet/amd/xgbe/xgbe-main.c   |    5 ++++-
 drivers/net/ethernet/amd/xgbe/xgbe-pci.c    |    9 +++++++--
 drivers/net/ethernet/amd/xgbe/xgbe.h        |   11 +++++++++++
 5 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index 6b5c72d..9795419 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -123,8 +123,11 @@
 #define DMA_ISR				0x3008
 #define DMA_AXIARCR			0x3010
 #define DMA_AXIAWCR			0x3018
+#define DMA_AXIAWARCR			0x301c
 #define DMA_DSR0			0x3020
 #define DMA_DSR1			0x3024
+#define DMA_TXEDMACR			0x3040
+#define DMA_RXEDMACR			0x3044
 
 /* DMA register entry bit positions and sizes */
 #define DMA_ISR_MACIS_INDEX		17
@@ -135,12 +138,22 @@
 #define DMA_MR_INTM_WIDTH		2
 #define DMA_MR_SWR_INDEX		0
 #define DMA_MR_SWR_WIDTH		1
+#define DMA_RXEDMACR_RDPS_INDEX		0
+#define DMA_RXEDMACR_RDPS_WIDTH		3
+#define DMA_SBMR_AAL_INDEX		12
+#define DMA_SBMR_AAL_WIDTH		1
 #define DMA_SBMR_EAME_INDEX		11
 #define DMA_SBMR_EAME_WIDTH		1
 #define DMA_SBMR_BLEN_INDEX		1
 #define DMA_SBMR_BLEN_WIDTH		7
+#define DMA_SBMR_RD_OSR_LMT_INDEX	16
+#define DMA_SBMR_RD_OSR_LMT_WIDTH	6
 #define DMA_SBMR_UNDEF_INDEX		0
 #define DMA_SBMR_UNDEF_WIDTH		1
+#define DMA_SBMR_WR_OSR_LMT_INDEX	24
+#define DMA_SBMR_WR_OSR_LMT_WIDTH	6
+#define DMA_TXEDMACR_TDPS_INDEX		0
+#define DMA_TXEDMACR_TDPS_WIDTH		3
 
 /* DMA register values */
 #define DMA_SBMR_BLEN_256		256
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index a51ece5..06f953e 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -2114,18 +2114,38 @@ static int xgbe_flush_tx_queues(struct xgbe_prv_data *pdata)
 
 static void xgbe_config_dma_bus(struct xgbe_prv_data *pdata)
 {
+	unsigned int sbmr;
+
+	sbmr = XGMAC_IOREAD(pdata, DMA_SBMR);
+
 	/* Set enhanced addressing mode */
-	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, EAME, 1);
+	XGMAC_SET_BITS(sbmr, DMA_SBMR, EAME, 1);
 
 	/* Set the System Bus mode */
-	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, UNDEF, 1);
-	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, BLEN, pdata->blen >> 2);
+	XGMAC_SET_BITS(sbmr, DMA_SBMR, UNDEF, 1);
+	XGMAC_SET_BITS(sbmr, DMA_SBMR, BLEN, pdata->blen >> 2);
+	XGMAC_SET_BITS(sbmr, DMA_SBMR, AAL, pdata->aal);
+	XGMAC_SET_BITS(sbmr, DMA_SBMR, RD_OSR_LMT, pdata->rd_osr_limit - 1);
+	XGMAC_SET_BITS(sbmr, DMA_SBMR, WR_OSR_LMT, pdata->wr_osr_limit - 1);
+
+	XGMAC_IOWRITE(pdata, DMA_SBMR, sbmr);
+
+	/* Set descriptor fetching threshold */
+	if (pdata->vdata->tx_desc_prefetch)
+		XGMAC_IOWRITE_BITS(pdata, DMA_TXEDMACR, TDPS,
+				   pdata->vdata->tx_desc_prefetch);
+
+	if (pdata->vdata->rx_desc_prefetch)
+		XGMAC_IOWRITE_BITS(pdata, DMA_RXEDMACR, RDPS,
+				   pdata->vdata->rx_desc_prefetch);
 }
 
 static void xgbe_config_dma_cache(struct xgbe_prv_data *pdata)
 {
 	XGMAC_IOWRITE(pdata, DMA_AXIARCR, pdata->arcr);
 	XGMAC_IOWRITE(pdata, DMA_AXIAWCR, pdata->awcr);
+	if (pdata->awarcr)
+		XGMAC_IOWRITE(pdata, DMA_AXIAWARCR, pdata->awarcr);
 }
 
 static void xgbe_config_mtl_mode(struct xgbe_prv_data *pdata)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index 8eec9f5..500147d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -140,8 +140,11 @@ static void xgbe_default_config(struct xgbe_prv_data *pdata)
 {
 	DBGPR("-->xgbe_default_config\n");
 
-	pdata->blen = DMA_SBMR_BLEN_256;
+	pdata->blen = DMA_SBMR_BLEN_64;
 	pdata->pbl = DMA_PBL_128;
+	pdata->aal = 1;
+	pdata->rd_osr_limit = 8;
+	pdata->wr_osr_limit = 8;
 	pdata->tx_sf_mode = MTL_TSF_ENABLE;
 	pdata->tx_threshold = MTL_TX_THRESHOLD_64;
 	pdata->tx_osp_mode = DMA_OSP_ENABLE;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
index 1e73768..1e56ad7 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
@@ -327,8 +327,9 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	/* Set the DMA coherency values */
 	pdata->coherent = 1;
-	pdata->arcr = XGBE_DMA_OS_ARCR;
-	pdata->awcr = XGBE_DMA_OS_AWCR;
+	pdata->arcr = XGBE_DMA_PCI_ARCR;
+	pdata->awcr = XGBE_DMA_PCI_AWCR;
+	pdata->awarcr = XGBE_DMA_PCI_AWARCR;
 
 	/* Set the maximum channels and queues */
 	reg = XP_IOREAD(pdata, XP_PROP_1);
@@ -447,6 +448,8 @@ static int xgbe_pci_resume(struct pci_dev *pdev)
 	.ecc_support			= 1,
 	.i2c_support			= 1,
 	.irq_reissue_support		= 1,
+	.tx_desc_prefetch		= 5,
+	.rx_desc_prefetch		= 5,
 };
 
 static const struct xgbe_version_data xgbe_v2b = {
@@ -459,6 +462,8 @@ static int xgbe_pci_resume(struct pci_dev *pdev)
 	.ecc_support			= 1,
 	.i2c_support			= 1,
 	.irq_reissue_support		= 1,
+	.tx_desc_prefetch		= 5,
+	.rx_desc_prefetch		= 5,
 };
 
 static const struct pci_device_id xgbe_pci_table[] = {
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 4bf82eb..0938294 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -171,6 +171,11 @@
 #define XGBE_DMA_SYS_ARCR	0x00303030
 #define XGBE_DMA_SYS_AWCR	0x30303030
 
+/* DMA cache settings - PCI device */
+#define XGBE_DMA_PCI_ARCR	0x00000003
+#define XGBE_DMA_PCI_AWCR	0x13131313
+#define XGBE_DMA_PCI_AWARCR	0x00000313
+
 /* DMA channel interrupt modes */
 #define XGBE_IRQ_MODE_EDGE	0
 #define XGBE_IRQ_MODE_LEVEL	1
@@ -921,6 +926,8 @@ struct xgbe_version_data {
 	unsigned int ecc_support;
 	unsigned int i2c_support;
 	unsigned int irq_reissue_support;
+	unsigned int tx_desc_prefetch;
+	unsigned int rx_desc_prefetch;
 };
 
 struct xgbe_prv_data {
@@ -1000,6 +1007,7 @@ struct xgbe_prv_data {
 	unsigned int coherent;
 	unsigned int arcr;
 	unsigned int awcr;
+	unsigned int awarcr;
 
 	/* Service routine support */
 	struct workqueue_struct *dev_workqueue;
@@ -1024,6 +1032,9 @@ struct xgbe_prv_data {
 	/* Tx/Rx common settings */
 	unsigned int blen;
 	unsigned int pbl;
+	unsigned int aal;
+	unsigned int rd_osr_limit;
+	unsigned int wr_osr_limit;
 
 	/* Tx settings */
 	unsigned int tx_sf_mode;

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28
  2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 Tom Lendacky
                   ` (13 preceding siblings ...)
  2017-06-28 18:43 ` [PATCH net-next v1 14/14] amd-xgbe: Adjust register settings to improve performance Tom Lendacky
@ 2017-06-29 19:16 ` David Miller
  14 siblings, 0 replies; 16+ messages in thread
From: David Miller @ 2017-06-29 19:16 UTC (permalink / raw)
  To: thomas.lendacky; +Cc: netdev

From: Tom Lendacky <thomas.lendacky@amd.com>
Date: Wed, 28 Jun 2017 13:41:23 -0500

> The following updates and fixes are included in this driver update series:
> 
> - Simplify mailbox interface code
> - Fix SFP supported and advertising settings
> - Fix PTP initialization register usage
> - Insure there is timestamp skb present before using it
> - Add a timeout to timestamp register updates
> - Handle return code from software reset function
> - Some fixes for handling 2.5Gbps rates
> - Limit I2C error messages
> - Fix non-DMA interrupt handling through tasklet usage
> - Add NUMA affinity support for memory allocations
> - Add NUMA affinity support for interrupts
> - Prepare for more fine-grained cache coherency controls
> - Simplify setting the DMA burst length programming
> - Performance improvements
> 
> This patch series is based on net-next.

Series applied, thanks Tom.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2017-06-29 19:16 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2017-06-28 18:41 [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 Tom Lendacky
2017-06-28 18:41 ` [PATCH net-next v1 01/14] amd-xgbe: Simplify mailbox interface rate change code Tom Lendacky
2017-06-28 18:41 ` [PATCH net-next v1 02/14] amd-xgbe: Fix SFP PHY supported/advertised settings Tom Lendacky
2017-06-28 18:41 ` [PATCH net-next v1 03/14] amd-xgbe: Use the proper register during PTP initialization Tom Lendacky
2017-06-28 18:41 ` [PATCH net-next v1 04/14] amd-xgbe: Add a check for an skb in the timestamp path Tom Lendacky
2017-06-28 18:42 ` [PATCH net-next v1 05/14] amd-xgbe: Prevent looping forever if timestamp update fails Tom Lendacky
2017-06-28 18:42 ` [PATCH net-next v1 06/14] amd-xgbe: Handle return code from software reset function Tom Lendacky
2017-06-28 18:42 ` [PATCH net-next v1 07/14] amd-xgbe: Fixes for working with PHYs that support 2.5GbE Tom Lendacky
2017-06-28 18:42 ` [PATCH net-next v1 08/14] amd-xgbe: Limit the I2C error messages that are output Tom Lendacky
2017-06-28 18:42 ` [PATCH net-next v1 09/14] amd-xgbe: Re-issue interrupt if interrupt status not cleared Tom Lendacky
2017-06-28 18:42 ` [PATCH net-next v1 10/14] amd-xgbe: Add NUMA affinity support for memory allocations Tom Lendacky
2017-06-28 18:43 ` [PATCH net-next v1 11/14] amd-xgbe: Add NUMA affinity support for IRQ hints Tom Lendacky
2017-06-28 18:43 ` [PATCH net-next v1 12/14] amd-xgbe: Prepare for more fine grained cache coherency controls Tom Lendacky
2017-06-28 18:43 ` [PATCH net-next v1 13/14] amd-xgbe: Simplify the burst length settings Tom Lendacky
2017-06-28 18:43 ` [PATCH net-next v1 14/14] amd-xgbe: Adjust register settings to improve performance Tom Lendacky
2017-06-29 19:16 ` [PATCH net-next v1 00/14] amd-xgbe: AMD XGBE driver updates 2016-06-28 David Miller
