* [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver
@ 2022-05-11  3:26 Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver Jiawen Wu
                   ` (14 more replies)
  0 siblings, 15 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

This patch series adds the Linux base driver for the WangXun 10 Gigabit
PCI Express adapters, named TXGBE.

This series implements the basic functionality of the PF driver; it will
be incrementally enhanced with more features.

Jiawen Wu (14):
  net: txgbe: Add build support for txgbe ethernet driver
  net: txgbe: Add hardware initialization
  net: txgbe: Add operations to interact with firmware
  net: txgbe: Add PHY interface support
  net: txgbe: Add interrupt support
  net: txgbe: Support to receive and transmit packets
  net: txgbe: Support flow control
  net: txgbe: Support flow director
  net: txgbe: Support PTP
  net: txgbe: Add ethtool support
  net: txgbe: Support PCIe recovery
  net: txgbe: Support power management
  net: txgbe: Support debug filesystem
  net: txgbe: Support sysfs file system

 .../device_drivers/ethernet/index.rst         |    1 +
 .../device_drivers/ethernet/wangxun/txgbe.rst |  238 +
 MAINTAINERS                                   |    7 +
 drivers/net/ethernet/Kconfig                  |    1 +
 drivers/net/ethernet/Makefile                 |    1 +
 drivers/net/ethernet/wangxun/Kconfig          |   41 +
 drivers/net/ethernet/wangxun/Makefile         |    6 +
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   15 +
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  829 ++
 .../ethernet/wangxun/txgbe/txgbe_debugfs.c    |  582 ++
 .../ethernet/wangxun/txgbe/txgbe_ethtool.c    | 3188 ++++++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c | 5785 ++++++++++++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |  238 +
 .../net/ethernet/wangxun/txgbe/txgbe_lib.c    |  531 ++
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   | 6748 +++++++++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_pcierr.c |  236 +
 .../net/ethernet/wangxun/txgbe/txgbe_pcierr.h |    8 +
 .../net/ethernet/wangxun/txgbe/txgbe_phy.c    |  439 ++
 .../net/ethernet/wangxun/txgbe/txgbe_phy.h    |   65 +
 .../net/ethernet/wangxun/txgbe/txgbe_ptp.c    |  840 ++
 .../net/ethernet/wangxun/txgbe/txgbe_sysfs.c  |  203 +
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   | 2837 +++++++
 22 files changed, 22839 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
 create mode 100644 drivers/net/ethernet/wangxun/Kconfig
 create mode 100644 drivers/net/ethernet/wangxun/Makefile
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/Makefile
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe.h
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_debugfs.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.h
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_ptp.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_sysfs.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_type.h

-- 
2.27.0





* [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-12  0:38   ` kernel test robot
  2022-05-12 15:52   ` Andrew Lunn
  2022-05-11  3:26 ` [PATCH net-next 02/14] net: txgbe: Add hardware initialization Jiawen Wu
                   ` (13 subsequent siblings)
  14 siblings, 2 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add build infrastructure and documentation for the txgbe driver.
Initialize PCI memory space for WangXun 10 Gigabit Ethernet devices.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 .../device_drivers/ethernet/index.rst         |   1 +
 .../device_drivers/ethernet/wangxun/txgbe.rst | 238 ++++++++++++
 MAINTAINERS                                   |   7 +
 drivers/net/ethernet/Kconfig                  |   1 +
 drivers/net/ethernet/Makefile                 |   1 +
 drivers/net/ethernet/wangxun/Kconfig          |  41 +++
 drivers/net/ethernet/wangxun/Makefile         |   6 +
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   9 +
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  77 ++++
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   | 341 ++++++++++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |  87 +++++
 11 files changed, 809 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
 create mode 100644 drivers/net/ethernet/wangxun/Kconfig
 create mode 100644 drivers/net/ethernet/wangxun/Makefile
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/Makefile
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe.h
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_type.h

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 6b5dc203da2b..4766ac9d260e 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -52,6 +52,7 @@ Contents:
    ti/am65_nuss_cpsw_switchdev
    ti/tlan
    toshiba/spider_net
+   wangxun/txgbe
 
 .. only::  subproject and html
 
diff --git a/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst b/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
new file mode 100644
index 000000000000..dc9b34acd6cf
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
@@ -0,0 +1,238 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================================================
+Linux Base Driver for WangXun(R) 10 Gigabit PCI Express Adapters
+================================================================
+
+WangXun 10 Gigabit Linux driver.
+Copyright (c) 2015 - 2022 Beijing WangXun Technology Co., Ltd.
+
+
+Contents
+========
+
+- Important Notes
+- Identifying Your Adapter
+- Additional Features and Configurations
+- Known Issues/Troubleshooting
+- Support
+
+
+Important Notes
+===============
+
+Disable LRO if enabling IP forwarding or bridging
+-------------------------------------------------
+WARNING: The txgbe driver supports the Large Receive Offload (LRO) feature.
+This option offers the lowest CPU utilization for receives but is completely
+incompatible with *routing/IP forwarding* and *bridging*. If IP forwarding or
+bridging is a requirement, disable LRO using compile-time options as noted in
+the LRO section later in this document. Not disabling LRO in combination with
+IP forwarding or bridging can result in low throughput or even a kernel panic.
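+
+Where the running driver exposes LRO through ethtool, it can also be checked
+and turned off at runtime (the interface name is an example)::
+
+  ethtool -k ethX | grep large-receive-offload
+  ethtool -K ethX lro off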
+
+
+Identifying Your Adapter
+========================
+The driver is compatible with WangXun Sapphire dual-port Ethernet adapters.
+
+SFP+ Devices with Pluggable Optics
+----------------------------------
+The following is a list of third-party SFP+ modules that have been tested and verified.
+
++----------+----------------------+----------------------+
+| Supplier | Type                 | Part Numbers         |
++==========+======================+======================+
+| Avago    | SFP+                 | AFBR-709SMZ          |
++----------+----------------------+----------------------+
+| F-tone   | SFP+                 | FTCS-851X-02D        |
++----------+----------------------+----------------------+
+| Finisar  | SFP+                 | FTLX8574D3BCL        |
++----------+----------------------+----------------------+
+| Hasense  | SFP+                 | AFBR-709SMZ          |
++----------+----------------------+----------------------+
+| HGTECH   | SFP+                 | MTRS-01X11-G         |
++----------+----------------------+----------------------+
+| HP       | SFP+                 | SR SFP+ 456096-001   |
++----------+----------------------+----------------------+
+| Huawei   | SFP+                 | AFBR-709SMZ          |
++----------+----------------------+----------------------+
+| Intel    | SFP+                 | FTLX8571D3BCV-IT     |
++----------+----------------------+----------------------+
+| JDSU     | SFP+                 | PLRXPL-SC-S43        |
++----------+----------------------+----------------------+
+| SONT     | SFP+                 | XP-8G10-01           |
++----------+----------------------+----------------------+
+| Trixon   | SFP+                 | TPS-TGM3-85DCR       |
++----------+----------------------+----------------------+
+| WTD      | SFP+                 | RTXM228-551          |
++----------+----------------------+----------------------+
+
+Laser turns off for SFP+ when ifconfig ethX down
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+"ifconfig ethX down" turns off the laser for SFP+ fiber adapters.
+"ifconfig ethX up" turns on the laser.
+
+
+Additional Features and Configurations
+======================================
+
+Jumbo Frames
+------------
+Jumbo Frames support is enabled by changing the Maximum Transmission Unit
+(MTU) to a value larger than the default value of 1500.
+
+Use the ifconfig command to increase the MTU size. For example, enter the
+following where <x> is the interface number::
+
+  ifconfig eth<x> mtu 9000 up
+
+NOTES:
+- The maximum MTU setting for Jumbo Frames is 9710. This corresponds to the
+  maximum Jumbo Frame size of 9728 bytes (the MTU plus the 14-byte Ethernet
+  header and 4-byte FCS).
+- This driver will attempt to use multiple page sized buffers to receive
+  each jumbo packet. This should help to avoid buffer starvation issues
+  when allocating receive packets.
+- For network connections, if you are enabling jumbo frames in a
+  virtual function (VF), jumbo frames must first be enabled in the physical
+  function (PF). The VF MTU setting cannot be larger than the PF MTU.
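+
+The iproute2 equivalent of the ifconfig command above is::
+
+  ip link set dev eth<x> mtu 9000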
+
+ethtool
+-------
+The driver utilizes the ethtool interface for driver configuration and
+diagnostics, as well as displaying statistical information. The latest
+ethtool version is required for this functionality. Download it at
+http://ftp.kernel.org/pub/software/network/ethtool/
+
+Hardware Receive Side Coalescing (HW RSC)
+-----------------------------------------
+Sapphire adapters support HW RSC, which can merge multiple
+frames from the same IPv4 TCP/IP flow into a single structure that can span
+one or more descriptors. It works similarly to Software Large Receive Offload
+technique.
+
+You can verify that the driver is using HW RSC by looking at the counter in
+ethtool:
+
+- hw_rsc_count. This counts the total number of Ethernet packets that were
+  combined.
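+
+For example, assuming the counter is exported through the ethtool statistics
+interface, it can be read with::
+
+  ethtool -S ethX | grep hw_rsc_count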
+
+MAC and VLAN anti-spoofing feature
+----------------------------------
+When a malicious driver attempts to send a spoofed packet, it is dropped by
+the hardware and not transmitted.
+
+An interrupt is sent to the PF driver notifying it of the spoof attempt.
+When a spoofed packet is detected, the PF driver will send the following
+message to the system log (displayed by the "dmesg" command)::
+
+  txgbe ethX: txgbe_spoof_check: n spoofed packets detected
+
+where "X" is the PF interface number and "n" is the number of spoofed packets.
+
+NOTE: This feature can be disabled for a specific Virtual Function (VF)::
+
+  ip link set <pf dev> vf <vf id> spoofchk {off|on}
+
+IPRoute2 Tool for setting MAC address, VLAN and rate limit
+----------------------------------------------------------
+You can set the MAC address of a Virtual Function (VF), a default VLAN, and
+the rate limit using the iproute2 tool. Download the latest version of the
+iproute2 tool from Sourceforge if your version does not have all the features
+you require.
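+
+For example, assuming an iproute2 version with VF support, the VF MAC address
+and default VLAN can be set from the PF with::
+
+  ip link set eth0 vf 0 mac 00:11:22:33:44:55
+  ip link set eth0 vf 0 vlan 100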
+
+Wake on LAN Support (WoL)
+-------------------------
+Some adapters do not support Wake on LAN. To determine if your adapter
+supports Wake on LAN, run::
+
+  ethtool ethX
+
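+If the "Supports Wake-on" field includes "g", Wake on LAN via magic packet
+can be enabled with, for example::
+
+  ethtool -s ethX wol g
+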
+IEEE 1588 Precision Time Protocol (PTP) Hardware Clock (PHC)
+------------------------------------------------------------
+Precision Time Protocol (PTP) is used to synchronize clocks in a computer
+network and is supported in the txgbe driver.
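+
+As a quick check, the hardware timestamping capabilities and PTP clock index
+can be listed with ethtool, and (if the linuxptp package is installed) a PTP
+daemon can be run against the interface::
+
+  ethtool -T ethX
+  ptp4l -i ethX -m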
+
+VXLAN Overlay HW Offloading
+---------------------------
+Virtual Extensible LAN (VXLAN) allows you to extend an L2 network over an L3
+network, which may be useful in a virtualized or cloud environment. Some WangXun(R)
+Ethernet Network devices perform VXLAN processing, offloading it from the
+operating system. This reduces CPU utilization.
+
+VXLAN offloading is controlled by the tx and rx checksum offload options
+provided by ethtool. That is, if tx checksum offload is enabled, and the adapter
+has the capability, VXLAN offloading is also enabled. If rx checksum offload is
+enabled, then the VXLAN packets' rx checksum will be offloaded, unless the module
+parameter vxlan_rx=0,0 was used to specifically disable the VXLAN rx offload.
+
+VXLAN Overlay HW Offloading is enabled by default. To view and configure VXLAN
+on a VXLAN-overlay offload enabled device, use the following command::
+
+  # ethtool -k ethX
+
+This command displays the offloads and their current state.
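+
+Since VXLAN offloading follows the checksum offloads, one way to disable it is
+to turn those off (note that this disables the checksum offloads as well)::
+
+  # ethtool -K ethX rx off tx off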
+
+Virtual Function (VF) TX Rate Limit
+-----------------------------------
+Virtual Function (VF) TX rate limit is configured with an ip command from the
+PF interface::
+
+  # ip link set eth0 vf 0 rate 1000
+
+This command sets a TX Rate Limit of 1000 Mbps for VF 0.
+Note that the limit is set per queue and not for the entire VF interface.
+
+
+Known Issues/Troubleshooting
+============================
+
+UDP Stress Test Dropped Packet Issue
+------------------------------------
+Under small packet UDP stress with the txgbe driver, the system may
+drop UDP packets due to socket buffers being full. Setting the driver Flow
+Control variables to the minimum may resolve the issue. You may also try
+increasing the kernel's default buffer sizes by changing the values::
+
+  cat /proc/sys/net/core/rmem_default
+  cat /proc/sys/net/core/rmem_max
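+
+For example, the values can be raised with sysctl (the sizes shown are
+illustrative, not recommendations)::
+
+  sysctl -w net.core.rmem_default=16777216
+  sysctl -w net.core.rmem_max=16777216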
+
+DCB: Generic segmentation offload on causes bandwidth allocation issues
+-----------------------------------------------------------------------
+In order for DCB to work correctly, Generic Segmentation Offload (GSO), also
+known as software TSO, must be disabled using ethtool. Since the hardware
+supports TSO (hardware offload of segmentation), GSO will not be running by
+default. The GSO state can be queried with ethtool using::
+
+  ethtool -k ethX
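+
+If generic-segmentation-offload reports "on", it can be disabled with::
+
+  ethtool -K ethX gso off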
+
+The txgbe driver supports a maximum of 64 queues, even on platforms with more
+than 64 cores.
+
+Due to known hardware limitations, RSS can only filter a maximum of 64
+receive queues.
+
+Disable GRO when routing/bridging
+---------------------------------
+Due to a known kernel issue, GRO must be turned off when routing/bridging. GRO
+can be turned off via ethtool::
+
+  ethtool -K ethX gro off
+
+where ethX is the ethernet interface being modified.
+
+Lower than expected performance
+-------------------------------
+Some PCIe x8 slots are actually configured as x4 slots. These slots have
+insufficient bandwidth for full line rate with dual port and quad port
+devices. In addition, if you put a PCIe Generation 3-capable adapter
+into a PCIe Generation 2 slot, you cannot get full bandwidth. The driver
+detects this situation and writes the following message in the system log:
+
+"PCI-Express bandwidth available for this card is not sufficient for optimal
+performance. For optimal performance a x8 PCI-Express slot is required."
+
+If this error occurs, moving your adapter to a true PCIe Generation 3 x8 slot
+will resolve the issue.
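+
+The negotiated link width and speed can be confirmed with lspci (the device
+address is an example)::
+
+  lspci -vv -s 01:00.0 | grep LnkSta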
+
+
+Support
+=======
+If you have any problems, contact the WangXun support team at support@trustnetic.com.
diff --git a/MAINTAINERS b/MAINTAINERS
index d91f6c6e3d3b..b5b0d5b956ef 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -21170,6 +21170,13 @@ L:	linux-input@vger.kernel.org
 S:	Maintained
 F:	drivers/input/tablet/wacom_serial4.c
 
+WANGXUN ETHERNET DRIVER
+M:	Jiawen Wu <jiawenwu@trustnetic.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
+F:	drivers/net/ethernet/wangxun/
+
 WATCHDOG DEVICE DRIVERS
 M:	Wim Van Sebroeck <wim@linux-watchdog.org>
 M:	Guenter Roeck <linux@roeck-us.net>
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index bd4cb9d7c35d..12f5e7727162 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -86,6 +86,7 @@ source "drivers/net/ethernet/i825xx/Kconfig"
 source "drivers/net/ethernet/ibm/Kconfig"
 source "drivers/net/ethernet/intel/Kconfig"
 source "drivers/net/ethernet/microsoft/Kconfig"
+source "drivers/net/ethernet/wangxun/Kconfig"
 source "drivers/net/ethernet/xscale/Kconfig"
 
 config JME
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 8ef43e0c33c0..82db3b15e421 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -96,6 +96,7 @@ obj-$(CONFIG_NET_VENDOR_TOSHIBA) += toshiba/
 obj-$(CONFIG_NET_VENDOR_TUNDRA) += tundra/
 obj-$(CONFIG_NET_VENDOR_VERTEXCOM) += vertexcom/
 obj-$(CONFIG_NET_VENDOR_VIA) += via/
+obj-$(CONFIG_NET_VENDOR_WANGXUN) += wangxun/
 obj-$(CONFIG_NET_VENDOR_WIZNET) += wiznet/
 obj-$(CONFIG_NET_VENDOR_XILINX) += xilinx/
 obj-$(CONFIG_NET_VENDOR_XIRCOM) += xircom/
diff --git a/drivers/net/ethernet/wangxun/Kconfig b/drivers/net/ethernet/wangxun/Kconfig
new file mode 100644
index 000000000000..3705c539d035
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/Kconfig
@@ -0,0 +1,41 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Wangxun network device configuration
+#
+
+config NET_VENDOR_WANGXUN
+	bool "Wangxun devices"
+	default y
+	help
+	  If you have a network (Ethernet) card belonging to this class, say Y.
+
+	  Note that the answer to this question doesn't directly affect the
+	  kernel: saying N will just cause the configurator to skip all
+	  the questions about Wangxun cards. If you say Y, you will be asked
+	  for your specific card in the following questions.
+
+if NET_VENDOR_WANGXUN
+
+config TXGBE
+	tristate "Wangxun(R) 10GbE PCI Express adapters support"
+	depends on PCI
+	depends on PTP_1588_CLOCK_OPTIONAL
+	select PHYLIB
+	help
+	  This driver supports Wangxun(R) 10GbE PCI Express family of
+	  adapters.
+
+	  More specific information on configuring the driver is in
+	  <file:Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst>.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called txgbe.
+
+config TXGBE_PCI_RECOVER
+	bool "TXGBE PCIe Recover Switch"
+	default n
+	depends on TXGBE && PCI
+	help
+	  Say Y here if you want to recover the PCIe bus when a PCIe error occurs.
+
+endif # NET_VENDOR_WANGXUN
diff --git a/drivers/net/ethernet/wangxun/Makefile b/drivers/net/ethernet/wangxun/Makefile
new file mode 100644
index 000000000000..c34db1bead25
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Wangxun network device drivers.
+#
+
+obj-$(CONFIG_TXGBE) += txgbe/
diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
new file mode 100644
index 000000000000..725aa1f721f6
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+#
+# Makefile for the Wangxun(R) 10GbE PCI Express ethernet driver
+#
+
+obj-$(CONFIG_TXGBE) += txgbe.o
+
+txgbe-objs := txgbe_main.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
new file mode 100644
index 000000000000..dd6b6c03e998
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_H_
+#define _TXGBE_H_
+
+#include "txgbe_type.h"
+
+#ifndef MAX_REQUEST_SIZE
+#define MAX_REQUEST_SIZE 256
+#endif
+
+#define TXGBE_MAX_FDIR_INDICES          63
+
+#define MAX_TX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
+
+/* board specific private data structure */
+struct txgbe_adapter {
+	/* OS defined structs */
+	struct net_device *netdev;
+	struct pci_dev *pdev;
+
+	unsigned long state;
+
+	/* structs defined in txgbe_hw.h */
+	struct txgbe_hw hw;
+	u16 msg_enable;
+	struct work_struct service_task;
+
+	u8 __iomem *io_addr;    /* Mainly for iounmap use */
+};
+
+enum txgbe_state_t {
+	__TXGBE_TESTING,
+	__TXGBE_RESETTING,
+	__TXGBE_DOWN,
+	__TXGBE_HANGING,
+	__TXGBE_DISABLED,
+	__TXGBE_REMOVING,
+	__TXGBE_SERVICE_SCHED,
+	__TXGBE_SERVICE_INITED,
+	__TXGBE_IN_SFP_INIT,
+	__TXGBE_PTP_RUNNING,
+	__TXGBE_PTP_TX_IN_PROGRESS,
+};
+
+#define TXGBE_NAME "txgbe"
+
+static inline struct device *pci_dev_to_dev(struct pci_dev *pdev)
+{
+	return &pdev->dev;
+}
+
+#define txgbe_dev_info(format, arg...) \
+	dev_info(&adapter->pdev->dev, format, ## arg)
+#define txgbe_dev_warn(format, arg...) \
+	dev_warn(&adapter->pdev->dev, format, ## arg)
+#define txgbe_dev_err(format, arg...) \
+	dev_err(&adapter->pdev->dev, format, ## arg)
+#define txgbe_dev_notice(format, arg...) \
+	dev_notice(&adapter->pdev->dev, format, ## arg)
+#define txgbe_dbg(msglvl, format, arg...) \
+	netif_dbg(adapter, msglvl, adapter->netdev, format, ## arg)
+#define txgbe_info(msglvl, format, arg...) \
+	netif_info(adapter, msglvl, adapter->netdev, format, ## arg)
+#define txgbe_err(msglvl, format, arg...) \
+	netif_err(adapter, msglvl, adapter->netdev, format, ## arg)
+#define txgbe_warn(msglvl, format, arg...) \
+	netif_warn(adapter, msglvl, adapter->netdev, format, ## arg)
+#define txgbe_crit(msglvl, format, arg...) \
+	netif_crit(adapter, msglvl, adapter->netdev, format, ## arg)
+
+#define TXGBE_FAILED_READ_CFG_DWORD 0xffffffffU
+#define TXGBE_FAILED_READ_CFG_WORD  0xffffU
+#define TXGBE_FAILED_READ_CFG_BYTE  0xffU
+
+#endif /* _TXGBE_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
new file mode 100644
index 000000000000..54d73679e7c9
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -0,0 +1,341 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/string.h>
+#include <linux/aer.h>
+#include <linux/etherdevice.h>
+
+#include "txgbe.h"
+
+char txgbe_driver_name[32] = TXGBE_NAME;
+static const char txgbe_driver_string[] =
+			"WangXun 10 Gigabit PCI Express Network Driver";
+
+static const char txgbe_copyright[] =
+	"Copyright (c) 2015 -2017 Beijing WangXun Technology Co., Ltd";
+
+/* txgbe_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ *   Class, Class Mask, private data (not used) }
+ */
+static const struct pci_device_id txgbe_pci_tbl[] = {
+	{ PCI_VDEVICE(TRUSTNETIC, TXGBE_DEV_ID_SP1000), 0},
+	{ PCI_VDEVICE(TRUSTNETIC, TXGBE_DEV_ID_WX1820), 0},
+	/* required last entry */
+	{ .device = 0 }
+};
+MODULE_DEVICE_TABLE(pci, txgbe_pci_tbl);
+
+MODULE_AUTHOR("Beijing WangXun Technology Co., Ltd, <software@trustnetic.com>");
+MODULE_DESCRIPTION("WangXun(R) 10 Gigabit PCI Express Network Driver");
+MODULE_LICENSE("GPL");
+
+#define DEFAULT_DEBUG_LEVEL_SHIFT 3
+
+static struct workqueue_struct *txgbe_wq;
+
+static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev);
+
+void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
+{
+	if (!test_bit(__TXGBE_DOWN, &adapter->state) &&
+	    !test_bit(__TXGBE_REMOVING, &adapter->state) &&
+	    !test_and_set_bit(__TXGBE_SERVICE_SCHED, &adapter->state))
+		queue_work(txgbe_wq, &adapter->service_task);
+}
+
+static void txgbe_remove_adapter(struct txgbe_hw *hw)
+{
+	struct txgbe_adapter *adapter = hw->back;
+
+	if (!hw->hw_addr)
+		return;
+	hw->hw_addr = NULL;
+	txgbe_dev_err("Adapter removed\n");
+	if (test_bit(__TXGBE_SERVICE_INITED, &adapter->state))
+		txgbe_service_event_schedule(adapter);
+}
+
+/**
+ * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter)
+ * @adapter: board private structure to initialize
+ *
+ * txgbe_sw_init initializes the Adapter private data structure.
+ * Fields are initialized based on PCI device information and
+ * OS network device settings (MTU size).
+ **/
+static int txgbe_sw_init(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct pci_dev *pdev = adapter->pdev;
+	int err = 0;
+
+	/* PCI config space info */
+	hw->vendor_id = pdev->vendor;
+	hw->device_id = pdev->device;
+	pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
+	if (hw->revision_id == TXGBE_FAILED_READ_CFG_BYTE &&
+	    txgbe_check_cfg_remove(hw, pdev)) {
+		txgbe_err(probe, "read of revision id failed\n");
+		err = -ENODEV;
+		goto out;
+	}
+	hw->subsystem_vendor_id = pdev->subsystem_vendor;
+	hw->subsystem_device_id = pdev->subsystem_device;
+
+	pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &hw->subsystem_id);
+	if (hw->subsystem_id == TXGBE_FAILED_READ_CFG_WORD) {
+		txgbe_err(probe, "read of subsystem id failed\n");
+		err = -ENODEV;
+		goto out;
+	}
+
+	set_bit(__TXGBE_DOWN, &adapter->state);
+out:
+	return err;
+}
+
+static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
+{
+	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+	struct net_device *netdev = adapter->netdev;
+
+	netif_device_detach(netdev);
+
+	if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
+		pci_disable_device(pdev);
+
+	return 0;
+}
+
+static void txgbe_shutdown(struct pci_dev *pdev)
+{
+	bool wake;
+
+	__txgbe_shutdown(pdev, &wake);
+
+	if (system_state == SYSTEM_POWER_OFF) {
+		pci_wake_from_d3(pdev, wake);
+		pci_set_power_state(pdev, PCI_D3hot);
+	}
+}
+
+/**
+ * txgbe_probe - Device Initialization Routine
+ * @pdev: PCI device information struct
+ * @ent: entry in txgbe_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ *
+ * txgbe_probe initializes an adapter identified by a pci_dev structure.
+ * The OS initialization, configuring of the adapter private structure,
+ * and a hardware reset occur.
+ **/
+static int txgbe_probe(struct pci_dev *pdev,
+				const struct pci_device_id __always_unused *ent)
+{
+	struct net_device *netdev;
+	struct txgbe_adapter *adapter = NULL;
+	struct txgbe_hw *hw = NULL;
+	int err, pci_using_dac;
+	unsigned int indices = MAX_TX_QUEUES;
+	bool disable_dev = false;
+
+	err = pci_enable_device_mem(pdev);
+	if (err)
+		return err;
+
+	if (!dma_set_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(64)) &&
+		!dma_set_coherent_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(64))) {
+		pci_using_dac = 1;
+	} else {
+		err = dma_set_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(32));
+		if (err) {
+			err = dma_set_coherent_mask(pci_dev_to_dev(pdev),
+						    DMA_BIT_MASK(32));
+			if (err) {
+				dev_err(pci_dev_to_dev(pdev),
+					"No usable DMA configuration, aborting\n");
+				goto err_dma;
+			}
+		}
+		pci_using_dac = 0;
+	}
+
+	err = pci_request_selected_regions(pdev, pci_select_bars(pdev,
+					   IORESOURCE_MEM), txgbe_driver_name);
+	if (err) {
+		dev_err(pci_dev_to_dev(pdev),
+			"pci_request_selected_regions failed 0x%x\n", err);
+		goto err_pci_reg;
+	}
+
+	pci_enable_pcie_error_reporting(pdev);
+	pci_set_master(pdev);
+	/* errata 16 */
+	if (MAX_REQUEST_SIZE == 512) {
+		pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL,
+							PCI_EXP_DEVCTL_READRQ,
+							0x2000);
+	} else {
+		pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL,
+							PCI_EXP_DEVCTL_READRQ,
+							0x1000);
+	}
+
+	netdev = alloc_etherdev_mq(sizeof(struct txgbe_adapter), indices);
+	if (!netdev) {
+		err = -ENOMEM;
+		goto err_alloc_etherdev;
+	}
+
+	SET_NETDEV_DEV(netdev, pci_dev_to_dev(pdev));
+
+	adapter = netdev_priv(netdev);
+	adapter->netdev = netdev;
+	adapter->pdev = pdev;
+	hw = &adapter->hw;
+	hw->back = adapter;
+	adapter->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+
+	hw->hw_addr = ioremap(pci_resource_start(pdev, 0),
+			      pci_resource_len(pdev, 0));
+	adapter->io_addr = hw->hw_addr;
+	if (!hw->hw_addr) {
+		err = -EIO;
+		goto err_ioremap;
+	}
+
+	strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+
+	/* setup the private structure */
+	err = txgbe_sw_init(adapter);
+	if (err)
+		goto err_sw_init;
+
+	return 0;
+
+err_sw_init:
+	iounmap(adapter->io_addr);
+err_ioremap:
+	disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state);
+	free_netdev(netdev);
+err_alloc_etherdev:
+	pci_release_selected_regions(pdev,
+				     pci_select_bars(pdev, IORESOURCE_MEM));
+err_pci_reg:
+err_dma:
+	if (!adapter || disable_dev)
+		pci_disable_device(pdev);
+	return err;
+}
+
+/**
+ * txgbe_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+ *
+ * txgbe_remove is called by the PCI subsystem to alert the driver
+ * that it should release a PCI device.  This could be caused by a
+ * Hot-Plug event, or because the driver is going to be removed from
+ * memory.
+ **/
+static void txgbe_remove(struct pci_dev *pdev)
+{
+	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+	struct net_device *netdev;
+	bool disable_dev;
+
+	/* if !adapter then we already cleaned up in probe */
+	if (!adapter)
+		return;
+
+	netdev = adapter->netdev;
+	set_bit(__TXGBE_REMOVING, &adapter->state);
+	cancel_work_sync(&adapter->service_task);
+
+	iounmap(adapter->io_addr);
+	pci_release_selected_regions(pdev,
+				     pci_select_bars(pdev, IORESOURCE_MEM));
+
+	disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state);
+	free_netdev(netdev);
+
+	pci_disable_pcie_error_reporting(pdev);
+
+	if (disable_dev)
+		pci_disable_device(pdev);
+}
+
+static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev)
+{
+	u16 value;
+
+	pci_read_config_word(pdev, PCI_VENDOR_ID, &value);
+	if (value == TXGBE_FAILED_READ_CFG_WORD) {
+		txgbe_remove_adapter(hw);
+		return true;
+	}
+	return false;
+}
+
+static struct pci_driver txgbe_driver = {
+	.name     = txgbe_driver_name,
+	.id_table = txgbe_pci_tbl,
+	.probe    = txgbe_probe,
+	.remove   = txgbe_remove,
+	.shutdown = txgbe_shutdown,
+};
+
+/**
+ * txgbe_init_module - Driver Registration Routine
+ *
+ * txgbe_init_module is the first routine called when the driver is
+ * loaded. All it does is register with the PCI subsystem.
+ **/
+static int __init txgbe_init_module(void)
+{
+	int ret;
+
+	pr_info("%s\n", txgbe_driver_string);
+	pr_info("%s\n", txgbe_copyright);
+
+	txgbe_wq = create_singlethread_workqueue(txgbe_driver_name);
+	if (!txgbe_wq) {
+		pr_err("%s: Failed to create workqueue\n", txgbe_driver_name);
+		return -ENOMEM;
+	}
+
+	ret = pci_register_driver(&txgbe_driver);
+	if (ret)
+		destroy_workqueue(txgbe_wq);
+
+	return ret;
+}
+
+module_init(txgbe_init_module);
+
+/**
+ * txgbe_exit_module - Driver Exit Cleanup Routine
+ *
+ * txgbe_exit_module is called just before the driver is removed
+ * from memory.
+ **/
+static void __exit txgbe_exit_module(void)
+{
+	pci_unregister_driver(&txgbe_driver);
+	if (txgbe_wq)
+		destroy_workqueue(txgbe_wq);
+}
+
+module_exit(txgbe_exit_module);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
new file mode 100644
index 000000000000..ba9306982317
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_TYPE_H_
+#define _TXGBE_TYPE_H_
+
+#include <linux/types.h>
+#include <linux/netdevice.h>
+
+/* Little Endian defines */
+#ifndef __le16
+#define __le16  u16
+#endif
+#ifndef __le32
+#define __le32  u32
+#endif
+#ifndef __le64
+#define __le64  u64
+#endif
+
+/* Big Endian defines */
+#ifndef __be16
+#define __be16  u16
+#define __be32  u32
+#define __be64  u64
+#endif
+
+/************ txgbe_register.h ************/
+/* Vendor ID */
+#ifndef PCI_VENDOR_ID_TRUSTNETIC
+#define PCI_VENDOR_ID_TRUSTNETIC                0x8088
+#endif
+
+/* Device IDs */
+#define TXGBE_DEV_ID_SP1000                     0x1001
+#define TXGBE_DEV_ID_WX1820                     0x2001
+
+/* Subsystem IDs */
+/* SFP */
+#define TXGBE_ID_SP1000_SFP                     0x0000
+#define TXGBE_ID_WX1820_SFP                     0x2000
+#define TXGBE_ID_SFP                            0x00
+
+/* copper */
+#define TXGBE_ID_SP1000_XAUI                    0x1010
+#define TXGBE_ID_WX1820_XAUI                    0x2010
+#define TXGBE_ID_XAUI                           0x10
+#define TXGBE_ID_SP1000_SGMII                   0x1020
+#define TXGBE_ID_WX1820_SGMII                   0x2020
+#define TXGBE_ID_SGMII                          0x20
+/* backplane */
+#define TXGBE_ID_SP1000_KR_KX_KX4               0x1030
+#define TXGBE_ID_WX1820_KR_KX_KX4               0x2030
+#define TXGBE_ID_KR_KX_KX4                      0x30
+/* MAC Interface */
+#define TXGBE_ID_SP1000_MAC_XAUI                0x1040
+#define TXGBE_ID_WX1820_MAC_XAUI                0x2040
+#define TXGBE_ID_MAC_XAUI                       0x40
+#define TXGBE_ID_SP1000_MAC_SGMII               0x1060
+#define TXGBE_ID_WX1820_MAC_SGMII               0x2060
+#define TXGBE_ID_MAC_SGMII                      0x60
+
+#define TXGBE_NCSI_SUP                          0x8000
+#define TXGBE_NCSI_MASK                         0x8000
+#define TXGBE_WOL_SUP                           0x4000
+#define TXGBE_WOL_MASK                          0x4000
+#define TXGBE_DEV_MASK                          0xf0
+
+/* Combined interface */
+#define TXGBE_ID_SFI_XAUI			0x50
+
+/* Revision ID */
+#define TXGBE_SP_MPW  1
+
+struct txgbe_hw {
+	u8 __iomem *hw_addr;
+	void *back;
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u16 subsystem_id;
+};
+
+#endif /* _TXGBE_TYPE_H_ */
-- 
2.27.0





* [PATCH net-next 02/14] net: txgbe: Add hardware initialization
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-12  5:15   ` kernel test robot
  2022-05-11  3:26 ` [PATCH net-next 03/14] net: txgbe: Add operations to interact with firmware Jiawen Wu
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Initialize the hardware by configuring the MAC layer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |    3 +-
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  153 ++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c |  789 ++++++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   36 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  422 +++-
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   | 1728 +++++++++++++++++
 6 files changed, 3129 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index 725aa1f721f6..eaf1d46693bc 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -6,4 +6,5 @@
 
 obj-$(CONFIG_TXGBE) += txgbe.o
 
-txgbe-objs := txgbe_main.o
+txgbe-objs := txgbe_main.o \
+              txgbe_hw.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index dd6b6c03e998..48e6f69857ad 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -4,16 +4,40 @@
 #ifndef _TXGBE_H_
 #define _TXGBE_H_
 
+#include <net/ip.h>
+#include <linux/pci.h>
+#include <linux/vmalloc.h>
+#include <linux/etherdevice.h>
+#include <linux/timecounter.h>
+#include <linux/clocksource.h>
+#include <linux/aer.h>
+
 #include "txgbe_type.h"
 
 #ifndef MAX_REQUEST_SIZE
 #define MAX_REQUEST_SIZE 256
 #endif
 
+struct txgbe_ring {
+	u8 reg_idx;
+} ____cacheline_internodealigned_in_smp;
+
 #define TXGBE_MAX_FDIR_INDICES          63
 
 #define MAX_TX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
 
+#define TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE       64
+
+struct txgbe_mac_addr {
+	u8 addr[ETH_ALEN];
+	u16 state; /* bitmask */
+	u64 pools;
+};
+
+#define TXGBE_MAC_STATE_DEFAULT         0x1
+#define TXGBE_MAC_STATE_MODIFIED        0x2
+#define TXGBE_MAC_STATE_IN_USE          0x4
+
 /* board specific private data structure */
 struct txgbe_adapter {
 	/* OS defined structs */
@@ -22,12 +46,37 @@ struct txgbe_adapter {
 
 	unsigned long state;
 
+	/* Some features need tri-state capability,
+	 * thus the additional *_CAPABLE flags.
+	 */
+	u32 flags2;
+	/* Tx fast path data */
+	int num_tx_queues;
+	u16 tx_itr_setting;
+
+	/* Rx fast path data */
+	int num_rx_queues;
+	u16 rx_itr_setting;
+
+	/* TX */
+	struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
+
+	int max_q_vectors;      /* upper limit of q_vectors for device */
+
 	/* structs defined in txgbe_hw.h */
 	struct txgbe_hw hw;
 	u16 msg_enable;
+
+	struct timer_list service_timer;
 	struct work_struct service_task;
+	u32 atr_sample_rate;
 
 	u8 __iomem *io_addr;    /* Mainly for iounmap use */
+
+	bool netdev_registered;
+
+	struct txgbe_mac_addr *mac_table;
+
 };
 
 enum txgbe_state_t {
@@ -44,13 +93,83 @@ enum txgbe_state_t {
 	__TXGBE_PTP_TX_IN_PROGRESS,
 };
 
+void txgbe_down(struct txgbe_adapter *adapter);
+
+/* Interrupt masking operations. Each bit in PX_ICn corresponds to an
+ * interrupt: disable an interrupt by writing to PX_IMS with the
+ * corresponding bit set to 1, enable an interrupt by writing to PX_IMC
+ * with the corresponding bit set to 1, and trigger an interrupt by
+ * writing to PX_ICS with the corresponding bit set to 1.
+ */
+#define TXGBE_INTR_ALL (~0ULL)
+#define TXGBE_INTR_MISC(A) (1ULL << (A)->num_q_vectors)
+#define TXGBE_INTR_QALL(A) (TXGBE_INTR_MISC(A) - 1)
+#define TXGBE_INTR_Q(i) (1ULL << (i))
+static inline void txgbe_intr_enable(struct txgbe_hw *hw, u64 qmask)
+{
+	u32 mask;
+
+	mask = (qmask & 0xFFFFFFFF);
+	if (mask)
+		wr32(hw, TXGBE_PX_IMC(0), mask);
+	mask = (qmask >> 32);
+	if (mask)
+		wr32(hw, TXGBE_PX_IMC(1), mask);
+
+	/* skip the flush */
+}
+
+static inline void txgbe_intr_disable(struct txgbe_hw *hw, u64 qmask)
+{
+	u32 mask;
+
+	mask = (qmask & 0xFFFFFFFF);
+	if (mask)
+		wr32(hw, TXGBE_PX_IMS(0), mask);
+	mask = (qmask >> 32);
+	if (mask)
+		wr32(hw, TXGBE_PX_IMS(1), mask);
+
+	/* skip the flush */
+}
+
+#define msec_delay(_x) msleep(_x)
+#define usec_delay(_x) udelay(_x)
+
 #define TXGBE_NAME "txgbe"
 
+struct txgbe_msg {
+	u16 msg_enable;
+};
+
+__maybe_unused static struct net_device *txgbe_hw_to_netdev(const struct txgbe_hw *hw)
+{
+	return ((struct txgbe_adapter *)hw->back)->netdev;
+}
+
+__maybe_unused static struct txgbe_msg *txgbe_hw_to_msg(const struct txgbe_hw *hw)
+{
+	struct txgbe_adapter *adapter =
+		container_of(hw, struct txgbe_adapter, hw);
+	return (struct txgbe_msg *)&adapter->msg_enable;
+}
+
 static inline struct device *pci_dev_to_dev(struct pci_dev *pdev)
 {
 	return &pdev->dev;
 }
 
+#define txgbe_hw_dbg(fmt, arg...) \
+	netdev_dbg(txgbe_hw_to_netdev(hw), fmt, ##arg)
+
+#define DEBUGOUT(S)             txgbe_hw_dbg(S)
+#define DEBUGOUT1(S, A...)      txgbe_hw_dbg(S, ## A)
+#define DEBUGOUT2(S, A...)      txgbe_hw_dbg(S, ## A)
+#define DEBUGOUT3(S, A...)      txgbe_hw_dbg(S, ## A)
+#define DEBUGOUT4(S, A...)      txgbe_hw_dbg(S, ## A)
+#define DEBUGOUT5(S, A...)      txgbe_hw_dbg(S, ## A)
+#define DEBUGOUT6(S, A...)      txgbe_hw_dbg(S, ## A)
+
 #define txgbe_dev_info(format, arg...) \
 	dev_info(&adapter->pdev->dev, format, ## arg)
 #define txgbe_dev_warn(format, arg...) \
@@ -74,4 +193,38 @@ static inline struct device *pci_dev_to_dev(struct pci_dev *pdev)
 #define TXGBE_FAILED_READ_CFG_WORD  0xffffU
 #define TXGBE_FAILED_READ_CFG_BYTE  0xffU
 
+extern u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg);
+
+enum {
+	TXGBE_ERROR_SOFTWARE,
+	TXGBE_ERROR_POLLING,
+	TXGBE_ERROR_INVALID_STATE,
+	TXGBE_ERROR_UNSUPPORTED,
+	TXGBE_ERROR_ARGUMENT,
+	TXGBE_ERROR_CAUTION,
+};
+
+#define ERROR_REPORT(level, format, arg...) do {                               \
+	switch (level) {                                                       \
+	case TXGBE_ERROR_SOFTWARE:                                             \
+	case TXGBE_ERROR_CAUTION:                                              \
+	case TXGBE_ERROR_POLLING:                                              \
+		netif_warn(txgbe_hw_to_msg(hw), drv, txgbe_hw_to_netdev(hw),   \
+			   format, ## arg);                                    \
+		break;                                                         \
+	case TXGBE_ERROR_INVALID_STATE:                                        \
+	case TXGBE_ERROR_UNSUPPORTED:                                          \
+	case TXGBE_ERROR_ARGUMENT:                                             \
+		netif_err(txgbe_hw_to_msg(hw), hw, txgbe_hw_to_netdev(hw),     \
+			  format, ## arg);                                     \
+		break;                                                         \
+	default:                                                               \
+		break;                                                         \
+	}                                                                      \
+} while (0)
+
+#define ERROR_REPORT1 ERROR_REPORT
+#define ERROR_REPORT2 ERROR_REPORT
+#define ERROR_REPORT3 ERROR_REPORT
+
 #endif /* _TXGBE_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
new file mode 100644
index 000000000000..e4ebaf007da9
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -0,0 +1,789 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe_type.h"
+#include "txgbe_hw.h"
+#include "txgbe.h"
+
+#define TXGBE_SP_MAX_TX_QUEUES  128
+#define TXGBE_SP_MAX_RX_QUEUES  128
+#define TXGBE_SP_RAR_ENTRIES    128
+
+s32 txgbe_init_hw(struct txgbe_hw *hw)
+{
+	s32 status;
+
+	/* Reset the hardware */
+	status = TCALL(hw, mac.ops.reset_hw);
+
+	if (status == 0) {
+		/* Start the HW */
+		status = TCALL(hw, mac.ops.start_hw);
+	}
+
+	return status;
+}
+
+/**
+ *  txgbe_get_mac_addr - Generic get MAC address
+ *  @hw: pointer to hardware structure
+ *  @mac_addr: Adapter MAC address
+ *
+ *  Reads the adapter's MAC address from first Receive Address Register (RAR0)
+ *  A reset of the adapter must be performed prior to calling this function
+ *  in order for the MAC address to have been loaded from the EEPROM into RAR0
+ **/
+s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr)
+{
+	u32 rar_high;
+	u32 rar_low;
+	u16 i;
+
+	wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 0);
+	rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H);
+	rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L);
+
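+	/* the SWC registers hold the address with byte 0 in the
+	 * most-significant position; reassemble into array order
+	 */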
+	for (i = 0; i < 2; i++)
+		mac_addr[i] = (u8)(rar_high >> (1 - i) * 8);
+
+	for (i = 0; i < 4; i++)
+		mac_addr[i + 2] = (u8)(rar_low >> (3 - i) * 8);
+
+	return 0;
+}
+
+/**
+ *  txgbe_set_pci_config_data - Generic store PCI bus info
+ *  @hw: pointer to hardware structure
+ *  @link_status: the link status returned by the PCI config space
+ *
+ *  Stores the PCI bus info (speed, width, type) within the txgbe_hw structure
+ **/
+void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status)
+{
+	if (hw->bus.type == txgbe_bus_type_unknown)
+		hw->bus.type = txgbe_bus_type_pci_express;
+
+	switch (link_status & TXGBE_PCI_LINK_WIDTH) {
+	case TXGBE_PCI_LINK_WIDTH_1:
+		hw->bus.width = txgbe_bus_width_pcie_x1;
+		break;
+	case TXGBE_PCI_LINK_WIDTH_2:
+		hw->bus.width = txgbe_bus_width_pcie_x2;
+		break;
+	case TXGBE_PCI_LINK_WIDTH_4:
+		hw->bus.width = txgbe_bus_width_pcie_x4;
+		break;
+	case TXGBE_PCI_LINK_WIDTH_8:
+		hw->bus.width = txgbe_bus_width_pcie_x8;
+		break;
+	default:
+		hw->bus.width = txgbe_bus_width_unknown;
+		break;
+	}
+
+	switch (link_status & TXGBE_PCI_LINK_SPEED) {
+	case TXGBE_PCI_LINK_SPEED_2500:
+		hw->bus.speed = txgbe_bus_speed_2500;
+		break;
+	case TXGBE_PCI_LINK_SPEED_5000:
+		hw->bus.speed = txgbe_bus_speed_5000;
+		break;
+	case TXGBE_PCI_LINK_SPEED_8000:
+		hw->bus.speed = txgbe_bus_speed_8000;
+		break;
+	default:
+		hw->bus.speed = txgbe_bus_speed_unknown;
+		break;
+	}
+}
+
+/**
+ *  txgbe_get_bus_info - Generic set PCI bus info
+ *  @hw: pointer to hardware structure
+ *
+ *  Gets the PCI bus info (speed, width, type) then calls helper function to
+ *  store this data within the txgbe_hw structure.
+ **/
+s32 txgbe_get_bus_info(struct txgbe_hw *hw)
+{
+	u16 link_status;
+
+	/* Get the negotiated link width and speed from PCI config space */
+	link_status = txgbe_read_pci_cfg_word(hw, TXGBE_PCI_LINK_STATUS);
+
+	txgbe_set_pci_config_data(hw, link_status);
+
+	return 0;
+}
+
+/**
+ *  txgbe_set_lan_id_multi_port_pcie - Set LAN id for PCIe multiple port devices
+ *  @hw: pointer to the HW structure
+ *
+ *  Determines the LAN function id by reading memory-mapped registers
+ *  and swaps the port value if requested.
+ **/
+void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw)
+{
+	struct txgbe_bus_info *bus = &hw->bus;
+	u32 reg;
+
+	reg = rd32(hw, TXGBE_CFG_PORT_ST);
+	bus->lan_id = TXGBE_CFG_PORT_ST_LAN_ID(reg);
+
+	/* check for a port swap */
+	reg = rd32(hw, TXGBE_MIS_PWR);
+	if (TXGBE_MIS_PWR_LAN_ID(reg) == TXGBE_MIS_PWR_LAN_ID_1)
+		bus->func = 0;
+	else
+		bus->func = bus->lan_id;
+}
+
+/**
+ *  txgbe_stop_adapter - Generic stop Tx/Rx units
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets the adapter_stopped flag within txgbe_hw struct. Clears interrupts,
+ *  disables transmit and receive units. The adapter_stopped flag is used by
+ *  the shared code and drivers to determine if the adapter is in a stopped
+ *  state and should not touch the hardware.
+ **/
+s32 txgbe_stop_adapter(struct txgbe_hw *hw)
+{
+	u16 i;
+
+	/* Set the adapter_stopped flag so other driver functions stop touching
+	 * the hardware
+	 */
+	hw->adapter_stopped = true;
+
+	/* Disable the receive unit */
+	TCALL(hw, mac.ops.disable_rx);
+
+	/* Set interrupt mask to stop interrupts from being generated */
+	txgbe_intr_disable(hw, TXGBE_INTR_ALL);
+
+	/* Clear any pending interrupts, flush previous writes */
+	wr32(hw, TXGBE_PX_MISC_IC, 0xffffffff);
+	wr32(hw, TXGBE_BME_CTL, 0x3);
+
+	/* Disable the transmit unit.  Each queue must be disabled. */
+	for (i = 0; i < hw->mac.max_tx_queues; i++) {
+		wr32m(hw, TXGBE_PX_TR_CFG(i),
+		      TXGBE_PX_TR_CFG_SWFLSH | TXGBE_PX_TR_CFG_ENABLE,
+		      TXGBE_PX_TR_CFG_SWFLSH);
+	}
+
+	/* Disable the receive unit by stopping each queue */
+	for (i = 0; i < hw->mac.max_rx_queues; i++) {
+		wr32m(hw, TXGBE_PX_RR_CFG(i),
+		      TXGBE_PX_RR_CFG_RR_EN, 0);
+	}
+
+	/* flush all queues disables */
+	TXGBE_WRITE_FLUSH(hw);
+
+	/* Prevent the PCI-E bus from hanging by disabling PCI-E master
+	 * access and verify no pending requests
+	 */
+	return txgbe_disable_pcie_master(hw);
+}
+
+/**
+ *  txgbe_set_rar - Set Rx address register
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *  @addr: Address to put into receive address register
+ *  @pools: VMDq "set" or "pool" bitmap
+ *  @enable_addr: set flag that address is active
+ *
+ *  Puts an ethernet address into a receive address register.
+ **/
+s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
+		  u32 enable_addr)
+{
+	u32 rar_low, rar_high;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries) {
+		ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+			      "RAR index %d is out of range.\n", index);
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	/* select the MAC address */
+	wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index);
+
+	/* setup VMDq pool mapping */
+	wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF);
+	wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, pools >> 32);
+
+	/* HW expects these in little endian so we reverse the byte
+	 * order from network order (big endian) to little endian
+	 *
+	 * Some parts put the VMDq setting in the extra RAH bits,
+	 * so save everything except the lower 16 bits that hold part
+	 * of the address and the address valid bit.
+	 */
+	rar_low = ((u32)addr[5] |
+		  ((u32)addr[4] << 8) |
+		  ((u32)addr[3] << 16) |
+		  ((u32)addr[2] << 24));
+	rar_high = ((u32)addr[1] |
+		   ((u32)addr[0] << 8));
+	if (enable_addr != 0)
+		rar_high |= TXGBE_PSR_MAC_SWC_AD_H_AV;
+
+	wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, rar_low);
+	wr32m(hw, TXGBE_PSR_MAC_SWC_AD_H,
+	      (TXGBE_PSR_MAC_SWC_AD_H_AD(~0) |
+	       TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(~0) |
+	       TXGBE_PSR_MAC_SWC_AD_H_AV),
+	      rar_high);
+
+	return 0;
+}
+
+/**
+ *  txgbe_clear_rar - Remove Rx address register
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *
+ *  Clears an ethernet address from a receive address register.
+ **/
+s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index)
+{
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries) {
+		ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+			      "RAR index %d is out of range.\n", index);
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	/* Some parts put the VMDq setting in the extra RAH bits,
+	 * so save everything except the lower 16 bits that hold part
+	 * of the address and the address valid bit.
+	 */
+	wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index);
+
+	wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 0);
+	wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 0);
+
+	wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0);
+	wr32m(hw, TXGBE_PSR_MAC_SWC_AD_H,
+	      (TXGBE_PSR_MAC_SWC_AD_H_AD(~0) |
+	       TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(~0) |
+	       TXGBE_PSR_MAC_SWC_AD_H_AV),
+	      0);
+
+	return 0;
+}
+
+/**
+ *  txgbe_init_rx_addrs - Initializes receive address filters.
+ *  @hw: pointer to hardware structure
+ *
+ *  Places the MAC address in receive address register 0 and clears the rest
+ *  of the receive address registers. Clears the multicast table. Assumes
+ *  the receiver is in reset when the routine is called.
+ **/
+s32 txgbe_init_rx_addrs(struct txgbe_hw *hw)
+{
+	u32 i;
+	u32 rar_entries = hw->mac.num_rar_entries;
+	u32 psrctl;
+
+	/* If the current mac address is valid, assume it is a software override
+	 * to the permanent address.
+	 * Otherwise, use the permanent address from the eeprom.
+	 */
+	if (!is_valid_ether_addr(hw->mac.addr)) {
+		/* Get the MAC address from the RAR0 for later reference */
+		TCALL(hw, mac.ops.get_mac_addr, hw->mac.addr);
+
+		DEBUGOUT3("Keeping Current RAR0 Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n",
+			  hw->mac.addr[0], hw->mac.addr[1],
+			  hw->mac.addr[2], hw->mac.addr[3],
+			  hw->mac.addr[4], hw->mac.addr[5]);
+	} else {
+		/* Setup the receive address. */
+		DEBUGOUT("Overriding MAC Address in RAR[0]\n");
+		DEBUGOUT3("New MAC Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n",
+			  hw->mac.addr[0], hw->mac.addr[1],
+			  hw->mac.addr[2], hw->mac.addr[3],
+			  hw->mac.addr[4], hw->mac.addr[5]);
+
+		TCALL(hw, mac.ops.set_rar, 0, hw->mac.addr, 0,
+		      TXGBE_PSR_MAC_SWC_AD_H_AV);
+
+		/* clear VMDq pool/queue selection for RAR 0 */
+		TCALL(hw, mac.ops.clear_vmdq, 0, TXGBE_CLEAR_VMDQ_ALL);
+	}
+	hw->addr_ctrl.overflow_promisc = 0;
+
+	hw->addr_ctrl.rar_used_count = 1;
+
+	/* Zero out the other receive addresses. */
+	DEBUGOUT1("Clearing RAR[1-%d]\n", rar_entries - 1);
+	for (i = 1; i < rar_entries; i++) {
+		wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i);
+		wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0);
+		wr32(hw, TXGBE_PSR_MAC_SWC_AD_H, 0);
+	}
+
+	/* Clear the MTA */
+	hw->addr_ctrl.mta_in_use = 0;
+	psrctl = rd32(hw, TXGBE_PSR_CTL);
+	psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE);
+	psrctl |= hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT;
+	wr32(hw, TXGBE_PSR_CTL, psrctl);
+	DEBUGOUT(" Clearing MTA\n");
+	for (i = 0; i < hw->mac.mcft_size; i++)
+		wr32(hw, TXGBE_PSR_MC_TBL(i), 0);
+
+	TCALL(hw, mac.ops.init_uta_tables);
+
+	return 0;
+}
+
+/**
+ *  txgbe_disable_pcie_master - Disable PCI-express master access
+ *  @hw: pointer to hardware structure
+ *
+ *  Disables PCI-Express master access and verifies there are no pending
+ *  requests. TXGBE_ERR_MASTER_REQUESTS_PENDING is returned if master disable
+ *  bit hasn't caused the master requests to be disabled, else 0
+ *  is returned signifying master requests disabled.
+ **/
+s32 txgbe_disable_pcie_master(struct txgbe_hw *hw)
+{
+	s32 status = 0;
+	u32 i;
+
+	/* Always set this bit to ensure any future transactions are blocked */
+	pci_clear_master(((struct txgbe_adapter *)hw->back)->pdev);
+
+	/* Exit if master requests are blocked */
+	if (!(rd32(hw, TXGBE_PX_TRANSACTION_PENDING)) ||
+	    TXGBE_REMOVED(hw->hw_addr))
+		goto out;
+
+	/* Poll for master request bit to clear */
+	for (i = 0; i < TXGBE_PCI_MASTER_DISABLE_TIMEOUT; i++) {
+		usec_delay(100);
+		if (!(rd32(hw, TXGBE_PX_TRANSACTION_PENDING)))
+			goto out;
+	}
+
+	ERROR_REPORT1(TXGBE_ERROR_POLLING,
+		      "PCIe transaction pending bit did not clear.\n");
+	status = TXGBE_ERR_MASTER_REQUESTS_PENDING;
+out:
+	return status;
+}
+
+/**
+ *  txgbe_get_san_mac_addr - SAN MAC address retrieval from the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @san_mac_addr: SAN MAC address
+ *
+ *  Reads the SAN MAC address. No addresses are available in this EEPROM,
+ *  so the buffer is simply wiped.
+ **/
+s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr)
+{
+	u8 i;
+
+	/* No addresses available in this EEPROM.  It's not an
+	 * error though, so just wipe the local address and return.
+	 */
+	for (i = 0; i < 6; i++)
+		san_mac_addr[i] = 0xFF;
+	return 0;
+}
+
+/**
+ *  txgbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
+ *  @hw: pointer to hardware struct
+ *  @rar: receive address register index to disassociate
+ *  @vmdq: VMDq pool index to remove from the rar
+ **/
+s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 __maybe_unused vmdq)
+{
+	u32 mpsar_lo, mpsar_hi;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	/* Make sure we are using a valid rar index range */
+	if (rar >= rar_entries) {
+		ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+			      "RAR index %d is out of range.\n", rar);
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar);
+	mpsar_lo = rd32(hw, TXGBE_PSR_MAC_SWC_VM_L);
+	mpsar_hi = rd32(hw, TXGBE_PSR_MAC_SWC_VM_H);
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		goto done;
+
+	if (!mpsar_lo && !mpsar_hi)
+		goto done;
+
+	/* was that the last pool using this rar? */
+	if (mpsar_lo == 0 && mpsar_hi == 0 && rar != 0)
+		TCALL(hw, mac.ops.clear_rar, rar);
+done:
+	return 0;
+}
+
+/**
+ *  txgbe_set_vmdq_san_mac - Associate default VMDq pool index with a rx address
+ *  @hw: pointer to hardware struct
+ *  @vmdq: VMDq pool index
+ *
+ *  This function should only be involved in IOV mode. In IOV mode, the
+ *  default pool is the next pool after the number of VFs advertised, not
+ *  pool 0. The MPSAR table needs to be updated for the SAN_MAC RAR
+ *  [hw->mac.san_mac_rar_index].
+ **/
+s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq)
+{
+	u32 rar = hw->mac.san_mac_rar_index;
+
+	wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar);
+	if (vmdq < 32) {
+		wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 1 << vmdq);
+		wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 0);
+	} else {
+		wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 0);
+		wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 1 << (vmdq - 32));
+	}
+
+	return 0;
+}
+
+/**
+ *  txgbe_init_uta_tables - Initialize the Unicast Table Array
+ *  @hw: pointer to hardware structure
+ **/
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw)
+{
+	int i;
+
+	DEBUGOUT(" Clearing UTA\n");
+
+	for (i = 0; i < 128; i++)
+		wr32(hw, TXGBE_PSR_UC_TBL(i), 0);
+
+	return 0;
+}
+
+/**
+ *  txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
+ *  @hw: pointer to hardware structure
+ *
+ *  Inits the thermal sensor thresholds according to the NVM map and saves
+ *  the threshold and location values into mac.thermal_sensor_data.
+ **/
+s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw)
+{
+	s32 status = 0;
+
+	struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+	memset(data, 0, sizeof(struct txgbe_thermal_sensor_data));
+
+	/* Only support thermal sensors attached to SP physical port 0 */
+	if (hw->bus.lan_id)
+		return TXGBE_NOT_IMPLEMENTED;
+
+	wr32(hw, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD);
+	wr32(hw, TXGBE_TS_INT_EN,
+	     TXGBE_TS_INT_EN_ALARM_INT_EN | TXGBE_TS_INT_EN_DALARM_INT_EN);
+	wr32(hw, TXGBE_TS_EN, TXGBE_TS_EN_ENA);
+
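+	/* The raw register values below are assumed to be the sensor codes
+	 * corresponding to the 100 and 90 degree thresholds recorded above.
+	 */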
+	data->sensor.alarm_thresh = 100;
+	wr32(hw, TXGBE_TS_ALARM_THRE, 677);
+	data->sensor.dalarm_thresh = 90;
+	wr32(hw, TXGBE_TS_DALARM_THRE, 614);
+
+	return status;
+}
+
+void txgbe_disable_rx(struct txgbe_hw *hw)
+{
+	u32 pfdtxgswc;
+	u32 rxctrl;
+
+	rxctrl = rd32(hw, TXGBE_RDB_PB_CTL);
+	if (rxctrl & TXGBE_RDB_PB_CTL_RXEN) {
+		pfdtxgswc = rd32(hw, TXGBE_PSR_CTL);
+		if (pfdtxgswc & TXGBE_PSR_CTL_SW_EN) {
+			pfdtxgswc &= ~TXGBE_PSR_CTL_SW_EN;
+			wr32(hw, TXGBE_PSR_CTL, pfdtxgswc);
+			hw->mac.set_lben = true;
+		} else {
+			hw->mac.set_lben = false;
+		}
+		rxctrl &= ~TXGBE_RDB_PB_CTL_RXEN;
+		wr32(hw, TXGBE_RDB_PB_CTL, rxctrl);
+
+		if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+		      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+			/* disable mac receiver */
+			wr32m(hw, TXGBE_MAC_RX_CFG,
+			      TXGBE_MAC_RX_CFG_RE, 0);
+		}
+	}
+}
+
+int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
+{
+	u32 i = 0;
+	u32 reg = 0;
+	int err = 0;
+
+	/* proceed only if the flash is present (not bypassed) */
+	if (!(rd32(hw, TXGBE_SPI_STATUS) &
+	      TXGBE_SPI_STATUS_FLASH_BYPASS)) {
+		/* wait for the hardware to finish loading from flash;
+		 * total wait is up to 10 * 200 ms
+		 */
+		for (i = 0; i < TXGBE_MAX_FLASH_LOAD_POLL_TIME; i++) {
+			reg = rd32(hw, TXGBE_SPI_ILDR_STATUS);
+			if (!(reg & check_bit)) {
+				/* done */
+				break;
+			}
+			msleep(200);
+		}
+		if (i == TXGBE_MAX_FLASH_LOAD_POLL_TIME)
+			err = TXGBE_ERR_FLASH_LOADING_FAILED;
+	}
+	return err;
+}
+
+/**
+ *  txgbe_init_ops - Inits func ptrs and MAC type
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize the function pointers and assign the MAC type for sapphire.
+ *  Does not touch the hardware.
+ **/
+s32 txgbe_init_ops(struct txgbe_hw *hw)
+{
+	struct txgbe_mac_info *mac = &hw->mac;
+
+	/* MAC */
+	mac->ops.init_hw = txgbe_init_hw;
+	mac->ops.get_mac_addr = txgbe_get_mac_addr;
+	mac->ops.stop_adapter = txgbe_stop_adapter;
+	mac->ops.get_bus_info = txgbe_get_bus_info;
+	mac->ops.set_lan_id = txgbe_set_lan_id_multi_port_pcie;
+	mac->ops.reset_hw = txgbe_reset_hw;
+	mac->ops.start_hw = txgbe_start_hw;
+	mac->ops.get_san_mac_addr = txgbe_get_san_mac_addr;
+
+	/* RAR */
+	mac->ops.set_rar = txgbe_set_rar;
+	mac->ops.clear_rar = txgbe_clear_rar;
+	mac->ops.init_rx_addrs = txgbe_init_rx_addrs;
+	mac->ops.init_uta_tables = txgbe_init_uta_tables;
+
+	mac->num_rar_entries    = TXGBE_SP_RAR_ENTRIES;
+	mac->max_rx_queues      = TXGBE_SP_MAX_RX_QUEUES;
+	mac->max_tx_queues      = TXGBE_SP_MAX_TX_QUEUES;
+
+	/* Manageability interface */
+	mac->ops.init_thermal_sensor_thresh =
+				      txgbe_init_thermal_sensor_thresh;
+
+	return 0;
+}
+
+int txgbe_reset_misc(struct txgbe_hw *hw)
+{
+	int i;
+
+	/* allow receiving packets whose size is larger than 2048 bytes */
+	wr32m(hw, TXGBE_MAC_RX_CFG,
+	      TXGBE_MAC_RX_CFG_JE, TXGBE_MAC_RX_CFG_JE);
+
+	/* clear counters on read */
+	wr32m(hw, TXGBE_MMC_CONTROL,
+	      TXGBE_MMC_CONTROL_RSTONRD, TXGBE_MMC_CONTROL_RSTONRD);
+
+	wr32m(hw, TXGBE_MAC_RX_FLOW_CTRL,
+	      TXGBE_MAC_RX_FLOW_CTRL_RFE, TXGBE_MAC_RX_FLOW_CTRL_RFE);
+
+	wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
+
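+	/* program the reset-init delay field; txgbe_reset_hw() reads this
+	 * field back as rst_delay when timing a reset
+	 */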
+	wr32m(hw, TXGBE_MIS_RST_ST,
+	      TXGBE_MIS_RST_ST_RST_INIT, 0x1E00);
+
+	/* errata 4: initialize mng flex tbl and wakeup flex tbl */
+	wr32(hw, TXGBE_PSR_MNG_FLEX_SEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, TXGBE_PSR_MNG_FLEX_DW_L(i), 0);
+		wr32(hw, TXGBE_PSR_MNG_FLEX_DW_H(i), 0);
+		wr32(hw, TXGBE_PSR_MNG_FLEX_MSK(i), 0);
+	}
+	wr32(hw, TXGBE_PSR_LAN_FLEX_SEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, TXGBE_PSR_LAN_FLEX_DW_L(i), 0);
+		wr32(hw, TXGBE_PSR_LAN_FLEX_DW_H(i), 0);
+		wr32(hw, TXGBE_PSR_LAN_FLEX_MSK(i), 0);
+	}
+
+	/* set pause frame dst mac addr to 01:80:C2:00:00:01, the IEEE
+	 * 802.3x pause multicast address
+	 */
+	wr32(hw, TXGBE_RDB_PFCMACDAL, 0xC2000001);
+	wr32(hw, TXGBE_RDB_PFCMACDAH, 0x0180);
+
+	txgbe_init_thermal_sensor_thresh(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_reset_hw - Perform hardware reset
+ *  @hw: pointer to hardware structure
+ *
+ *  Resets the hardware by resetting the transmit and receive units, masking
+ *  and clearing all interrupts, performing a PHY reset, and performing a
+ *  link (MAC) reset.
+ **/
+s32 txgbe_reset_hw(struct txgbe_hw *hw)
+{
+	s32 status;
+	u32 reset = 0;
+	u32 i;
+
+	u32 reset_status = 0;
+	u32 rst_delay = 0;
+
+	/* Call adapter stop to disable tx/rx and clear interrupts */
+	status = TCALL(hw, mac.ops.stop_adapter);
+	if (status != 0)
+		goto reset_hw_out;
+
+	/* Issue global reset to the MAC.  Needs to be SW reset if link is up.
+	 * If link reset is used when link is up, it might reset the PHY when
+	 * mng is using it.  If link is down or the flag to force full link
+	 * reset is set, then perform link reset.
+	 */
+	if (hw->force_full_reset) {
+		rst_delay = (rd32(hw, TXGBE_MIS_RST_ST) &
+			     TXGBE_MIS_RST_ST_RST_INIT) >>
+			     TXGBE_MIS_RST_ST_RST_INI_SHIFT;
+		if (hw->reset_type == TXGBE_SW_RESET) {
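+			/* poll up to (rst_delay + 20) * 100 ms for the
+			 * device-reset status field to clear
+			 */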
+			for (i = 0; i < rst_delay + 20; i++) {
+				reset_status =
+					rd32(hw, TXGBE_MIS_RST_ST);
+				if (!(reset_status &
+				    TXGBE_MIS_RST_ST_DEV_RST_ST_MASK))
+					break;
+				msleep(100);
+			}
+
+			if (reset_status & TXGBE_MIS_RST_ST_DEV_RST_ST_MASK) {
+				status = TXGBE_ERR_RESET_FAILED;
+				DEBUGOUT("Global reset polling failed to complete.\n");
+				goto reset_hw_out;
+			}
+			status = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_SW_RESET);
+			if (status != 0)
+				goto reset_hw_out;
+		} else if (hw->reset_type == TXGBE_GLOBAL_RESET) {
+			struct txgbe_adapter *adapter =
+					(struct txgbe_adapter *)hw->back;
+			msleep(100 * rst_delay + 2000);
+			pci_restore_state(adapter->pdev);
+			pci_save_state(adapter->pdev);
+			pci_wake_from_d3(adapter->pdev, false);
+		}
+	} else {
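+		/* LAN0 and LAN1 have independent reset bits, so reset only
+		 * the LAN owned by this function
+		 */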
+		if (hw->bus.lan_id == 0)
+			reset = TXGBE_MIS_RST_LAN0_RST;
+		else
+			reset = TXGBE_MIS_RST_LAN1_RST;
+
+		wr32(hw, TXGBE_MIS_RST,
+		     reset | rd32(hw, TXGBE_MIS_RST));
+		TXGBE_WRITE_FLUSH(hw);
+		usec_delay(10);
+
+		if (hw->bus.lan_id == 0)
+			status = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_LAN0_SW_RST);
+		else
+			status = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_LAN1_SW_RST);
+
+		if (status != 0)
+			goto reset_hw_out;
+	}
+
+	status = txgbe_reset_misc(hw);
+	if (status != 0)
+		goto reset_hw_out;
+
+	/* Store the permanent mac address */
+	TCALL(hw, mac.ops.get_mac_addr, hw->mac.perm_addr);
+
+	/* Store MAC address from RAR0, clear receive address registers, and
+	 * clear the multicast table.  Also reset num_rar_entries to 128,
+	 * since we modify this value when programming the SAN MAC address.
+	 */
+	hw->mac.num_rar_entries = 128;
+	TCALL(hw, mac.ops.init_rx_addrs);
+
+	/* Store the permanent SAN mac address */
+	TCALL(hw, mac.ops.get_san_mac_addr, hw->mac.san_addr);
+
+	/* Add the SAN MAC address to the RAR only if it's a valid address */
+	if (is_valid_ether_addr(hw->mac.san_addr)) {
+		TCALL(hw, mac.ops.set_rar, hw->mac.num_rar_entries - 1,
+		      hw->mac.san_addr, 0, TXGBE_PSR_MAC_SWC_AD_H_AV);
+
+		/* Save the SAN MAC RAR index */
+		hw->mac.san_mac_rar_index = hw->mac.num_rar_entries - 1;
+
+		/* Reserve the last RAR for the SAN MAC address */
+		hw->mac.num_rar_entries--;
+	}
+
+	pci_set_master(((struct txgbe_adapter *)hw->back)->pdev);
+
+reset_hw_out:
+	return status;
+}
+
+/**
+ *  txgbe_start_hw - Prepare hardware for Tx/Rx
+ *  @hw: pointer to hardware structure
+ *
+ *  Prepares the hardware for Tx/Rx by clearing the per-queue rate
+ *  limiters and the adapter-stopped flag, then flags link autotry to
+ *  restart after the driver loads.
+ **/
+s32 txgbe_start_hw(struct txgbe_hw *hw)
+{
+	int ret_val = 0;
+	u32 i;
+
+	/* Clear the rate limiters */
+	for (i = 0; i < hw->mac.max_tx_queues; i++) {
+		wr32(hw, TXGBE_TDM_RP_IDX, i);
+		wr32(hw, TXGBE_TDM_RP_RATE, 0);
+	}
+	TXGBE_WRITE_FLUSH(hw);
+
+	/* Clear adapter stopped flag */
+	hw->adapter_stopped = false;
+
+	/* We need to run link autotry after the driver loads */
+	hw->mac.autotry_restart = true;
+
+	return ret_val;
+}
+
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
new file mode 100644
index 000000000000..318c7f0dc5b9
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_HW_H_
+#define _TXGBE_HW_H_
+
+s32 txgbe_init_hw(struct txgbe_hw *hw);
+s32 txgbe_start_hw(struct txgbe_hw *hw);
+s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr);
+s32 txgbe_get_bus_info(struct txgbe_hw *hw);
+void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status);
+void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw);
+s32 txgbe_stop_adapter(struct txgbe_hw *hw);
+
+s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
+		  u32 enable_addr);
+s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index);
+s32 txgbe_init_rx_addrs(struct txgbe_hw *hw);
+
+s32 txgbe_disable_pcie_master(struct txgbe_hw *hw);
+
+s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
+
+s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq);
+s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq);
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw);
+
+s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw);
+void txgbe_disable_rx(struct txgbe_hw *hw);
+int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit);
+
+int txgbe_reset_misc(struct txgbe_hw *hw);
+s32 txgbe_reset_hw(struct txgbe_hw *hw);
+s32 txgbe_init_ops(struct txgbe_hw *hw);
+
+#endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 54d73679e7c9..a1441af5b46b 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -5,11 +5,14 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/netdevice.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
 #include <linux/string.h>
 #include <linux/aer.h>
 #include <linux/etherdevice.h>
 
 #include "txgbe.h"
+#include "txgbe_hw.h"
 
 char txgbe_driver_name[32] = TXGBE_NAME;
 static const char txgbe_driver_string[] =
@@ -44,6 +47,54 @@ static struct workqueue_struct *txgbe_wq;
 
 static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev);
 
+static void txgbe_check_minimum_link(struct txgbe_adapter *adapter,
+				     int expected_gts)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct pci_dev *pdev;
+
+	/* Some devices are not connected over PCIe and thus do not negotiate
+	 * speed. These devices do not have valid bus info, and thus any report
+	 * we generate may not be correct.
+	 */
+	if (hw->bus.type == txgbe_bus_type_internal)
+		return;
+
+	/* pcie_print_link_status() already warns when the device is limited
+	 * by its slot, so expected_gts itself is not evaluated here.
+	 */
+	pdev = adapter->pdev;
+	pcie_print_link_status(pdev);
+}
+
+/**
+ * txgbe_enumerate_functions - Get the number of ports this device has
+ * @adapter: adapter structure
+ *
+ * This function enumerates the physical functions co-located on a single slot,
+ * in order to determine how many ports a device has. This is most useful in
+ * determining the required GT/s of PCIe bandwidth necessary for optimal
+ * performance.
+ **/
+static int txgbe_enumerate_functions(struct txgbe_adapter *adapter)
+{
+	struct pci_dev *entry, *pdev = adapter->pdev;
+	int physfns = 0;
+
+	list_for_each_entry(entry, &pdev->bus->devices, bus_list) {
+		/* When the devices on the bus don't all match our device ID,
+		 * we can't reliably determine the correct number of
+		 * functions. This can occur if a function has been direct
+		 * attached to a virtual machine using VT-d, for example. In
+		 * this case, simply return -1 to indicate this.
+		 */
+		if (entry->vendor != pdev->vendor ||
+		    entry->device != pdev->device)
+			return -1;
+
+		physfns++;
+	}
+
+	return physfns;
+}
+
 void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
 {
 	if (!test_bit(__TXGBE_DOWN, &adapter->state) &&
@@ -52,6 +103,15 @@ void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
 		queue_work(txgbe_wq, &adapter->service_task);
 }
 
+static void txgbe_service_event_complete(struct txgbe_adapter *adapter)
+{
+	BUG_ON(!test_bit(__TXGBE_SERVICE_SCHED, &adapter->state));
+
+	/* flush memory to make sure state is correct before next watchdog */
+	smp_mb__before_atomic();
+	clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
+}
+
 static void txgbe_remove_adapter(struct txgbe_hw *hw)
 {
 	struct txgbe_adapter *adapter = hw->back;
@@ -64,6 +124,151 @@ static void txgbe_remove_adapter(struct txgbe_hw *hw)
 		txgbe_service_event_schedule(adapter);
 }
 
+static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i;
+
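+	/* Push only entries flagged MODIFIED out to hardware: IN_USE entries
+	 * are (re)programmed into their RAR, all others are cleared, and the
+	 * MODIFIED flag is dropped afterwards.
+	 */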
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) {
+			if (adapter->mac_table[i].state &
+					TXGBE_MAC_STATE_IN_USE) {
+				TCALL(hw, mac.ops.set_rar, i,
+				      adapter->mac_table[i].addr,
+				      adapter->mac_table[i].pools,
+				      TXGBE_PSR_MAC_SWC_AD_H_AV);
+			} else {
+				TCALL(hw, mac.ops.clear_rar, i);
+			}
+			adapter->mac_table[i].state &=
+				~(TXGBE_MAC_STATE_MODIFIED);
+		}
+	}
+}
+
+/* this function overwrites the first RAR entry with the default MAC filter */
+static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter,
+					 u8 *addr)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
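+	/* pool bit 0: the default filter is owned by pool 0 */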
+	adapter->mac_table[0].pools = 1ULL;
+	adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT |
+				       TXGBE_MAC_STATE_IN_USE);
+	TCALL(hw, mac.ops.set_rar, 0, adapter->mac_table[0].addr,
+	      adapter->mac_table[0].pools,
+	      TXGBE_PSR_MAC_SWC_AD_H_AV);
+}
+
+static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
+{
+	u32 i;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
+		adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
+		memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
+		adapter->mac_table[i].pools = 0;
+	}
+	txgbe_sync_mac_table(adapter);
+}
+
+void txgbe_reset(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct net_device *netdev = adapter->netdev;
+	int err;
+	u8 old_addr[ETH_ALEN];
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		return;
+
+	err = TCALL(hw, mac.ops.init_hw);
+	switch (err) {
+	case 0:
+		break;
+	case TXGBE_ERR_MASTER_REQUESTS_PENDING:
+		txgbe_dev_err("master disable timed out\n");
+		break;
+	default:
+		txgbe_dev_err("Hardware Error: %d\n", err);
+	}
+
+	/* do not flush user set addresses */
+	memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
+	txgbe_flush_sw_mac_table(adapter);
+	txgbe_mac_set_default_filter(adapter, old_addr);
+
+	/* update SAN MAC vmdq pool selection */
+	TCALL(hw, mac.ops.set_vmdq_san_mac, 0);
+}
+
+void txgbe_disable_device(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i;
+
+	/* signal that we are down to the interrupt handler */
+	if (test_and_set_bit(__TXGBE_DOWN, &adapter->state))
+		return; /* do nothing if already down */
+
+	txgbe_disable_pcie_master(hw);
+	/* disable receives */
+	TCALL(hw, mac.ops.disable_rx);
+
+	/* call carrier off first to avoid false dev_watchdog timeouts */
+	netif_carrier_off(netdev);
+	netif_tx_disable(netdev);
+
+	del_timer_sync(&adapter->service_timer);
+
+	if (hw->bus.lan_id == 0)
+		wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN0_UP, 0);
+	else if (hw->bus.lan_id == 1)
+		wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN1_UP, 0);
+	else
+		txgbe_dev_err("%s: invalid bus lan id %d\n", __func__,
+			      hw->bus.lan_id);
+
+	if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+	      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+		/* disable mac transmitter */
+		wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, 0);
+	}
+	/* disable transmits in the hardware now that interrupts are off */
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		u8 reg_idx = adapter->tx_ring[i]->reg_idx;
+
+		wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH);
+	}
+
+	/* Disable the Tx DMA engine */
+	wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
+}
+
+void txgbe_down(struct txgbe_adapter *adapter)
+{
+	txgbe_disable_device(adapter);
+	txgbe_reset(adapter);
+}
+
+/**
+ *  txgbe_init_shared_code - Initialize the shared code
+ *  @hw: pointer to hardware structure
+ *
+ *  This will assign the function pointers, the MAC type and the PHY code.
+ **/
+s32 txgbe_init_shared_code(struct txgbe_hw *hw)
+{
+	return txgbe_init_ops(hw);
+}
+
 /**
  * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter)
  * @adapter: board private structure to initialize
@@ -98,6 +303,28 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 		goto out;
 	}
 
+	err = txgbe_init_shared_code(hw);
+	if (err) {
+		txgbe_err(probe, "init_shared_code failed: %d\n", err);
+		goto out;
+	}
+	adapter->mac_table = kcalloc(hw->mac.num_rar_entries,
+				     sizeof(*adapter->mac_table),
+				     GFP_ATOMIC);
+	if (!adapter->mac_table) {
+		err = TXGBE_ERR_OUT_OF_MEM;
+		txgbe_err(probe, "mac_table allocation failed: %d\n", err);
+		goto out;
+	}
+
+	/* enable itr by default in dynamic mode */
+	adapter->rx_itr_setting = 1;
+	adapter->tx_itr_setting = 1;
+
+	adapter->atr_sample_rate = 20;
+
+	adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE;
+
 	set_bit(__TXGBE_DOWN, &adapter->state);
 out:
 	return err;
@@ -128,6 +355,91 @@ static void txgbe_shutdown(struct pci_dev *pdev)
 	}
 }
 
+/**
+ * txgbe_service_timer - Timer callback
+ * @t: pointer to the timer_list from which the adapter is recovered
+ **/
+static void txgbe_service_timer(struct timer_list *t)
+{
+	struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer);
+	unsigned long next_event_offset;
+
+	next_event_offset = HZ * 2;
+
+	/* Reset the timer */
+	mod_timer(&adapter->service_timer, next_event_offset + jiffies);
+
+	txgbe_service_event_schedule(adapter);
+}
+
+/**
+ * txgbe_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void txgbe_service_task(struct work_struct *work)
+{
+	struct txgbe_adapter *adapter = container_of(work,
+						     struct txgbe_adapter,
+						     service_task);
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+			rtnl_lock();
+			txgbe_down(adapter);
+			rtnl_unlock();
+		}
+		txgbe_service_event_complete(adapter);
+		return;
+	}
+
+	txgbe_service_event_complete(adapter);
+}
+
+/**
+ * txgbe_add_sanmac_netdev - Add the SAN MAC address to the corresponding
+ * netdev->dev_addr_list
+ * @dev: network interface device structure
+ *
+ * Returns non-zero on failure
+ **/
+static int txgbe_add_sanmac_netdev(struct net_device *dev)
+{
+	int err = 0;
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (is_valid_ether_addr(hw->mac.san_addr)) {
+		rtnl_lock();
+		err = dev_addr_add(dev, hw->mac.san_addr,
+				   NETDEV_HW_ADDR_T_SAN);
+		rtnl_unlock();
+
+		/* update SAN MAC vmdq pool selection */
+		TCALL(hw, mac.ops.set_vmdq_san_mac, 0);
+	}
+	return err;
+}
+
+/**
+ * txgbe_del_sanmac_netdev - Remove the SAN MAC address from the corresponding
+ * netdev->dev_addr_list
+ * @dev: network interface device structure
+ *
+ * Returns non-zero on failure
+ **/
+static int txgbe_del_sanmac_netdev(struct net_device *dev)
+{
+	int err = 0;
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	struct txgbe_mac_info *mac = &adapter->hw.mac;
+
+	if (is_valid_ether_addr(mac->san_addr)) {
+		rtnl_lock();
+		err = dev_addr_del(dev, mac->san_addr, NETDEV_HW_ADDR_T_SAN);
+		rtnl_unlock();
+	}
+	return err;
+}
+
 /**
  * txgbe_probe - Device Initialization Routine
  * @pdev: PCI device information struct
@@ -145,7 +457,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 	struct net_device *netdev;
 	struct txgbe_adapter *adapter = NULL;
 	struct txgbe_hw *hw = NULL;
-	int err, pci_using_dac;
+	int err, pci_using_dac, expected_gts;
 	unsigned int indices = MAX_TX_QUEUES;
 	bool disable_dev = false;
 
@@ -222,6 +534,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 		goto err_ioremap;
 	}
 
+	netdev->watchdog_timeo = 5 * HZ;
 	strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
 
 	/* setup the private structure */
@@ -229,7 +542,91 @@ static int txgbe_probe(struct pci_dev *pdev,
 	if (err)
 		goto err_sw_init;
 
+	TCALL(hw, mac.ops.set_lan_id);
+
+	/* check if flash load is done after hw power up */
+	err = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_PERST);
+	if (err)
+		goto err_sw_init;
+	err = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_PWRRST);
+	if (err)
+		goto err_sw_init;
+
+	err = TCALL(hw, mac.ops.reset_hw);
+	if (err) {
+		txgbe_dev_err("HW Init failed: %d\n", err);
+		goto err_sw_init;
+	}
+
+	eth_hw_addr_set(netdev, hw->mac.perm_addr);
+
+	if (!is_valid_ether_addr(netdev->dev_addr)) {
+		txgbe_dev_err("invalid MAC address\n");
+		err = -EIO;
+		goto err_sw_init;
+	}
+
+	txgbe_mac_set_default_filter(adapter, hw->mac.perm_addr);
+
+	timer_setup(&adapter->service_timer, txgbe_service_timer, 0);
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		err = -EIO;
+		goto err_sw_init;
+	}
+	INIT_WORK(&adapter->service_task, txgbe_service_task);
+	set_bit(__TXGBE_SERVICE_INITED, &adapter->state);
+	clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
+
+	/* reset the hardware with the new settings */
+	err = TCALL(hw, mac.ops.start_hw);
+	if (err) {
+		txgbe_dev_err("HW init failed\n");
+		goto err_register;
+	}
+
+	/* pick up the PCI bus settings for reporting later */
+	TCALL(hw, mac.ops.get_bus_info);
+
+	strcpy(netdev->name, "eth%d");
+	err = register_netdev(netdev);
+	if (err)
+		goto err_register;
+
+	pci_set_drvdata(pdev, adapter);
+	adapter->netdev_registered = true;
+
+	/* carrier off reporting is important to ethtool even BEFORE open */
+	netif_carrier_off(netdev);
+
+	/* calculate the expected PCIe bandwidth required for optimal
+	 * performance. Note that some older parts will never have enough
+	 * bandwidth due to being older generation PCIe parts. We clamp these
+	 * parts to ensure that no warning is displayed, as this could confuse
+	 * users otherwise.
+	 */
+	expected_gts = txgbe_enumerate_functions(adapter) * 10;
+
+	/* don't check link if we failed to enumerate functions */
+	if (expected_gts > 0)
+		txgbe_check_minimum_link(adapter, expected_gts);
+
+	if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)
+		txgbe_info(probe, "NCSI: supported");
+	else
+		txgbe_info(probe, "NCSI: unsupported");
+
+	txgbe_dev_info("%02x:%02x:%02x:%02x:%02x:%02x\n",
+		       netdev->dev_addr[0], netdev->dev_addr[1],
+		       netdev->dev_addr[2], netdev->dev_addr[3],
+		       netdev->dev_addr[4], netdev->dev_addr[5]);
+
+	/* add san mac addr to netdev */
+	txgbe_add_sanmac_netdev(netdev);
+
+	return 0;
+
+err_register:
 err_sw_init:
+	kfree(adapter->mac_table);
 	iounmap(adapter->io_addr);
 err_ioremap:
 	disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state);
@@ -267,10 +664,19 @@ static void txgbe_remove(struct pci_dev *pdev)
 	set_bit(__TXGBE_REMOVING, &adapter->state);
 	cancel_work_sync(&adapter->service_task);
 
+	/* remove the added san mac */
+	txgbe_del_sanmac_netdev(netdev);
+
+	if (adapter->netdev_registered) {
+		unregister_netdev(netdev);
+		adapter->netdev_registered = false;
+	}
+
 	iounmap(adapter->io_addr);
 	pci_release_selected_regions(pdev,
 				     pci_select_bars(pdev, IORESOURCE_MEM));
 
+	kfree(adapter->mac_table);
 	disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state);
 	free_netdev(netdev);
 
@@ -292,6 +698,20 @@ static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev)
 	return false;
 }
 
+u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg)
+{
+	struct txgbe_adapter *adapter = hw->back;
+	u16 value;
+
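+	/* A read returning the failed-read pattern may indicate surprise
+	 * removal, so txgbe_check_cfg_remove() confirms it before the value
+	 * is trusted.
+	 */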
+	if (TXGBE_REMOVED(hw->hw_addr))
+		return TXGBE_FAILED_READ_CFG_WORD;
+	pci_read_config_word(adapter->pdev, reg, &value);
+	if (value == TXGBE_FAILED_READ_CFG_WORD &&
+	    txgbe_check_cfg_remove(hw, adapter->pdev))
+		return TXGBE_FAILED_READ_CFG_WORD;
+	return value;
+}
+
 static struct pci_driver txgbe_driver = {
 	.name     = txgbe_driver_name,
 	.id_table = txgbe_pci_tbl,
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index ba9306982317..60b1c3a2ac50 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -73,15 +73,1743 @@
 /* Revision ID */
 #define TXGBE_SP_MPW  1
 
+/**************** Global Registers ****************************/
+/* chip control Registers */
+#define TXGBE_MIS_RST                   0x1000C
+#define TXGBE_MIS_PWR                   0x10000
+#define TXGBE_MIS_CTL                   0x10004
+#define TXGBE_MIS_PF_SM                 0x10008
+#define TXGBE_MIS_PRB_CTL               0x10010
+#define TXGBE_MIS_ST                    0x10028
+#define TXGBE_MIS_SWSM                  0x1002C
+#define TXGBE_MIS_RST_ST                0x10030
+
+#define TXGBE_MIS_RST_SW_RST            0x00000001U
+#define TXGBE_MIS_RST_LAN0_RST          0x00000002U
+#define TXGBE_MIS_RST_LAN1_RST          0x00000004U
+#define TXGBE_MIS_RST_LAN0_CHG_ETH_MODE 0x20000000U
+#define TXGBE_MIS_RST_LAN1_CHG_ETH_MODE 0x40000000U
+#define TXGBE_MIS_RST_GLOBAL_RST        0x80000000U
+#define TXGBE_MIS_RST_MASK      (TXGBE_MIS_RST_SW_RST | \
+				 TXGBE_MIS_RST_LAN0_RST | \
+				 TXGBE_MIS_RST_LAN1_RST)
+#define TXGBE_MIS_PWR_LAN_ID(_r)        ((0xC0000000U & (_r)) >> 30)
+#define TXGBE_MIS_PWR_LAN_ID_0          (1)
+#define TXGBE_MIS_PWR_LAN_ID_1          (2)
+#define TXGBE_MIS_PWR_LAN_ID_A          (3)
+#define TXGBE_MIS_ST_MNG_INIT_DN        0x00000001U
+#define TXGBE_MIS_ST_MNG_VETO           0x00000100U
+#define TXGBE_MIS_ST_LAN0_ECC           0x00010000U
+#define TXGBE_MIS_ST_LAN1_ECC           0x00020000U
+#define TXGBE_MIS_ST_MNG_ECC            0x00040000U
+#define TXGBE_MIS_ST_PCORE_ECC          0x00080000U
+#define TXGBE_MIS_ST_PCIWRP_ECC         0x00100000U
+#define TXGBE_MIS_SWSM_SMBI             1
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_DONE        0x00000000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_REQ         0x00080000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_INPROGRESS  0x00100000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_MASK        0x00180000U
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK      0x00070000U
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT     16
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST    0x3
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST 0x5
+#define TXGBE_MIS_RST_ST_RST_INIT       0x0000FF00U
+#define TXGBE_MIS_RST_ST_RST_INI_SHIFT  8
+#define TXGBE_MIS_RST_ST_RST_TIM        0x000000FFU
+#define TXGBE_MIS_PF_SM_SM              1
+#define TXGBE_MIS_PRB_CTL_LAN0_UP       0x2
+#define TXGBE_MIS_PRB_CTL_LAN1_UP       0x1
+
+/* Sensors for PVT (Process, Voltage, Temperature) */
+#define TXGBE_TS_CTL                    0x10300
+#define TXGBE_TS_EN                     0x10304
+#define TXGBE_TS_ST                     0x10308
+#define TXGBE_TS_ALARM_THRE             0x1030C
+#define TXGBE_TS_DALARM_THRE            0x10310
+#define TXGBE_TS_INT_EN                 0x10314
+#define TXGBE_TS_ALARM_ST               0x10318
+#define TXGBE_TS_ALARM_ST_DALARM        0x00000002U
+#define TXGBE_TS_ALARM_ST_ALARM         0x00000001U
+
+#define TXGBE_TS_CTL_EVAL_MD            0x80000000U
+#define TXGBE_TS_EN_ENA                 0x00000001U
+#define TXGBE_TS_ST_DATA_OUT_MASK       0x000003FFU
+#define TXGBE_TS_ALARM_THRE_MASK        0x000003FFU
+#define TXGBE_TS_DALARM_THRE_MASK       0x000003FFU
+#define TXGBE_TS_INT_EN_DALARM_INT_EN   0x00000002U
+#define TXGBE_TS_INT_EN_ALARM_INT_EN    0x00000001U
+
+struct txgbe_thermal_diode_data {
+	s16 temp;
+	s16 alarm_thresh;
+	s16 dalarm_thresh;
+};
+
+struct txgbe_thermal_sensor_data {
+	struct txgbe_thermal_diode_data sensor;
+};
+
+/* FMGR Registers */
+#define TXGBE_SPI_ILDR_STATUS           0x10120
+#define TXGBE_SPI_ILDR_STATUS_PERST     0x00000001U /* PCIE_PERST is done */
+#define TXGBE_SPI_ILDR_STATUS_PWRRST    0x00000002U /* Power on reset is done */
+#define TXGBE_SPI_ILDR_STATUS_SW_RESET  0x00000080U /* software reset is done */
+#define TXGBE_SPI_ILDR_STATUS_LAN0_SW_RST 0x00000200U /* lan0 soft reset done */
+#define TXGBE_SPI_ILDR_STATUS_LAN1_SW_RST 0x00000400U /* lan1 soft reset done */
+
+#define TXGBE_MAX_FLASH_LOAD_POLL_TIME  10
+
+#define TXGBE_SPI_CMD                   0x10104
+#define TXGBE_SPI_CMD_CMD(_v)           (((_v) & 0x7) << 28)
+#define TXGBE_SPI_CMD_CLK(_v)           (((_v) & 0x7) << 25)
+#define TXGBE_SPI_CMD_ADDR(_v)          (((_v) & 0xFFFFFF))
+#define TXGBE_SPI_DATA                  0x10108
+#define TXGBE_SPI_DATA_BYPASS           ((0x1) << 31)
+#define TXGBE_SPI_DATA_STATUS(_v)       (((_v) & 0xFF) << 16)
+#define TXGBE_SPI_DATA_OP_DONE          ((0x1))
+
+#define TXGBE_SPI_STATUS                0x1010C
+#define TXGBE_SPI_STATUS_OPDONE         ((0x1))
+#define TXGBE_SPI_STATUS_FLASH_BYPASS   ((0x1) << 31)
+
+#define TXGBE_SPI_USR_CMD               0x10110
+#define TXGBE_SPI_CMDCFG0               0x10114
+#define TXGBE_SPI_CMDCFG1               0x10118
+#define TXGBE_SPI_ECC_CTL               0x10130
+#define TXGBE_SPI_ECC_INJ               0x10134
+#define TXGBE_SPI_ECC_ST                0x10138
+#define TXGBE_SPI_ILDR_SWPTR            0x10124
+
+/************************* Port Registers ************************************/
+/* I2C registers */
+#define TXGBE_I2C_CON                   0x14900 /* I2C Control */
+#define TXGBE_I2C_CON_SLAVE_DISABLE     ((1 << 6))
+#define TXGBE_I2C_CON_RESTART_EN        ((1 << 5))
+#define TXGBE_I2C_CON_10BITADDR_MASTER  ((1 << 4))
+#define TXGBE_I2C_CON_10BITADDR_SLAVE   ((1 << 3))
+#define TXGBE_I2C_CON_SPEED(_v)         (((_v) & 0x3) << 1)
+#define TXGBE_I2C_CON_MASTER_MODE       ((1 << 0))
+#define TXGBE_I2C_TAR                   0x14904 /* I2C Target Address */
+#define TXGBE_I2C_DATA_CMD              0x14910 /* I2C Rx/Tx Data Buf and Cmd */
+#define TXGBE_I2C_DATA_CMD_STOP         ((1 << 9))
+#define TXGBE_I2C_DATA_CMD_READ         ((1 << 8) | TXGBE_I2C_DATA_CMD_STOP)
+#define TXGBE_I2C_DATA_CMD_WRITE        ((0 << 8) | TXGBE_I2C_DATA_CMD_STOP)
+#define TXGBE_I2C_SS_SCL_HCNT           0x14914 /* Standard speed I2C Clock SCL High Count */
+#define TXGBE_I2C_SS_SCL_LCNT           0x14918 /* Standard speed I2C Clock SCL Low Count */
+#define TXGBE_I2C_FS_SCL_HCNT           0x1491C /* Fast speed I2C Clock SCL High Count */
+#define TXGBE_I2C_FS_SCL_LCNT           0x14920 /* Fast speed I2C Clock SCL Low Count */
+#define TXGBE_I2C_HS_SCL_HCNT           0x14924 /* High speed I2C Clock SCL High Count */
+#define TXGBE_I2C_HS_SCL_LCNT           0x14928 /* High speed I2C Clock SCL Low Count */
+#define TXGBE_I2C_INTR_STAT             0x1492C /* I2C Interrupt Status */
+#define TXGBE_I2C_RAW_INTR_STAT         0x14934 /* I2C Raw Interrupt Status */
+#define TXGBE_I2C_INTR_STAT_RX_FULL     ((0x1) << 2)
+#define TXGBE_I2C_INTR_STAT_TX_EMPTY    ((0x1) << 4)
+#define TXGBE_I2C_INTR_MASK             0x14930 /* I2C Interrupt Mask */
+#define TXGBE_I2C_RX_TL                 0x14938 /* I2C Receive FIFO Threshold */
+#define TXGBE_I2C_TX_TL                 0x1493C /* I2C TX FIFO Threshold */
+#define TXGBE_I2C_CLR_INTR              0x14940 /* Clear Combined and Individual Int */
+#define TXGBE_I2C_CLR_RX_UNDER          0x14944 /* Clear RX_UNDER Interrupt */
+#define TXGBE_I2C_CLR_RX_OVER           0x14948 /* Clear RX_OVER Interrupt */
+#define TXGBE_I2C_CLR_TX_OVER           0x1494C /* Clear TX_OVER Interrupt */
+#define TXGBE_I2C_CLR_RD_REQ            0x14950 /* Clear RD_REQ Interrupt */
+#define TXGBE_I2C_CLR_TX_ABRT           0x14954 /* Clear TX_ABRT Interrupt */
+#define TXGBE_I2C_CLR_RX_DONE           0x14958 /* Clear RX_DONE Interrupt */
+#define TXGBE_I2C_CLR_ACTIVITY          0x1495C /* Clear ACTIVITY Interrupt */
+#define TXGBE_I2C_CLR_STOP_DET          0x14960 /* Clear STOP_DET Interrupt */
+#define TXGBE_I2C_CLR_START_DET         0x14964 /* Clear START_DET Interrupt */
+#define TXGBE_I2C_CLR_GEN_CALL          0x14968 /* Clear GEN_CALL Interrupt */
+#define TXGBE_I2C_ENABLE                0x1496C /* I2C Enable */
+#define TXGBE_I2C_STATUS                0x14970 /* I2C Status register */
+#define TXGBE_I2C_STATUS_MST_ACTIVITY   ((1U << 5))
+#define TXGBE_I2C_TXFLR                 0x14974 /* Transmit FIFO Level Reg */
+#define TXGBE_I2C_RXFLR                 0x14978 /* Receive FIFO Level Reg */
+#define TXGBE_I2C_SDA_HOLD              0x1497C /* SDA hold time length reg */
+#define TXGBE_I2C_TX_ABRT_SOURCE        0x14980 /* I2C TX Abort Status Reg */
+#define TXGBE_I2C_SDA_SETUP             0x14994 /* I2C SDA Setup Register */
+#define TXGBE_I2C_ENABLE_STATUS         0x1499C /* I2C Enable Status Register */
+#define TXGBE_I2C_FS_SPKLEN             0x149A0 /* SS and FS spike suppression limit */
+#define TXGBE_I2C_HS_SPKLEN             0x149A4 /* HS spike suppression limit */
+#define TXGBE_I2C_SCL_STUCK_TIMEOUT     0x149AC /* I2C SCL stuck at low timeout register */
+#define TXGBE_I2C_SDA_STUCK_TIMEOUT     0x149B0 /* I2C SDA Stuck at Low Timeout */
+#define TXGBE_I2C_CLR_SCL_STUCK_DET     0x149B4 /* Clear SCL Stuck at Low Detect Interrupt */
+#define TXGBE_I2C_DEVICE_ID             0x149b8 /* I2C Device ID */
+#define TXGBE_I2C_COMP_PARAM_1          0x149f4 /* Component Parameter Reg */
+#define TXGBE_I2C_COMP_VERSION          0x149f8 /* Component Version ID */
+#define TXGBE_I2C_COMP_TYPE             0x149fc /* DesignWare Component Type Reg */
+
+#define TXGBE_I2C_SLAVE_ADDR            (0xA0 >> 1)
+#define TXGBE_I2C_THERMAL_SENSOR_ADDR   0xF8
+
+/* port cfg Registers */
+#define TXGBE_CFG_PORT_CTL              0x14400
+#define TXGBE_CFG_PORT_ST               0x14404
+#define TXGBE_CFG_EX_VTYPE              0x14408
+#define TXGBE_CFG_LED_CTL               0x14424
+#define TXGBE_CFG_VXLAN                 0x14410
+#define TXGBE_CFG_VXLAN_GPE             0x14414
+#define TXGBE_CFG_GENEVE                0x14418
+#define TXGBE_CFG_TEREDO                0x1441C
+#define TXGBE_CFG_TCP_TIME              0x14420
+#define TXGBE_CFG_TAG_TPID(_i)          (0x14430 + ((_i) * 4))
+/* port cfg bit */
+#define TXGBE_CFG_PORT_CTL_PFRSTD       0x00004000U /* Phy Function Reset Done */
+#define TXGBE_CFG_PORT_CTL_D_VLAN       0x00000001U /* double vlan */
+#define TXGBE_CFG_PORT_CTL_ETAG_ETYPE_VLD 0x00000002U
+#define TXGBE_CFG_PORT_CTL_QINQ         0x00000004U
+#define TXGBE_CFG_PORT_CTL_DRV_LOAD     0x00000008U
+#define TXGBE_CFG_PORT_CTL_FORCE_LKUP   0x00000010U /* force link up */
+#define TXGBE_CFG_PORT_CTL_DCB_EN       0x00000400U /* dcb enabled */
+#define TXGBE_CFG_PORT_CTL_NUM_TC_MASK  0x00000800U /* number of TCs */
+#define TXGBE_CFG_PORT_CTL_NUM_TC_4     0x00000000U
+#define TXGBE_CFG_PORT_CTL_NUM_TC_8     0x00000800U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_MASK  0x00003000U /* number of VTs */
+#define TXGBE_CFG_PORT_CTL_NUM_VT_NONE  0x00000000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_16    0x00001000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_32    0x00002000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_64    0x00003000U
+/* Status Bit */
+#define TXGBE_CFG_PORT_ST_LINK_UP       0x00000001U
+#define TXGBE_CFG_PORT_ST_LINK_10G      0x00000002U
+#define TXGBE_CFG_PORT_ST_LINK_1G       0x00000004U
+#define TXGBE_CFG_PORT_ST_LINK_100M     0x00000008U
+#define TXGBE_CFG_PORT_ST_LAN_ID(_r)    ((0x00000100U & (_r)) >> 8)
+#define TXGBE_LINK_UP_TIME              90
+/* LED CTL Bit */
+#define TXGBE_CFG_LED_CTL_LINK_BSY_SEL  0x00000010U
+#define TXGBE_CFG_LED_CTL_LINK_100M_SEL 0x00000008U
+#define TXGBE_CFG_LED_CTL_LINK_1G_SEL   0x00000004U
+#define TXGBE_CFG_LED_CTL_LINK_10G_SEL  0x00000002U
+#define TXGBE_CFG_LED_CTL_LINK_UP_SEL   0x00000001U
+#define TXGBE_CFG_LED_CTL_LINK_OD_SHIFT 16
+/* LED modes */
+#define TXGBE_LED_LINK_UP               TXGBE_CFG_LED_CTL_LINK_UP_SEL
+#define TXGBE_LED_LINK_10G              TXGBE_CFG_LED_CTL_LINK_10G_SEL
+#define TXGBE_LED_LINK_ACTIVE           TXGBE_CFG_LED_CTL_LINK_BSY_SEL
+#define TXGBE_LED_LINK_1G               TXGBE_CFG_LED_CTL_LINK_1G_SEL
+#define TXGBE_LED_LINK_100M             TXGBE_CFG_LED_CTL_LINK_100M_SEL
+
+/* GPIO Registers */
+#define TXGBE_GPIO_DR                   0x14800
+#define TXGBE_GPIO_DDR                  0x14804
+#define TXGBE_GPIO_CTL                  0x14808
+#define TXGBE_GPIO_INTEN                0x14830
+#define TXGBE_GPIO_INTMASK              0x14834
+#define TXGBE_GPIO_INTTYPE_LEVEL        0x14838
+#define TXGBE_GPIO_INTSTATUS            0x14844
+#define TXGBE_GPIO_EOI                  0x1484C
+/* GPIO bit */
+#define TXGBE_GPIO_DR_0         0x00000001U /* SDP0 Data Value */
+#define TXGBE_GPIO_DR_1         0x00000002U /* SDP1 Data Value */
+#define TXGBE_GPIO_DR_2         0x00000004U /* SDP2 Data Value */
+#define TXGBE_GPIO_DR_3         0x00000008U /* SDP3 Data Value */
+#define TXGBE_GPIO_DR_4         0x00000010U /* SDP4 Data Value */
+#define TXGBE_GPIO_DR_5         0x00000020U /* SDP5 Data Value */
+#define TXGBE_GPIO_DR_6         0x00000040U /* SDP6 Data Value */
+#define TXGBE_GPIO_DR_7         0x00000080U /* SDP7 Data Value */
+#define TXGBE_GPIO_DDR_0        0x00000001U /* SDP0 IO direction */
+#define TXGBE_GPIO_DDR_1        0x00000002U /* SDP1 IO direction */
+#define TXGBE_GPIO_DDR_2        0x00000004U /* SDP2 IO direction */
+#define TXGBE_GPIO_DDR_3        0x00000008U /* SDP3 IO direction */
+#define TXGBE_GPIO_DDR_4        0x00000010U /* SDP4 IO direction */
+#define TXGBE_GPIO_DDR_5        0x00000020U /* SDP5 IO direction */
+#define TXGBE_GPIO_DDR_6        0x00000040U /* SDP6 IO direction */
+#define TXGBE_GPIO_DDR_7        0x00000080U /* SDP7 IO direction */
+#define TXGBE_GPIO_CTL_SW_MODE  0x00000000U /* SDP software mode */
+#define TXGBE_GPIO_INTEN_1      0x00000002U /* SDP1 interrupt enable */
+#define TXGBE_GPIO_INTEN_2      0x00000004U /* SDP2 interrupt enable */
+#define TXGBE_GPIO_INTEN_3      0x00000008U /* SDP3 interrupt enable */
+#define TXGBE_GPIO_INTEN_5      0x00000020U /* SDP5 interrupt enable */
+#define TXGBE_GPIO_INTEN_6      0x00000040U /* SDP6 interrupt enable */
+#define TXGBE_GPIO_INTTYPE_LEVEL_2 0x00000004U /* SDP2 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_3 0x00000008U /* SDP3 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_5 0x00000020U /* SDP5 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_6 0x00000040U /* SDP6 interrupt type level */
+#define TXGBE_GPIO_INTSTATUS_1  0x00000002U /* SDP1 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_2  0x00000004U /* SDP2 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_3  0x00000008U /* SDP3 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_5  0x00000020U /* SDP5 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_6  0x00000040U /* SDP6 interrupt status */
+#define TXGBE_GPIO_EOI_2        0x00000004U /* SDP2 interrupt clear */
+#define TXGBE_GPIO_EOI_3        0x00000008U /* SDP3 interrupt clear */
+#define TXGBE_GPIO_EOI_5        0x00000020U /* SDP5 interrupt clear */
+#define TXGBE_GPIO_EOI_6        0x00000040U /* SDP6 interrupt clear */
+
+/* TPH registers */
+#define TXGBE_CFG_TPH_TDESC     0x14F00 /* TPH conf for Tx desc write back */
+#define TXGBE_CFG_TPH_RDESC     0x14F04 /* TPH conf for Rx desc write back */
+#define TXGBE_CFG_TPH_RHDR      0x14F08 /* TPH conf for writing Rx pkt header */
+#define TXGBE_CFG_TPH_RPL       0x14F0C /* TPH conf for payload write access */
+/* TPH bit */
+#define TXGBE_CFG_TPH_TDESC_EN  0x80000000U
+#define TXGBE_CFG_TPH_TDESC_PH_SHIFT 29
+#define TXGBE_CFG_TPH_TDESC_ST_SHIFT 16
+#define TXGBE_CFG_TPH_RDESC_EN  0x80000000U
+#define TXGBE_CFG_TPH_RDESC_PH_SHIFT 29
+#define TXGBE_CFG_TPH_RDESC_ST_SHIFT 16
+#define TXGBE_CFG_TPH_RHDR_EN   0x00008000U
+#define TXGBE_CFG_TPH_RHDR_PH_SHIFT 13
+#define TXGBE_CFG_TPH_RHDR_ST_SHIFT 0
+#define TXGBE_CFG_TPH_RPL_EN    0x80000000U
+#define TXGBE_CFG_TPH_RPL_PH_SHIFT 29
+#define TXGBE_CFG_TPH_RPL_ST_SHIFT 16
+
+/*********************** Transmit DMA registers **************************/
+/* transmit global control */
+#define TXGBE_TDM_CTL           0x18000
+#define TXGBE_TDM_VF_TE(_i)     (0x18004 + ((_i) * 4))
+#define TXGBE_TDM_PB_THRE(_i)   (0x18020 + ((_i) * 4)) /* 8 of these 0 - 7 */
+#define TXGBE_TDM_LLQ(_i)       (0x18040 + ((_i) * 4)) /* 4 of these (0-3) */
+#define TXGBE_TDM_ETYPE_LB_L    0x18050
+#define TXGBE_TDM_ETYPE_LB_H    0x18054
+#define TXGBE_TDM_ETYPE_AS_L    0x18058
+#define TXGBE_TDM_ETYPE_AS_H    0x1805C
+#define TXGBE_TDM_MAC_AS_L      0x18060
+#define TXGBE_TDM_MAC_AS_H      0x18064
+#define TXGBE_TDM_VLAN_AS_L     0x18070
+#define TXGBE_TDM_VLAN_AS_H     0x18074
+#define TXGBE_TDM_TCP_FLG_L     0x18078
+#define TXGBE_TDM_TCP_FLG_H     0x1807C
+#define TXGBE_TDM_VLAN_INS(_i)  (0x18100 + ((_i) * 4)) /* 64 of these 0 - 63 */
+/* TDM CTL BIT */
+#define TXGBE_TDM_CTL_TE        0x1 /* Transmit Enable */
+#define TXGBE_TDM_CTL_PADDING   0x2 /* Padding byte number for ipsec ESP */
+#define TXGBE_TDM_CTL_VT_SHIFT  16  /* VLAN EtherType */
+/* Per VF Port VLAN insertion rules */
+#define TXGBE_TDM_VLAN_INS_VLANA_DEFAULT 0x40000000U /* Always use default VLAN */
+#define TXGBE_TDM_VLAN_INS_VLANA_NEVER   0x80000000U /* Never insert VLAN tag */
+
+#define TXGBE_TDM_RP_CTL        0x18400
+#define TXGBE_TDM_RP_CTL_RST    ((0x1) << 0)
+#define TXGBE_TDM_RP_CTL_RPEN   ((0x1) << 2)
+#define TXGBE_TDM_RP_CTL_RLEN   ((0x1) << 3)
+#define TXGBE_TDM_RP_IDX        0x1820C
+#define TXGBE_TDM_RP_RATE       0x18404
+#define TXGBE_TDM_RP_RATE_MIN(v) ((0x3FFF & (v)))
+#define TXGBE_TDM_RP_RATE_MAX(v) ((0x3FFF & (v)) << 16)
+
+/* qos */
+#define TXGBE_TDM_PBWARB_CTL    0x18200
+#define TXGBE_TDM_PBWARB_CFG(_i) (0x18220 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_TDM_MMW           0x18208
+#define TXGBE_TDM_VM_CREDIT(_i) (0x18500 + ((_i) * 4))
+#define TXGBE_TDM_VM_CREDIT_VAL(v) (0x3FF & (v))
+/* fcoe */
+#define TXGBE_TDM_FC_EOF        0x18384
+#define TXGBE_TDM_FC_SOF        0x18380
+/* etag */
+#define TXGBE_TDM_ETAG_INS(_i)  (0x18700 + ((_i) * 4)) /* 64 of these 0 - 63 */
+/* statistic */
+#define TXGBE_TDM_SEC_DRP       0x18304
+#define TXGBE_TDM_PKT_CNT       0x18308
+#define TXGBE_TDM_OS2BMC_CNT    0x18314
+
+/**************************** Receive DMA registers **************************/
+/* receive control */
+#define TXGBE_RDM_ARB_CTL       0x12000
+#define TXGBE_RDM_VF_RE(_i)     (0x12004 + ((_i) * 4))
+#define TXGBE_RDM_RSC_CTL       0x1200C
+#define TXGBE_RDM_ARB_CFG(_i)   (0x12040 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDM_PF_QDE(_i)    (0x12080 + ((_i) * 4))
+#define TXGBE_RDM_PF_HIDE(_i)   (0x12090 + ((_i) * 4))
+/* VFRE bitmask */
+#define TXGBE_RDM_VF_RE_ENABLE_ALL  0xFFFFFFFFU
+
+/* FCoE DMA Context Registers */
+#define TXGBE_RDM_FCPTRL            0x12410
+#define TXGBE_RDM_FCPTRH            0x12414
+#define TXGBE_RDM_FCBUF             0x12418
+#define TXGBE_RDM_FCBUF_VALID       ((0x1)) /* DMA Context Valid */
+#define TXGBE_RDM_FCBUF_SIZE(_v)    (((_v) & 0x3) << 3) /* User Buffer Size */
+#define TXGBE_RDM_FCBUF_COUNT(_v)   (((_v) & 0xFF) << 8) /* Num of User Buf */
+#define TXGBE_RDM_FCBUF_OFFSET(_v)  (((_v) & 0xFFFF) << 16) /* User Buf Offset*/
+#define TXGBE_RDM_FCRW              0x12420
+#define TXGBE_RDM_FCRW_FCSEL(_v)    (((_v) & 0x1FF))  /* FC X_ID: 11 bits */
+#define TXGBE_RDM_FCRW_WE           ((0x1) << 14)   /* Write enable */
+#define TXGBE_RDM_FCRW_RE           ((0x1) << 15)   /* Read enable */
+#define TXGBE_RDM_FCRW_LASTSIZE(_v) (((_v) & 0xFFFF) << 16)
+
+/* statistic */
+#define TXGBE_RDM_DRP_PKT           0x12500
+#define TXGBE_RDM_BMC2OS_CNT        0x12510
+
+/***************************** RDB registers *********************************/
+/* Flow Control Registers */
+#define TXGBE_RDB_RFCV(_i)          (0x19200 + ((_i) * 4)) /* 4 of these (0-3) */
+#define TXGBE_RDB_RFCL(_i)          (0x19220 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDB_RFCH(_i)          (0x19260 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDB_RFCRT             0x192A0
+#define TXGBE_RDB_RFCC              0x192A4
+/* receive packet buffer */
+#define TXGBE_RDB_PB_WRAP           0x19004
+#define TXGBE_RDB_PB_SZ(_i)         (0x19020 + ((_i) * 4))
+#define TXGBE_RDB_PB_CTL            0x19000
+#define TXGBE_RDB_UP2TC             0x19008
+#define TXGBE_RDB_PB_SZ_SHIFT       10
+#define TXGBE_RDB_PB_SZ_MASK        0x000FFC00U
+/* lli interrupt */
+#define TXGBE_RDB_LLI_THRE          0x19080
+#define TXGBE_RDB_LLI_THRE_SZ(_v)   ((0xFFF & (_v)))
+#define TXGBE_RDB_LLI_THRE_UP(_v)   ((0x7 & (_v)) << 16)
+#define TXGBE_RDB_LLI_THRE_UP_SHIFT 16
+
+/* ring assignment */
+#define TXGBE_RDB_PL_CFG(_i)    (0x19300 + ((_i) * 4))
+#define TXGBE_RDB_RSSTBL(_i)    (0x19400 + ((_i) * 4))
+#define TXGBE_RDB_RSSRK(_i)     (0x19480 + ((_i) * 4))
+#define TXGBE_RDB_RSS_TC        0x194F0
+#define TXGBE_RDB_RA_CTL        0x194F4
+#define TXGBE_RDB_5T_SA(_i)     (0x19600 + ((_i) * 4)) /* Src Addr Q Filter */
+#define TXGBE_RDB_5T_DA(_i)     (0x19800 + ((_i) * 4)) /* Dst Addr Q Filter */
+#define TXGBE_RDB_5T_SDP(_i)    (0x19A00 + ((_i) * 4)) /* Src Dst Addr Q Filter */
+#define TXGBE_RDB_5T_CTL0(_i)   (0x19C00 + ((_i) * 4)) /* Five Tuple Q Filter */
+#define TXGBE_RDB_ETYPE_CLS(_i) (0x19100 + ((_i) * 4)) /* EType Q Select */
+#define TXGBE_RDB_SYN_CLS       0x19130
+#define TXGBE_RDB_5T_CTL1(_i)   (0x19E00 + ((_i) * 4)) /* 128 of these (0-127) */
+/* Flow Director registers */
+#define TXGBE_RDB_FDIR_CTL          0x19500
+#define TXGBE_RDB_FDIR_HKEY         0x19568
+#define TXGBE_RDB_FDIR_SKEY         0x1956C
+#define TXGBE_RDB_FDIR_DA4_MSK      0x1953C
+#define TXGBE_RDB_FDIR_SA4_MSK      0x19540
+#define TXGBE_RDB_FDIR_TCP_MSK      0x19544
+#define TXGBE_RDB_FDIR_UDP_MSK      0x19548
+#define TXGBE_RDB_FDIR_SCTP_MSK     0x19560
+#define TXGBE_RDB_FDIR_IP6_MSK      0x19574
+#define TXGBE_RDB_FDIR_OTHER_MSK    0x19570
+#define TXGBE_RDB_FDIR_FLEX_CFG(_i) (0x19580 + ((_i) * 4))
+/* Flow Director Stats registers */
+#define TXGBE_RDB_FDIR_FREE         0x19538
+#define TXGBE_RDB_FDIR_LEN          0x1954C
+#define TXGBE_RDB_FDIR_USE_ST       0x19550
+#define TXGBE_RDB_FDIR_FAIL_ST      0x19554
+#define TXGBE_RDB_FDIR_MATCH        0x19558
+#define TXGBE_RDB_FDIR_MISS         0x1955C
+/* Flow Director Programming registers */
+#define TXGBE_RDB_FDIR_IP6(_i)      (0x1950C + ((_i) * 4)) /* 3 of these (0-2)*/
+#define TXGBE_RDB_FDIR_SA           0x19518
+#define TXGBE_RDB_FDIR_DA           0x1951C
+#define TXGBE_RDB_FDIR_PORT         0x19520
+#define TXGBE_RDB_FDIR_FLEX         0x19524
+#define TXGBE_RDB_FDIR_HASH         0x19528
+#define TXGBE_RDB_FDIR_CMD          0x1952C
+/* VM RSS */
+#define TXGBE_RDB_VMRSSRK(_i, _p)   (0x1A000 + ((_i) * 4) + ((_p) * 0x40))
+#define TXGBE_RDB_VMRSSTBL(_i, _p)  (0x1B000 + ((_i) * 4) + ((_p) * 0x40))
+/* FCoE Redirection */
+#define TXGBE_RDB_FCRE_TBL_SIZE     (8) /* Max entries in FCRETA */
+#define TXGBE_RDB_FCRE_CTL          0x19140
+#define TXGBE_RDB_FCRE_CTL_ENA      ((0x1)) /* FCoE Redir Table Enable */
+#define TXGBE_RDB_FCRE_TBL(_i)      (0x19160 + ((_i) * 4))
+#define TXGBE_RDB_FCRE_TBL_RING(_v) (((_v) & 0x7F)) /* output queue number */
+/* statistic */
+#define TXGBE_RDB_MPCNT(_i)         (0x19040 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_RDB_LXONTXC           0x1921C
+#define TXGBE_RDB_LXOFFTXC          0x19218
+#define TXGBE_RDB_PXON2OFFCNT(_i)   (0x19280 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_RDB_PXONTXC(_i)       (0x192E0 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_RDB_PXOFFTXC(_i)      (0x192C0 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_RDB_PFCMACDAL         0x19210
+#define TXGBE_RDB_PFCMACDAH         0x19214
+#define TXGBE_RDB_TXSWERR           0x1906C
+#define TXGBE_RDB_TXSWERR_TB_FREE   0x3FF
+/* rdb_pl_cfg reg mask */
+#define TXGBE_RDB_PL_CFG_L4HDR          0x2
+#define TXGBE_RDB_PL_CFG_L3HDR          0x4
+#define TXGBE_RDB_PL_CFG_L2HDR          0x8
+#define TXGBE_RDB_PL_CFG_TUN_OUTER_L2HDR 0x20
+#define TXGBE_RDB_PL_CFG_TUN_TUNHDR     0x10
+#define TXGBE_RDB_PL_CFG_RSS_PL_MASK    0x7
+#define TXGBE_RDB_PL_CFG_RSS_PL_SHIFT   29
+/* RQTC Bit Masks and Shifts */
+#define TXGBE_RDB_RSS_TC_SHIFT_TC(_i)   ((_i) * 4)
+#define TXGBE_RDB_RSS_TC_TC0_MASK       (0x7 << 0)
+#define TXGBE_RDB_RSS_TC_TC1_MASK       (0x7 << 4)
+#define TXGBE_RDB_RSS_TC_TC2_MASK       (0x7 << 8)
+#define TXGBE_RDB_RSS_TC_TC3_MASK       (0x7 << 12)
+#define TXGBE_RDB_RSS_TC_TC4_MASK       (0x7 << 16)
+#define TXGBE_RDB_RSS_TC_TC5_MASK       (0x7 << 20)
+#define TXGBE_RDB_RSS_TC_TC6_MASK       (0x7 << 24)
+#define TXGBE_RDB_RSS_TC_TC7_MASK       (0x7 << 28)
+/* Packet Buffer Initialization */
+#define TXGBE_MAX_PACKET_BUFFERS        8
+#define TXGBE_RDB_PB_SZ_48KB    0x00000030U /* 48KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_64KB    0x00000040U /* 64KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_80KB    0x00000050U /* 80KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_128KB   0x00000080U /* 128KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_MAX     0x00000200U /* 512KB Packet Buffer */
+
+/* Packet buffer allocation strategies */
+enum {
+	PBA_STRATEGY_EQUAL      = 0, /* Distribute PB space equally */
+#define PBA_STRATEGY_EQUAL      PBA_STRATEGY_EQUAL
+	PBA_STRATEGY_WEIGHTED   = 1, /* Weight front half of TCs */
+#define PBA_STRATEGY_WEIGHTED   PBA_STRATEGY_WEIGHTED
+};
+
+/* FCRTL Bit Masks */
+#define TXGBE_RDB_RFCL_XONE             0x80000000U /* XON enable */
+#define TXGBE_RDB_RFCH_XOFFE            0x80000000U /* Packet buffer fc enable */
+/* FCCFG Bit Masks */
+#define TXGBE_RDB_RFCC_RFCE_802_3X      0x00000008U /* Tx link FC enable */
+#define TXGBE_RDB_RFCC_RFCE_PRIORITY    0x00000010U /* Tx priority FC enable */
+
+/* Immediate Interrupt Rx (A.K.A. Low Latency Interrupt) */
+#define TXGBE_RDB_5T_CTL1_SIZE_BP       0x00001000U /* Packet size bypass */
+#define TXGBE_RDB_5T_CTL1_LLI           0x00100000U /* Enables low latency Int */
+#define TXGBE_RDB_LLI_THRE_PRIORITY_MASK 0x00070000U /* VLAN priority mask */
+#define TXGBE_RDB_LLI_THRE_PRIORITY_EN  0x00080000U /* VLAN priority enable */
+#define TXGBE_RDB_LLI_THRE_CMN_EN       0x00100000U /* common packet received */
+
+#define TXGBE_MAX_RDB_5T_CTL0_FILTERS           128
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_MASK         0x00000003U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_TCP          0x00000000U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_UDP          0x00000001U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_SCTP         0x00000002U
+#define TXGBE_RDB_5T_CTL0_PRIORITY_MASK         0x00000007U
+#define TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT        2
+#define TXGBE_RDB_5T_CTL0_POOL_MASK             0x0000003FU
+#define TXGBE_RDB_5T_CTL0_POOL_SHIFT            8
+#define TXGBE_RDB_5T_CTL0_5TUPLE_MASK_MASK      0x0000001FU
+#define TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT     25
+#define TXGBE_RDB_5T_CTL0_SOURCE_ADDR_MASK      0x1E
+#define TXGBE_RDB_5T_CTL0_DEST_ADDR_MASK        0x1D
+#define TXGBE_RDB_5T_CTL0_SOURCE_PORT_MASK      0x1B
+#define TXGBE_RDB_5T_CTL0_DEST_PORT_MASK        0x17
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_COMP_MASK    0x0F
+#define TXGBE_RDB_5T_CTL0_POOL_MASK_EN          0x40000000U
+#define TXGBE_RDB_5T_CTL0_QUEUE_ENABLE          0x80000000U
+
+#define TXGBE_RDB_ETYPE_CLS_RX_QUEUE            0x007F0000U /* bits 22:16 */
+#define TXGBE_RDB_ETYPE_CLS_RX_QUEUE_SHIFT      16
+#define TXGBE_RDB_ETYPE_CLS_LLI                 0x20000000U /* bit 29 */
+#define TXGBE_RDB_ETYPE_CLS_QUEUE_EN            0x80000000U /* bit 31 */
+
+/* Receive Config masks */
+#define TXGBE_RDB_PB_CTL_RXEN           0x80000000U /* Enable Receiver */
+#define TXGBE_RDB_PB_CTL_DISABLED       0x1
+
+#define TXGBE_RDB_RA_CTL_RSS_EN         0x00000004U /* RSS Enable */
+#define TXGBE_RDB_RA_CTL_RSS_MASK       0xFFFF0000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4_TCP   0x00010000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4       0x00020000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6       0x00100000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6_TCP   0x00200000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4_UDP   0x00400000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6_UDP   0x00800000U
+
+enum txgbe_fdir_pballoc_type {
+	TXGBE_FDIR_PBALLOC_NONE = 0,
+	TXGBE_FDIR_PBALLOC_64K  = 1,
+	TXGBE_FDIR_PBALLOC_128K = 2,
+	TXGBE_FDIR_PBALLOC_256K = 3,
+};
+
+/* Flow Director register values */
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_64K          0x00000001U
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_128K         0x00000002U
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_256K         0x00000003U
+#define TXGBE_RDB_FDIR_CTL_INIT_DONE            0x00000008U
+#define TXGBE_RDB_FDIR_CTL_PERFECT_MATCH        0x00000010U
+#define TXGBE_RDB_FDIR_CTL_REPORT_STATUS        0x00000020U
+#define TXGBE_RDB_FDIR_CTL_REPORT_STATUS_ALWAYS 0x00000080U
+#define TXGBE_RDB_FDIR_CTL_DROP_Q_SHIFT         8
+#define TXGBE_RDB_FDIR_CTL_FILTERMODE_SHIFT     21
+#define TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT     24
+#define TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT      20
+#define TXGBE_RDB_FDIR_CTL_FULL_THRESH_MASK     0xF0000000U
+#define TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT    28
+
+#define TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT     16
+#define TXGBE_RDB_FDIR_UDP_MSK_DPORTM_SHIFT     16
+#define TXGBE_RDB_FDIR_IP6_MSK_DIPM_SHIFT       16
+#define TXGBE_RDB_FDIR_OTHER_MSK_POOL           0x00000004U
+#define TXGBE_RDB_FDIR_OTHER_MSK_L4P            0x00000008U
+#define TXGBE_RDB_FDIR_OTHER_MSK_L3P            0x00000010U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN_TYPE       0x00000020U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN_OUTIP      0x00000040U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN            0x00000080U
+
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC        0x00000000U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_IP         0x00000001U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_L4_HDR     0x00000002U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_L4_PAYLOAD 0x00000003U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK        0x00000003U
+#define TXGBE_RDB_FDIR_FLEX_CFG_MSK             0x00000004U
+#define TXGBE_RDB_FDIR_FLEX_CFG_OFST            0x000000F8U
+#define TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT      3
+#define TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT        8
+
+#define TXGBE_RDB_FDIR_PORT_DESTINATION_SHIFT   16
+#define TXGBE_RDB_FDIR_FLEX_FLEX_SHIFT          16
+#define TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT  15
+#define TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT  16
+
+#define TXGBE_RDB_FDIR_CMD_CMD_MASK             0x00000003U
+#define TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW         0x00000001U
+#define TXGBE_RDB_FDIR_CMD_CMD_REMOVE_FLOW      0x00000002U
+#define TXGBE_RDB_FDIR_CMD_CMD_QUERY_REM_FILT   0x00000003U
+#define TXGBE_RDB_FDIR_CMD_FILTER_VALID         0x00000004U
+#define TXGBE_RDB_FDIR_CMD_FILTER_UPDATE        0x00000008U
+#define TXGBE_RDB_FDIR_CMD_IPv6DMATCH           0x00000010U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_UDP           0x00000020U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_TCP           0x00000040U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_SCTP          0x00000060U
+#define TXGBE_RDB_FDIR_CMD_IPV6                 0x00000080U
+#define TXGBE_RDB_FDIR_CMD_CLEARHT              0x00000100U
+#define TXGBE_RDB_FDIR_CMD_DROP                 0x00000200U
+#define TXGBE_RDB_FDIR_CMD_INT                  0x00000400U
+#define TXGBE_RDB_FDIR_CMD_LAST                 0x00000800U
+#define TXGBE_RDB_FDIR_CMD_COLLISION            0x00001000U
+#define TXGBE_RDB_FDIR_CMD_QUEUE_EN             0x00008000U
+#define TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT      5
+#define TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT       16
+#define TXGBE_RDB_FDIR_CMD_TUNNEL_FILTER_SHIFT  23
+#define TXGBE_RDB_FDIR_CMD_VT_POOL_SHIFT        24
+#define TXGBE_RDB_FDIR_INIT_DONE_POLL           10
+#define TXGBE_RDB_FDIR_CMD_CMD_POLL             10
+#define TXGBE_RDB_FDIR_CMD_TUNNEL_FILTER        0x00800000U
+#define TXGBE_RDB_FDIR_DROP_QUEUE               127
+#define TXGBE_FDIR_INIT_DONE_POLL               10
+
+/******************************* PSR Registers *******************************/
+/* psr control */
+#define TXGBE_PSR_CTL                   0x15000
+#define TXGBE_PSR_VLAN_CTL              0x15088
+#define TXGBE_PSR_VM_CTL                0x151B0
+/* Header split receive */
+#define TXGBE_PSR_CTL_SW_EN             0x00040000U
+#define TXGBE_PSR_CTL_RSC_DIS           0x00010000U
+#define TXGBE_PSR_CTL_RSC_ACK           0x00020000U
+#define TXGBE_PSR_CTL_PCSD              0x00002000U
+#define TXGBE_PSR_CTL_IPPCSE            0x00001000U
+#define TXGBE_PSR_CTL_BAM               0x00000400U
+#define TXGBE_PSR_CTL_UPE               0x00000200U
+#define TXGBE_PSR_CTL_MPE               0x00000100U
+#define TXGBE_PSR_CTL_MFE               0x00000080U
+#define TXGBE_PSR_CTL_MO                0x00000060U
+#define TXGBE_PSR_CTL_TPE               0x00000010U
+#define TXGBE_PSR_CTL_MO_SHIFT          5
+/* VT_CTL bitmasks */
+#define TXGBE_PSR_VM_CTL_DIS_DEFPL      0x20000000U /* disable default pool */
+#define TXGBE_PSR_VM_CTL_REPLEN         0x40000000U /* replication enabled */
+#define TXGBE_PSR_VM_CTL_POOL_SHIFT     7
+#define TXGBE_PSR_VM_CTL_POOL_MASK      (0x3F << TXGBE_PSR_VM_CTL_POOL_SHIFT)
+/* VLAN Control Bit Masks */
+#define TXGBE_PSR_VLAN_CTL_VET          0x0000FFFFU  /* bits 0-15 */
+#define TXGBE_PSR_VLAN_CTL_CFI          0x10000000U  /* bit 28 */
+#define TXGBE_PSR_VLAN_CTL_CFIEN        0x20000000U  /* bit 29 */
+#define TXGBE_PSR_VLAN_CTL_VFE          0x40000000U  /* bit 30 */
+
+/* vm L2 control */
+#define TXGBE_PSR_VM_L2CTL(_i)          (0x15600 + ((_i) * 4))
+/* VMOLR bitmasks */
+#define TXGBE_PSR_VM_L2CTL_LBDIS        0x00000002U /* disable loopback */
+#define TXGBE_PSR_VM_L2CTL_LLB          0x00000004U /* local pool loopback */
+#define TXGBE_PSR_VM_L2CTL_UPE          0x00000010U /* unicast promiscuous */
+#define TXGBE_PSR_VM_L2CTL_TPE          0x00000020U /* ETAG promiscuous */
+#define TXGBE_PSR_VM_L2CTL_VACC         0x00000040U /* accept nomatched vlan */
+#define TXGBE_PSR_VM_L2CTL_VPE          0x00000080U /* vlan promiscuous mode */
+#define TXGBE_PSR_VM_L2CTL_AUPE         0x00000100U /* accept untagged packets */
+#define TXGBE_PSR_VM_L2CTL_ROMPE        0x00000200U /*accept packets in MTA tbl*/
+#define TXGBE_PSR_VM_L2CTL_ROPE         0x00000400U /* accept packets in UC tbl*/
+#define TXGBE_PSR_VM_L2CTL_BAM          0x00000800U /* accept broadcast packets*/
+#define TXGBE_PSR_VM_L2CTL_MPE          0x00001000U /* multicast promiscuous */
+
+/* etype switcher 1st stage */
+#define TXGBE_PSR_ETYPE_SWC(_i) (0x15128 + ((_i) * 4)) /* EType Queue Filter */
+/* ETYPE Queue Filter/Select Bit Masks */
+#define TXGBE_MAX_PSR_ETYPE_SWC_FILTERS         8
+#define TXGBE_PSR_ETYPE_SWC_FCOE                0x08000000U /* bit 27 */
+#define TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF        0x20000000U /* bit 29 */
+#define TXGBE_PSR_ETYPE_SWC_1588                0x40000000U /* bit 30 */
+#define TXGBE_PSR_ETYPE_SWC_FILTER_EN           0x80000000U /* bit 31 */
+#define TXGBE_PSR_ETYPE_SWC_POOL_ENABLE         (1 << 26) /* bit 26 */
+#define TXGBE_PSR_ETYPE_SWC_POOL_SHIFT          20
+/* ETQF filter list: one static filter per filter consumer. This is
+ *                 to avoid filter collisions later. Add new filters
+ *                 here!!
+ *
+ * Current filters:
+ *      EAPOL 802.1x (0x888e): Filter 0
+ *      FCoE (0x8906):   Filter 2
+ *      1588 (0x88f7):   Filter 3
+ *      FIP  (0x8914):   Filter 4
+ *      LLDP (0x88CC):   Filter 5
+ *      LACP (0x8809):   Filter 6
+ *      FC   (0x8808):   Filter 7
+ */
+#define TXGBE_PSR_ETYPE_SWC_FILTER_EAPOL        0
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FCOE         2
+#define TXGBE_PSR_ETYPE_SWC_FILTER_1588         3
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FIP          4
+#define TXGBE_PSR_ETYPE_SWC_FILTER_LLDP         5
+#define TXGBE_PSR_ETYPE_SWC_FILTER_LACP         6
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FC           7
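+
+/* Illustrative only -- assumes the EtherType lives in the low 16 bits of
+ * the ETYPE_SWC register, as on comparable controllers:
+ *
+ *	wr32(hw, TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588),
+ *	     0x88F7 | TXGBE_PSR_ETYPE_SWC_FILTER_EN);
+ */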
+
+/* mcast/ucast overflow tbl */
+#define TXGBE_PSR_MC_TBL(_i)    (0x15200  + ((_i) * 4))
+#define TXGBE_PSR_UC_TBL(_i)    (0x15400 + ((_i) * 4))
+
+/* vlan tbl */
+#define TXGBE_PSR_VLAN_TBL(_i)  (0x16000 + ((_i) * 4))
+
+/* mac switcher */
+#define TXGBE_PSR_MAC_SWC_AD_L  0x16200
+#define TXGBE_PSR_MAC_SWC_AD_H  0x16204
+#define TXGBE_PSR_MAC_SWC_VM_L  0x16208
+#define TXGBE_PSR_MAC_SWC_VM_H  0x1620C
+#define TXGBE_PSR_MAC_SWC_IDX   0x16210
+/* RAH */
+#define TXGBE_PSR_MAC_SWC_AD_H_AD(v)       (((v) & 0xFFFF))
+#define TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(v)   (((v) & 0x1) << 30)
+#define TXGBE_PSR_MAC_SWC_AD_H_AV       0x80000000U
+#define TXGBE_CLEAR_VMDQ_ALL            0xFFFFFFFFU
+
+/* vlan switch */
+#define TXGBE_PSR_VLAN_SWC      0x16220
+#define TXGBE_PSR_VLAN_SWC_VM_L 0x16224
+#define TXGBE_PSR_VLAN_SWC_VM_H 0x16228
+#define TXGBE_PSR_VLAN_SWC_IDX  0x16230         /* 64 vlan entries */
+/* VLAN pool filtering masks */
+#define TXGBE_PSR_VLAN_SWC_VIEN         0x80000000U  /* filter is valid */
+#define TXGBE_PSR_VLAN_SWC_ENTRIES      64
+#define TXGBE_PSR_VLAN_SWC_VLANID_MASK  0x00000FFFU
+#define TXGBE_ETHERNET_IEEE_VLAN_TYPE   0x8100  /* 802.1q protocol */
+
+/* cloud switch */
+#define TXGBE_PSR_CL_SWC_DST0    0x16240
+#define TXGBE_PSR_CL_SWC_DST1    0x16244
+#define TXGBE_PSR_CL_SWC_DST2    0x16248
+#define TXGBE_PSR_CL_SWC_DST3    0x1624c
+#define TXGBE_PSR_CL_SWC_KEY     0x16250
+#define TXGBE_PSR_CL_SWC_CTL     0x16254
+#define TXGBE_PSR_CL_SWC_VM_L    0x16258
+#define TXGBE_PSR_CL_SWC_VM_H    0x1625c
+#define TXGBE_PSR_CL_SWC_IDX     0x16260
+
+#define TXGBE_PSR_CL_SWC_CTL_VLD        0x80000000U
+#define TXGBE_PSR_CL_SWC_CTL_DST_MSK    0x00000002U
+#define TXGBE_PSR_CL_SWC_CTL_KEY_MSK    0x00000001U
+
+/* FCoE SOF/EOF */
+#define TXGBE_PSR_FC_EOF        0x15158
+#define TXGBE_PSR_FC_SOF        0x151F8
+/* FCoE Filter Context Registers */
+#define TXGBE_PSR_FC_FLT_CTXT           0x15108
+#define TXGBE_PSR_FC_FLT_CTXT_VALID     ((0x1)) /* Filter Context Valid */
+#define TXGBE_PSR_FC_FLT_CTXT_FIRST     ((0x1) << 1) /* Filter First */
+#define TXGBE_PSR_FC_FLT_CTXT_WR        ((0x1) << 2) /* Write/Read Context */
+#define TXGBE_PSR_FC_FLT_CTXT_SEQID(_v) (((_v) & 0xFF) << 8) /* Sequence ID */
+#define TXGBE_PSR_FC_FLT_CTXT_SEQCNT(_v) (((_v) & 0xFFFF) << 16) /* Seq Count */
+
+#define TXGBE_PSR_FC_FLT_RW             0x15110
+#define TXGBE_PSR_FC_FLT_RW_FCSEL(_v)   (((_v) & 0x1FF)) /* FC OX_ID: 11 bits */
+#define TXGBE_PSR_FC_FLT_RW_RVALDT      ((0x1) << 13)  /* Fast Re-Validation */
+#define TXGBE_PSR_FC_FLT_RW_WE          ((0x1) << 14)  /* Write Enable */
+#define TXGBE_PSR_FC_FLT_RW_RE          ((0x1) << 15)  /* Read Enable */
+
+#define TXGBE_PSR_FC_PARAM              0x151D8
+
+/* FCoE Receive Control */
+#define TXGBE_PSR_FC_CTL                0x15100
+#define TXGBE_PSR_FC_CTL_FCOELLI        ((0x1))   /* Low latency interrupt */
+#define TXGBE_PSR_FC_CTL_SAVBAD         ((0x1) << 1) /* Save Bad Frames */
+#define TXGBE_PSR_FC_CTL_FRSTRDH        ((0x1) << 2) /* EN 1st Read Header */
+#define TXGBE_PSR_FC_CTL_LASTSEQH       ((0x1) << 3) /* EN Last Header in Seq */
+#define TXGBE_PSR_FC_CTL_ALLH           ((0x1) << 4) /* EN All Headers */
+#define TXGBE_PSR_FC_CTL_FRSTSEQH       ((0x1) << 5) /* EN 1st Seq. Header */
+#define TXGBE_PSR_FC_CTL_ICRC           ((0x1) << 6) /* Ignore Bad FC CRC */
+#define TXGBE_PSR_FC_CTL_FCCRCBO        ((0x1) << 7) /* FC CRC Byte Ordering */
+#define TXGBE_PSR_FC_CTL_FCOEVER(_v)    (((_v) & 0xF) << 8) /* FCoE Version */
+
+/* Management */
+#define TXGBE_PSR_MNG_FIT_CTL           0x15820
+/* Management Bit Fields and Masks */
+#define TXGBE_PSR_MNG_FIT_CTL_MPROXYE    0x40000000U /* Management Proxy Enable*/
+#define TXGBE_PSR_MNG_FIT_CTL_RCV_TCO_EN 0x00020000U /* Rcv TCO packet enable */
+#define TXGBE_PSR_MNG_FIT_CTL_EN_BMC2OS  0x10000000U /* Ena BMC2OS and OS2BMC traffic */
+#define TXGBE_PSR_MNG_FIT_CTL_EN_BMC2OS_SHIFT   28
+
+#define TXGBE_PSR_MNG_FLEX_SEL  0x1582C
+#define TXGBE_PSR_MNG_FLEX_DW_L(_i) (0x15A00 + ((_i) * 16))
+#define TXGBE_PSR_MNG_FLEX_DW_H(_i) (0x15A04 + ((_i) * 16))
+#define TXGBE_PSR_MNG_FLEX_MSK(_i)  (0x15A08 + ((_i) * 16))
+
+/* mirror */
+#define TXGBE_PSR_MR_CTL(_i)    (0x15B00 + ((_i) * 4))
+#define TXGBE_PSR_MR_VLAN_L(_i) (0x15B10 + ((_i) * 8))
+#define TXGBE_PSR_MR_VLAN_H(_i) (0x15B14 + ((_i) * 8))
+#define TXGBE_PSR_MR_VM_L(_i)   (0x15B30 + ((_i) * 8))
+#define TXGBE_PSR_MR_VM_H(_i)   (0x15B34 + ((_i) * 8))
+
+/* 1588 */
+#define TXGBE_PSR_1588_CTL      0x15188 /* Rx Time Sync Control register - RW */
+#define TXGBE_PSR_1588_STMPL    0x151E8 /* Rx timestamp Low - RO */
+#define TXGBE_PSR_1588_STMPH    0x151A4 /* Rx timestamp High - RO */
+#define TXGBE_PSR_1588_ATTRL    0x151A0 /* Rx timestamp attribute low - RO */
+#define TXGBE_PSR_1588_ATTRH    0x151A8 /* Rx timestamp attribute high - RO */
+#define TXGBE_PSR_1588_MSGTYPE  0x15120 /* RX message type register low - RW */
+/* 1588 CTL Bit */
+#define TXGBE_PSR_1588_CTL_VALID            0x00000001U /* Rx timestamp valid */
+#define TXGBE_PSR_1588_CTL_TYPE_MASK        0x0000000EU /* Rx type mask */
+#define TXGBE_PSR_1588_CTL_TYPE_L2_V2       0x00
+#define TXGBE_PSR_1588_CTL_TYPE_L4_V1       0x02
+#define TXGBE_PSR_1588_CTL_TYPE_L2_L4_V2    0x04
+#define TXGBE_PSR_1588_CTL_TYPE_EVENT_V2    0x0A
+#define TXGBE_PSR_1588_CTL_ENABLED          0x00000010U /* Rx Timestamp enabled*/
+/* 1588 msg type bit */
+#define TXGBE_PSR_1588_MSGTYPE_V1_CTRLT_MASK            0x000000FFU
+#define TXGBE_PSR_1588_MSGTYPE_V1_SYNC_MSG              0x00
+#define TXGBE_PSR_1588_MSGTYPE_V1_DELAY_REQ_MSG         0x01
+#define TXGBE_PSR_1588_MSGTYPE_V1_FOLLOWUP_MSG          0x02
+#define TXGBE_PSR_1588_MSGTYPE_V1_DELAY_RESP_MSG        0x03
+#define TXGBE_PSR_1588_MSGTYPE_V1_MGMT_MSG              0x04
+#define TXGBE_PSR_1588_MSGTYPE_V2_MSGID_MASK            0x0000FF00U
+#define TXGBE_PSR_1588_MSGTYPE_V2_SYNC_MSG              0x0000
+#define TXGBE_PSR_1588_MSGTYPE_V2_DELAY_REQ_MSG         0x0100
+#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_REQ_MSG        0x0200
+#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_RESP_MSG       0x0300
+#define TXGBE_PSR_1588_MSGTYPE_V2_FOLLOWUP_MSG          0x0800
+#define TXGBE_PSR_1588_MSGTYPE_V2_DELAY_RESP_MSG        0x0900
+#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_FOLLOWUP_MSG   0x0A00
+#define TXGBE_PSR_1588_MSGTYPE_V2_ANNOUNCE_MSG          0x0B00
+#define TXGBE_PSR_1588_MSGTYPE_V2_SIGNALLING_MSG        0x0C00
+#define TXGBE_PSR_1588_MSGTYPE_V2_MGMT_MSG              0x0D00
+
+/* Wake up registers */
+#define TXGBE_PSR_WKUP_CTL      0x15B80
+#define TXGBE_PSR_WKUP_IPV      0x15B84
+#define TXGBE_PSR_LAN_FLEX_SEL  0x15B8C
+#define TXGBE_PSR_WKUP_IP4TBL(_i)       (0x15BC0 + ((_i) * 4))
+#define TXGBE_PSR_WKUP_IP6TBL(_i)       (0x15BE0 + ((_i) * 4))
+#define TXGBE_PSR_LAN_FLEX_DW_L(_i)     (0x15C00 + ((_i) * 16))
+#define TXGBE_PSR_LAN_FLEX_DW_H(_i)     (0x15C04 + ((_i) * 16))
+#define TXGBE_PSR_LAN_FLEX_MSK(_i)      (0x15C08 + ((_i) * 16))
+#define TXGBE_PSR_LAN_FLEX_CTL  0x15CFC
+/* Wake Up Filter Control Bit */
+#define TXGBE_PSR_WKUP_CTL_LNKC 0x00000001U /* Link Status Change Wakeup Enable*/
+#define TXGBE_PSR_WKUP_CTL_MAG  0x00000002U /* Magic Packet Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_EX   0x00000004U /* Directed Exact Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_MC   0x00000008U /* Directed Multicast Wakeup Enable*/
+#define TXGBE_PSR_WKUP_CTL_BC   0x00000010U /* Broadcast Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_ARP  0x00000020U /* ARP Request Packet Wakeup Enable*/
+#define TXGBE_PSR_WKUP_CTL_IPV4 0x00000040U /* Directed IPv4 Pkt Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_IPV6 0x00000080U /* Directed IPv6 Pkt Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_IGNORE_TCO   0x00008000U /* Ignore WakeOn TCO pkts */
+#define TXGBE_PSR_WKUP_CTL_FLX0         0x00010000U /* Flexible Filter 0 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX1         0x00020000U /* Flexible Filter 1 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX2         0x00040000U /* Flexible Filter 2 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX3         0x00080000U /* Flexible Filter 3 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX4         0x00100000U /* Flexible Filter 4 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX5         0x00200000U /* Flexible Filter 5 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS  0x000F0000U /* Mask for 4 flex filters */
+#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS_6 0x003F0000U /* Mask for 6 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS_8 0x00FF0000U /* Mask for 8 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_FW_RST_WK    0x80000000U /* Ena wake on FW reset assertion */
+/* Mask for Ext. flex filters */
+#define TXGBE_PSR_WKUP_CTL_EXT_FLX_FILTERS  0x00300000U
+#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS   0x000F00FFU /* Mask all 4 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS_6 0x003F00FFU /* Mask all 6 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS_8 0x00FF00FFU /* Mask all 8 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_FLX_OFFSET    16 /* Offset to the Flex Filters bits*/
+
+#define TXGBE_PSR_MAX_SZ                0x15020
+
+/****************************** TDB ******************************************/
+#define TXGBE_TDB_RFCS                  0x1CE00
+#define TXGBE_TDB_PB_SZ(_i)             (0x1CC00 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_TDB_MNG_TC                0x1CD10
+#define TXGBE_TDB_PRB_CTL               0x17010
+#define TXGBE_TDB_PBRARB_CTL            0x1CD00
+#define TXGBE_TDB_UP2TC                 0x1C800
+#define TXGBE_TDB_PBRARB_CFG(_i)        (0x1CD20 + ((_i) * 4)) /* 8 of (0-7) */
+
+#define TXGBE_TDB_PB_SZ_20KB    0x00005000U /* 20KB Packet Buffer */
+#define TXGBE_TDB_PB_SZ_40KB    0x0000A000U /* 40KB Packet Buffer */
+#define TXGBE_TDB_PB_SZ_MAX     0x00028000U /* 160KB Packet Buffer */
+#define TXGBE_TXPKT_SIZE_MAX    0xA /* Max Tx Packet size */
+#define TXGBE_MAX_PB            8
+
+/****************************** TSEC *****************************************/
+/* Security Control Registers */
+#define TXGBE_TSC_CTL                   0x1D000
+#define TXGBE_TSC_ST                    0x1D004
+#define TXGBE_TSC_BUF_AF                0x1D008
+#define TXGBE_TSC_BUF_AE                0x1D00C
+#define TXGBE_TSC_PRB_CTL               0x1D010
+#define TXGBE_TSC_MIN_IFG               0x1D020
+/* Security Bit Fields and Masks */
+#define TXGBE_TSC_CTL_SECTX_DIS         0x00000001U
+#define TXGBE_TSC_CTL_TX_DIS            0x00000002U
+#define TXGBE_TSC_CTL_STORE_FORWARD     0x00000004U
+#define TXGBE_TSC_CTL_IV_MSK_EN         0x00000008U
+#define TXGBE_TSC_ST_SECTX_RDY          0x00000001U
+#define TXGBE_TSC_ST_OFF_DIS            0x00000002U
+#define TXGBE_TSC_ST_ECC_TXERR          0x00000004U
+
+/* LinkSec (MacSec) Registers */
+#define TXGBE_TSC_LSEC_CAP              0x1D200
+#define TXGBE_TSC_LSEC_CTL              0x1D204
+#define TXGBE_TSC_LSEC_SCI_L            0x1D208
+#define TXGBE_TSC_LSEC_SCI_H            0x1D20C
+#define TXGBE_TSC_LSEC_SA               0x1D210
+#define TXGBE_TSC_LSEC_PKTNUM0          0x1D214
+#define TXGBE_TSC_LSEC_PKTNUM1          0x1D218
+#define TXGBE_TSC_LSEC_KEY0(_n)         0x1D21C
+#define TXGBE_TSC_LSEC_KEY1(_n)         0x1D22C
+#define TXGBE_TSC_LSEC_UNTAG_PKT        0x1D23C
+#define TXGBE_TSC_LSEC_ENC_PKT          0x1D240
+#define TXGBE_TSC_LSEC_PROT_PKT         0x1D244
+#define TXGBE_TSC_LSEC_ENC_OCTET        0x1D248
+#define TXGBE_TSC_LSEC_PROT_OCTET       0x1D24C
+
+/* IpSec Registers */
+#define TXGBE_TSC_IPS_IDX               0x1D100
+#define TXGBE_TSC_IPS_IDX_WT        0x80000000U
+#define TXGBE_TSC_IPS_IDX_RD        0x40000000U
+#define TXGBE_TSC_IPS_IDX_SD_IDX    0x0U
+#define TXGBE_TSC_IPS_IDX_EN        0x00000001U
+#define TXGBE_TSC_IPS_SALT              0x1D104
+#define TXGBE_TSC_IPS_KEY(i)            (0x1D108 + ((i) * 4))
+
+/* 1588 */
+#define TXGBE_TSC_1588_CTL              0x1D400 /* Tx Time Sync Control reg */
+#define TXGBE_TSC_1588_STMPL            0x1D404 /* Tx timestamp value Low */
+#define TXGBE_TSC_1588_STMPH            0x1D408 /* Tx timestamp value High */
+#define TXGBE_TSC_1588_SYSTIML          0x1D40C /* System time register Low */
+#define TXGBE_TSC_1588_SYSTIMH          0x1D410 /* System time register High */
+#define TXGBE_TSC_1588_INC              0x1D414 /* Increment attributes reg */
+#define TXGBE_TSC_1588_INC_IV(v)   (((v) & 0xFFFFFF))
+#define TXGBE_TSC_1588_INC_IP(v)   (((v) & 0xFF) << 24)
+#define TXGBE_TSC_1588_INC_IVP(v, p)  \
+				(((v) & 0xFFFFFF) | TXGBE_TSC_1588_INC_IP(p))
+
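+/* Illustrative: TXGBE_TSC_1588_INC_IVP() packs a 24-bit increment value
+ * with an 8-bit increment period, e.g.
+ *
+ *	wr32(hw, TXGBE_TSC_1588_INC, TXGBE_TSC_1588_INC_IVP(incval, 1));
+ */
+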
+#define TXGBE_TSC_1588_ADJL         0x1D418 /* Time Adjustment Offset reg Low */
+#define TXGBE_TSC_1588_ADJH         0x1D41C /* Time Adjustment Offset reg High*/
+/* 1588 fields */
+#define TXGBE_TSC_1588_CTL_VALID    0x00000001U /* Tx timestamp valid */
+#define TXGBE_TSC_1588_CTL_ENABLED  0x00000010U /* Tx timestamping enabled */
+
+/********************************* RSEC **************************************/
+/* general rsec */
+#define TXGBE_RSC_CTL                   0x17000
+#define TXGBE_RSC_ST                    0x17004
+/* general rsec fields */
+#define TXGBE_RSC_CTL_SECRX_DIS         0x00000001U
+#define TXGBE_RSC_CTL_RX_DIS            0x00000002U
+#define TXGBE_RSC_CTL_CRC_STRIP         0x00000004U
+#define TXGBE_RSC_CTL_IV_MSK_EN         0x00000008U
+#define TXGBE_RSC_CTL_SAVE_MAC_ERR      0x00000040U
+#define TXGBE_RSC_ST_RSEC_RDY           0x00000001U
+#define TXGBE_RSC_ST_RSEC_OFLD_DIS      0x00000002U
+#define TXGBE_RSC_ST_ECC_RXERR          0x00000004U
+
+/* link sec */
+#define TXGBE_RSC_LSEC_CAP              0x17200
+#define TXGBE_RSC_LSEC_CTL              0x17204
+#define TXGBE_RSC_LSEC_SCI_L            0x17208
+#define TXGBE_RSC_LSEC_SCI_H            0x1720C
+#define TXGBE_RSC_LSEC_SA0              0x17210
+#define TXGBE_RSC_LSEC_SA1              0x17214
+#define TXGBE_RSC_LSEC_PKNUM0           0x17218
+#define TXGBE_RSC_LSEC_PKNUM1           0x1721C
+#define TXGBE_RSC_LSEC_KEY0(_n)         0x17220
+#define TXGBE_RSC_LSEC_KEY1(_n)         0x17230
+#define TXGBE_RSC_LSEC_UNTAG_PKT        0x17240
+#define TXGBE_RSC_LSEC_DEC_OCTET        0x17244
+#define TXGBE_RSC_LSEC_VLD_OCTET        0x17248
+#define TXGBE_RSC_LSEC_BAD_PKT          0x1724C
+#define TXGBE_RSC_LSEC_NOSCI_PKT        0x17250
+#define TXGBE_RSC_LSEC_UNSCI_PKT        0x17254
+#define TXGBE_RSC_LSEC_UNCHK_PKT        0x17258
+#define TXGBE_RSC_LSEC_DLY_PKT          0x1725C
+#define TXGBE_RSC_LSEC_LATE_PKT         0x17260
+#define TXGBE_RSC_LSEC_OK_PKT(_n)       0x17264
+#define TXGBE_RSC_LSEC_INV_PKT(_n)      0x17274
+#define TXGBE_RSC_LSEC_BADSA_PKT        0x1727C
+#define TXGBE_RSC_LSEC_INVSA_PKT        0x17280
+
+/* ipsec */
+#define TXGBE_RSC_IPS_IDX               0x17100
+#define TXGBE_RSC_IPS_IDX_WT        0x80000000U
+#define TXGBE_RSC_IPS_IDX_RD        0x40000000U
+#define TXGBE_RSC_IPS_IDX_TB_IDX    0x0U
+#define TXGBE_RSC_IPS_IDX_TB_IP     0x00000002U
+#define TXGBE_RSC_IPS_IDX_TB_SPI    0x00000004U
+#define TXGBE_RSC_IPS_IDX_TB_KEY    0x00000006U
+#define TXGBE_RSC_IPS_IDX_EN        0x00000001U
+#define TXGBE_RSC_IPS_IP(i)             (0x17104 + ((i) * 4))
+#define TXGBE_RSC_IPS_SPI               0x17114
+#define TXGBE_RSC_IPS_IP_IDX            0x17118
+#define TXGBE_RSC_IPS_KEY(i)            (0x1711C + ((i) * 4))
+#define TXGBE_RSC_IPS_SALT              0x1712C
+#define TXGBE_RSC_IPS_MODE              0x17130
+#define TXGBE_RSC_IPS_MODE_IPV6         0x00000010
+#define TXGBE_RSC_IPS_MODE_DEC          0x00000008
+#define TXGBE_RSC_IPS_MODE_ESP          0x00000004
+#define TXGBE_RSC_IPS_MODE_AH           0x00000002
+#define TXGBE_RSC_IPS_MODE_VALID        0x00000001
+
+/************************************** ETH PHY ******************************/
+#define TXGBE_XPCS_IDA_ADDR    0x13000
+#define TXGBE_XPCS_IDA_DATA    0x13004
+#define TXGBE_ETHPHY_IDA_ADDR  0x13008
+#define TXGBE_ETHPHY_IDA_DATA  0x1300C
+
+/************************************** MNG ********************************/
+#define TXGBE_MNG_FW_SM         0x1E000
+#define TXGBE_MNG_SW_SM         0x1E004
+#define TXGBE_MNG_SWFW_SYNC     0x1E008
+#define TXGBE_MNG_MBOX          0x1E100
+#define TXGBE_MNG_MBOX_CTL      0x1E044
+#define TXGBE_MNG_OS2BMC_CNT    0x1E094
+#define TXGBE_MNG_BMC2OS_CNT    0x1E090
+
+/* Firmware Semaphore Register */
+#define TXGBE_MNG_FW_SM_MODE_MASK       0xE
+#define TXGBE_MNG_FW_SM_TS_ENABLED      0x1
+/* SW Semaphore Register bitmasks */
+#define TXGBE_MNG_SW_SM_SM              0x00000001U /* software Semaphore */
+
+/* SW_FW_SYNC definitions */
+#define TXGBE_MNG_SWFW_SYNC_SW_PHY      0x0001
+#define TXGBE_MNG_SWFW_SYNC_SW_FLASH    0x0008
+#define TXGBE_MNG_SWFW_SYNC_SW_MB       0x0004
+
+#define TXGBE_MNG_MBOX_CTL_SWRDY        0x1
+#define TXGBE_MNG_MBOX_CTL_SWACK        0x2
+#define TXGBE_MNG_MBOX_CTL_FWRDY        0x4
+#define TXGBE_MNG_MBOX_CTL_FWACK        0x8
+
+/************************************* ETH MAC *****************************/
+#define TXGBE_MAC_TX_CFG                0x11000
+#define TXGBE_MAC_RX_CFG                0x11004
+#define TXGBE_MAC_PKT_FLT               0x11008
+#define TXGBE_MAC_PKT_FLT_PR            (0x1) /* promiscuous mode */
+#define TXGBE_MAC_PKT_FLT_RA            (0x80000000) /* receive all */
+#define TXGBE_MAC_WDG_TIMEOUT           0x1100C
+#define TXGBE_MAC_RX_FLOW_CTRL          0x11090
+#define TXGBE_MAC_ADDRESS0_HIGH         0x11300
+#define TXGBE_MAC_ADDRESS0_LOW          0x11304
+
+#define TXGBE_MAC_TX_CFG_TE             0x00000001U
+#define TXGBE_MAC_TX_CFG_SPEED_MASK     0x60000000U
+#define TXGBE_MAC_TX_CFG_SPEED_10G      0x00000000U
+#define TXGBE_MAC_TX_CFG_SPEED_1G       0x60000000U
+#define TXGBE_MAC_RX_CFG_RE             0x00000001U
+#define TXGBE_MAC_RX_CFG_JE             0x00000100U
+#define TXGBE_MAC_RX_CFG_LM             0x00000400U
+#define TXGBE_MAC_WDG_TIMEOUT_PWE       0x00000100U
+#define TXGBE_MAC_WDG_TIMEOUT_WTO_MASK  0x0000000FU
+#define TXGBE_MAC_WDG_TIMEOUT_WTO_DELTA 2
+
+#define TXGBE_MAC_RX_FLOW_CTRL_RFE      0x00000001U /* receive fc enable */
+#define TXGBE_MAC_RX_FLOW_CTRL_PFCE     0x00000100U /* pfc enable */
+
+#define TXGBE_MSCA                      0x11200
+#define TXGBE_MSCA_RA(v)                ((0xFFFF & (v)))
+#define TXGBE_MSCA_PA(v)                ((0x1F & (v)) << 16)
+#define TXGBE_MSCA_DA(v)                ((0x1F & (v)) << 21)
+#define TXGBE_MSCC                      0x11204
+#define TXGBE_MSCC_DATA(v)              ((0xFFFF & (v)))
+#define TXGBE_MSCC_CMD(v)               ((0x3 & (v)) << 16)
+enum TXGBE_MSCA_CMD_value {
+	TXGBE_MSCA_CMD_RSV = 0,
+	TXGBE_MSCA_CMD_WRITE,
+	TXGBE_MSCA_CMD_POST_READ,
+	TXGBE_MSCA_CMD_READ,
+};
+
+#define TXGBE_MSCC_SADDR                ((0x1U) << 18)
+#define TXGBE_MSCC_CR(v)                ((0x8U & (v)) << 19)
+#define TXGBE_MSCC_BUSY                 ((0x1U) << 22)
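+
+/* Illustrative clause-45 MDIO read sketch (the poll loop is an
+ * assumption based on the BUSY semantics above):
+ *
+ *	wr32(hw, TXGBE_MSCA, TXGBE_MSCA_RA(reg) |
+ *	     TXGBE_MSCA_PA(port) | TXGBE_MSCA_DA(device));
+ *	wr32(hw, TXGBE_MSCC, TXGBE_MSCC_CMD(TXGBE_MSCA_CMD_READ) |
+ *	     TXGBE_MSCC_BUSY);
+ *	while (rd32(hw, TXGBE_MSCC) & TXGBE_MSCC_BUSY)
+ *		usec_delay(10);
+ *	data = TXGBE_MSCC_DATA(rd32(hw, TXGBE_MSCC));
+ */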
+
+/* EEE registers */
+
+/* statistics */
+#define TXGBE_MAC_LXONRXC               0x11E0C
+#define TXGBE_MAC_LXOFFRXC              0x11988
+#define TXGBE_MAC_PXONRXC(_i)           (0x11E30 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_MAC_PXOFFRXC              0x119DC
+#define TXGBE_RX_BC_FRAMES_GOOD_LOW     0x11918
+#define TXGBE_RX_CRC_ERROR_FRAMES_LOW   0x11928
+#define TXGBE_RX_LEN_ERROR_FRAMES_LOW   0x11978
+#define TXGBE_RX_UNDERSIZE_FRAMES_GOOD  0x11938
+#define TXGBE_RX_OVERSIZE_FRAMES_GOOD   0x1193C
+#define TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW 0x11900
+#define TXGBE_TX_FRAME_CNT_GOOD_BAD_LOW 0x1181C
+#define TXGBE_TX_MC_FRAMES_GOOD_LOW     0x1182C
+#define TXGBE_TX_BC_FRAMES_GOOD_LOW     0x11824
+#define TXGBE_MMC_CONTROL               0x11800
+#define TXGBE_MMC_CONTROL_RSTONRD       0x4 /* reset on read */
+#define TXGBE_MMC_CONTROL_UP            0x700
+
+/********************************* BAR registers ***************************/
+/* Interrupt Registers */
+#define TXGBE_BME_CTL				0x12020
+#define TXGBE_PX_MISC_IC                        0x100
+#define TXGBE_PX_MISC_ICS                       0x104
+#define TXGBE_PX_MISC_IEN                       0x108
+#define TXGBE_PX_MISC_IVAR                      0x4FC
+#define TXGBE_PX_GPIE                           0x118
+#define TXGBE_PX_ISB_ADDR_L                     0x160
+#define TXGBE_PX_ISB_ADDR_H                     0x164
+#define TXGBE_PX_TCP_TIMER                      0x170
+#define TXGBE_PX_ITRSEL                         0x180
+#define TXGBE_PX_IC(_i)                         (0x120 + (_i) * 4)
+#define TXGBE_PX_ICS(_i)                        (0x130 + (_i) * 4)
+#define TXGBE_PX_IMS(_i)                        (0x140 + (_i) * 4)
+#define TXGBE_PX_IMC(_i)                        (0x150 + (_i) * 4)
+#define TXGBE_PX_IVAR(_i)                       (0x500 + (_i) * 4)
+#define TXGBE_PX_ITR(_i)                        (0x200 + (_i) * 4)
+#define TXGBE_PX_TRANSACTION_PENDING            0x168
+#define TXGBE_PX_INTA                           0x110
+
+/* Interrupt register bitmasks */
+/* Extended Interrupt Cause Read */
+#define TXGBE_PX_MISC_IC_ETH_LKDN       0x00000100U /* eth link down */
+#define TXGBE_PX_MISC_IC_DEV_RST        0x00000400U /* device reset event */
+#define TXGBE_PX_MISC_IC_TIMESYNC       0x00000800U /* time sync */
+#define TXGBE_PX_MISC_IC_STALL          0x00001000U /* trans or recv path is stalled */
+#define TXGBE_PX_MISC_IC_LINKSEC        0x00002000U /* Tx LinkSec require key exchange */
+#define TXGBE_PX_MISC_IC_RX_MISS        0x00004000U /* Packet Buffer Overrun */
+#define TXGBE_PX_MISC_IC_FLOW_DIR       0x00008000U /* FDir Exception */
+#define TXGBE_PX_MISC_IC_I2C            0x00010000U /* I2C interrupt */
+#define TXGBE_PX_MISC_IC_ETH_EVENT      0x00020000U /* err reported by MAC except eth link down */
+#define TXGBE_PX_MISC_IC_ETH_LK         0x00040000U /* link up */
+#define TXGBE_PX_MISC_IC_ETH_AN         0x00080000U /* link auto-nego done */
+#define TXGBE_PX_MISC_IC_INT_ERR        0x00100000U /* integrity error */
+#define TXGBE_PX_MISC_IC_SPI            0x00200000U /* SPI interface */
+#define TXGBE_PX_MISC_IC_VF_MBOX        0x00800000U /* VF-PF message box */
+#define TXGBE_PX_MISC_IC_GPIO           0x04000000U /* GPIO interrupt */
+#define TXGBE_PX_MISC_IC_PCIE_REQ_ERR   0x08000000U /* pcie request error int */
+#define TXGBE_PX_MISC_IC_OVER_HEAT      0x10000000U /* overheat detection */
+#define TXGBE_PX_MISC_IC_PROBE_MATCH    0x20000000U /* probe match */
+#define TXGBE_PX_MISC_IC_MNG_HOST_MBOX  0x40000000U /* mng mailbox */
+#define TXGBE_PX_MISC_IC_TIMER          0x80000000U /* tcp timer */
+
+/* Extended Interrupt Cause Set */
+#define TXGBE_PX_MISC_ICS_ETH_LKDN      0x00000100U
+#define TXGBE_PX_MISC_ICS_DEV_RST       0x00000400U
+#define TXGBE_PX_MISC_ICS_TIMESYNC      0x00000800U
+#define TXGBE_PX_MISC_ICS_STALL         0x00001000U
+#define TXGBE_PX_MISC_ICS_LINKSEC       0x00002000U
+#define TXGBE_PX_MISC_ICS_RX_MISS       0x00004000U
+#define TXGBE_PX_MISC_ICS_FLOW_DIR      0x00008000U
+#define TXGBE_PX_MISC_ICS_I2C           0x00010000U
+#define TXGBE_PX_MISC_ICS_ETH_EVENT     0x00020000U
+#define TXGBE_PX_MISC_ICS_ETH_LK        0x00040000U
+#define TXGBE_PX_MISC_ICS_ETH_AN        0x00080000U
+#define TXGBE_PX_MISC_ICS_INT_ERR       0x00100000U
+#define TXGBE_PX_MISC_ICS_SPI           0x00200000U
+#define TXGBE_PX_MISC_ICS_VF_MBOX       0x00800000U
+#define TXGBE_PX_MISC_ICS_GPIO          0x04000000U
+#define TXGBE_PX_MISC_ICS_PCIE_REQ_ERR  0x08000000U
+#define TXGBE_PX_MISC_ICS_OVER_HEAT     0x10000000U
+#define TXGBE_PX_MISC_ICS_PROBE_MATCH   0x20000000U
+#define TXGBE_PX_MISC_ICS_MNG_HOST_MBOX 0x40000000U
+#define TXGBE_PX_MISC_ICS_TIMER         0x80000000U
+
+/* Extended Interrupt Enable Set */
+#define TXGBE_PX_MISC_IEN_ETH_LKDN      0x00000100U
+#define TXGBE_PX_MISC_IEN_DEV_RST       0x00000400U
+#define TXGBE_PX_MISC_IEN_TIMESYNC      0x00000800U
+#define TXGBE_PX_MISC_IEN_STALL         0x00001000U
+#define TXGBE_PX_MISC_IEN_LINKSEC       0x00002000U
+#define TXGBE_PX_MISC_IEN_RX_MISS       0x00004000U
+#define TXGBE_PX_MISC_IEN_FLOW_DIR      0x00008000U
+#define TXGBE_PX_MISC_IEN_I2C           0x00010000U
+#define TXGBE_PX_MISC_IEN_ETH_EVENT     0x00020000U
+#define TXGBE_PX_MISC_IEN_ETH_LK        0x00040000U
+#define TXGBE_PX_MISC_IEN_ETH_AN        0x00080000U
+#define TXGBE_PX_MISC_IEN_INT_ERR       0x00100000U
+#define TXGBE_PX_MISC_IEN_SPI           0x00200000U
+#define TXGBE_PX_MISC_IEN_VF_MBOX       0x00800000U
+#define TXGBE_PX_MISC_IEN_GPIO          0x04000000U
+#define TXGBE_PX_MISC_IEN_PCIE_REQ_ERR  0x08000000U
+#define TXGBE_PX_MISC_IEN_OVER_HEAT     0x10000000U
+#define TXGBE_PX_MISC_IEN_PROBE_MATCH   0x20000000U
+#define TXGBE_PX_MISC_IEN_MNG_HOST_MBOX 0x40000000U
+#define TXGBE_PX_MISC_IEN_TIMER         0x80000000U
+
+#define TXGBE_PX_MISC_IEN_MASK ( \
+				TXGBE_PX_MISC_IEN_ETH_LKDN | \
+				TXGBE_PX_MISC_IEN_DEV_RST | \
+				TXGBE_PX_MISC_IEN_ETH_EVENT | \
+				TXGBE_PX_MISC_IEN_ETH_LK | \
+				TXGBE_PX_MISC_IEN_ETH_AN | \
+				TXGBE_PX_MISC_IEN_INT_ERR | \
+				TXGBE_PX_MISC_IEN_VF_MBOX | \
+				TXGBE_PX_MISC_IEN_GPIO | \
+				TXGBE_PX_MISC_IEN_MNG_HOST_MBOX | \
+				TXGBE_PX_MISC_IEN_STALL | \
+				TXGBE_PX_MISC_IEN_PCIE_REQ_ERR | \
+				TXGBE_PX_MISC_IEN_TIMER)
+
+/* General purpose Interrupt Enable */
+#define TXGBE_PX_GPIE_MODEL             0x00000001U
+#define TXGBE_PX_GPIE_IMEN              0x00000002U
+#define TXGBE_PX_GPIE_LL_INTERVAL       0x000000F0U
+#define TXGBE_PX_GPIE_RSC_DELAY         0x00000700U
+
+/* Interrupt Vector Allocation Registers */
+#define TXGBE_PX_IVAR_REG_NUM              64
+#define TXGBE_PX_IVAR_ALLOC_VAL            0x80 /* Interrupt Allocation valid */
+
+#define TXGBE_MAX_INT_RATE              500000
+#define TXGBE_MIN_INT_RATE              980
+#define TXGBE_MAX_EITR                  0x00000FF8U
+#define TXGBE_MIN_EITR                  8
+#define TXGBE_PX_ITR_ITR_INT_MASK       0x00000FF8U
+#define TXGBE_PX_ITR_LLI_CREDIT         0x001f0000U
+#define TXGBE_PX_ITR_LLI_MOD            0x00008000U
+#define TXGBE_PX_ITR_CNT_WDIS           0x80000000U
+#define TXGBE_PX_ITR_ITR_CNT            0x0FE00000U
+
+/* transmit DMA Registers */
+#define TXGBE_PX_TR_BAL(_i)     (0x03000 + ((_i) * 0x40))
+#define TXGBE_PX_TR_BAH(_i)     (0x03004 + ((_i) * 0x40))
+#define TXGBE_PX_TR_WP(_i)      (0x03008 + ((_i) * 0x40))
+#define TXGBE_PX_TR_RP(_i)      (0x0300C + ((_i) * 0x40))
+#define TXGBE_PX_TR_CFG(_i)     (0x03010 + ((_i) * 0x40))
+/* Transmit Config masks */
+#define TXGBE_PX_TR_CFG_ENABLE          (1) /* Ena specific Tx Queue */
+#define TXGBE_PX_TR_CFG_TR_SIZE_SHIFT   1 /* tx desc number per ring */
+#define TXGBE_PX_TR_CFG_SWFLSH          (1 << 26) /* Tx Desc. wr-bk flushing */
+#define TXGBE_PX_TR_CFG_WTHRESH_SHIFT   16 /* shift to WTHRESH bits */
+#define TXGBE_PX_TR_CFG_THRE_SHIFT      8
+
+#define TXGBE_PX_TR_RPn(q_per_pool, vf_number, vf_q_index) \
+		(TXGBE_PX_TR_RP((q_per_pool) * (vf_number) + (vf_q_index)))
+#define TXGBE_PX_TR_WPn(q_per_pool, vf_number, vf_q_index) \
+		(TXGBE_PX_TR_WP((q_per_pool) * (vf_number) + (vf_q_index)))
+
+/* Receive DMA Registers */
+#define TXGBE_PX_RR_BAL(_i)             (0x01000 + ((_i) * 0x40))
+#define TXGBE_PX_RR_BAH(_i)             (0x01004 + ((_i) * 0x40))
+#define TXGBE_PX_RR_WP(_i)              (0x01008 + ((_i) * 0x40))
+#define TXGBE_PX_RR_RP(_i)              (0x0100C + ((_i) * 0x40))
+#define TXGBE_PX_RR_CFG(_i)             (0x01010 + ((_i) * 0x40))
+/* PX_RR_CFG bit definitions */
+#define TXGBE_PX_RR_CFG_RR_SIZE_SHIFT           1
+#define TXGBE_PX_RR_CFG_BSIZEPKT_SHIFT          2 /* so many KBs */
+#define TXGBE_PX_RR_CFG_BSIZEHDRSIZE_SHIFT      6 /* 64byte resolution (>> 6)
+						   * + at bit 12 offset (<< 12)
+						   *  = (<< 6)
+						   */
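+/* e.g. a 256-byte header: (256 >> 6) << 12 == 256 << 6 */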
+#define TXGBE_PX_RR_CFG_DROP_EN         0x40000000U
+#define TXGBE_PX_RR_CFG_VLAN            0x80000000U
+#define TXGBE_PX_RR_CFG_RSC             0x20000000U
+#define TXGBE_PX_RR_CFG_CNTAG           0x10000000U
+#define TXGBE_PX_RR_CFG_RSC_CNT_MD      0x08000000U
+#define TXGBE_PX_RR_CFG_SPLIT_MODE      0x04000000U
+#define TXGBE_PX_RR_CFG_STALL           0x02000000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_1    0x00000000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_4    0x00800000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_8    0x01000000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_16   0x01800000U
+#define TXGBE_PX_RR_CFG_RR_THER         0x00070000U
+#define TXGBE_PX_RR_CFG_RR_THER_SHIFT   16
+
+#define TXGBE_PX_RR_CFG_RR_HDR_SZ       0x0000F000U
+#define TXGBE_PX_RR_CFG_RR_BUF_SZ       0x00000F00U
+#define TXGBE_PX_RR_CFG_RR_SZ           0x0000007EU
+#define TXGBE_PX_RR_CFG_RR_EN           0x00000001U
+
+/* statistics */
+#define TXGBE_PX_MPRC(_i)               (0x1020 + ((_i) * 64))
+#define TXGBE_VX_GPRC(_i)               (0x01014 + (0x40 * (_i)))
+#define TXGBE_VX_GPTC(_i)               (0x03014 + (0x40 * (_i)))
+#define TXGBE_VX_GORC_LSB(_i)           (0x01018 + (0x40 * (_i)))
+#define TXGBE_VX_GORC_MSB(_i)           (0x0101C + (0x40 * (_i)))
+#define TXGBE_VX_GOTC_LSB(_i)           (0x03018 + (0x40 * (_i)))
+#define TXGBE_VX_GOTC_MSB(_i)           (0x0301C + (0x40 * (_i)))
+#define TXGBE_VX_MPRC(_i)               (0x01020 + (0x40 * (_i)))
+
+#define TXGBE_PX_GPRC                   0x12504
+#define TXGBE_PX_GPTC                   0x18308
+
+#define TXGBE_PX_GORC_LSB               0x12508
+#define TXGBE_PX_GORC_MSB               0x1250C
+
+#define TXGBE_PX_GOTC_LSB               0x1830C
+#define TXGBE_PX_GOTC_MSB               0x18310
+
+/************************************* Stats registers ************************/
+#define TXGBE_FCCRC         0x15160 /* Num of Good Eth CRC w/ Bad FC CRC */
+#define TXGBE_FCOERPDC      0x12514 /* FCoE Rx Packets Dropped Count */
+#define TXGBE_FCLAST        0x12518 /* FCoE Last Error Count */
+#define TXGBE_FCOEPRC       0x15164 /* Number of FCoE Packets Received */
+#define TXGBE_FCOEDWRC      0x15168 /* Number of FCoE DWords Received */
+#define TXGBE_FCOEPTC       0x18318 /* Number of FCoE Packets Transmitted */
+#define TXGBE_FCOEDWTC      0x1831C /* Number of FCoE DWords Transmitted */
+
+/*************************** Flash region definition *************************/
+/* EEC Register */
+#define TXGBE_EEC_SK            0x00000001U /* EEPROM Clock */
+#define TXGBE_EEC_CS            0x00000002U /* EEPROM Chip Select */
+#define TXGBE_EEC_DI            0x00000004U /* EEPROM Data In */
+#define TXGBE_EEC_DO            0x00000008U /* EEPROM Data Out */
+#define TXGBE_EEC_FWE_MASK      0x00000030U /* FLASH Write Enable */
+#define TXGBE_EEC_FWE_DIS       0x00000010U /* Disable FLASH writes */
+#define TXGBE_EEC_FWE_EN        0x00000020U /* Enable FLASH writes */
+#define TXGBE_EEC_FWE_SHIFT     4
+#define TXGBE_EEC_REQ           0x00000040U /* EEPROM Access Request */
+#define TXGBE_EEC_GNT           0x00000080U /* EEPROM Access Grant */
+#define TXGBE_EEC_PRES          0x00000100U /* EEPROM Present */
+#define TXGBE_EEC_ARD           0x00000200U /* EEPROM Auto Read Done */
+#define TXGBE_EEC_FLUP          0x00800000U /* Flash update command */
+#define TXGBE_EEC_SEC1VAL       0x02000000U /* Sector 1 Valid */
+#define TXGBE_EEC_FLUDONE       0x04000000U /* Flash update done */
+/* EEPROM Addressing bits based on type (0-small, 1-large) */
+#define TXGBE_EEC_ADDR_SIZE     0x00000400U
+#define TXGBE_EEC_SIZE          0x00007800U /* EEPROM Size */
+#define TXGBE_EERD_MAX_ADDR     0x00003FFFU /* EERD allows 14 bits for addr. */
+
+#define TXGBE_EEC_SIZE_SHIFT            11
+#define TXGBE_EEPROM_WORD_SIZE_SHIFT    6
+#define TXGBE_EEPROM_OPCODE_BITS        8
+
+/* FLA Register */
+#define TXGBE_FLA_LOCKED        0x00000040U
+
+/* Part Number String Length */
+#define TXGBE_PBANUM_LENGTH     32
+
+/* Checksum and EEPROM pointers */
+#define TXGBE_PBANUM_PTR_GUARD          0xFAFA
+#define TXGBE_EEPROM_CHECKSUM           0x2F
+#define TXGBE_EEPROM_SUM                0xBABA
+#define TXGBE_ATLAS0_CONFIG_PTR         0x04
+#define TXGBE_PHY_PTR                   0x04
+#define TXGBE_ATLAS1_CONFIG_PTR         0x05
+#define TXGBE_OPTION_ROM_PTR            0x05
+#define TXGBE_PCIE_GENERAL_PTR          0x06
+#define TXGBE_PCIE_CONFIG0_PTR          0x07
+#define TXGBE_PCIE_CONFIG1_PTR          0x08
+#define TXGBE_CORE0_PTR                 0x09
+#define TXGBE_CORE1_PTR                 0x0A
+#define TXGBE_MAC0_PTR                  0x0B
+#define TXGBE_MAC1_PTR                  0x0C
+#define TXGBE_CSR0_CONFIG_PTR           0x0D
+#define TXGBE_CSR1_CONFIG_PTR           0x0E
+#define TXGBE_PCIE_ANALOG_PTR           0x02
+#define TXGBE_SHADOW_RAM_SIZE           0x4000
+#define TXGBE_TXGBE_PCIE_GENERAL_SIZE   0x24
+#define TXGBE_PCIE_CONFIG_SIZE          0x08
+#define TXGBE_EEPROM_LAST_WORD          0x800
+#define TXGBE_FW_PTR                    0x0F
+#define TXGBE_PBANUM0_PTR               0x05
+#define TXGBE_PBANUM1_PTR               0x06
+#define TXGBE_ALT_MAC_ADDR_PTR          0x37
+#define TXGBE_FREE_SPACE_PTR            0x3E
+#define TXGBE_SW_REGION_PTR             0x1C
+
+#define TXGBE_SAN_MAC_ADDR_PTR          0x18
+#define TXGBE_DEVICE_CAPS               0x1C
+#define TXGBE_EEPROM_VERSION_L          0x1D
+#define TXGBE_EEPROM_VERSION_H          0x1E
+#define TXGBE_ISCSI_BOOT_CONFIG         0x07
+
+#define TXGBE_SERIAL_NUMBER_MAC_ADDR    0x11
+#define TXGBE_MAX_MSIX_VECTORS_SAPPHIRE 0x40
+
+/* MSI-X capability fields masks */
+#define TXGBE_PCIE_MSIX_TBL_SZ_MASK     0x7FF
+
+/* Legacy EEPROM word offsets */
+#define TXGBE_ISCSI_BOOT_CAPS           0x0033
+#define TXGBE_ISCSI_SETUP_PORT_0        0x0030
+#define TXGBE_ISCSI_SETUP_PORT_1        0x0034
+
+/* EEPROM Commands - SPI */
+#define TXGBE_EEPROM_MAX_RETRY_SPI      5000 /* Max wait 5ms for RDY signal */
+#define TXGBE_EEPROM_STATUS_RDY_SPI     0x01
+#define TXGBE_EEPROM_READ_OPCODE_SPI    0x03  /* EEPROM read opcode */
+#define TXGBE_EEPROM_WRITE_OPCODE_SPI   0x02  /* EEPROM write opcode */
+#define TXGBE_EEPROM_A8_OPCODE_SPI      0x08  /* opcode bit-3 = addr bit-8 */
+#define TXGBE_EEPROM_WREN_OPCODE_SPI    0x06  /* EEPROM set Write Ena latch */
+/* EEPROM reset Write Enable latch */
+#define TXGBE_EEPROM_WRDI_OPCODE_SPI        0x04
+#define TXGBE_EEPROM_RDSR_OPCODE_SPI        0x05  /* EEPROM read Status reg */
+#define TXGBE_EEPROM_WRSR_OPCODE_SPI        0x01  /* EEPROM write Status reg */
+#define TXGBE_EEPROM_ERASE4K_OPCODE_SPI     0x20  /* EEPROM ERASE 4KB */
+#define TXGBE_EEPROM_ERASE64K_OPCODE_SPI    0xD8  /* EEPROM ERASE 64KB */
+#define TXGBE_EEPROM_ERASE256_OPCODE_SPI    0xDB  /* EEPROM ERASE 256B */
+
+/* EEPROM Read Register */
+#define TXGBE_EEPROM_RW_REG_DATA        16 /* data offset in EEPROM read reg */
+#define TXGBE_EEPROM_RW_REG_DONE        2 /* Offset to READ done bit */
+#define TXGBE_EEPROM_RW_REG_START       1 /* First bit to start operation */
+#define TXGBE_EEPROM_RW_ADDR_SHIFT      2 /* Shift to the address bits */
+#define TXGBE_NVM_POLL_WRITE            1 /* Flag for polling for wr complete */
+#define TXGBE_NVM_POLL_READ             0 /* Flag for polling for rd complete */
+
+#define NVM_INIT_CTRL_3                 0x38
+#define NVM_INIT_CTRL_3_LPLU            0x8
+#define NVM_INIT_CTRL_3_D10GMP_PORT0    0x40
+#define NVM_INIT_CTRL_3_D10GMP_PORT1    0x100
+
+#define TXGBE_ETH_LENGTH_OF_ADDRESS     6
+
+#define TXGBE_EEPROM_PAGE_SIZE_MAX      128
+#define TXGBE_EEPROM_RD_BUFFER_MAX_COUNT        256 /* words rd in burst */
+#define TXGBE_EEPROM_WR_BUFFER_MAX_COUNT        256 /* words wr in burst */
+#define TXGBE_EEPROM_CTRL_2             1 /* EEPROM CTRL word 2 */
+#define TXGBE_EEPROM_CCD_BIT            2
+
+#ifndef TXGBE_EEPROM_GRANT_ATTEMPTS
+#define TXGBE_EEPROM_GRANT_ATTEMPTS     1000 /* EEPROM attempts to gain grant */
+#endif
+
+#ifndef TXGBE_EERD_EEWR_ATTEMPTS
+/* Number of 5-microsecond iterations we wait for an EERD read or
+ * EEWR write to complete
+ */
+#define TXGBE_EERD_EEWR_ATTEMPTS        100000
+#endif
+
+#ifndef TXGBE_FLUDONE_ATTEMPTS
+/* # of attempts we wait for the flash update to complete */
+#define TXGBE_FLUDONE_ATTEMPTS          20000
+#endif
+
+#define TXGBE_PCIE_CTRL2                0x5   /* PCIe Control 2 Offset */
+#define TXGBE_PCIE_CTRL2_DUMMY_ENABLE   0x8   /* Dummy Function Enable */
+#define TXGBE_PCIE_CTRL2_LAN_DISABLE    0x2   /* LAN PCI Disable */
+#define TXGBE_PCIE_CTRL2_DISABLE_SELECT 0x1   /* LAN Disable Select */
+
+#define TXGBE_SAN_MAC_ADDR_PORT0_OFFSET         0x0
+#define TXGBE_SAN_MAC_ADDR_PORT1_OFFSET         0x3
+#define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP         0x1
+#define TXGBE_DEVICE_CAPS_FCOE_OFFLOADS         0x2
+#define TXGBE_FW_LESM_PARAMETERS_PTR            0x2
+#define TXGBE_FW_LESM_STATE_1                   0x1
+#define TXGBE_FW_LESM_STATE_ENABLED             0x8000 /* LESM Enable bit */
+#define TXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR   0x4
+#define TXGBE_FW_PATCH_VERSION_4                0x7
+#define TXGBE_FCOE_IBA_CAPS_BLK_PTR             0x33 /* iSCSI/FCOE block */
+#define TXGBE_FCOE_IBA_CAPS_FCOE                0x20 /* FCOE flags */
+#define TXGBE_ISCSI_FCOE_BLK_PTR                0x17 /* iSCSI/FCOE block */
+#define TXGBE_ISCSI_FCOE_FLAGS_OFFSET           0x0 /* FCOE flags */
+#define TXGBE_ISCSI_FCOE_FLAGS_ENABLE           0x1 /* FCOE flags enable bit */
+#define TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR          0x17 /* Alt. SAN MAC block */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET      0x0 /* Alt SAN MAC capability */
+#define TXGBE_ALT_SAN_MAC_ADDR_PORT0_OFFSET     0x1 /* Alt SAN MAC 0 offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_PORT1_OFFSET     0x4 /* Alt SAN MAC 1 offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET      0x7 /* Alt WWNN prefix offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET      0x8 /* Alt WWPN prefix offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_SANMAC      0x0 /* Alt SAN MAC exists */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN      0x1 /* Alt WWN base exists */
+#define TXGBE_DEVICE_CAPS_WOL_PORT0_1   0x4 /* WoL supported on ports 0 & 1 */
+#define TXGBE_DEVICE_CAPS_WOL_PORT0     0x8 /* WoL supported on port 0 */
+#define TXGBE_DEVICE_CAPS_WOL_MASK      0xC /* Mask for WoL capabilities */
+
+/******************************** PCI Bus Info *******************************/
+#define TXGBE_PCI_DEVICE_STATUS         0xAA
+#define TXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING     0x0020
+#define TXGBE_PCI_LINK_STATUS           0xB2
+#define TXGBE_PCI_DEVICE_CONTROL2       0xC8
+#define TXGBE_PCI_LINK_WIDTH            0x3F0
+#define TXGBE_PCI_LINK_WIDTH_1          0x10
+#define TXGBE_PCI_LINK_WIDTH_2          0x20
+#define TXGBE_PCI_LINK_WIDTH_4          0x40
+#define TXGBE_PCI_LINK_WIDTH_8          0x80
+#define TXGBE_PCI_LINK_SPEED            0xF
+#define TXGBE_PCI_LINK_SPEED_2500       0x1
+#define TXGBE_PCI_LINK_SPEED_5000       0x2
+#define TXGBE_PCI_LINK_SPEED_8000       0x3
+#define TXGBE_PCI_HEADER_TYPE_REGISTER  0x0E
+#define TXGBE_PCI_HEADER_TYPE_MULTIFUNC 0x80
+#define TXGBE_PCI_DEVICE_CONTROL2_16ms  0x0005
+
+#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET    4
+#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_MASK      \
+				(0x0001 << TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET)
+#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_ENABLE    \
+				(0x01 << TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET)
+
+#define TXGBE_PCIDEVCTRL2_TIMEO_MASK    0xf
+#define TXGBE_PCIDEVCTRL2_16_32ms_def   0x0
+#define TXGBE_PCIDEVCTRL2_50_100us      0x1
+#define TXGBE_PCIDEVCTRL2_1_2ms         0x2
+#define TXGBE_PCIDEVCTRL2_16_32ms       0x5
+#define TXGBE_PCIDEVCTRL2_65_130ms      0x6
+#define TXGBE_PCIDEVCTRL2_260_520ms     0x9
+#define TXGBE_PCIDEVCTRL2_1_2s          0xa
+#define TXGBE_PCIDEVCTRL2_4_8s          0xd
+#define TXGBE_PCIDEVCTRL2_17_34s        0xe
+
+/* Number of 100-microsecond iterations we wait for PCI Express master disable */
+#define TXGBE_PCI_MASTER_DISABLE_TIMEOUT        800
+
+/* PCI bus types */
+enum txgbe_bus_type {
+	txgbe_bus_type_unknown = 0,
+	txgbe_bus_type_pci,
+	txgbe_bus_type_pcix,
+	txgbe_bus_type_pci_express,
+	txgbe_bus_type_internal,
+	txgbe_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum txgbe_bus_speed {
+	txgbe_bus_speed_unknown	= 0,
+	txgbe_bus_speed_33	= 33,
+	txgbe_bus_speed_66	= 66,
+	txgbe_bus_speed_100	= 100,
+	txgbe_bus_speed_120	= 120,
+	txgbe_bus_speed_133	= 133,
+	txgbe_bus_speed_2500	= 2500,
+	txgbe_bus_speed_5000	= 5000,
+	txgbe_bus_speed_8000	= 8000,
+	txgbe_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum txgbe_bus_width {
+	txgbe_bus_width_unknown	= 0,
+	txgbe_bus_width_pcie_x1	= 1,
+	txgbe_bus_width_pcie_x2	= 2,
+	txgbe_bus_width_pcie_x4	= 4,
+	txgbe_bus_width_pcie_x8	= 8,
+	txgbe_bus_width_32	= 32,
+	txgbe_bus_width_64	= 64,
+	txgbe_bus_width_reserved
+};
+
+struct txgbe_addr_filter_info {
+	u32 num_mc_addrs;
+	u32 rar_used_count;
+	u32 mta_in_use;
+	u32 overflow_promisc;
+	bool user_set_promisc;
+};
+
+/* Bus parameters */
+struct txgbe_bus_info {
+	enum txgbe_bus_speed speed;
+	enum txgbe_bus_width width;
+	enum txgbe_bus_type type;
+
+	u16 func;
+	u16 lan_id;
+};
+
+/* forward declaration */
+struct txgbe_hw;
+
+struct txgbe_mac_operations {
+	s32 (*init_hw)(struct txgbe_hw *hw);
+	s32 (*reset_hw)(struct txgbe_hw *hw);
+	s32 (*start_hw)(struct txgbe_hw *hw);
+	s32 (*get_mac_addr)(struct txgbe_hw *hw, u8 *mac_addr);
+	s32 (*get_san_mac_addr)(struct txgbe_hw *hw, u8 *san_mac_addr);
+	s32 (*stop_adapter)(struct txgbe_hw *hw);
+	s32 (*get_bus_info)(struct txgbe_hw *hw);
+	void (*set_lan_id)(struct txgbe_hw *hw);
+
+	/* RAR */
+	s32 (*set_rar)(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
+		       u32 enable_addr);
+	s32 (*clear_rar)(struct txgbe_hw *hw, u32 index);
+	s32 (*set_vmdq_san_mac)(struct txgbe_hw *hw, u32 vmdq);
+	s32 (*clear_vmdq)(struct txgbe_hw *hw, u32 rar, u32 vmdq);
+	s32 (*init_rx_addrs)(struct txgbe_hw *hw);
+	s32 (*init_uta_tables)(struct txgbe_hw *hw);
+
+	/* Manageability interface */
+	s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw);
+	void (*disable_rx)(struct txgbe_hw *hw);
+};
+
+struct txgbe_mac_info {
+	struct txgbe_mac_operations ops;
+	u8 addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+	u8 perm_addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+	u8 san_addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+	s32 mc_filter_type;
+	u32 mcft_size;
+	u32 num_rar_entries;
+	u32 max_tx_queues;
+	u32 max_rx_queues;
+	u8  san_mac_rar_index;
+	bool autotry_restart;
+	struct txgbe_thermal_sensor_data  thermal_sensor_data;
+	bool set_lben;
+};
+
+enum txgbe_reset_type {
+	TXGBE_LAN_RESET = 0,
+	TXGBE_SW_RESET,
+	TXGBE_GLOBAL_RESET
+};
+
 struct txgbe_hw {
 	u8 __iomem *hw_addr;
 	void *back;
+	struct txgbe_mac_info mac;
+	struct txgbe_addr_filter_info addr_ctrl;
+	struct txgbe_bus_info bus;
 	u16 device_id;
 	u16 vendor_id;
 	u16 subsystem_device_id;
 	u16 subsystem_vendor_id;
 	u8 revision_id;
+	bool adapter_stopped;
+	enum txgbe_reset_type reset_type;
+	bool force_full_reset;
 	u16 subsystem_id;
 };
 
+#define TCALL(hw, func, args...) (((hw)->func != NULL) \
+		? (hw)->func((hw), ##args) : TXGBE_NOT_IMPLEMENTED)
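+
+/* Usage sketch: TCALL() invokes an optional callback only when it is
+ * populated, e.g.
+ *
+ *	status = TCALL(hw, mac.ops.init_hw);
+ *
+ * expands to hw->mac.ops.init_hw(hw) when the op is set and evaluates
+ * to TXGBE_NOT_IMPLEMENTED otherwise.
+ */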
+
+/* Error Codes */
+#define TXGBE_ERR                                100
+#define TXGBE_NOT_IMPLEMENTED                    0x7FFFFFFF
+/* (-TXGBE_ERR, TXGBE_ERR): reserved for non-txgbe defined error code */
+#define TXGBE_ERR_NOSUPP                        -(TXGBE_ERR + 0)
+#define TXGBE_ERR_EEPROM                        -(TXGBE_ERR + 1)
+#define TXGBE_ERR_EEPROM_CHECKSUM               -(TXGBE_ERR + 2)
+#define TXGBE_ERR_PHY                           -(TXGBE_ERR + 3)
+#define TXGBE_ERR_CONFIG                        -(TXGBE_ERR + 4)
+#define TXGBE_ERR_PARAM                         -(TXGBE_ERR + 5)
+#define TXGBE_ERR_MAC_TYPE                      -(TXGBE_ERR + 6)
+#define TXGBE_ERR_UNKNOWN_PHY                   -(TXGBE_ERR + 7)
+#define TXGBE_ERR_LINK_SETUP                    -(TXGBE_ERR + 8)
+#define TXGBE_ERR_ADAPTER_STOPPED               -(TXGBE_ERR + 9)
+#define TXGBE_ERR_INVALID_MAC_ADDR              -(TXGBE_ERR + 10)
+#define TXGBE_ERR_DEVICE_NOT_SUPPORTED          -(TXGBE_ERR + 11)
+#define TXGBE_ERR_MASTER_REQUESTS_PENDING       -(TXGBE_ERR + 12)
+#define TXGBE_ERR_INVALID_LINK_SETTINGS         -(TXGBE_ERR + 13)
+#define TXGBE_ERR_AUTONEG_NOT_COMPLETE          -(TXGBE_ERR + 14)
+#define TXGBE_ERR_RESET_FAILED                  -(TXGBE_ERR + 15)
+#define TXGBE_ERR_SWFW_SYNC                     -(TXGBE_ERR + 16)
+#define TXGBE_ERR_PHY_ADDR_INVALID              -(TXGBE_ERR + 17)
+#define TXGBE_ERR_I2C                           -(TXGBE_ERR + 18)
+#define TXGBE_ERR_SFP_NOT_SUPPORTED             -(TXGBE_ERR + 19)
+#define TXGBE_ERR_SFP_NOT_PRESENT               -(TXGBE_ERR + 20)
+#define TXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT       -(TXGBE_ERR + 21)
+#define TXGBE_ERR_NO_SAN_ADDR_PTR               -(TXGBE_ERR + 22)
+#define TXGBE_ERR_FDIR_REINIT_FAILED            -(TXGBE_ERR + 23)
+#define TXGBE_ERR_EEPROM_VERSION                -(TXGBE_ERR + 24)
+#define TXGBE_ERR_NO_SPACE                      -(TXGBE_ERR + 25)
+#define TXGBE_ERR_OVERTEMP                      -(TXGBE_ERR + 26)
+#define TXGBE_ERR_UNDERTEMP                     -(TXGBE_ERR + 27)
+#define TXGBE_ERR_FC_NOT_NEGOTIATED             -(TXGBE_ERR + 28)
+#define TXGBE_ERR_FC_NOT_SUPPORTED              -(TXGBE_ERR + 29)
+#define TXGBE_ERR_SFP_SETUP_NOT_COMPLETE        -(TXGBE_ERR + 30)
+#define TXGBE_ERR_PBA_SECTION                   -(TXGBE_ERR + 31)
+#define TXGBE_ERR_INVALID_ARGUMENT              -(TXGBE_ERR + 32)
+#define TXGBE_ERR_HOST_INTERFACE_COMMAND        -(TXGBE_ERR + 33)
+#define TXGBE_ERR_OUT_OF_MEM                    -(TXGBE_ERR + 34)
+#define TXGBE_ERR_FEATURE_NOT_SUPPORTED         -(TXGBE_ERR + 36)
+#define TXGBE_ERR_EEPROM_PROTECTED_REGION       -(TXGBE_ERR + 37)
+#define TXGBE_ERR_FDIR_CMD_INCOMPLETE           -(TXGBE_ERR + 38)
+#define TXGBE_ERR_FLASH_LOADING_FAILED          -(TXGBE_ERR + 39)
+#define TXGBE_ERR_XPCS_POWER_UP_FAILED          -(TXGBE_ERR + 40)
+#define TXGBE_ERR_FW_RESP_INVALID               -(TXGBE_ERR + 41)
+#define TXGBE_ERR_PHY_INIT_NOT_DONE             -(TXGBE_ERR + 42)
+#define TXGBE_ERR_TIMEOUT                       -(TXGBE_ERR + 43)
+#define TXGBE_ERR_TOKEN_RETRY                   -(TXGBE_ERR + 44)
+#define TXGBE_ERR_REGISTER                      -(TXGBE_ERR + 45)
+#define TXGBE_ERR_MBX                           -(TXGBE_ERR + 46)
+#define TXGBE_ERR_MNG_ACCESS_FAILED             -(TXGBE_ERR + 47)
+
+/**
+ * register operations
+ **/
+/* read register */
+#define TXGBE_DEAD_READ_RETRIES     10
+#define TXGBE_DEAD_READ_REG         0xdeadbeefU
+#define TXGBE_DEAD_READ_REG64       0xdeadbeefdeadbeefULL
+#define TXGBE_FAILED_READ_REG       0xffffffffU
+#define TXGBE_FAILED_READ_REG64     0xffffffffffffffffULL
+
+static inline bool TXGBE_REMOVED(void __iomem *addr)
+{
+	return unlikely(!addr);
+}
+
+static inline u32
+txgbe_rd32(u8 __iomem *base)
+{
+	return readl(base);
+}
+
+static inline u32
+rd32(struct txgbe_hw *hw, u32 reg)
+{
+	u8 __iomem *base = READ_ONCE(hw->hw_addr);
+	u32 val = TXGBE_FAILED_READ_REG;
+
+	if (unlikely(!base))
+		return val;
+
+	val = txgbe_rd32(base + reg);
+
+	return val;
+}
+
+#define rd32a(a, reg, offset) ( \
+	rd32((a), (reg) + ((offset) << 2)))
+
+static inline u32
+rd32m(struct txgbe_hw *hw, u32 reg, u32 mask)
+{
+	u8 __iomem *base = READ_ONCE(hw->hw_addr);
+	u32 val = TXGBE_FAILED_READ_REG;
+
+	if (unlikely(!base))
+		return val;
+
+	val = txgbe_rd32(base + reg);
+	if (unlikely(val == TXGBE_FAILED_READ_REG))
+		return val;
+
+	return val & mask;
+}
+
+/* write register */
+static inline void
+txgbe_wr32(u8 __iomem *base, u32 val)
+{
+	writel(val, base);
+}
+
+static inline void
+wr32(struct txgbe_hw *hw, u32 reg, u32 val)
+{
+	u8 __iomem *base = READ_ONCE(hw->hw_addr);
+
+	if (unlikely(!base))
+		return;
+
+	txgbe_wr32(base + reg, val);
+}
+
+#define wr32a(a, reg, off, val) \
+	wr32((a), (reg) + ((off) << 2), (val))
+
+static inline void
+wr32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 field)
+{
+	u8 __iomem *base = READ_ONCE(hw->hw_addr);
+	u32 val;
+
+	if (unlikely(!base))
+		return;
+
+	val = txgbe_rd32(base + reg);
+	if (unlikely(val == TXGBE_FAILED_READ_REG))
+		return;
+
+	val = ((val & ~mask) | (field & mask));
+	txgbe_wr32(base + reg, val);
+}
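+
+/* Usage sketch: wr32m() is a read-modify-write, so only the bits
+ * selected by @mask change, e.g. setting the SW semaphore bit while
+ * leaving the rest of TXGBE_MNG_SW_SM untouched:
+ *
+ *	wr32m(hw, TXGBE_MNG_SW_SM, TXGBE_MNG_SW_SM_SM, TXGBE_MNG_SW_SM_SM);
+ */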
+
+#define TXGBE_WRITE_FLUSH(H) rd32(H, TXGBE_MIS_PWR)
+
 #endif /* _TXGBE_TYPE_H_ */
-- 
2.27.0




^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 03/14] net: txgbe: Add operations to interact with firmware
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 02/14] net: txgbe: Add hardware initialization Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 04/14] net: txgbe: Add PHY interface support Jiawen Wu
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add firmware interaction to get EEPROM information.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |   13 +
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c | 1024 ++++++++++++++++-
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   26 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |   59 +
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |  113 ++
 5 files changed, 1226 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 48e6f69857ad..4c5de310292f 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -38,6 +38,11 @@ struct txgbe_mac_addr {
 #define TXGBE_MAC_STATE_MODIFIED        0x2
 #define TXGBE_MAC_STATE_IN_USE          0x4
 
+/**
+ * txgbe_adapter.flag2
+ **/
+#define TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED     (1U << 0)
+
 /* board specific private data structure */
 struct txgbe_adapter {
 	/* OS defined structs */
@@ -73,6 +78,7 @@ struct txgbe_adapter {
 
 	u8 __iomem *io_addr;    /* Mainly for iounmap use */
 
+	char eeprom_id[32];
 	bool netdev_registered;
 
 	struct txgbe_mac_addr *mac_table;
@@ -133,6 +139,13 @@ static inline void txgbe_intr_disable(struct txgbe_hw *hw, u64 qmask)
 	/* skip the flush */
 }
 
+#define TXGBE_CPU_TO_BE16(_x) cpu_to_be16(_x)
+#define TXGBE_BE16_TO_CPU(_x) be16_to_cpu(_x)
+#define TXGBE_CPU_TO_BE32(_x) cpu_to_be32(_x)
+#define TXGBE_BE32_TO_CPU(_x) be32_to_cpu(_x)
+#define TXGBE_CPU_TO_LE32(_i) cpu_to_le32(_i)
+#define TXGBE_LE32_TO_CPUS(_i) le32_to_cpus(_i)
+
 #define msec_delay(_x) msleep(_x)
 #define usec_delay(_x) udelay(_x)
 
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index e4ebaf007da9..9865ae301133 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -9,6 +9,9 @@
 #define TXGBE_SP_MAX_RX_QUEUES  128
 #define TXGBE_SP_RAR_ENTRIES    128
 
+static s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw);
+static void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw);
+
 s32 txgbe_init_hw(struct txgbe_hw *hw)
 {
 	s32 status;
@@ -24,6 +27,118 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_read_pba_string - Reads part number string from EEPROM
+ *  @hw: pointer to hardware structure
+ *  @pba_num: stores the part number string from the EEPROM
+ *  @pba_num_size: part number string buffer length
+ *
+ *  Reads the part number string from the EEPROM.
+ **/
+s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num,
+			  u32 pba_num_size)
+{
+	s32 ret_val;
+	u16 data;
+	u16 pba_ptr;
+	u16 offset;
+	u16 length;
+
+	if (!pba_num) {
+		DEBUGOUT("PBA string buffer was null\n");
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	ret_val = TCALL(hw, eeprom.ops.read,
+			hw->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR,
+			&data);
+	if (ret_val) {
+		DEBUGOUT("NVM Read Error\n");
+		return ret_val;
+	}
+
+	ret_val = TCALL(hw, eeprom.ops.read,
+			hw->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR,
+			&pba_ptr);
+	if (ret_val) {
+		DEBUGOUT("NVM Read Error\n");
+		return ret_val;
+	}
+
+	/* if data is not the ptr guard, the PBA is in legacy format, which
+	 * means pba_ptr is actually our second data word for the PBA number
+	 * and we can decode it into an ASCII string
+	 */
+	if (data != TXGBE_PBANUM_PTR_GUARD) {
+		DEBUGOUT("NVM PBA number is not stored as string\n");
+
+		/* we will need 11 characters to store the PBA */
+		if (pba_num_size < 11) {
+			DEBUGOUT("PBA string buffer too small\n");
+			return TXGBE_ERR_NO_SPACE;
+		}
+
+		/* extract hex string from data and pba_ptr */
+		pba_num[0] = (data >> 12) & 0xF;
+		pba_num[1] = (data >> 8) & 0xF;
+		pba_num[2] = (data >> 4) & 0xF;
+		pba_num[3] = data & 0xF;
+		pba_num[4] = (pba_ptr >> 12) & 0xF;
+		pba_num[5] = (pba_ptr >> 8) & 0xF;
+		pba_num[6] = '-';
+		pba_num[7] = 0;
+		pba_num[8] = (pba_ptr >> 4) & 0xF;
+		pba_num[9] = pba_ptr & 0xF;
+
+		/* put a null character on the end of our string */
+		pba_num[10] = '\0';
+
+		/* switch all the data but the '-' to hex char */
+		for (offset = 0; offset < 10; offset++) {
+			if (pba_num[offset] < 0xA)
+				pba_num[offset] += '0';
+			else if (pba_num[offset] < 0x10)
+				pba_num[offset] += 'A' - 0xA;
+		}
+
+		return 0;
+	}
+
+	ret_val = TCALL(hw, eeprom.ops.read, pba_ptr, &length);
+	if (ret_val) {
+		DEBUGOUT("NVM Read Error\n");
+		return ret_val;
+	}
+
+	if (length == 0xFFFF || length == 0) {
+		DEBUGOUT("NVM PBA number section invalid length\n");
+		return TXGBE_ERR_PBA_SECTION;
+	}
+
+	/* check if pba_num buffer is big enough */
+	if (pba_num_size < (((u32)length * 2) - 1)) {
+		DEBUGOUT("PBA string buffer too small\n");
+		return TXGBE_ERR_NO_SPACE;
+	}
+
+	/* trim pba length from start of string */
+	pba_ptr++;
+	length--;
+
+	for (offset = 0; offset < length; offset++) {
+		ret_val = TCALL(hw, eeprom.ops.read, pba_ptr + offset, &data);
+		if (ret_val) {
+			DEBUGOUT("NVM Read Error\n");
+			return ret_val;
+		}
+		pba_num[offset * 2] = (u8)(data >> 8);
+		pba_num[(offset * 2) + 1] = (u8)(data & 0xFF);
+	}
+	pba_num[offset * 2] = '\0';
+
+	return 0;
+}
+
 /**
  *  txgbe_get_mac_addr - Generic get MAC address
  *  @hw: pointer to hardware structure
@@ -190,6 +305,102 @@ s32 txgbe_stop_adapter(struct txgbe_hw *hw)
 	return txgbe_disable_pcie_master(hw);
 }
 
+/**
+ *  txgbe_get_eeprom_semaphore - Get hardware semaphore
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets the hardware semaphores so EEPROM access can occur for bit-bang method
+ **/
+static s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw)
+{
+	s32 status = TXGBE_ERR_EEPROM;
+	u32 timeout = 2000;
+	u32 i;
+	u32 swsm;
+
+	/* Get SMBI software semaphore between device drivers first */
+	for (i = 0; i < timeout; i++) {
+		/* If the SMBI bit is 0 when we read it, then the bit will be
+		 * set and we have the semaphore
+		 */
+		swsm = rd32(hw, TXGBE_MIS_SWSM);
+		if (!(swsm & TXGBE_MIS_SWSM_SMBI)) {
+			status = 0;
+			break;
+		}
+		usec_delay(50);
+	}
+
+	if (i == timeout) {
+		DEBUGOUT("Driver can't access the EEPROM - SMBI Semaphore not granted.\n");
+
+		/* this release is particularly important because our attempts
+		 * above to get the semaphore may have succeeded, and if there
+		 * was a timeout, we should unconditionally clear the semaphore
+		 * bits to free the driver to make progress
+		 */
+		txgbe_release_eeprom_semaphore(hw);
+
+		usec_delay(50);
+		/* one last try
+		 * If the SMBI bit is 0 when we read it, then the bit will be
+		 * set and we have the semaphore
+		 */
+		swsm = rd32(hw, TXGBE_MIS_SWSM);
+		if (!(swsm & TXGBE_MIS_SWSM_SMBI))
+			status = 0;
+	}
+
+	/* Now get the semaphore between SW/FW through the SWESMBI bit */
+	if (status == 0) {
+		for (i = 0; i < timeout; i++) {
+			if (txgbe_check_mng_access(hw)) {
+				/* Set the SW EEPROM semaphore bit to request access */
+				wr32m(hw, TXGBE_MNG_SW_SM,
+				      TXGBE_MNG_SW_SM_SM, TXGBE_MNG_SW_SM_SM);
+
+				/* If we set the bit successfully then we got
+				 * semaphore.
+				 */
+				swsm = rd32(hw, TXGBE_MNG_SW_SM);
+				if (swsm & TXGBE_MNG_SW_SM_SM)
+					break;
+			}
+			usec_delay(50);
+		}
+
+		/* Release semaphores and return error if SW EEPROM semaphore
+		 * was not granted because we don't have access to the EEPROM
+		 */
+		if (i >= timeout) {
+			ERROR_REPORT1(TXGBE_ERROR_POLLING,
+				      "SWESMBI Software EEPROM semaphore not granted.\n");
+			txgbe_release_eeprom_semaphore(hw);
+			status = TXGBE_ERR_EEPROM;
+		}
+	} else {
+		ERROR_REPORT1(TXGBE_ERROR_POLLING,
+			      "Software semaphore SMBI between device drivers not granted.\n");
+	}
+
+	return status;
+}
+
+/**
+ *  txgbe_release_eeprom_semaphore - Release hardware semaphore
+ *  @hw: pointer to hardware structure
+ *
+ *  This function clears hardware semaphore bits.
+ **/
+static void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw)
+{
+	if (txgbe_check_mng_access(hw)) {
+		wr32m(hw, TXGBE_MNG_SW_SM, TXGBE_MNG_SW_SM_SM, 0);
+		wr32m(hw, TXGBE_MIS_SWSM, TXGBE_MIS_SWSM_SMBI, 0);
+		TXGBE_WRITE_FLUSH(hw);
+	}
+}
+
 /**
  *  txgbe_set_rar - Set Rx address register
  *  @hw: pointer to hardware structure
@@ -386,17 +597,137 @@ s32 txgbe_disable_pcie_master(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_acquire_swfw_sync - Acquire SWFW semaphore
+ *  @hw: pointer to hardware structure
+ *  @mask: Mask to specify which semaphore to acquire
+ *
+ *  Acquires the SWFW semaphore through the GSSR register for the specified
+ *  function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+	u32 gssr = 0;
+	u32 swmask = mask;
+	u32 fwmask = mask << 16;
+	u32 timeout = 200;
+	u32 i;
+
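+	/* SW ownership flags live in the low 16 bits of SWFW_SYNC and the
+	 * corresponding FW flags in the high 16 bits, hence fwmask is
+	 * simply the SW mask shifted up.
+	 */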
+	for (i = 0; i < timeout; i++) {
+		/* SW NVM semaphore bit is used for access to all
+		 * SW_FW_SYNC bits (not just NVM)
+		 */
+		if (txgbe_get_eeprom_semaphore(hw))
+			return TXGBE_ERR_SWFW_SYNC;
+
+		if (txgbe_check_mng_access(hw)) {
+			gssr = rd32(hw, TXGBE_MNG_SWFW_SYNC);
+			if (gssr & (fwmask | swmask)) {
+				/* Resource is currently in use by FW or SW */
+				txgbe_release_eeprom_semaphore(hw);
+				msec_delay(5);
+			} else {
+				gssr |= swmask;
+				wr32(hw, TXGBE_MNG_SWFW_SYNC, gssr);
+				txgbe_release_eeprom_semaphore(hw);
+				return 0;
+			}
+		}
+	}
+
+	/* If time expired, clear the bits holding the lock so a later
+	 * attempt can acquire it
+	 */
+	if (gssr & (fwmask | swmask))
+		txgbe_release_swfw_sync(hw, gssr & (fwmask | swmask));
+
+	msec_delay(5);
+	return TXGBE_ERR_SWFW_SYNC;
+}
+
+/**
+ *  txgbe_release_swfw_sync - Release SWFW semaphore
+ *  @hw: pointer to hardware structure
+ *  @mask: Mask to specify which semaphore to release
+ *
+ *  Releases the SWFW semaphore through the GSSR register for the specified
+ *  function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+	txgbe_get_eeprom_semaphore(hw);
+	if (txgbe_check_mng_access(hw))
+		wr32m(hw, TXGBE_MNG_SWFW_SYNC, mask, 0);
+
+	txgbe_release_eeprom_semaphore(hw);
+}
+
+/**
+ *  txgbe_get_san_mac_addr_offset - Get SAN MAC address offset from the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @san_mac_offset: SAN MAC address offset
+ *
+ *  This function will read the EEPROM location for the SAN MAC address
+ *  pointer, and returns the value at that location.  This is used in both
+ *  get and set mac_addr routines.
+ **/
+static s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
+					 u16 *san_mac_offset)
+{
+	s32 ret_val;
+
+	/* First read the EEPROM pointer to see if the MAC addresses are
+	 * available.
+	 */
+	ret_val = TCALL(hw, eeprom.ops.read,
+			hw->eeprom.sw_region_offset + TXGBE_SAN_MAC_ADDR_PTR,
+			san_mac_offset);
+	if (ret_val) {
+		ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+			      "eeprom at offset %d failed",
+			      TXGBE_SAN_MAC_ADDR_PTR);
+	}
+
+	return ret_val;
+}
+
 /**
  *  txgbe_get_san_mac_addr - SAN MAC address retrieval from the EEPROM
  *  @hw: pointer to hardware structure
  *  @san_mac_addr: SAN MAC address
  *
- *  Reads the SAN MAC address.
+ *  Reads the SAN MAC address from the EEPROM.
  **/
 s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr)
 {
+	u16 san_mac_data, san_mac_offset;
 	u8 i;
+	s32 ret_val;
+
+	/* First read the EEPROM pointer to see if the MAC addresses are
+	 * available.  If they're not, no point in calling set_lan_id() here.
+	 */
+	ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset);
+	if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF)
+		goto san_mac_addr_out;
+
+	/* apply the port offset to the address offset */
+	if (hw->bus.func)
+		san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT1_OFFSET;
+	else
+		san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT0_OFFSET;
+	for (i = 0; i < 3; i++) {
+		ret_val = TCALL(hw, eeprom.ops.read, san_mac_offset,
+				&san_mac_data);
+		if (ret_val) {
+			ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+				      "eeprom read at offset %d failed",
+				      san_mac_offset);
+			goto san_mac_addr_out;
+		}
+		san_mac_addr[i * 2] = (u8)(san_mac_data);
+		san_mac_addr[i * 2 + 1] = (u8)(san_mac_data >> 8);
+		san_mac_offset++;
+	}
+	return 0;
 
+san_mac_addr_out:
 	/* No addresses available in this EEPROM.  It's not an
 	 * error though, so just wipe the local address and return.
 	 */
@@ -482,6 +813,324 @@ s32 txgbe_init_uta_tables(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_get_wwn_prefix - Get alternative WWNN/WWPN prefix from the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @wwnn_prefix: the alternative WWNN prefix
+ *  @wwpn_prefix: the alternative WWPN prefix
+ *
+ *  This function reads the alternative SAN MAC address block of the EEPROM
+ *  to check for alternative WWNN/WWPN prefix support.
+ **/
+s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
+			 u16 *wwpn_prefix)
+{
+	u16 offset, caps;
+	u16 alt_san_mac_blk_offset;
+
+	/* clear output first */
+	*wwnn_prefix = 0xFFFF;
+	*wwpn_prefix = 0xFFFF;
+
+	/* check if alternative SAN MAC is supported */
+	offset = hw->eeprom.sw_region_offset + TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR;
+	if (TCALL(hw, eeprom.ops.read, offset, &alt_san_mac_blk_offset))
+		goto wwn_prefix_err;
+
+	if (alt_san_mac_blk_offset == 0 ||
+	    alt_san_mac_blk_offset == 0xFFFF)
+		goto wwn_prefix_out;
+
+	/* check capability in alternative san mac address block */
+	offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET;
+	if (TCALL(hw, eeprom.ops.read, offset, &caps))
+		goto wwn_prefix_err;
+	if (!(caps & TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN))
+		goto wwn_prefix_out;
+
+	/* get the corresponding prefix for WWNN/WWPN */
+	offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET;
+	if (TCALL(hw, eeprom.ops.read, offset, wwnn_prefix)) {
+		ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+			      "eeprom read at offset %d failed", offset);
+	}
+
+	offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET;
+	if (TCALL(hw, eeprom.ops.read, offset, wwpn_prefix))
+		goto wwn_prefix_err;
+	goto wwn_prefix_out;
+
+wwn_prefix_err:
+	ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+		      "eeprom read at offset %d failed", offset);
+wwn_prefix_out:
+	return 0;
+}
+
+/**
+ *  txgbe_calculate_checksum - Calculate checksum for buffer
+ *  @buffer: pointer to EEPROM
+ *  @length: size of EEPROM to calculate a checksum for
+ *  Calculates the checksum for some buffer on a specified length.  The
+ *  checksum calculated is returned.
+ **/
+u8 txgbe_calculate_checksum(u8 *buffer, u32 length)
+{
+	u32 i;
+	u8 sum = 0;
+
+	if (!buffer)
+		return 0;
+
+	for (i = 0; i < length; i++)
+		sum += buffer[i];
+
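+	/* Return the two's complement, so that a buffer carrying this
+	 * value in its checksum field sums to zero.
+	 */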
+	return (u8)(0 - sum);
+}
+
+/**
+ *  txgbe_host_interface_command - Issue command to manageability block
+ *  @hw: pointer to the HW structure
+ *  @buffer: contains the command to write and where the return status will
+ *   be placed
+ *  @length: length of buffer, must be multiple of 4 bytes
+ *  @timeout: time in ms to wait for command completion
+ *  @return_data: read and return data from the buffer (true) or not (false)
+ *   Needed because FW structures are big endian and decoding of
+ *   these fields can be 8 bit or 16 bit based on command. Decoding
+ *   is not easily understood without making a table of commands.
+ *   So we will leave this up to the caller to read back the data
+ *   in these cases.
+ *
+ *  Communicates with the manageability block.  On success return 0
+ *  else return TXGBE_ERR_HOST_INTERFACE_COMMAND.
+ **/
+s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer,
+				 u32 length, u32 timeout, bool return_data)
+{
+	u32 hicr, i, bi;
+	u32 hdr_size = sizeof(struct txgbe_hic_hdr);
+	u16 buf_len;
+	u32 dword_len;
+	s32 status = 0;
+	u32 buf[64] = {};
+
+	if (length == 0 || length > TXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+		DEBUGOUT1("Buffer length failure buffersize=%d.\n", length);
+		return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	if (TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_MB) != 0)
+		return TXGBE_ERR_SWFW_SYNC;
+
+	/* Calculate length in DWORDs. We must be DWORD aligned */
+	if ((length % (sizeof(u32))) != 0) {
+		DEBUGOUT("Buffer length failure, not aligned to dword");
+		status = TXGBE_ERR_INVALID_ARGUMENT;
+		goto rel_out;
+	}
+
+	dword_len = length >> 2;
+
+	/* The device driver writes the relevant command block
+	 * into the ram area.
+	 */
+	for (i = 0; i < dword_len; i++) {
+		if (txgbe_check_mng_access(hw)) {
+			wr32a(hw, TXGBE_MNG_MBOX, i, TXGBE_CPU_TO_LE32(buffer[i]));
+			/* write flush */
+			buf[i] = rd32a(hw, TXGBE_MNG_MBOX, i);
+		} else {
+			status = TXGBE_ERR_MNG_ACCESS_FAILED;
+			goto rel_out;
+		}
+	}
+	/* Setting this bit tells the ARC that a new command is pending. */
+	if (txgbe_check_mng_access(hw)) {
+		wr32m(hw, TXGBE_MNG_MBOX_CTL,
+		      TXGBE_MNG_MBOX_CTL_SWRDY, TXGBE_MNG_MBOX_CTL_SWRDY);
+	} else {
+		status = TXGBE_ERR_MNG_ACCESS_FAILED;
+		goto rel_out;
+	}
+
+	for (i = 0; i < timeout; i++) {
+		if (txgbe_check_mng_access(hw)) {
+			hicr = rd32(hw, TXGBE_MNG_MBOX_CTL);
+			if ((hicr & TXGBE_MNG_MBOX_CTL_FWRDY))
+				break;
+		}
+		msec_delay(1);
+	}
+
+	buf[0] = rd32(hw, TXGBE_MNG_MBOX);
+	if ((buf[0] & 0xff0000) >> 16 == 0x80) {
+		DEBUGOUT("FW reports unknown command.\n");
+		status = TXGBE_ERR_MNG_ACCESS_FAILED;
+		goto rel_out;
+	}
+	/* Check command completion */
+	if (timeout != 0 && i == timeout) {
+		ERROR_REPORT1(TXGBE_ERROR_CAUTION,
+			      "Command has failed with no status valid.\n");
+
+		ERROR_REPORT1(TXGBE_ERROR_CAUTION, "write value:\n");
+		for (i = 0; i < dword_len; i++)
+			ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buffer[i]);
+		ERROR_REPORT1(TXGBE_ERROR_CAUTION, "read value:\n");
+		for (i = 0; i < dword_len; i++)
+			ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buf[i]);
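+		/* The low command byte that was sent must match the
+		 * complemented top byte of the first response dword;
+		 * otherwise treat the command as failed.
+		 */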
+		if ((buffer[0] & 0xff) != (~buf[0] >> 24)) {
+			status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+			goto rel_out;
+		}
+	}
+
+	if (!return_data)
+		goto rel_out;
+
+	/* Calculate length in DWORDs */
+	dword_len = hdr_size >> 2;
+
+	/* first pull in the header so we know the buffer length */
+	for (bi = 0; bi < dword_len; bi++) {
+		if (txgbe_check_mng_access(hw)) {
+			buffer[bi] = rd32a(hw, TXGBE_MNG_MBOX, bi);
+			TXGBE_LE32_TO_CPUS(&buffer[bi]);
+		} else {
+			status = TXGBE_ERR_MNG_ACCESS_FAILED;
+			goto rel_out;
+		}
+	}
+
+	/* If there is any thing in data position pull it in */
+	buf_len = ((struct txgbe_hic_hdr *)buffer)->buf_len;
+	if (buf_len == 0)
+		goto rel_out;
+
+	if (length < buf_len + hdr_size) {
+		DEBUGOUT("Buffer not large enough for reply message.\n");
+		status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+		goto rel_out;
+	}
+
+	/* Calculate length in DWORDs, add 3 for odd lengths */
+	dword_len = (buf_len + 3) >> 2;
+
+	/* Pull in the rest of the buffer (bi is where we left off) */
+	for (; bi <= dword_len; bi++) {
+		if (txgbe_check_mng_access(hw)) {
+			buffer[bi] = rd32a(hw, TXGBE_MNG_MBOX, bi);
+			TXGBE_LE32_TO_CPUS(&buffer[bi]);
+		} else {
+			status = TXGBE_ERR_MNG_ACCESS_FAILED;
+			goto rel_out;
+		}
+	}
+
+rel_out:
+	TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_MB);
+	return status;
+}
+
+/**
+ *  txgbe_set_fw_drv_ver - Sends driver version to firmware
+ *  @hw: pointer to the HW structure
+ *  @maj: driver version major number
+ *  @min: driver version minor number
+ *  @build: driver version build number
+ *  @sub: driver version sub build number
+ *
+ *  Sends driver version number to firmware through the manageability
+ *  block.  On success return 0
+ *  else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ *  semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min,
+			 u8 build, u8 sub)
+{
+	struct txgbe_hic_drv_info fw_cmd;
+	int i;
+	s32 ret_val = 0;
+
+	fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
+	fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN;
+	fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+	fw_cmd.port_num = (u8)hw->bus.func;
+	fw_cmd.ver_maj = maj;
+	fw_cmd.ver_min = min;
+	fw_cmd.ver_build = build;
+	fw_cmd.ver_sub = sub;
+	fw_cmd.hdr.checksum = 0;
+	fw_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&fw_cmd,
+				(FW_CEM_HDR_LEN + fw_cmd.hdr.buf_len));
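+	/* The checksum covers the header plus buf_len payload bytes and
+	 * is computed with the checksum field itself zeroed first.
+	 */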
+	fw_cmd.pad = 0;
+	fw_cmd.pad2 = 0;
+
+	for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+		ret_val = txgbe_host_interface_command(hw, (u32 *)&fw_cmd,
+						       sizeof(fw_cmd),
+						       TXGBE_HI_COMMAND_TIMEOUT,
+						       true);
+		if (ret_val != 0)
+			continue;
+
+		if (fw_cmd.hdr.cmd_or_resp.ret_status ==
+		    FW_CEM_RESP_STATUS_SUCCESS)
+			ret_val = 0;
+		else
+			ret_val = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+		break;
+	}
+
+	return ret_val;
+}
+
+/**
+ *  txgbe_reset_hostif - send reset cmd to fw
+ *  @hw: pointer to hardware structure
+ *
+ *  Sends reset cmd to firmware through the manageability
+ *  block.  On success return 0
+ *  else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ *  semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32 txgbe_reset_hostif(struct txgbe_hw *hw)
+{
+	struct txgbe_hic_reset reset_cmd;
+	int i;
+	s32 status = 0;
+
+	reset_cmd.hdr.cmd = FW_RESET_CMD;
+	reset_cmd.hdr.buf_len = FW_RESET_LEN;
+	reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+	reset_cmd.lan_id = hw->bus.lan_id;
+	reset_cmd.reset_type = (u16)hw->reset_type;
+	reset_cmd.hdr.checksum = 0;
+	reset_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&reset_cmd,
+				(FW_CEM_HDR_LEN + reset_cmd.hdr.buf_len));
+
+	for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+		status = txgbe_host_interface_command(hw, (u32 *)&reset_cmd,
+						      sizeof(reset_cmd),
+						      TXGBE_HI_COMMAND_TIMEOUT,
+						      true);
+		if (status != 0)
+			continue;
+
+		if (reset_cmd.hdr.cmd_or_resp.ret_status ==
+		    FW_CEM_RESP_STATUS_SUCCESS) {
+			status = 0;
+		} else {
+			status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+		}
+
+		break;
+	}
+
+	return status;
+}
+
 /**
  *  txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
  *  @hw: pointer to hardware structure
@@ -541,6 +1190,46 @@ void txgbe_disable_rx(struct txgbe_hw *hw)
 	}
 }
 
+/**
+ * txgbe_mng_present - returns true when management capability is present
+ * @hw: pointer to hardware structure
+ */
+bool txgbe_mng_present(struct txgbe_hw *hw)
+{
+	u32 fwsm;
+
+	fwsm = rd32(hw, TXGBE_MIS_ST);
+	return fwsm & TXGBE_MIS_ST_MNG_INIT_DN;
+}
+
+bool txgbe_check_mng_access(struct txgbe_hw *hw)
+{
+	bool ret = false;
+	u32 rst_delay;
+	u32 i;
+
+	struct txgbe_adapter *adapter = hw->back;
+
+	if (!txgbe_mng_present(hw))
+		return false;
+	if (adapter->hw.revision_id != TXGBE_SP_MPW)
+		return true;
+	if (!(adapter->flags2 & TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED))
+		return true;
+
+	rst_delay = (rd32(&adapter->hw, TXGBE_MIS_RST_ST) &
+		     TXGBE_MIS_RST_ST_RST_INIT) >>
+		     TXGBE_MIS_RST_ST_RST_INI_SHIFT;
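+	/* On MPW silicon with register access disabled, poll for up to
+	 * (rst_delay + 2) * 100 ms for another context to clear the
+	 * flag before giving up.
+	 */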
+	for (i = 0; i < rst_delay + 2; i++) {
+		if (!(adapter->flags2 & TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED)) {
+			ret = true;
+			break;
+		}
+		msleep(100);
+	}
+	return ret;
+}
+
 int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 {
 	u32 i = 0;
@@ -575,6 +1264,7 @@ int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 s32 txgbe_init_ops(struct txgbe_hw *hw)
 {
 	struct txgbe_mac_info *mac = &hw->mac;
+	struct txgbe_eeprom_info *eeprom = &hw->eeprom;
 
 	/* MAC */
 	mac->ops.init_hw = txgbe_init_hw;
@@ -582,9 +1272,12 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->ops.stop_adapter = txgbe_stop_adapter;
 	mac->ops.get_bus_info = txgbe_get_bus_info;
 	mac->ops.set_lan_id = txgbe_set_lan_id_multi_port_pcie;
+	mac->ops.acquire_swfw_sync = txgbe_acquire_swfw_sync;
+	mac->ops.release_swfw_sync = txgbe_release_swfw_sync;
 	mac->ops.reset_hw = txgbe_reset_hw;
 	mac->ops.start_hw = txgbe_start_hw;
 	mac->ops.get_san_mac_addr = txgbe_get_san_mac_addr;
+	mac->ops.get_wwn_prefix = txgbe_get_wwn_prefix;
 
 	/* RAR */
 	mac->ops.set_rar = txgbe_set_rar;
@@ -596,7 +1289,16 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->max_rx_queues      = TXGBE_SP_MAX_RX_QUEUES;
 	mac->max_tx_queues      = TXGBE_SP_MAX_TX_QUEUES;
 
+	/* EEPROM */
+	eeprom->ops.init_params = txgbe_init_eeprom_params;
+	eeprom->ops.calc_checksum = txgbe_calc_eeprom_checksum;
+	eeprom->ops.read = txgbe_read_ee_hostif;
+	eeprom->ops.read_buffer = txgbe_read_ee_hostif_buffer;
+	eeprom->ops.validate_checksum = txgbe_validate_eeprom_checksum;
+
 	/* Manageability interface */
+	mac->ops.set_fw_drv_ver = txgbe_set_fw_drv_ver;
+
 	mac->ops.init_thermal_sensor_thresh =
 				      txgbe_init_thermal_sensor_thresh;
 
@@ -695,6 +1397,14 @@ s32 txgbe_reset_hw(struct txgbe_hw *hw)
 			status = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_SW_RESET);
 			if (status != 0)
 				goto reset_hw_out;
+			/* errata 7 */
+			if (txgbe_mng_present(hw) &&
+			    hw->revision_id == TXGBE_SP_MPW) {
+				struct txgbe_adapter *adapter =
+					(struct txgbe_adapter *)hw->back;
+				adapter->flags2 &=
+					~TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED;
+			}
 		} else if (hw->reset_type == TXGBE_GLOBAL_RESET) {
 			struct txgbe_adapter *adapter =
 					(struct txgbe_adapter *)hw->back;
@@ -704,14 +1414,21 @@ s32 txgbe_reset_hw(struct txgbe_hw *hw)
 			pci_wake_from_d3(adapter->pdev, false);
 		}
 	} else {
-		if (hw->bus.lan_id == 0)
-			reset = TXGBE_MIS_RST_LAN0_RST;
-		else
-			reset = TXGBE_MIS_RST_LAN1_RST;
-
-		wr32(hw, TXGBE_MIS_RST,
-		     reset | rd32(hw, TXGBE_MIS_RST));
-		TXGBE_WRITE_FLUSH(hw);
+		if (txgbe_mng_present(hw)) {
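+			/* With management firmware running, request the
+			 * reset through the host interface instead of
+			 * toggling LAN reset, unless NCSI or WoL support
+			 * is advertised in the subsystem device ID.
+			 */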
+			if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+			      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+				txgbe_reset_hostif(hw);
+			}
+		} else {
+			if (hw->bus.lan_id == 0)
+				reset = TXGBE_MIS_RST_LAN0_RST;
+			else
+				reset = TXGBE_MIS_RST_LAN1_RST;
+
+			wr32(hw, TXGBE_MIS_RST,
+			     reset | rd32(hw, TXGBE_MIS_RST));
+			TXGBE_WRITE_FLUSH(hw);
+		}
 		usec_delay(10);
 
 		if (hw->bus.lan_id == 0)
@@ -752,6 +1469,10 @@ s32 txgbe_reset_hw(struct txgbe_hw *hw)
 		hw->mac.num_rar_entries--;
 	}
 
+	/* Store the alternative WWNN/WWPN prefix */
+	TCALL(hw, mac.ops.get_wwn_prefix, &hw->mac.wwnn_prefix,
+	      &hw->mac.wwpn_prefix);
+
 	pci_set_master(((struct txgbe_adapter *)hw->back)->pdev);
 
 reset_hw_out:
@@ -787,3 +1508,288 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
 	return ret_val;
 }
 
+/**
+ *  txgbe_init_eeprom_params - Initialize EEPROM params
+ *  @hw: pointer to hardware structure
+ *
+ *  Initializes the EEPROM parameters txgbe_eeprom_info within the
+ *  txgbe_hw struct in order to set up EEPROM access.
+ **/
+s32 txgbe_init_eeprom_params(struct txgbe_hw *hw)
+{
+	struct txgbe_eeprom_info *eeprom = &hw->eeprom;
+	u16 eeprom_size;
+	s32 status = 0;
+	u16 data;
+
+	if (eeprom->type == txgbe_eeprom_uninitialized) {
+		eeprom->semaphore_delay = 10;
+		eeprom->type = txgbe_eeprom_none;
+
+		if (!(rd32(hw, TXGBE_SPI_STATUS) &
+		      TXGBE_SPI_STATUS_FLASH_BYPASS)) {
+			eeprom->type = txgbe_flash;
+
+			eeprom_size = 4096;
+			eeprom->word_size = eeprom_size >> 1;
+
+			DEBUGOUT2("Eeprom params: type = %d, size = %d\n",
+				  eeprom->type, eeprom->word_size);
+		}
+	}
+
+	status = TCALL(hw, eeprom.ops.read, TXGBE_SW_REGION_PTR, &data);
+	if (status) {
+		DEBUGOUT("NVM Read Error\n");
+		return status;
+	}
+	eeprom->sw_region_offset = data >> 1;
+
+	return status;
+}
+
+/**
+ *  txgbe_read_ee_hostif_data - Read EEPROM word using a host interface cmd
+ *  assuming that the semaphore is already obtained.
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to read
+ *  @data: word read from the EEPROM
+ *
+ *  Reads a 16 bit word from the EEPROM using the hostif.
+ **/
+s32 txgbe_read_ee_hostif_data(struct txgbe_hw *hw, u16 offset,
+			      u16 *data)
+{
+	s32 status;
+	struct txgbe_hic_read_shadow_ram buffer;
+
+	buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD;
+	buffer.hdr.req.buf_lenh = 0;
+	buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN;
+	buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+	/* convert offset from words to bytes */
+	buffer.address = TXGBE_CPU_TO_BE32(offset * 2);
+	/* one word */
+	buffer.length = TXGBE_CPU_TO_BE16(sizeof(u16));
+
+	status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+					      sizeof(buffer),
+					      TXGBE_HI_COMMAND_TIMEOUT, false);
+
+	if (status)
+		return status;
+
+	if (txgbe_check_mng_access(hw))
+		*data = (u16)rd32a(hw, TXGBE_MNG_MBOX, FW_NVM_DATA_OFFSET);
+	else
+		return TXGBE_ERR_MNG_ACCESS_FAILED;
+
+	return 0;
+}
+
+/**
+ *  txgbe_read_ee_hostif - Read EEPROM word using a host interface cmd
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to read
+ *  @data: word read from the EEPROM
+ *
+ *  Reads a 16 bit word from the EEPROM using the hostif.
+ **/
+s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset,
+			 u16 *data)
+{
+	s32 status = 0;
+
+	if (TCALL(hw, mac.ops.acquire_swfw_sync,
+		  TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) {
+		status = txgbe_read_ee_hostif_data(hw, offset, data);
+		TCALL(hw, mac.ops.release_swfw_sync,
+		      TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+	} else {
+		status = TXGBE_ERR_SWFW_SYNC;
+	}
+
+	return status;
+}
+
+/**
+ *  txgbe_read_ee_hostif_buffer - Read EEPROM word(s) using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to read
+ *  @words: number of words
+ *  @data: word(s) read from the EEPROM
+ *
+ *  Reads 16 bit word(s) from the EEPROM using the hostif.
+ **/
+s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
+				u16 offset, u16 words, u16 *data)
+{
+	struct txgbe_hic_read_shadow_ram buffer;
+	u32 current_word = 0;
+	u16 words_to_read;
+	s32 status;
+	u32 i;
+	u32 value = 0;
+
+	/* Take semaphore for the entire operation. */
+	status = TCALL(hw, mac.ops.acquire_swfw_sync,
+		       TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+	if (status) {
+		DEBUGOUT("EEPROM read buffer - semaphore failed\n");
+		return status;
+	}
+	while (words) {
+		if (words > FW_MAX_READ_BUFFER_SIZE / 2)
+			words_to_read = FW_MAX_READ_BUFFER_SIZE / 2;
+		else
+			words_to_read = words;
+
+		buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD;
+		buffer.hdr.req.buf_lenh = 0;
+		buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN;
+		buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+		/* convert offset from words to bytes */
+		buffer.address = TXGBE_CPU_TO_BE32((offset + current_word) * 2);
+		buffer.length = TXGBE_CPU_TO_BE16(words_to_read * 2);
+
+		status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+						      sizeof(buffer),
+						      TXGBE_HI_COMMAND_TIMEOUT,
+						      false);
+
+		if (status) {
+			DEBUGOUT("Host interface command failed\n");
+			goto out;
+		}
+
+		for (i = 0; i < words_to_read; i++) {
+			u32 reg = TXGBE_MNG_MBOX + (FW_NVM_DATA_OFFSET << 2) +
+				  2 * i;
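+			/* Each 32-bit mailbox read holds two 16-bit EEPROM
+			 * words: the low half is consumed here and the high
+			 * half below, so i advances twice per pass.
+			 */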
+			if (txgbe_check_mng_access(hw)) {
+				value = rd32(hw, reg);
+			} else {
+				status = TXGBE_ERR_MNG_ACCESS_FAILED;
+				goto out;
+			}
+			data[current_word] = (u16)(value & 0xffff);
+			current_word++;
+			i++;
+			if (i < words_to_read) {
+				value >>= 16;
+				data[current_word] = (u16)(value & 0xffff);
+				current_word++;
+			}
+		}
+		words -= words_to_read;
+	}
+
+out:
+	TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+	return status;
+}
+
+/**
+ *  txgbe_calc_eeprom_checksum - Calculates and returns the checksum
+ *  @hw: pointer to hardware structure
+ *
+ *  Returns a negative error code on error, or the 16-bit checksum
+ **/
+s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw)
+{
+	u16 *eeprom_ptrs = NULL;
+	s32 status;
+	u16 checksum = 0;
+	u16 i;
+
+	TCALL(hw, eeprom.ops.init_params);
+
+	eeprom_ptrs = vmalloc(TXGBE_EEPROM_LAST_WORD * sizeof(u16));
+	if (!eeprom_ptrs)
+		return TXGBE_ERR_NO_SPACE;
+
+	/* Read pointer area */
+	status = txgbe_read_ee_hostif_buffer(hw, 0,
+					     TXGBE_EEPROM_LAST_WORD,
+					     eeprom_ptrs);
+	if (status) {
+		DEBUGOUT("Failed to read EEPROM image\n");
+		vfree(eeprom_ptrs);
+		return status;
+	}
+
+	for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++)
+		if (i != hw->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM)
+			checksum += eeprom_ptrs[i];
+
+	checksum = (u16)TXGBE_EEPROM_SUM - checksum;
+	vfree(eeprom_ptrs);
+
+	return (s32)checksum;
+}
+
+/**
+ *  txgbe_validate_eeprom_checksum - Validate EEPROM checksum
+ *  @hw: pointer to hardware structure
+ *  @checksum_val: calculated checksum
+ *
+ *  Performs checksum calculation and validates the EEPROM checksum.  If the
+ *  caller does not need checksum_val, the value can be NULL.
+ **/
+s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
+				   u16 *checksum_val)
+{
+	s32 status;
+	u16 checksum;
+	u16 read_checksum = 0;
+
+	/* Read the first word from the EEPROM. If this times out or fails, do
+	 * not continue or we could be in for a very long wait while every
+	 * EEPROM read fails
+	 */
+	status = TCALL(hw, eeprom.ops.read, 0, &checksum);
+	if (status) {
+		DEBUGOUT("EEPROM read failed\n");
+		return status;
+	}
+
+	status = TCALL(hw, eeprom.ops.calc_checksum);
+	if (status < 0)
+		return status;
+
+	checksum = (u16)(status & 0xffff);
+
+	status = txgbe_read_ee_hostif(hw, hw->eeprom.sw_region_offset +
+				      TXGBE_EEPROM_CHECKSUM,
+				      &read_checksum);
+	if (status)
+		return status;
+
+	/* Verify read checksum from EEPROM is the same as
+	 * calculated checksum
+	 */
+	if (read_checksum != checksum) {
+		status = TXGBE_ERR_EEPROM_CHECKSUM;
+		ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE,
+			      "Invalid EEPROM checksum\n");
+	}
+
+	/* If the user cares, return the calculated checksum */
+	if (checksum_val)
+		*checksum_val = checksum;
+
+	return status;
+}
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index 318c7f0dc5b9..700d344ba6c1 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -6,6 +6,8 @@
 
 s32 txgbe_init_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw(struct txgbe_hw *hw);
+s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num,
+			  u32 pba_num_size);
 s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr);
 s32 txgbe_get_bus_info(struct txgbe_hw *hw);
 void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status);
@@ -17,6 +19,8 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
 s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index);
 s32 txgbe_init_rx_addrs(struct txgbe_hw *hw);
 
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask);
+void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask);
 s32 txgbe_disable_pcie_master(struct txgbe_hw *hw);
 
 s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
@@ -25,6 +29,19 @@ s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq);
 s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq);
 s32 txgbe_init_uta_tables(struct txgbe_hw *hw);
 
+s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
+			 u16 *wwpn_prefix);
+
+s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min,
+			 u8 build, u8 sub);
+s32 txgbe_reset_hostif(struct txgbe_hw *hw);
+u8 txgbe_calculate_checksum(u8 *buffer, u32 length);
+s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer,
+				 u32 length, u32 timeout, bool return_data);
+
+bool txgbe_mng_present(struct txgbe_hw *hw);
+bool txgbe_check_mng_access(struct txgbe_hw *hw);
+
 s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw);
 void txgbe_disable_rx(struct txgbe_hw *hw);
 int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit);
@@ -33,4 +50,13 @@ int txgbe_reset_misc(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_init_ops(struct txgbe_hw *hw);
 
+s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
+s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
+s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
+				   u16 *checksum_val);
+s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
+				u16 offset, u16 words, u16 *data);
+s32 txgbe_read_ee_hostif_data(struct txgbe_hw *hw, u16 offset, u16 *data);
+s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset, u16 *data);
+
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index a1441af5b46b..9336eadfc690 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -458,6 +458,12 @@ static int txgbe_probe(struct pci_dev *pdev,
 	struct txgbe_adapter *adapter = NULL;
 	struct txgbe_hw *hw = NULL;
 	int err, pci_using_dac, expected_gts;
+	u16 offset = 0;
+	u16 eeprom_verh = 0, eeprom_verl = 0;
+	u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0;
+	u32 etrack_id = 0;
+	u16 build = 0, major = 0, patch = 0;
+	u8 part_str[TXGBE_PBANUM_LENGTH];
 	unsigned int indices = MAX_TX_QUEUES;
 	bool disable_dev = false;
 
@@ -558,6 +564,14 @@ static int txgbe_probe(struct pci_dev *pdev,
 		goto err_sw_init;
 	}
 
+	/* make sure the EEPROM is good */
+	if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) {
+		txgbe_dev_err("The EEPROM Checksum Is Not Valid\n");
+		wr32(hw, TXGBE_MIS_RST, TXGBE_MIS_RST_SW_RST);
+		err = -EIO;
+		goto err_sw_init;
+	}
+
 	eth_hw_addr_set(netdev, hw->mac.perm_addr);
 
 	if (!is_valid_ether_addr(netdev->dev_addr)) {
@@ -578,6 +592,43 @@ static int txgbe_probe(struct pci_dev *pdev,
 	set_bit(__TXGBE_SERVICE_INITED, &adapter->state);
 	clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
 
+	/* Save off EEPROM version number and Option ROM version which
+	 * together make a unique identifier for the EEPROM
+	 */
+	TCALL(hw, eeprom.ops.read,
+	      hw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H,
+	      &eeprom_verh);
+	TCALL(hw, eeprom.ops.read,
+	      hw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L,
+	      &eeprom_verl);
+	etrack_id = (eeprom_verh << 16) | eeprom_verl;
+
+	TCALL(hw, eeprom.ops.read,
+	      hw->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG, &offset);
+
+	/* Make sure offset to iSCSI boot config block is valid */
+	if (offset != 0x0 && offset != 0xffff) {
+		TCALL(hw, eeprom.ops.read, offset + 0x84, &eeprom_cfg_blkh);
+		TCALL(hw, eeprom.ops.read, offset + 0x83, &eeprom_cfg_blkl);
+
+		/* Only display Option ROM version if one exists */
+		if (eeprom_cfg_blkl && eeprom_cfg_blkh) {
+			major = eeprom_cfg_blkl >> 8;
+			build = (eeprom_cfg_blkl << 8) | (eeprom_cfg_blkh >> 8);
+			patch = eeprom_cfg_blkh & 0x00ff;
+
+			snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+				 "0x%08x, %d.%d.%d", etrack_id, major, build,
+				 patch);
+		} else {
+			snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+				 "0x%08x", etrack_id);
+		}
+	} else {
+		snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+			 "0x%08x", etrack_id);
+	}
+
 	/* reset the hardware with the new settings */
 	err = TCALL(hw, mac.ops.start_hw);
 	if (err) {
@@ -616,11 +667,19 @@ static int txgbe_probe(struct pci_dev *pdev,
 	else
 		txgbe_info(probe, "NCSI : unsupported");
 
+	/* First try to read PBA as a string */
+	err = txgbe_read_pba_string(hw, part_str, TXGBE_PBANUM_LENGTH);
+	if (err)
+		strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH);
 	txgbe_dev_info("%02x:%02x:%02x:%02x:%02x:%02x\n",
 		       netdev->dev_addr[0], netdev->dev_addr[1],
 		       netdev->dev_addr[2], netdev->dev_addr[3],
 		       netdev->dev_addr[4], netdev->dev_addr[5]);
 
+	/* firmware requires a blank (all 0xFF) driver version */
+	TCALL(hw, mac.ops.set_fw_drv_ver, 0xFF, 0xFF, 0xFF, 0xFF);
+
 	/* add san mac addr to netdev */
 	txgbe_add_sanmac_netdev(netdev);
 
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index 60b1c3a2ac50..e1b438e18edc 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -1540,8 +1540,92 @@ enum TXGBE_MSCA_CMD_value {
 #define TXGBE_PCIDEVCTRL2_4_8s          0xd
 #define TXGBE_PCIDEVCTRL2_17_34s        0xe
 
+/****************** Manageability Host Interface defines *********************/
+#define TXGBE_HI_MAX_BLOCK_BYTE_LENGTH  256 /* Num of bytes in range */
+#define TXGBE_HI_MAX_BLOCK_DWORD_LENGTH 64 /* Num of dwords in range */
+#define TXGBE_HI_COMMAND_TIMEOUT        5000 /* Process HI command limit */
+
+/* CEM Support */
+#define FW_CEM_HDR_LEN                  0x4
+#define FW_CEM_CMD_DRIVER_INFO          0xDD
+#define FW_CEM_CMD_DRIVER_INFO_LEN      0x5
+#define FW_CEM_CMD_RESERVED             0x0
+#define FW_CEM_MAX_RETRIES              3
+#define FW_CEM_RESP_STATUS_SUCCESS      0x1
+#define FW_READ_SHADOW_RAM_CMD          0x31
+#define FW_READ_SHADOW_RAM_LEN          0x6
+#define FW_DEFAULT_CHECKSUM             0xFF /* checksum always 0xFF */
+#define FW_NVM_DATA_OFFSET              3
+#define FW_MAX_READ_BUFFER_SIZE         244
+#define FW_RESET_CMD                    0xDF
+#define FW_RESET_LEN                    0x2
+
+/* Host Interface Command Structures */
+struct txgbe_hic_hdr {
+	u8 cmd;
+	u8 buf_len;
+	union {
+		u8 cmd_resv;
+		u8 ret_status;
+	} cmd_or_resp;
+	u8 checksum;
+};
+
+struct txgbe_hic_hdr2_req {
+	u8 cmd;
+	u8 buf_lenh;
+	u8 buf_lenl;
+	u8 checksum;
+};
+
+struct txgbe_hic_hdr2_rsp {
+	u8 cmd;
+	u8 buf_lenl;
+	u8 buf_lenh_status;     /* 7-5: high bits of buf_len, 4-0: status */
+	u8 checksum;
+};
+
+union txgbe_hic_hdr2 {
+	struct txgbe_hic_hdr2_req req;
+	struct txgbe_hic_hdr2_rsp rsp;
+};
+
+struct txgbe_hic_drv_info {
+	struct txgbe_hic_hdr hdr;
+	u8 port_num;
+	u8 ver_sub;
+	u8 ver_build;
+	u8 ver_min;
+	u8 ver_maj;
+	u8 pad; /* end spacing to ensure length is mult. of dword */
+	u16 pad2; /* end spacing to ensure length is mult. of dword */
+};
+
+/* These need to be dword aligned */
+struct txgbe_hic_read_shadow_ram {
+	union txgbe_hic_hdr2 hdr;
+	u32 address;
+	u16 length;
+	u16 pad2;
+	u16 data;
+	u16 pad3;
+};
+
+struct txgbe_hic_reset {
+	struct txgbe_hic_hdr hdr;
+	u16 lan_id;
+	u16 reset_type;
+};
+
 /* Number of 100 microseconds we wait for PCI Express master disable */
 #define TXGBE_PCI_MASTER_DISABLE_TIMEOUT        800
+
+enum txgbe_eeprom_type {
+	txgbe_eeprom_uninitialized = 0,
+	txgbe_eeprom_spi,
+	txgbe_flash,
+	txgbe_eeprom_none /* No NVM support */
+};
 
 /* PCI bus types */
 enum txgbe_bus_type {
@@ -1600,15 +1684,29 @@ struct txgbe_bus_info {
 /* forward declaration */
 struct txgbe_hw;
 
+/* Function pointer table */
+struct txgbe_eeprom_operations {
+	s32 (*init_params)(struct txgbe_hw *hw);
+	s32 (*read)(struct txgbe_hw *hw, u16 offset, u16 *data);
+	s32 (*read_buffer)(struct txgbe_hw *hw,
+			   u16 offset, u16 words, u16 *data);
+	s32 (*validate_checksum)(struct txgbe_hw *hw, u16 *checksum_val);
+	s32 (*calc_checksum)(struct txgbe_hw *hw);
+};
+
 struct txgbe_mac_operations {
 	s32 (*init_hw)(struct txgbe_hw *hw);
 	s32 (*reset_hw)(struct txgbe_hw *hw);
 	s32 (*start_hw)(struct txgbe_hw *hw);
 	s32 (*get_mac_addr)(struct txgbe_hw *hw, u8 *mac_addr);
 	s32 (*get_san_mac_addr)(struct txgbe_hw *hw, u8 *san_mac_addr);
+	s32 (*get_wwn_prefix)(struct txgbe_hw *hw, u16 *wwnn_prefix,
+			      u16 *wwpn_prefix);
 	s32 (*stop_adapter)(struct txgbe_hw *hw);
 	s32 (*get_bus_info)(struct txgbe_hw *hw);
 	void (*set_lan_id)(struct txgbe_hw *hw);
+	s32 (*acquire_swfw_sync)(struct txgbe_hw *hw, u32 mask);
+	void (*release_swfw_sync)(struct txgbe_hw *hw, u32 mask);
 
 	/* RAR */
 	s32 (*set_rar)(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
@@ -1620,15 +1718,29 @@ struct txgbe_mac_operations {
 	s32 (*init_uta_tables)(struct txgbe_hw *hw);
 
 	/* Manageability interface */
+	s32 (*set_fw_drv_ver)(struct txgbe_hw *hw, u8 maj, u8 min,
+			      u8 build, u8 sub);
 	s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw);
 	void (*disable_rx)(struct txgbe_hw *hw);
 };
 
+struct txgbe_eeprom_info {
+	struct txgbe_eeprom_operations ops;
+	enum txgbe_eeprom_type type;
+	u32 semaphore_delay;
+	u16 word_size;
+	u16 sw_region_offset;
+};
+
 struct txgbe_mac_info {
 	struct txgbe_mac_operations ops;
 	u8 addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
 	u8 perm_addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
 	u8 san_addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+	/* prefix for World Wide Node Name (WWNN) */
+	u16 wwnn_prefix;
+	/* prefix for World Wide Port Name (WWPN) */
+	u16 wwpn_prefix;
 	s32 mc_filter_type;
 	u32 mcft_size;
 	u32 num_rar_entries;
@@ -1651,6 +1763,7 @@ struct txgbe_hw {
 	void *back;
 	struct txgbe_mac_info mac;
 	struct txgbe_addr_filter_info addr_ctrl;
+	struct txgbe_eeprom_info eeprom;
 	struct txgbe_bus_info bus;
 	u16 device_id;
 	u16 vendor_id;
-- 
2.27.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 04/14] net: txgbe: Add PHY interface support
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (2 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 03/14] net: txgbe: Add operations to interact with firmware Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-12 12:21   ` Andrew Lunn
  2022-05-11  3:26 ` [PATCH net-next 05/14] net: txgbe: Add interrupt support Jiawen Wu
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support to identify the PHY and fiber module, set up the link, and
control the laser.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |    2 +-
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |   19 +
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c | 1755 ++++++++++++++++-
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   36 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  492 ++++-
 .../net/ethernet/wangxun/txgbe/txgbe_phy.c    |  401 ++++
 .../net/ethernet/wangxun/txgbe/txgbe_phy.h    |   54 +
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |  332 ++++
 8 files changed, 3025 insertions(+), 66 deletions(-)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index eaf1d46693bc..7dc73b58fe75 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -7,4 +7,4 @@
 obj-$(CONFIG_TXGBE) += txgbe.o
 
 txgbe-objs := txgbe_main.o \
-              txgbe_hw.o
+              txgbe_hw.o txgbe_phy.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 4c5de310292f..58a89d43046f 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -38,10 +38,21 @@ struct txgbe_mac_addr {
 #define TXGBE_MAC_STATE_MODIFIED        0x2
 #define TXGBE_MAC_STATE_IN_USE          0x4
 
+/* default to trying for four seconds */
+#define TXGBE_TRY_LINK_TIMEOUT  (4 * HZ)
+#define TXGBE_SFP_POLL_JIFFIES  (2 * HZ)        /* SFP poll every 2 seconds */
+
+/**
+ * txgbe_adapter.flag
+ **/
+#define TXGBE_FLAG_NEED_LINK_UPDATE             (1U << 0)
+#define TXGBE_FLAG_NEED_LINK_CONFIG             (1U << 1)
+
 /**
  * txgbe_adapter.flag2
  **/
 #define TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED     (1U << 0)
+#define TXGBE_FLAG2_SFP_NEEDS_RESET             (1U << 1)
 
 /* board specific private data structure */
 struct txgbe_adapter {
@@ -54,7 +65,10 @@ struct txgbe_adapter {
 	/* Some features need tri-state capability,
 	 * thus the additional *_CAPABLE flags.
 	 */
+	u32 flags;
 	u32 flags2;
+	u8 backplane_an;
+	u8 an37;
 	/* Tx fast path data */
 	int num_tx_queues;
 	u16 tx_itr_setting;
@@ -72,6 +86,11 @@ struct txgbe_adapter {
 	struct txgbe_hw hw;
 	u16 msg_enable;
 
+	u32 link_speed;
+	bool link_up;
+	unsigned long sfp_poll_time;
+	unsigned long link_check_timeout;
+
 	struct timer_list service_timer;
 	struct work_struct service_task;
 	u32 atr_sample_rate;
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 9865ae301133..0c1260ba7c72 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -3,6 +3,7 @@
 
 #include "txgbe_type.h"
 #include "txgbe_hw.h"
+#include "txgbe_phy.h"
 #include "txgbe.h"
 
 #define TXGBE_SP_MAX_TX_QUEUES  128
@@ -12,6 +13,50 @@
 static s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw);
 static void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw);
 
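+/* XPCS and internal Ethernet PHY registers are accessed indirectly:
+ * write the target register offset to the IDA address register, then
+ * read or write the IDA data register.
+ */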
+u32 txgbe_rd32_epcs(struct txgbe_hw *hw, u32 addr)
+{
+	unsigned int offset;
+	u32 data;
+
+	/* Set the LAN port indicator to offset[1] */
+	/* 1st, write the offset to IDA_ADDR register */
+	offset = TXGBE_XPCS_IDA_ADDR;
+	wr32(hw, offset, addr);
+
+	/* 2nd, read the data from IDA_DATA register */
+	offset = TXGBE_XPCS_IDA_DATA;
+	data = rd32(hw, offset);
+
+	return data;
+}
+
+void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+	unsigned int offset;
+
+	/* Set the LAN port indicator to offset[1] */
+	/* 1st, write the offset to IDA_ADDR register */
+	offset = TXGBE_ETHPHY_IDA_ADDR;
+	wr32(hw, offset, addr);
+
+	/* 2nd, write the data to IDA_DATA register */
+	offset = TXGBE_ETHPHY_IDA_DATA;
+	wr32(hw, offset, data);
+}
+
+void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+	unsigned int offset;
+
+	/* Set the LAN port indicator to offset[1] */
+	/* 1st, write the offset to IDA_ADDR register */
+	offset = TXGBE_XPCS_IDA_ADDR;
+	wr32(hw, offset, addr);
+
+	/* 2nd, write the data to IDA_DATA register */
+	offset = TXGBE_XPCS_IDA_DATA;
+	wr32(hw, offset, data);
+}
+
 s32 txgbe_init_hw(struct txgbe_hw *hw)
 {
 	s32 status;
@@ -305,6 +350,40 @@ s32 txgbe_stop_adapter(struct txgbe_hw *hw)
 	return txgbe_disable_pcie_master(hw);
 }
 
+/**
+ *  txgbe_led_on - Turns on the software controllable LEDs.
+ *  @hw: pointer to hardware structure
+ *  @index: led number to turn on
+ **/
+s32 txgbe_led_on(struct txgbe_hw *hw, u32 index)
+{
+	u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL);
+
+	/* To turn on the LED, set mode to ON. */
+	led_reg |= index | (index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT);
+	wr32(hw, TXGBE_CFG_LED_CTL, led_reg);
+	TXGBE_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_led_off - Turns off the software controllable LEDs.
+ *  @hw: pointer to hardware structure
+ *  @index: led number to turn off
+ **/
+s32 txgbe_led_off(struct txgbe_hw *hw, u32 index)
+{
+	u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL);
+
+	/* To turn off the LED, set mode to OFF. */
+	led_reg &= ~(index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT);
+	led_reg |= index;
+	wr32(hw, TXGBE_CFG_LED_CTL, led_reg);
+	TXGBE_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
 /**
  *  txgbe_get_eeprom_semaphore - Get hardware semaphore
  *  @hw: pointer to hardware structure
@@ -1121,6 +1200,7 @@ s32 txgbe_reset_hostif(struct txgbe_hw *hw)
 		if (reset_cmd.hdr.cmd_or_resp.ret_status ==
 		    FW_CEM_RESP_STATUS_SUCCESS) {
 			status = 0;
+			hw->link_status = TXGBE_LINK_STATUS_NONE;
 		} else {
 			status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
 		}
@@ -1230,6 +1310,141 @@ bool txgbe_check_mng_access(struct txgbe_hw *hw)
 	return ret;
 }
 
+/**
+ *  txgbe_setup_mac_link_multispeed_fiber - Set MAC link speed
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ *  Set the link speed in the MAC and/or PHY register and restarts link.
+ **/
+s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
+					  u32 speed,
+					  bool autoneg_wait_to_complete)
+{
+	u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	u32 highest_link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	s32 status = 0;
+	u32 speedcnt = 0;
+	u32 i = 0;
+	bool autoneg, link_up = false;
+
+	/* Mask off requested but non-supported speeds */
+	status = TCALL(hw, mac.ops.get_link_capabilities,
+		       &link_speed, &autoneg);
+	if (status != 0)
+		return status;
+
+	speed &= link_speed;
+
+	/* Try each speed one by one, highest priority first.  We do this in
+	 * software because 10Gb fiber doesn't support speed autonegotiation.
+	 */
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL) {
+		speedcnt++;
+		highest_link_speed = TXGBE_LINK_SPEED_10GB_FULL;
+
+		/* If we already have link at this speed, just jump out */
+		status = TCALL(hw, mac.ops.check_link,
+			       &link_speed, &link_up, false);
+		if (status != 0)
+			return status;
+
+		if (link_speed == TXGBE_LINK_SPEED_10GB_FULL && link_up)
+			goto out;
+
+		/* Allow module to change analog characteristics (1G->10G) */
+		msec_delay(40);
+
+		status = TCALL(hw, mac.ops.setup_mac_link,
+			       TXGBE_LINK_SPEED_10GB_FULL,
+			       autoneg_wait_to_complete);
+		if (status != 0)
+			return status;
+
+		/* Flap the Tx laser if it has not already been done */
+		TCALL(hw, mac.ops.flap_tx_laser);
+
+		/* Wait for the controller to acquire link.  Per IEEE 802.3ap,
+		 * Section 73.10.2, we may have to wait up to 500ms if KR is
+		 * attempted.  sapphire uses the same timing for 10g SFI.
+		 */
+		for (i = 0; i < 5; i++) {
+			/* Wait for the link partner to also set speed */
+			msec_delay(100);
+
+			/* If we have link, just jump out */
+			status = TCALL(hw, mac.ops.check_link,
+				       &link_speed, &link_up, false);
+			if (status != 0)
+				return status;
+
+			if (link_up)
+				goto out;
+		}
+	}
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL) {
+		speedcnt++;
+		if (highest_link_speed == TXGBE_LINK_SPEED_UNKNOWN)
+			highest_link_speed = TXGBE_LINK_SPEED_1GB_FULL;
+
+		/* If we already have link at this speed, just jump out */
+		status = TCALL(hw, mac.ops.check_link,
+			       &link_speed, &link_up, false);
+		if (status != 0)
+			return status;
+
+		if (link_speed == TXGBE_LINK_SPEED_1GB_FULL && link_up)
+			goto out;
+
+		/* Allow module to change analog characteristics (10G->1G) */
+		msec_delay(40);
+
+		status = TCALL(hw, mac.ops.setup_mac_link,
+			       TXGBE_LINK_SPEED_1GB_FULL,
+			       autoneg_wait_to_complete);
+		if (status != 0)
+			return status;
+
+		/* Flap the Tx laser if it has not already been done */
+		TCALL(hw, mac.ops.flap_tx_laser);
+
+		/* Wait for the link partner to also set speed */
+		msec_delay(100);
+
+		/* If we have link, just jump out */
+		status = TCALL(hw, mac.ops.check_link,
+			       &link_speed, &link_up, false);
+		if (status != 0)
+			return status;
+
+		if (link_up)
+			goto out;
+	}
+
+	/* We didn't get link.  Configure back to the highest speed we tried
+	 * (if there was more than one) by calling ourselves back with the
+	 * single highest speed.
+	 */
+	if (speedcnt > 1)
+		status = txgbe_setup_mac_link_multispeed_fiber(hw,
+							       highest_link_speed,
+							       autoneg_wait_to_complete);
+
+out:
+	/* Set autoneg_advertised value based on input link speed */
+	hw->phy.autoneg_advertised = 0;
+
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+	return status;
+}
+
 int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 {
 	u32 i = 0;
@@ -1253,6 +1468,62 @@ int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 	return err;
 }
 
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
+{
+	struct txgbe_mac_info *mac = &hw->mac;
+
+	/* enable the laser control functions for SFP+ fiber when
+	 * management firmware is not enabled
+	 */
+	if ((TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_fiber) &&
+	    !txgbe_mng_present(hw)) {
+		mac->ops.disable_tx_laser = txgbe_disable_tx_laser_multispeed_fiber;
+		mac->ops.enable_tx_laser = txgbe_enable_tx_laser_multispeed_fiber;
+		mac->ops.flap_tx_laser = txgbe_flap_tx_laser_multispeed_fiber;
+	} else {
+		mac->ops.disable_tx_laser = NULL;
+		mac->ops.enable_tx_laser = NULL;
+		mac->ops.flap_tx_laser = NULL;
+	}
+
+	if (hw->phy.multispeed_fiber) {
+		/* Set up dual speed SFP+ support */
+		mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
+		mac->ops.setup_mac_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+	} else {
+		mac->ops.setup_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+	}
+}
+
+/**
+ *  txgbe_init_phy_ops - PHY/SFP specific init
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize any function pointers that could not be set
+ *  earlier because the PHY/SFP type was not yet known.
+ *  Perform the SFP init if necessary.
+ *
+ **/
+s32 txgbe_init_phy_ops(struct txgbe_hw *hw)
+{
+	s32 ret_val = 0;
+
+	txgbe_init_i2c(hw);
+	/* Identify the PHY or SFP module */
+	ret_val = TCALL(hw, phy.ops.identify);
+	if (ret_val == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		goto init_phy_ops_out;
+
+	/* Setup function pointers based on detected SFP module and speeds */
+	txgbe_init_mac_link_ops(hw);
+
+init_phy_ops_out:
+	return ret_val;
+}
+
 /**
  *  txgbe_init_ops - Inits func ptrs and MAC type
  *  @hw: pointer to hardware structure
@@ -1264,8 +1535,16 @@ int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 s32 txgbe_init_ops(struct txgbe_hw *hw)
 {
 	struct txgbe_mac_info *mac = &hw->mac;
+	struct txgbe_phy_info *phy = &hw->phy;
 	struct txgbe_eeprom_info *eeprom = &hw->eeprom;
 
+	/* PHY */
+	phy->ops.read_i2c_byte = txgbe_read_i2c_byte;
+	phy->ops.read_i2c_eeprom = txgbe_read_i2c_eeprom;
+	phy->ops.identify_sfp = txgbe_identify_module;
+	phy->ops.identify = txgbe_identify_phy;
+	phy->ops.init = txgbe_init_phy_ops;
+
 	/* MAC */
 	mac->ops.init_hw = txgbe_init_hw;
 	mac->ops.get_mac_addr = txgbe_get_mac_addr;
@@ -1275,16 +1554,25 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->ops.acquire_swfw_sync = txgbe_acquire_swfw_sync;
 	mac->ops.release_swfw_sync = txgbe_release_swfw_sync;
 	mac->ops.reset_hw = txgbe_reset_hw;
+	mac->ops.get_media_type = txgbe_get_media_type;
 	mac->ops.start_hw = txgbe_start_hw;
 	mac->ops.get_san_mac_addr = txgbe_get_san_mac_addr;
 	mac->ops.get_wwn_prefix = txgbe_get_wwn_prefix;
 
+	/* LEDs */
+	mac->ops.led_on = txgbe_led_on;
+	mac->ops.led_off = txgbe_led_off;
+
 	/* RAR */
 	mac->ops.set_rar = txgbe_set_rar;
 	mac->ops.clear_rar = txgbe_clear_rar;
 	mac->ops.init_rx_addrs = txgbe_init_rx_addrs;
 	mac->ops.init_uta_tables = txgbe_init_uta_tables;
 
+
+	/* Link */
+	mac->ops.check_link = txgbe_check_mac_link;
 	mac->num_rar_entries    = TXGBE_SP_RAR_ENTRIES;
 	mac->max_rx_queues      = TXGBE_SP_MAX_RX_QUEUES;
 	mac->max_tx_queues      = TXGBE_SP_MAX_TX_QUEUES;
@@ -1305,88 +1593,1242 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	return 0;
 }
 
-int txgbe_reset_misc(struct txgbe_hw *hw)
+/**
+ *  txgbe_get_link_capabilities - Determines link capabilities
+ *  @hw: pointer to hardware structure
+ *  @speed: pointer to link speed
+ *  @autoneg: true when autoneg or autotry is enabled
+ *
+ *  Determines the link capabilities from the SFP type, subsystem ID and
+ *  the stored or current PCS/PMA/AN register values.
+ **/
+s32 txgbe_get_link_capabilities(struct txgbe_hw *hw,
+				u32 *speed,
+				bool *autoneg)
 {
-	int i;
+	s32 status = 0;
+	u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl;
+	u32 sr_an_mmd_adv_reg2;
+
+	/* Check if 1G SFP module. */
+	if (hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1) {
+		*speed = TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = false;
+	} else if (hw->phy.multispeed_fiber) {
+		*speed = TXGBE_LINK_SPEED_10GB_FULL |
+			  TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = true;
+	} else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) {
+		/* SFP */
+		*speed = TXGBE_LINK_SPEED_10GB_FULL;
+		*autoneg = false;
+	} else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII) {
+		/* SGMII */
+		*speed = TXGBE_LINK_SPEED_1GB_FULL |
+			TXGBE_LINK_SPEED_100_FULL |
+			TXGBE_LINK_SPEED_10_FULL;
+		*autoneg = false;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_T |
+				TXGBE_PHYSICAL_LAYER_100BASE_TX;
+	} else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) {
+		/* MAC XAUI */
+		*speed = TXGBE_LINK_SPEED_10GB_FULL;
+		*autoneg = false;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4;
+	/* MAC SGMII */
+	} else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) {
+		/* MAC SGMII */
+		*autoneg = false;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+	} else { /* KR KX KX4 */
+		/* Determine link capabilities based on the stored value,
+		 * which represents EEPROM defaults.  If value has not
+		 * been stored, use the current register values.
+		 */
+		if (hw->mac.orig_link_settings_stored) {
+			sr_pcs_ctl = hw->mac.orig_sr_pcs_ctl2;
+			sr_pma_mmd_ctl1 = hw->mac.orig_sr_pma_mmd_ctl1;
+			sr_an_mmd_ctl = hw->mac.orig_sr_an_mmd_ctl;
+			sr_an_mmd_adv_reg2 = hw->mac.orig_sr_an_mmd_adv_reg2;
+		} else {
+			sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+			sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw,
+							  TXGBE_SR_PMA_MMD_CTL1);
+			sr_an_mmd_ctl = txgbe_rd32_epcs(hw,
+							TXGBE_SR_AN_MMD_CTL);
+			sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw,
+							     TXGBE_SR_AN_MMD_ADV_REG2);
+		}
 
-	/* receive packets that size > 2048 */
-	wr32m(hw, TXGBE_MAC_RX_CFG,
-	      TXGBE_MAC_RX_CFG_JE, TXGBE_MAC_RX_CFG_JE);
+		if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) ==
+				TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X &&
+		    (sr_pma_mmd_ctl1 & TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK) ==
+				TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G &&
+		    (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) {
+			/* 1G or KX - no backplane auto-negotiation */
+			*speed = TXGBE_LINK_SPEED_1GB_FULL;
+			*autoneg = false;
+			hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+		} else if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) ==
+				TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X &&
+			   (sr_pma_mmd_ctl1 & TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK) ==
+				TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G &&
+			   (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) {
+			*speed = TXGBE_LINK_SPEED_10GB_FULL;
+			*autoneg = false;
+			hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4;
+		} else if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) ==
+				TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_R &&
+			   (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) {
+			/* 10 GbE serial link (KR - no backplane auto-negotiation) */
+			*speed = TXGBE_LINK_SPEED_10GB_FULL;
+			*autoneg = false;
+			hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR;
+		} else if (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) {
+			/* KX/KX4/KR backplane auto-negotiation enable */
+			*speed = TXGBE_LINK_SPEED_UNKNOWN;
+			if (sr_an_mmd_adv_reg2 & TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KR)
+				*speed |= TXGBE_LINK_SPEED_10GB_FULL;
+			if (sr_an_mmd_adv_reg2 & TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX4)
+				*speed |= TXGBE_LINK_SPEED_10GB_FULL;
+			if (sr_an_mmd_adv_reg2 & TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX)
+				*speed |= TXGBE_LINK_SPEED_1GB_FULL;
+			*autoneg = true;
+			hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR |
+					TXGBE_PHYSICAL_LAYER_10GBASE_KX4 |
+					TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+		} else {
+			status = TXGBE_ERR_LINK_SETUP;
+			goto out;
+		}
+	}
 
-	/* clear counters on read */
-	wr32m(hw, TXGBE_MMC_CONTROL,
-	      TXGBE_MMC_CONTROL_RSTONRD, TXGBE_MMC_CONTROL_RSTONRD);
+out:
+	return status;
+}
 
-	wr32m(hw, TXGBE_MAC_RX_FLOW_CTRL,
-	      TXGBE_MAC_RX_FLOW_CTRL_RFE, TXGBE_MAC_RX_FLOW_CTRL_RFE);
+/**
+ *  txgbe_get_media_type - Get media type
+ *  @hw: pointer to hardware structure
+ *
+ *  Returns the media type (fiber, copper, backplane)
+ **/
+enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw)
+{
+	enum txgbe_media_type media_type;
+	u8 device_type = hw->subsystem_id & 0xF0;
+
+	switch (device_type) {
+	case TXGBE_ID_MAC_XAUI:
+	case TXGBE_ID_MAC_SGMII:
+	case TXGBE_ID_KR_KX_KX4:
+		/* Default device ID is mezzanine card KX/KX4 */
+		media_type = txgbe_media_type_backplane;
+		break;
+	case TXGBE_ID_SFP:
+		media_type = txgbe_media_type_fiber;
+		break;
+	case TXGBE_ID_XAUI:
+	case TXGBE_ID_SGMII:
+		media_type = txgbe_media_type_copper;
+		break;
+	case TXGBE_ID_SFI_XAUI:
+		if (hw->bus.lan_id == 0)
+			media_type = txgbe_media_type_fiber;
+		else
+			media_type = txgbe_media_type_copper;
+		break;
+	default:
+		media_type = txgbe_media_type_unknown;
+		break;
+	}
 
-	wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
+	return media_type;
+}
 
-	wr32m(hw, TXGBE_MIS_RST_ST,
-	      TXGBE_MIS_RST_ST_RST_INIT, 0x1E00);
+/**
+ *  txgbe_disable_tx_laser_multispeed_fiber - Disable Tx laser
+ *  @hw: pointer to hardware structure
+ *
+ *  The base drivers may require better control over SFP+ module
+ *  PHY states.  This includes selectively shutting down the Tx
+ *  laser on the PHY, effectively halting physical link.
+ **/
+void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+	u32 esdp_reg = rd32(hw, TXGBE_GPIO_DR);
+
+	if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber)
+		return;
+
+	/* Blocked by MNG FW, so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return;
+
+	if (txgbe_close_notify(hw))
+		/* override the LED state while the interface is down */
+		TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP |
+		      TXGBE_LED_LINK_10G | TXGBE_LED_LINK_1G |
+		      TXGBE_LED_LINK_ACTIVE);
+
+	/* Disable Tx laser; allow 100us to go dark per spec */
+	esdp_reg |= TXGBE_GPIO_DR_1 | TXGBE_GPIO_DR_0;
+	wr32(hw, TXGBE_GPIO_DR, esdp_reg);
+	TXGBE_WRITE_FLUSH(hw);
+	usec_delay(100);
+}
 
-	/* errata 4: initialize mng flex tbl and wakeup flex tbl*/
-	wr32(hw, TXGBE_PSR_MNG_FLEX_SEL, 0);
-	for (i = 0; i < 16; i++) {
-		wr32(hw, TXGBE_PSR_MNG_FLEX_DW_L(i), 0);
-		wr32(hw, TXGBE_PSR_MNG_FLEX_DW_H(i), 0);
-		wr32(hw, TXGBE_PSR_MNG_FLEX_MSK(i), 0);
+/**
+ *  txgbe_enable_tx_laser_multispeed_fiber - Enable Tx laser
+ *  @hw: pointer to hardware structure
+ *
+ *  The base drivers may require better control over SFP+ module
+ *  PHY states.  This includes selectively turning on the Tx
+ *  laser on the PHY, effectively starting physical link.
+ **/
+void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+	if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber)
+		return;
+
+	if (txgbe_open_notify(hw))
+		/* restore the LED configuration when the interface comes up */
+		wr32(hw, TXGBE_CFG_LED_CTL, 0);
+
+	/* Enable Tx laser; allow 100ms to light up */
+	wr32m(hw, TXGBE_GPIO_DR,
+	      TXGBE_GPIO_DR_0 | TXGBE_GPIO_DR_1, 0);
+	TXGBE_WRITE_FLUSH(hw);
+	msec_delay(100);
+}
+
+/**
+ *  txgbe_flap_tx_laser_multispeed_fiber - Flap Tx laser
+ *  @hw: pointer to hardware structure
+ *
+ *  When the driver changes the link speeds that it can support,
+ *  it sets autotry_restart to true to indicate that we need to
+ *  initiate a new autotry session with the link partner.  To do
+ *  so, we set the speed then disable and re-enable the Tx laser, to
+ *  alert the link partner that it also needs to restart autotry on its
+ *  end.  This is consistent with true clause 37 autoneg, which also
+ *  involves a loss of signal.
+ **/
+void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+	/* Blocked by MNG FW, so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return;
+
+	if (hw->mac.autotry_restart) {
+		txgbe_disable_tx_laser_multispeed_fiber(hw);
+		txgbe_enable_tx_laser_multispeed_fiber(hw);
+		hw->mac.autotry_restart = false;
 	}
-	wr32(hw, TXGBE_PSR_LAN_FLEX_SEL, 0);
-	for (i = 0; i < 16; i++) {
-		wr32(hw, TXGBE_PSR_LAN_FLEX_DW_L(i), 0);
-		wr32(hw, TXGBE_PSR_LAN_FLEX_DW_H(i), 0);
-		wr32(hw, TXGBE_PSR_LAN_FLEX_MSK(i), 0);
+}
+
+/**
+ *  txgbe_set_hard_rate_select_speed - Set module link speed
+ *  @hw: pointer to hardware structure
+ *  @speed: link speed to set
+ *
+ *  Set module link speed via RS0/RS1 rate select pins.
+ **/
+void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw,
+				      u32 speed)
+{
+	u32 esdp_reg = rd32(hw, TXGBE_GPIO_DR);
+
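+	/* RS0/RS1 rate select is wired to GPIO data bits 4 and 5:
+	 * both high selects the 10G rate, both low selects 1G
+	 */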
+	switch (speed) {
+	case TXGBE_LINK_SPEED_10GB_FULL:
+		esdp_reg |= TXGBE_GPIO_DR_5 | TXGBE_GPIO_DR_4;
+		break;
+	case TXGBE_LINK_SPEED_1GB_FULL:
+		esdp_reg &= ~(TXGBE_GPIO_DR_5 | TXGBE_GPIO_DR_4);
+		break;
+	default:
+		DEBUGOUT("Invalid fixed module speed\n");
+		return;
 	}
 
-	/* set pause frame dst mac addr */
-	wr32(hw, TXGBE_RDB_PFCMACDAL, 0xC2000001);
-	wr32(hw, TXGBE_RDB_PFCMACDAH, 0x0180);
+	wr32(hw, TXGBE_GPIO_DDR,
+	     TXGBE_GPIO_DDR_5 | TXGBE_GPIO_DDR_4 |
+	     TXGBE_GPIO_DDR_1 | TXGBE_GPIO_DDR_0);
 
-	txgbe_init_thermal_sensor_thresh(hw);
+	wr32(hw, TXGBE_GPIO_DR, esdp_reg);
+
+	TXGBE_WRITE_FLUSH(hw);
+}
 
+s32 txgbe_set_sgmii_an37_ability(struct txgbe_hw *hw)
+{
+	u32 value;
+
+	txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0x3002);
+	/* for sgmii + external phy, set to 0x0105 (phy sgmii mode) */
+	/* for sgmii direct link, set to 0x010c (mac sgmii mode) */
+	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII ||
+	    txgbe_get_media_type(hw) == txgbe_media_type_fiber) {
+		txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x010c);
+	} else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII ||
+		   (hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI) {
+		txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0105);
+	}
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0200);
+	value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_CTL);
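+	/* MII control: bit 12 enables clause 37 autoneg, bit 9 restarts it */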
+	value = (value & ~0x1200) | (0x1 << 12) | (0x1 << 9);
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, value);
 	return 0;
 }
 
-/**
- *  txgbe_reset_hw - Perform hardware reset
- *  @hw: pointer to hardware structure
- *
- *  Resets the hardware by resetting the transmit and receive units, masks
- *  and clears all interrupts, perform a PHY reset, and perform a link (MAC)
- *  reset.
- **/
-s32 txgbe_reset_hw(struct txgbe_hw *hw)
+s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg)
 {
-	s32 status;
-	u32 reset = 0;
 	u32 i;
+	s32 status = 0;
+	struct txgbe_adapter *adapter = hw->back;
 
-	u32 reset_status = 0;
-	u32 rst_delay = 0;
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+		    TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+		    TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+		status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+	txgbe_dev_info("Link is set to KR.\n");
 
-	/* Call adapter stop to disable tx/rx and clear interrupts */
-	status = TCALL(hw, mac.ops.stop_adapter);
-	if (status != 0)
-		goto reset_hw_out;
+	txgbe_wr32_epcs(hw, 0x78001, 0x7);
 
-	/* Issue global reset to the MAC.  Needs to be SW reset if link is up.
-	 * If link reset is used when link is up, it might reset the PHY when
-	 * mng is using it.  If link is down or the flag to force full link
-	 * reset is set, then perform link reset.
-	 */
-	if (hw->force_full_reset) {
-		rst_delay = (rd32(hw, TXGBE_MIS_RST_ST) &
-			     TXGBE_MIS_RST_ST_RST_INIT) >>
-			     TXGBE_MIS_RST_ST_RST_INI_SHIFT;
-		if (hw->reset_type == TXGBE_SW_RESET) {
-			for (i = 0; i < rst_delay + 20; i++) {
-				reset_status =
-					rd32(hw, TXGBE_MIS_RST_ST);
-				if (!(reset_status &
-				    TXGBE_MIS_RST_ST_DEV_RST_ST_MASK))
-					break;
-				msleep(100);
+	/* 2. Enable or disable xpcs AN-73 */
+	if (adapter->backplane_an == 1) {
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000);
+		txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x1);
+	} else {
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+		txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0);
+	}
+
+	/* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register */
+	/* Bit[10:0](MPLLA_BANDWIDTH) = 11'd123 (default: 11'd16) */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR);
+
+	/* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register */
+	/* Bit[12:8](RX_VREF_CTRL) = 5'hF (default: 5'h11) */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+
+	/* 5. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register */
+	/* Bit[15:8](VGA1/2_GAIN_0) = 8'h77,
+	 * Bit[7:5](CTLE_POLE_0) = 3'h2
+	 * Bit[4:0](CTLE_BOOST_0) = 4'hA
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774A);
+
+	/* 6. Set VR_MII_Gen5_12G_RX_GENCTRL3 Register */
+	/* Bit[2:0](LOS_TRSHLD_0) = 3'h4 (default: 3) */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, 0x0004);
+
+	/* 7. Initialize the mode by setting VR XS or PCS MMD Digital
+	 * Control1 Register Bit[15](VR_RST)
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+	/* wait phy initialization done */
+	for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+		    TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+out:
+	return status;
+}
+
+s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg)
+{
+	u32 i;
+	s32 status = 0;
+	u32 value;
+
+	/* check link status, if already set, skip setting it again */
+	if (hw->link_status == TXGBE_LINK_STATUS_KX4)
+		goto out;
+
+	txgbe_dev_info("Link is set to KX4.\n");
+
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+		status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
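+	/* stop the MAC transmitter while the PHY lanes are reconfigured */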
+	wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, ~TXGBE_MAC_TX_CFG_TE);
+
+	/* 2. Disable xpcs AN-73 */
+	if (!autoneg)
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+	else
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000);
+
+	if (hw->revision_id == TXGBE_SP_MPW) {
+		/* Disable PHY MPLLA */
+		txgbe_wr32_ephy(hw, 0x4, 0x2501);
+		/* Reset rx lane0-3 clock */
+		txgbe_wr32_ephy(hw, 0x1005, 0x4001);
+		txgbe_wr32_ephy(hw, 0x1105, 0x4001);
+		txgbe_wr32_ephy(hw, 0x1205, 0x4001);
+		txgbe_wr32_ephy(hw, 0x1305, 0x4001);
+	} else {
+		/* Disable PHY MPLLA for eth mode change (after ECO) */
+		txgbe_wr32_ephy(hw, 0x4, 0x250A);
+		TXGBE_WRITE_FLUSH(hw);
+		usleep_range(1000, 2000);
+
+		/* Set the eth change_mode bit first in mis_rst register
+		 * for corresponding LAN port
+		 */
+		if (hw->bus.lan_id == 0)
+			wr32(hw, TXGBE_MIS_RST,
+			     TXGBE_MIS_RST_LAN0_CHG_ETH_MODE);
+		else
+			wr32(hw, TXGBE_MIS_RST,
+			     TXGBE_MIS_RST_LAN1_CHG_ETH_MODE);
+	}
+
+	/* Set SR PCS Control2 Register Bits[1:0] = 2'b01
+	 * PCS_TYPE_SEL: non KR
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2,
+			TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X);
+	/* Set SR PMA MMD Control1 Register Bit[13] = 1'b1  SS13: 10G speed */
+	txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+			TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G);
+
+	value = (0xf5f0 & ~0x7F0) | (0x5 << 8) | (0x7 << 5) | 0xF0;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+
+	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI)
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+	else
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00);
+
+	for (i = 0; i < 4; i++) {
+		if (i == 0)
+			value = (0x45 & ~0xFFFF) | (0x7 << 12) |
+				(0x7 << 8) | 0x6;
+		else
+			value = (0xff06 & ~0xFFFF) | (0x7 << 12) |
+				(0x7 << 8) | 0x6;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value);
+	}
+
+	value = 0x0 & ~0x7777;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+
+	value = (0x6db & ~0xFFF) | (0x1 << 9) | (0x1 << 6) | (0x1 << 3) | 0x1;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA */
+	/* Control 0 Register Bit[7:0] = 8'd40  MPLLA_MULTIPLIER */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0,
+			TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER);
+	/* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA */
+	/* Control 3 Register Bit[10:0] = 11'd86  MPLLA_BANDWIDTH */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+	/* Calibration Load 0 Register  Bit[12:0] = 13'd1360 VCO_LD_VAL_0 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+	/* Calibration Load 1 Register  Bit[12:0] = 13'd1360 VCO_LD_VAL_1 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1, TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+	/* Calibration Load 2 Register  Bit[12:0] = 13'd1360 VCO_LD_VAL_2 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2, TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+	/* Calibration Load 3 Register  Bit[12:0] = 13'd1360 VCO_LD_VAL_3 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3, TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+	/* Calibration Reference 0 Register Bit[5:0] = 6'd34 VCO_REF_LD_0/1 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+	/* Calibration Reference 1 Register Bit[5:0] = 6'd34 VCO_REF_LD_2/3 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE */
+	/* Enable Register Bit[7:0] = 8'd0  AFE_EN_0/3_1, DFE_EN_0/3_1 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0);
+
+	/* Set  VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx */
+	/* Equalization Control 4 Register Bit[3:0] = 4'd0 CONT_ADAPT_0/3_1 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x00F0);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate */
+	/* Control Register Bit[14:12], Bit[10:8], Bit[6:4], Bit[2:0],
+	 * all rates to 3'b010  TX0/1/2/3_RATE
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate */
+	/* Control Register Bit[13:12], Bit[9:8], Bit[5:4], Bit[1:0],
+	 * all rates to 2'b10  RX0/1/2/3_RATE
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General */
+	/* Control 2 Register Bit[15:8] = 2'b01  TX0/1/2/3_WIDTH: 10bits */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x5500);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General */
+	/* Control 2 Register Bit[15:8] = 2'b01  RX0/1/2/3_WIDTH: 10bits */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x5500);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 2 Register Bit[10:8] = 3'b010
+	 * MPLLA_DIV16P5_CLK_EN=0, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2,
+			TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10);
+
+	txgbe_wr32_epcs(hw, 0x1f0000, 0x0);
+	txgbe_wr32_epcs(hw, 0x1f8001, 0x0);
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0);
+
+	/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+	 * Register Bit[15](VR_RST)
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+
+	/* wait phy initialization done */
+	for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+	/* initialization succeeded, record the new link mode */
+	hw->link_status = TXGBE_LINK_STATUS_KX4;
+
+out:
+	return status;
+}
+
+s32 txgbe_set_link_to_kx(struct txgbe_hw *hw, u32 speed, bool autoneg)
+{
+	u32 i;
+	s32 status = 0;
+	u32 wdata = 0;
+	u32 value;
+
+	/* check link status, if already set, skip setting it again */
+	if (hw->link_status == TXGBE_LINK_STATUS_KX)
+		goto out;
+
+	txgbe_dev_info("Link is set to KX, speed = 0x%x\n", speed);
+
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+		status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
+	wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, ~TXGBE_MAC_TX_CFG_TE);
+
+	/* 2. Disable xpcs AN-73 */
+	if (!autoneg)
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+	else
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000);
+
+	if (hw->revision_id == TXGBE_SP_MPW) {
+		/* Disable PHY MPLLA */
+		txgbe_wr32_ephy(hw, 0x4, 0x2401);
+		/* Reset rx lane0 clock */
+		txgbe_wr32_ephy(hw, 0x1005, 0x4001);
+	} else {
+		/* Disable PHY MPLLA for eth mode change (after ECO) */
+		txgbe_wr32_ephy(hw, 0x4, 0x240A);
+		TXGBE_WRITE_FLUSH(hw);
+		usleep_range(1000, 2000);
+
+		/* Set the eth change_mode bit first in mis_rst register */
+		/* for corresponding LAN port */
+		if (hw->bus.lan_id == 0)
+			wr32(hw, TXGBE_MIS_RST,
+			     TXGBE_MIS_RST_LAN0_CHG_ETH_MODE);
+		else
+			wr32(hw, TXGBE_MIS_RST,
+			     TXGBE_MIS_RST_LAN1_CHG_ETH_MODE);
+	}
+
+	/* Set SR PCS Control2 Register Bits[1:0] = 2'b01
+	 * PCS_TYPE_SEL: non KR
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2,
+			TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X);
+
+	/* Set SR PMA MMD Control1 Register Bit[13] = 1'b0 SS13: 1G speed */
+	txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+			TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G);
+
+	/* Set SR MII MMD Control Register to the requested speed:
+	 * {Bit[6], Bit[13]} = [2'b00, 2'b01, 2'b10] -> [10M, 100M, 1G],
+	 * Bit[8] = full duplex
+	 */
+	if (speed == TXGBE_LINK_SPEED_100_FULL)
+		wdata = 0x2100;
+	else if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+		wdata = 0x0140;
+	else if (speed == TXGBE_LINK_SPEED_10_FULL)
+		wdata = 0x0100;
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, wdata);
+
+	value = (0xf5f0 & ~0x710) | (0x5 << 8) | 0x10;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+
+	for (i = 0; i < 4; i++) {
+		if (i)
+			value = 0xff06;
+		else
+			value = (0x45 & ~0xFFFF) | (0x7 << 12) |
+				(0x7 << 8) | 0x6;
+
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value);
+	}
+
+	value = 0x0 & ~0x7;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+
+	value = (0x6db & ~0x7) | 0x4;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 0 Register Bit[7:0] = 8'd32  MPLLA_MULTIPLIER
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0,
+			TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX);
+
+	/* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 3 Register Bit[10:0] = 11'd70  MPLLA_BANDWIDTH
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Load 0 Register  Bit[12:0] = 13'd1344  VCO_LD_VAL_0
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0,
+			TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX);
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1, 0x549);
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2, 0x549);
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3, 0x549);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Reference 0 Register Bit[5:0] = 6'd42  VCO_REF_LD_0
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0,
+			TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX);
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, 0x2929);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE
+	 * Enable Register Bit[4], Bit[0] = 1'b0  AFE_EN_0, DFE_EN_0
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx
+	 * Equalization Control 4 Register Bit[0] = 1'b0  CONT_ADAPT_0
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x0010);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate
+	 * Control Register Bit[2:0] = 3'b011  TX0_RATE
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL,
+			TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate
+	 * Control Register Bit[2:0] = 3'b011 RX0_RATE
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL,
+			TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General
+	 * Control 2 Register Bit[9:8] = 2'b01  TX0_WIDTH: 10bits
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2,
+			TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General
+	 * Control 2 Register Bit[9:8] = 2'b01  RX0_WIDTH: 10bits
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2,
+			TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 2 Register Bit[10:8] = 3'b010	MPLLA_DIV16P5_CLK_EN=0,
+	 * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2,
+			TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10);
+
+	/* VR MII MMD AN Control Register Bit[8] = 1'b1 MII_CTRL */
+	/* Set to 8bit MII (required in 10M/100M SGMII) */
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0100);
+
+	/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+	 * Register Bit[15](VR_RST)
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+	/* wait phy initialization done */
+	for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+	/* initialization succeeded, record the new link mode */
+	hw->link_status = TXGBE_LINK_STATUS_KX;
+
+	txgbe_dev_info("Set KX TX_EQ MAIN:24 PRE:4 POST:16\n");
+	/* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register
+	 * Bit[13:8](TX_EQ_MAIN) = 6'd24, Bit[5:0](TX_EQ_PRE) = 6'd4
+	 */
+	value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+	value = (value & ~0x3F3F) | (24 << 8) | 4;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+	/* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register
+	 * Bit[6](TX_EQ_OVR_RIDE) = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd16
+	 */
+	value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+	value = (value & ~0x7F) | 16 | (1 << 6);
+	txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+
+out:
+	return status;
+}
+
+s32 txgbe_set_link_to_sfi(struct txgbe_hw *hw, u32 speed)
+{
+	u32 i;
+	s32 status = 0;
+	u32 value = 0;
+
+	/* Set the module link speed */
+	TCALL(hw, mac.ops.set_rate_select_speed, speed);
+
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+		status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
+	wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, ~TXGBE_MAC_TX_CFG_TE);
+
+	/* 2. Disable xpcs AN-73 */
+	txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+
+	if (hw->revision_id != TXGBE_SP_MPW) {
+		/* Disable PHY MPLLA for eth mode change (after ECO) */
+		txgbe_wr32_ephy(hw, 0x4, 0x243A);
+		TXGBE_WRITE_FLUSH(hw);
+		usleep_range(1000, 2000);
+		/* Set the eth change_mode bit first in mis_rst register
+		 * for corresponding LAN port
+		 */
+		if (hw->bus.lan_id == 0)
+			wr32(hw, TXGBE_MIS_RST,
+			     TXGBE_MIS_RST_LAN0_CHG_ETH_MODE);
+		else
+			wr32(hw, TXGBE_MIS_RST,
+			     TXGBE_MIS_RST_LAN1_CHG_ETH_MODE);
+	}
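+	/* 10G SFI and 1G fiber need different PCS type, PLL and lane settings */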
+	if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+		/* Set SR PCS Control2 Register Bits[1:0] = 2'b00
+		 * PCS_TYPE_SEL: KR
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, 0);
+		value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+		value = value | 0x2000;
+		txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, value);
+		/* Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL0 Register Bit[7:0] = 8'd33
+		 * MPLLA_MULTIPLIER
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0021);
+		/* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register
+		 * Bit[10:0](MPLLA_BANDWIDTH) = 11'd0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0);
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1);
+		value = (value & ~0x700) | 0x500;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+		/* 4.Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register
+		 * Bit[12:8](RX_VREF_CTRL) = 5'hF
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+		/* Set VR_XS_PMA_Gen5_12G_VCO_CAL_LD0 Register
+		 * Bit[12:0] = 13'd1353 VCO_LD_VAL_0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0549);
+		/* Set VR_XS_PMA_Gen5_12G_VCO_CAL_REF0 Register Bit[5:0] = 6'd41
+		 * VCO_REF_LD_0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x0029);
+		/* Set VR_XS_PMA_Gen5_12G_TX_RATE_CTRL Register
+		 * Bit[2:0] = 3'b000 TX0_RATE
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0);
+		/* Set VR_XS_PMA_Gen5_12G_RX_RATE_CTRL Register
+		 * Bit[2:0] = 3'b000 RX0_RATE
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0);
+		/* Set VR_XS_PMA_Gen5_12G_TX_GENCTRL2 Register Bit[9:8] = 2'b11
+		 * TX0_WIDTH: 20bits
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0300);
+		/* Set VR_XS_PMA_Gen5_12G_RX_GENCTRL2 Register Bit[9:8] = 2'b11
+		 * RX0_WIDTH: 20bits
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0300);
+		/* Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL2 Register
+		 * Bit[10:8] = 3'b110 MPLLA_DIV16P5_CLK_EN=1,
+		 * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0600);
+
+		if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+		    hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+			/* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+			 * Bit[15:8](VGA1/2_GAIN_0) = 8'h77, Bit[7:5]
+			 * (CTLE_POLE_0) = 3'h2, Bit[4:0](CTLE_BOOST_0) = 4'hF
+			 */
+			txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F);
+		} else {
+			/* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+			 * Bit[15:8] (VGA1/2_GAIN_0) = 8'h00,
+			 * Bit[7:5](CTLE_POLE_0) = 3'h2,
+			 * Bit[4:0](CTLE_BOOST_0) = 5'h5
+			 */
+			value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0);
+			value = (value & ~0xFFFF) | (2 << 5) | 0x05;
+			txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value);
+		}
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0);
+		value &= ~0x7;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+		if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+		    hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+			/* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register
+			 * Bit[7:0](DFE_TAP1_0) = 8'd20
+			 */
+			txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0014);
+			value = txgbe_rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE);
+			value = (value & ~0x11) | 0x11;
+			txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value);
+		} else {
+			/* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register
+			 * Bit[7:0](DFE_TAP1_0) = 8'hBE
+			 */
+			txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0xBE);
+			/* 9. Set VR_MII_Gen5_12G_AFE_DFE_EN_CTRL Register
+			 * Bit[4](DFE_EN_0) = 1'b0, Bit[0](AFE_EN_0) = 1'b0
+			 */
+			value = txgbe_rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE);
+			value = (value & ~0x11) | 0x0;
+			txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value);
+		}
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL);
+		value = value & ~0x1;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value);
+	} else {
+		if (hw->revision_id == TXGBE_SP_MPW) {
+			/* Disable PHY MPLLA */
+			txgbe_wr32_ephy(hw, 0x4, 0x2401);
+			/* Reset rx lane0 clock */
+			txgbe_wr32_ephy(hw, 0x1005, 0x4001);
+		}
+		/* Set SR PCS Control2 Register Bits[1:0] = 2'b01
+		 * PCS_TYPE_SEL: non KR
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, 0x1);
+		/* Set SR PMA MMD Control1 Register Bit[13] = 1'b0
+		 * SS13: 1G speed
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, 0x0000);
+		/* Set SR MII MMD Control Register to corresponding speed: */
+		txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, 0x0140);
+
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1);
+		value = (value & ~0x710) | 0x500;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+		/* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register
+		 * Bit[12:8](RX_VREF_CTRL) = 5'hF
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+		/* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register
+		 * Bit[13:8](TX_EQ_MAIN) = 6'd24, Bit[5:0](TX_EQ_PRE) = 6'd4
+		 */
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+		value = (value & ~0x3F3F) | (24 << 8) | 4;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+		/* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register
+		 * Bit[6](TX_EQ_OVR_RIDE) = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd16
+		 */
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+		value = (value & ~0x7F) | 16 | (1 << 6);
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+		if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+		    hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+			txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F);
+		} else {
+			/* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+			 * Bit[15:8](VGA1/2_GAIN_0) = 8'h77,
+			 * Bit[7:5](CTLE_POLE_0) = 3'h0,
+			 * Bit[4:0](CTLE_BOOST_0) = 5'h6
+			 */
+			value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0);
+			value = (value & ~0xFFFF) | 0x7706;
+			txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value);
+		}
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0);
+		value = (value & ~0x7) | 0x0;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+		/* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register
+		 * Bit[7:0](DFE_TAP1_0) = 8'd00
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+		/* Set VR_XS_PMA_Gen5_12G_RX_GENCTRL3 Register
+		 * Bit[2:0] LOS_TRSHLD_0 = 4
+		 */
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3);
+		value = (value & ~0x7) | 0x4;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+		/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY
+		 * MPLLA Control 0 Register Bit[7:0] = 8'd32  MPLLA_MULTIPLIER
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0020);
+		/* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA
+		 * Control 3 Register Bit[10:0] = 11'd70  MPLLA_BANDWIDTH
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0x0046);
+		/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+		 * Calibration Load 0 Register
+		 * Bit[12:0] = 13'd1344  VCO_LD_VAL_0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0540);
+		/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+		 * Calibration Reference 0 Register
+		 * Bit[5:0] = 6'd42 VCO_REF_LD_0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x002A);
+		/* Set VR XS, PMA, MII Synopsys Enterprise Gen5 12G PHY AFE-DFE
+		 * Enable Register Bit[4], Bit[0] = 1'b0  AFE_EN_0, DFE_EN_0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0);
+		/* Set  VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx
+		 * Equalization Control 4 Register Bit[0] = 1'b0  CONT_ADAPT_0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x0010);
+		/* Set VR XS, PMA, MII Synopsys Enterprise Gen5 12G PHY Tx Rate
+		 * Control Register Bit[2:0] = 3'b011  TX0_RATE
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0x0003);
+		/* Set VR XS, PMA, MII Synopsys Enterprise Gen5 12G PHY Rx Rate
+		 * Control Register Bit[2:0] = 3'b011
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0x0003);
+		/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY
+		 * Tx General Control 2 Register
+		 * Bit[9:8] = 2'b01  TX0_WIDTH: 10bits
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0100);
+		/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY
+		 * Rx General Control 2 Register
+		 * Bit[9:8] = 2'b01  RX0_WIDTH: 10bits
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0100);
+		/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA
+		 * Control 2 Register Bit[10:8] = 3'b010 MPLLA_DIV16P5_CLK_EN=0,
+		 * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0200);
+		/* VR MII MMD AN Control Register Bit[8] = 1'b1 MII_CTRL */
+		txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0100);
+	}
+	/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+	 * Register Bit[15](VR_RST)
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+	/* wait phy initialization done */
+	for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+			TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+out:
+	return status;
+}
+
+/**
+ *  txgbe_setup_mac_link - Set MAC link speed
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ *  Set the link speed.
+ **/
+s32 txgbe_setup_mac_link(struct txgbe_hw *hw,
+			 u32 speed,
+			 bool __maybe_unused autoneg_wait_to_complete)
+{
+	bool autoneg = false;
+	s32 status = 0;
+	u32 link_capabilities = TXGBE_LINK_SPEED_UNKNOWN;
+	struct txgbe_adapter *adapter = hw->back;
+	u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	bool link_up = false;
+	u16 sub_dev_id = hw->subsystem_device_id;
+
+	/* Check to see if speed passed in is supported. */
+	status = TCALL(hw, mac.ops.get_link_capabilities,
+		       &link_capabilities, &autoneg);
+	if (status)
+		goto out;
+
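+	/* mask the requested speed down to what this interface supports */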
+	speed &= link_capabilities;
+
+	if (speed == TXGBE_LINK_SPEED_UNKNOWN) {
+		status = TXGBE_ERR_LINK_SETUP;
+		goto out;
+	}
+
+	if (!(((sub_dev_id & TXGBE_DEV_MASK) == TXGBE_ID_KR_KX_KX4) ||
+	      ((sub_dev_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_XAUI) ||
+	      ((sub_dev_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_SGMII))) {
+		status = TCALL(hw, mac.ops.check_link,
+			       &link_speed, &link_up, false);
+		if (status != 0)
+			goto out;
+		if (link_speed == speed && link_up)
+			goto out;
+	}
+
+	if ((hw->subsystem_id & TXGBE_DEV_MASK) == TXGBE_ID_KR_KX_KX4) {
+		if (!autoneg) {
+			switch (hw->phy.link_mode) {
+			case TXGBE_PHYSICAL_LAYER_10GBASE_KR:
+				txgbe_set_link_to_kr(hw, autoneg);
+				break;
+			case TXGBE_PHYSICAL_LAYER_10GBASE_KX4:
+				txgbe_set_link_to_kx4(hw, autoneg);
+				break;
+			case TXGBE_PHYSICAL_LAYER_1000BASE_KX:
+				txgbe_set_link_to_kx(hw, speed, autoneg);
+				break;
+			default:
+				status = TXGBE_ERR_PHY;
+				goto out;
+			}
+		} else {
+			txgbe_set_link_to_kr(hw, autoneg);
+		}
+	} else if ((hw->subsystem_id & TXGBE_DEV_MASK) == TXGBE_ID_XAUI ||
+		   (hw->subsystem_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_XAUI ||
+		   (hw->subsystem_id & TXGBE_DEV_MASK) == TXGBE_ID_SGMII ||
+		   (hw->subsystem_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_SGMII) {
+		if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+			txgbe_set_link_to_kx4(hw, false);
+		} else {
+			txgbe_set_link_to_kx(hw, speed, false);
+			if (adapter->an37)
+				txgbe_set_sgmii_an37_ability(hw);
+		}
+	} else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) {
+		txgbe_set_link_to_sfi(hw, speed);
+		if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+			txgbe_set_sgmii_an37_ability(hw);
+	}
+
+out:
+	return status;
+}
+
+int txgbe_reset_misc(struct txgbe_hw *hw)
+{
+	int i;
+	u32 value;
+
+	txgbe_init_i2c(hw);
+
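+	/* the cached KX/KX4 link status is only valid while the XPCS
+	 * stays in X mode; otherwise force a fresh link setup
+	 */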
+	value = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+	if ((value & 0x3) != TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X)
+		hw->link_status = TXGBE_LINK_STATUS_NONE;
+
+	/* receive packets larger than 2048 bytes */
+	wr32m(hw, TXGBE_MAC_RX_CFG,
+	      TXGBE_MAC_RX_CFG_JE, TXGBE_MAC_RX_CFG_JE);
+
+	/* clear counters on read */
+	wr32m(hw, TXGBE_MMC_CONTROL,
+	      TXGBE_MMC_CONTROL_RSTONRD, TXGBE_MMC_CONTROL_RSTONRD);
+
+	wr32m(hw, TXGBE_MAC_RX_FLOW_CTRL,
+	      TXGBE_MAC_RX_FLOW_CTRL_RFE, TXGBE_MAC_RX_FLOW_CTRL_RFE);
+
+	wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
+
+	wr32m(hw, TXGBE_MIS_RST_ST,
+	      TXGBE_MIS_RST_ST_RST_INIT, 0x1E00);
+
+	/* errata 4: initialize mng flex tbl and wakeup flex tbl */
+	wr32(hw, TXGBE_PSR_MNG_FLEX_SEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, TXGBE_PSR_MNG_FLEX_DW_L(i), 0);
+		wr32(hw, TXGBE_PSR_MNG_FLEX_DW_H(i), 0);
+		wr32(hw, TXGBE_PSR_MNG_FLEX_MSK(i), 0);
+	}
+	wr32(hw, TXGBE_PSR_LAN_FLEX_SEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, TXGBE_PSR_LAN_FLEX_DW_L(i), 0);
+		wr32(hw, TXGBE_PSR_LAN_FLEX_DW_H(i), 0);
+		wr32(hw, TXGBE_PSR_LAN_FLEX_MSK(i), 0);
+	}
+
+	/* set pause frame dst mac addr */
+	wr32(hw, TXGBE_RDB_PFCMACDAL, 0xC2000001);
+	wr32(hw, TXGBE_RDB_PFCMACDAH, 0x0180);
+
+	txgbe_init_thermal_sensor_thresh(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_reset_hw - Perform hardware reset
+ *  @hw: pointer to hardware structure
+ *
+ *  Resets the hardware by resetting the transmit and receive units, masks
+ *  and clears all interrupts, perform a PHY reset, and perform a link (MAC)
+ *  reset.
+ **/
+s32 txgbe_reset_hw(struct txgbe_hw *hw)
+{
+	s32 status;
+	u32 reset = 0;
+	u32 i;
+
+	u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl, sr_an_mmd_adv_reg2;
+	u32 vr_xs_or_pcs_mmd_digi_ctl1, curr_vr_xs_or_pcs_mmd_digi_ctl1;
+	u32 curr_sr_pcs_ctl, curr_sr_pma_mmd_ctl1;
+	u32 curr_sr_an_mmd_ctl, curr_sr_an_mmd_adv_reg2;
+
+	u32 reset_status = 0;
+	u32 rst_delay = 0;
+
+	/* Call adapter stop to disable tx/rx and clear interrupts */
+	status = TCALL(hw, mac.ops.stop_adapter);
+	if (status != 0)
+		goto reset_hw_out;
+
+	/* Identify PHY and related function pointers */
+	status = TCALL(hw, phy.ops.init);
+
+	if (status == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		goto reset_hw_out;
+
+	/* Remember internal phy regs from before we reset */
+	curr_sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+	curr_sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+	curr_sr_an_mmd_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_CTL);
+	curr_sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw,
+						  TXGBE_SR_AN_MMD_ADV_REG2);
+	curr_vr_xs_or_pcs_mmd_digi_ctl1 =
+		txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1);
+
+	/* Issue global reset to the MAC.  Needs to be SW reset if link is up.
+	 * If link reset is used when link is up, it might reset the PHY when
+	 * mng is using it.  If link is down or the flag to force full link
+	 * reset is set, then perform link reset.
+	 */
+	if (hw->force_full_reset) {
+		rst_delay = (rd32(hw, TXGBE_MIS_RST_ST) &
+			     TXGBE_MIS_RST_ST_RST_INIT) >>
+			     TXGBE_MIS_RST_ST_RST_INI_SHIFT;
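+		/* rst_delay comes from the MIS_RST_ST init field and is
+		 * used below as a poll budget in 100 ms steps
+		 */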
+		if (hw->reset_type == TXGBE_SW_RESET) {
+			for (i = 0; i < rst_delay + 20; i++) {
+				reset_status =
+					rd32(hw, TXGBE_MIS_RST_ST);
+				if (!(reset_status &
+				    TXGBE_MIS_RST_ST_DEV_RST_ST_MASK))
+					break;
+				msleep(100);
 			}
 
 			if (reset_status & TXGBE_MIS_RST_ST_DEV_RST_ST_MASK) {
@@ -1444,6 +2886,38 @@ s32 txgbe_reset_hw(struct txgbe_hw *hw)
 	if (status != 0)
 		goto reset_hw_out;
 
+	/* Store the original values if they have not been stored
+	 * off yet.  Otherwise restore the stored original values
+	 * since the reset operation sets back to defaults.
+	 */
+	sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+	sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+	sr_an_mmd_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_CTL);
+	sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG2);
+	vr_xs_or_pcs_mmd_digi_ctl1 =
+		txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1);
+
+	if (!hw->mac.orig_link_settings_stored) {
+		hw->mac.orig_sr_pcs_ctl2 = sr_pcs_ctl;
+		hw->mac.orig_sr_pma_mmd_ctl1 = sr_pma_mmd_ctl1;
+		hw->mac.orig_sr_an_mmd_ctl = sr_an_mmd_ctl;
+		hw->mac.orig_sr_an_mmd_adv_reg2 = sr_an_mmd_adv_reg2;
+		hw->mac.orig_vr_xs_or_pcs_mmd_digi_ctl1 =
+						vr_xs_or_pcs_mmd_digi_ctl1;
+		hw->mac.orig_link_settings_stored = true;
+	} else {
+		hw->mac.orig_sr_pcs_ctl2 = curr_sr_pcs_ctl;
+		hw->mac.orig_sr_pma_mmd_ctl1 = curr_sr_pma_mmd_ctl1;
+		hw->mac.orig_sr_an_mmd_ctl = curr_sr_an_mmd_ctl;
+		hw->mac.orig_sr_an_mmd_adv_reg2 =
+					curr_sr_an_mmd_adv_reg2;
+		hw->mac.orig_vr_xs_or_pcs_mmd_digi_ctl1 =
+					curr_vr_xs_or_pcs_mmd_digi_ctl1;
+	}
+
+	/* make sure phy power is up */
+	msleep(100);
+
 	/* Store the permanent mac address */
 	TCALL(hw, mac.ops.get_mac_addr, hw->mac.perm_addr);
 
@@ -1492,6 +2966,9 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
 	int ret_val = 0;
 	u32 i;
 
+	/* Set the media type */
+	hw->phy.media_type = TCALL(hw, mac.ops.get_media_type);
+
 	/* Clear the rate limiters */
 	for (i = 0; i < hw->mac.max_tx_queues; i++) {
 		wr32(hw, TXGBE_TDM_RP_IDX, i);
@@ -1508,6 +2985,38 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
 	return ret_val;
 }
 
+/**
+ *  txgbe_identify_phy - Get physical layer module
+ *  @hw: pointer to hardware structure
+ *
+ *  Determines the physical layer module found on the current adapter.
+ *  If PHY already detected, maintains current PHY type in hw struct,
+ *  otherwise executes the PHY detection routine.
+ **/
+s32 txgbe_identify_phy(struct txgbe_hw *hw)
+{
+	s32 status = TXGBE_ERR_PHY_ADDR_INVALID;
+	enum txgbe_media_type media_type;
+
+	if (!hw->phy.phy_semaphore_mask)
+		hw->phy.phy_semaphore_mask = TXGBE_MNG_SWFW_SYNC_SW_PHY;
+
+	media_type = TCALL(hw, mac.ops.get_media_type);
+	if (media_type == txgbe_media_type_fiber) {
+		status = txgbe_identify_module(hw);
+	} else {
+		hw->phy.type = txgbe_phy_none;
+		status = 0;
+	}
+
+	/* Return error if SFP module has been detected but is not supported */
+	if (hw->phy.type == txgbe_phy_sfp_unsupported)
+		return TXGBE_ERR_SFP_NOT_SUPPORTED;
+
+	return status;
+}
+
 /**
  *  txgbe_init_eeprom_params - Initialize EEPROM params
  *  @hw: pointer to hardware structure
@@ -1691,6 +3200,76 @@ s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
 	return status;
 }
 
+s32 txgbe_close_notify(struct txgbe_hw *hw)
+{
+	u32 tmp;
+	s32 status;
+	struct txgbe_hic_write_shadow_ram buffer;
+
+	buffer.hdr.req.cmd = FW_DW_CLOSE_NOTIFY;
+	buffer.hdr.req.buf_lenh = 0;
+	buffer.hdr.req.buf_lenl = 0;
+	buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+	/* notify commands carry no payload */
+	buffer.length = 0;
+	buffer.address = 0;
+
+	status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+					      sizeof(buffer),
+					      TXGBE_HI_COMMAND_TIMEOUT, false);
+	if (status)
+		return status;
+
+	if (txgbe_check_mng_access(hw)) {
+		tmp = rd32(hw, TXGBE_MNG_SW_SM);
+		if (tmp == TXGBE_CHECKSUM_CAP_ST_PASS)
+			status = 0;
+		else
+			status = TXGBE_ERR_EEPROM_CHECKSUM;
+	} else {
+		status = TXGBE_ERR_MNG_ACCESS_FAILED;
+	}
+
+	return status;
+}
+
+s32 txgbe_open_notify(struct txgbe_hw *hw)
+{
+	u32 tmp;
+	s32 status;
+	struct txgbe_hic_write_shadow_ram buffer;
+
+	buffer.hdr.req.cmd = FW_DW_OPEN_NOTIFY;
+	buffer.hdr.req.buf_lenh = 0;
+	buffer.hdr.req.buf_lenl = 0;
+	buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+	/* notify commands carry no payload */
+	buffer.length = 0;
+	buffer.address = 0;
+
+	status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+					      sizeof(buffer),
+					      TXGBE_HI_COMMAND_TIMEOUT, false);
+	if (status)
+		return status;
+
+	if (txgbe_check_mng_access(hw)) {
+		tmp = rd32(hw, TXGBE_MNG_SW_SM);
+		if (tmp == TXGBE_CHECKSUM_CAP_ST_PASS)
+			status = 0;
+		else
+			status = TXGBE_ERR_EEPROM_CHECKSUM;
+	} else {
+		status = TXGBE_ERR_MNG_ACCESS_FAILED;
+	}
+
+	return status;
+}
+
 /**
  *  txgbe_calc_eeprom_checksum - Calculates and returns the checksum
  *  @hw: pointer to hardware structure
@@ -1793,3 +3372,55 @@ s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
 	return status;
 }
 
+/**
+ *  txgbe_check_mac_link - Determine link and speed status
+ *  @hw: pointer to hardware structure
+ *  @speed: pointer to link speed
+ *  @link_up: true when link is up
+ *  @link_up_wait_to_complete: bool used to wait for link up or not
+ *
+ *  Reads the links register to determine if link is up and the current speed
+ **/
+s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed,
+			 bool *link_up, bool link_up_wait_to_complete)
+{
+	u32 links_reg = 0;
+	u32 i;
+
+	if (link_up_wait_to_complete) {
+		for (i = 0; i < TXGBE_LINK_UP_TIME; i++) {
+			links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
+			if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
+				*link_up = true;
+				break;
+			}
+			*link_up = false;
+			msleep(100);
+		}
+	} else {
+		links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
+		if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP)
+			*link_up = true;
+		else
+			*link_up = false;
+	}
+
+	if (*link_up) {
+		if ((links_reg & TXGBE_CFG_PORT_ST_LINK_10G) ==
+				TXGBE_CFG_PORT_ST_LINK_10G)
+			*speed = TXGBE_LINK_SPEED_10GB_FULL;
+		else if ((links_reg & TXGBE_CFG_PORT_ST_LINK_1G) ==
+				TXGBE_CFG_PORT_ST_LINK_1G)
+			*speed = TXGBE_LINK_SPEED_1GB_FULL;
+		else if ((links_reg & TXGBE_CFG_PORT_ST_LINK_100M) ==
+				TXGBE_CFG_PORT_ST_LINK_100M)
+			*speed = TXGBE_LINK_SPEED_100_FULL;
+		else
+			*speed = TXGBE_LINK_SPEED_10_FULL;
+	} else {
+		*speed = TXGBE_LINK_SPEED_UNKNOWN;
+	}
+
+	return 0;
+}
+
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index 700d344ba6c1..2273afaea4e2 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -14,6 +14,9 @@ void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status);
 void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw);
 s32 txgbe_stop_adapter(struct txgbe_hw *hw);
 
+s32 txgbe_led_on(struct txgbe_hw *hw, u32 index);
+s32 txgbe_led_off(struct txgbe_hw *hw, u32 index);
+
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
 		  u32 enable_addr);
 s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index);
@@ -44,10 +47,27 @@ bool txgbe_check_mng_access(struct txgbe_hw *hw);
 
 s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw);
 void txgbe_disable_rx(struct txgbe_hw *hw);
+s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
+					  u32 speed,
+					  bool autoneg_wait_to_complete);
 int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit);
 
+s32 txgbe_get_link_capabilities(struct txgbe_hw *hw,
+				u32 *speed, bool *autoneg);
+enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw);
+void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw, u32 speed);
+s32 txgbe_setup_mac_link(struct txgbe_hw *hw, u32 speed,
+			 bool autoneg_wait_to_complete);
+s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed,
+			 bool *link_up, bool link_up_wait_to_complete);
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw);
 int txgbe_reset_misc(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
+s32 txgbe_identify_phy(struct txgbe_hw *hw);
+s32 txgbe_init_phy_ops(struct txgbe_hw *hw);
 s32 txgbe_init_ops(struct txgbe_hw *hw);
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
@@ -58,5 +78,21 @@ s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
 				u16 offset, u16 words, u16 *data);
 s32 txgbe_read_ee_hostif_data(struct txgbe_hw *hw, u16 offset, u16 *data);
 s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset, u16 *data);
+u32 txgbe_rd32_epcs(struct txgbe_hw *hw, u32 addr);
+void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data);
+void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data);
+
+s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw,  u32 region,
+			       const u8 *data, u32 size);
+
+s32 txgbe_close_notify(struct txgbe_hw *hw);
+s32 txgbe_open_notify(struct txgbe_hw *hw);
+
+s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg);
+s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg);
+
+s32 txgbe_set_link_to_kx(struct txgbe_hw *hw,
+			 u32 speed,
+			 bool autoneg);
 
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 9336eadfc690..e49a31cefb67 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -13,6 +13,7 @@
 
 #include "txgbe.h"
 #include "txgbe_hw.h"
+#include "txgbe_phy.h"
 
 char txgbe_driver_name[32] = TXGBE_NAME;
 static const char txgbe_driver_string[] =
@@ -45,6 +46,7 @@ MODULE_LICENSE("GPL");
 
 static struct workqueue_struct *txgbe_wq;
 
+static bool txgbe_is_sfp(struct txgbe_hw *hw);
 static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev);
 
 static void txgbe_check_minimum_link(struct txgbe_adapter *adapter,
@@ -124,6 +126,20 @@ static void txgbe_remove_adapter(struct txgbe_hw *hw)
 		txgbe_service_event_schedule(adapter);
 }
 
+static void txgbe_release_hw_control(struct txgbe_adapter *adapter)
+{
+	/* Let firmware take over control of hw */
+	wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL,
+	      TXGBE_CFG_PORT_CTL_DRV_LOAD, 0);
+}
+
+static void txgbe_get_hw_control(struct txgbe_adapter *adapter)
+{
+	/* Let firmware know the driver has taken over */
+	wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL,
+	      TXGBE_CFG_PORT_CTL_DRV_LOAD, TXGBE_CFG_PORT_CTL_DRV_LOAD);
+}
+
 static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
@@ -175,6 +191,93 @@ static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
 	txgbe_sync_mac_table(adapter);
 }
 
+static bool txgbe_is_sfp(struct txgbe_hw *hw)
+{
+	switch (TCALL(hw, mac.ops.get_media_type)) {
+	case txgbe_media_type_fiber:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static bool txgbe_is_backplane(struct txgbe_hw *hw)
+{
+	switch (TCALL(hw, mac.ops.get_media_type)) {
+	case txgbe_media_type_backplane:
+		return true;
+	default:
+		return false;
+	}
+}
+
+/**
+ * txgbe_sfp_link_config - set up SFP+ link
+ * @adapter: pointer to private adapter struct
+ **/
+static void txgbe_sfp_link_config(struct txgbe_adapter *adapter)
+{
+	/* We are assuming the worst case scenario here, and that
+	 * is that an SFP was inserted/removed after the reset
+	 * but before SFP detection was enabled.  As such the best
+	 * solution is to just start searching as soon as we start.
+	 */
+
+	adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+	adapter->sfp_poll_time = 0;
+}
+
+static void txgbe_up_complete(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 links_reg;
+
+	txgbe_get_hw_control(adapter);
+
+	/* enable the optics for SFP+ fiber */
+	TCALL(hw, mac.ops.enable_tx_laser);
+
+	smp_mb__before_atomic();
+	clear_bit(__TXGBE_DOWN, &adapter->state);
+
+	if (txgbe_is_sfp(hw)) {
+		txgbe_sfp_link_config(adapter);
+	} else if (txgbe_is_backplane(hw)) {
+		adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+		txgbe_service_event_schedule(adapter);
+	}
+
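+	/* sync the MAC Tx speed field with the rate the link trained at */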
+	links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
+	if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
+		if (links_reg & TXGBE_CFG_PORT_ST_LINK_10G) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
+			     TXGBE_MAC_TX_CFG_SPEED_10G);
+		} else if (links_reg & (TXGBE_CFG_PORT_ST_LINK_1G | TXGBE_CFG_PORT_ST_LINK_100M)) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
+			     TXGBE_MAC_TX_CFG_SPEED_1G);
+		}
+	}
+
+	if (hw->bus.lan_id == 0) {
+		wr32m(hw, TXGBE_MIS_PRB_CTL,
+		      TXGBE_MIS_PRB_CTL_LAN0_UP, TXGBE_MIS_PRB_CTL_LAN0_UP);
+	} else if (hw->bus.lan_id == 1) {
+		wr32m(hw, TXGBE_MIS_PRB_CTL,
+		      TXGBE_MIS_PRB_CTL_LAN1_UP, TXGBE_MIS_PRB_CTL_LAN1_UP);
+	} else {
+		txgbe_err(probe, "%s: invalid bus lan id %d\n",
+			  __func__, hw->bus.lan_id);
+	}
+
+	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
+	wr32m(hw, TXGBE_CFG_PORT_CTL,
+	      TXGBE_CFG_PORT_CTL_PFRSTD, TXGBE_CFG_PORT_CTL_PFRSTD);
+}
+
 void txgbe_reset(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
@@ -184,10 +287,19 @@ void txgbe_reset(struct txgbe_adapter *adapter)
 
 	if (TXGBE_REMOVED(hw->hw_addr))
 		return;
+	/* lock SFP init bit to prevent race conditions with the watchdog */
+	while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+		usleep_range(1000, 2000);
+
+	/* clear all SFP and link config related flags while holding SFP_INIT */
+	adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
+	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
 
 	err = TCALL(hw, mac.ops.init_hw);
 	switch (err) {
 	case 0:
+	case TXGBE_ERR_SFP_NOT_PRESENT:
+	case TXGBE_ERR_SFP_NOT_SUPPORTED:
 		break;
 	case TXGBE_ERR_MASTER_REQUESTS_PENDING:
 		txgbe_dev_err("master disable timed out\n");
@@ -196,6 +308,7 @@ void txgbe_reset(struct txgbe_adapter *adapter)
 		txgbe_dev_err("Hardware Error: %d\n", err);
 	}
 
+	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
 	/* do not flush user set addresses */
 	memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
 	txgbe_flush_sw_mac_table(adapter);
@@ -223,6 +336,8 @@ void txgbe_disable_device(struct txgbe_adapter *adapter)
 	netif_carrier_off(netdev);
 	netif_tx_disable(netdev);
 
+	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+
 	del_timer_sync(&adapter->service_timer);
 
 	if (hw->bus.lan_id == 0)
@@ -251,8 +366,14 @@ void txgbe_disable_device(struct txgbe_adapter *adapter)
 
 void txgbe_down(struct txgbe_adapter *adapter)
 {
+	struct txgbe_hw *hw = &adapter->hw;
+
 	txgbe_disable_device(adapter);
 	txgbe_reset(adapter);
+
+	if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) != TXGBE_NCSI_SUP)
+		/* power down the optics for SFP+ fiber */
+		TCALL(hw, mac.ops.disable_tx_laser);
 }
 
 /**
@@ -330,6 +451,67 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 	return err;
 }
 
+/**
+ * txgbe_open - Called when a network interface is made active
+ * @netdev: network interface device structure
+ *
+ * Returns 0 on success, negative value on failure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP).  At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the watchdog timer is started,
+ * and the stack is notified that the interface is ready.
+ **/
+int txgbe_open(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	netif_carrier_off(netdev);
+
+	txgbe_up_complete(adapter);
+
+	return 0;
+}
+
+/**
+ * txgbe_close_suspend - actions necessary to both suspend and close flows
+ * @adapter: the private adapter struct
+ *
+ * This function should contain the necessary work common to both suspending
+ * and closing of the device.
+ */
+static void txgbe_close_suspend(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	txgbe_disable_device(adapter);
+	if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) != TXGBE_NCSI_SUP)
+		TCALL(hw, mac.ops.disable_tx_laser);
+}
+
+/**
+ * txgbe_close - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * Returns 0, this is not allowed to fail
+ *
+ * The close entry point is called when an interface is de-activated
+ * by the OS.  The hardware is still under the driver's control, but
+ * needs to be disabled.  A global MAC reset is issued to stop the
+ * hardware, and all transmit and receive resources are freed.
+ **/
+int txgbe_close(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	txgbe_down(adapter);
+
+	txgbe_release_hw_control(adapter);
+
+	return 0;
+}
+
 static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 {
 	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
@@ -337,6 +519,13 @@ static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 	netif_device_detach(netdev);
 
+	rtnl_lock();
+	if (netif_running(netdev))
+		txgbe_close_suspend(adapter);
+	rtnl_unlock();
+
+	txgbe_release_hw_control(adapter);
+
 	if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
 		pci_disable_device(pdev);
 
@@ -355,6 +544,247 @@ static void txgbe_shutdown(struct pci_dev *pdev)
 	}
 }
 
+/**
+ * txgbe_watchdog_update_link - update the link status
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 link_speed = adapter->link_speed;
+	bool link_up = adapter->link_up;
+	u32 reg;
+	u32 i;
+
+	if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE))
+		return;
+
+	link_speed = TXGBE_LINK_SPEED_10GB_FULL;
+	link_up = true;
+	TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+
+	if (link_up || time_after(jiffies, (adapter->link_check_timeout +
+		TXGBE_TRY_LINK_TIMEOUT))) {
+		adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+	}
+
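+	/* re-read the link a few times to let a still-settling state stabilize */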
+	for (i = 0; i < 3; i++) {
+		TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+		msleep(1);
+	}
+
+	adapter->link_up = link_up;
+	adapter->link_speed = link_speed;
+
+	if (link_up) {
+		if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
+			     TXGBE_MAC_TX_CFG_SPEED_10G);
+		} else if (link_speed & (TXGBE_LINK_SPEED_1GB_FULL |
+			   TXGBE_LINK_SPEED_100_FULL | TXGBE_LINK_SPEED_10_FULL)) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
+			     TXGBE_MAC_TX_CFG_SPEED_1G);
+		}
+
+		/* reconfigure MAC Rx */
+		reg = rd32(hw, TXGBE_MAC_RX_CFG);
+		wr32(hw, TXGBE_MAC_RX_CFG, reg);
+		wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
+		reg = rd32(hw, TXGBE_MAC_WDG_TIMEOUT);
+		wr32(hw, TXGBE_MAC_WDG_TIMEOUT, reg);
+	}
+}
+
+/**
+ * txgbe_watchdog_link_is_up - update netif_carrier status and
+ *                             print link up message
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_link_is_up(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 link_speed = adapter->link_speed;
+	bool flow_rx, flow_tx;
+
+	/* only continue if link was previously down */
+	if (netif_carrier_ok(netdev))
+		return;
+
+	/* flow_rx, flow_tx report link flow control status */
+	flow_rx = (rd32(hw, TXGBE_MAC_RX_FLOW_CTRL) & 0x101) == 0x1;
+	flow_tx = !!(TXGBE_RDB_RFCC_RFCE_802_3X &
+		     rd32(hw, TXGBE_RDB_RFCC));
+
+	txgbe_info(drv, "NIC Link is Up %s, Flow Control: %s\n",
+		   (link_speed == TXGBE_LINK_SPEED_10GB_FULL ?
+		    "10 Gbps" :
+		    (link_speed == TXGBE_LINK_SPEED_1GB_FULL ?
+		     "1 Gbps" :
+		     (link_speed == TXGBE_LINK_SPEED_100_FULL ?
+		      "100 Mbps" :
+		      (link_speed == TXGBE_LINK_SPEED_10_FULL ?
+		       "10 Mbps" :
+		       "unknown speed")))),
+		  ((flow_rx && flow_tx) ? "RX/TX" :
+		   (flow_rx ? "RX" :
+		    (flow_tx ? "TX" : "None"))));
+
+	netif_carrier_on(netdev);
+}
+
+/**
+ * txgbe_watchdog_link_is_down - update netif_carrier status and
+ *                               print link down message
+ * @adapter: pointer to the adapter structure
+ **/
+static void txgbe_watchdog_link_is_down(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+
+	adapter->link_up = false;
+	adapter->link_speed = 0;
+
+	/* only continue if link was up previously */
+	if (!netif_carrier_ok(netdev))
+		return;
+
+	txgbe_info(drv, "NIC Link is Down\n");
+	netif_carrier_off(netdev);
+}
+
+/**
+ * txgbe_watchdog_subtask - check and bring link up
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_subtask(struct txgbe_adapter *adapter)
+{
+	/* if interface is down do nothing */
+	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+	    test_bit(__TXGBE_REMOVING, &adapter->state) ||
+	    test_bit(__TXGBE_RESETTING, &adapter->state))
+		return;
+
+	txgbe_watchdog_update_link(adapter);
+
+	if (adapter->link_up)
+		txgbe_watchdog_link_is_up(adapter);
+	else
+		txgbe_watchdog_link_is_down(adapter);
+}
+
+/**
+ * txgbe_sfp_detection_subtask - poll for SFP+ cable
+ * @adapter: the txgbe adapter structure
+ **/
+static void txgbe_sfp_detection_subtask(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_mac_info *mac = &hw->mac;
+	s32 err;
+
+	/* not searching for SFP so there is nothing to do here */
+	if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
+		return;
+
+	if (adapter->sfp_poll_time &&
+	    time_after(adapter->sfp_poll_time, jiffies))
+		return; /* If not yet time to poll for SFP */
+
+	/* someone else is in init, wait until next service event */
+	if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+		return;
+
+	adapter->sfp_poll_time = jiffies + TXGBE_SFP_POLL_JIFFIES - 1;
+
+	err = TCALL(hw, phy.ops.identify_sfp);
+	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		goto sfp_out;
+
+	if (err == TXGBE_ERR_SFP_NOT_PRESENT) {
+		/* If no cable is present, then we need to reset
+		 * the next time we find a good cable.
+		 */
+		adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+	}
+
+	/* exit on error */
+	if (err)
+		goto sfp_out;
+
+	/* exit if reset not needed */
+	if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
+		goto sfp_out;
+
+	adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
+
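+	/* choose the link setup helpers based on whether the module is dual speed */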
+	if (hw->phy.multispeed_fiber) {
+		/* Set up dual speed SFP+ support */
+		mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
+		mac->ops.setup_mac_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+	} else {
+		mac->ops.setup_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+		hw->phy.autoneg_advertised = 0;
+	}
+
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+	txgbe_info(probe, "detected SFP+: %d\n", hw->phy.sfp_type);
+
+sfp_out:
+	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+
+	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED && adapter->netdev_registered)
+		txgbe_dev_err("failed to initialize because an unsupported SFP+ module type was detected.\n");
+}
+
+/**
+ * txgbe_sfp_link_config_subtask - set up link SFP after module install
+ * @adapter: the txgbe adapter structure
+ **/
+static void txgbe_sfp_link_config_subtask(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 speed;
+	bool autoneg = false;
+	u8 device_type = hw->subsystem_id & 0xF0;
+
+	if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_CONFIG))
+		return;
+
+	/* someone else is in init, wait until next service event */
+	if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+		return;
+
+	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+
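+	/* SGMII MACs run at a fixed 1G; otherwise use the advertised speeds,
+	 * falling back to the highest supported speed when autonegotiation
+	 * is not available
+	 */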
+	if (device_type == TXGBE_ID_MAC_SGMII) {
+		speed = TXGBE_LINK_SPEED_1GB_FULL;
+	} else {
+		speed = hw->phy.autoneg_advertised;
+		if (!speed && hw->mac.ops.get_link_capabilities) {
+			TCALL(hw, mac.ops.get_link_capabilities, &speed, &autoneg);
+			/* setup the highest link when no autoneg */
+			if (!autoneg) {
+				if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+					speed = TXGBE_LINK_SPEED_10GB_FULL;
+			}
+		}
+	}
+
+	TCALL(hw, mac.ops.setup_link, speed, false);
+
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+	adapter->link_check_timeout = jiffies;
+	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+}
+
 /**
  * txgbe_service_timer - Timer Call-back
  * @t: pointer to the timer_list embedded in the adapter
@@ -363,8 +793,17 @@ static void txgbe_service_timer(struct timer_list *t)
 {
 	struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer);
 	unsigned long next_event_offset;
+	struct txgbe_hw *hw = &adapter->hw;
 
-	next_event_offset = HZ * 2;
+	/* poll faster when waiting for link */
+	if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
+		if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
+			next_event_offset = HZ;
+		else
+			next_event_offset = HZ / 10;
+	} else {
+		next_event_offset = HZ * 2;
+	}
 
 	/* Reset the timer */
 	mod_timer(&adapter->service_timer, next_event_offset + jiffies);
@@ -391,6 +830,10 @@ static void txgbe_service_task(struct work_struct *work)
 		return;
 	}
 
+	txgbe_sfp_detection_subtask(adapter);
+	txgbe_sfp_link_config_subtask(adapter);
+	txgbe_watchdog_subtask(adapter);
+
 	txgbe_service_event_complete(adapter);
 }
 
@@ -440,6 +883,16 @@ static int txgbe_del_sanmac_netdev(struct net_device *dev)
 	return err;
 }
 
+static const struct net_device_ops txgbe_netdev_ops = {
+	.ndo_open               = txgbe_open,
+	.ndo_stop               = txgbe_close,
+};
+
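+/* only open/stop are wired up at this point in the series; later patches
+ * add the remaining hooks
+ */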
+void txgbe_assign_netdev_ops(struct net_device *dev)
+{
+	dev->netdev_ops = &txgbe_netdev_ops;
+}
+
 /**
  * txgbe_probe - Device Initialization Routine
  * @pdev: PCI device information struct
@@ -457,6 +910,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 	struct net_device *netdev;
 	struct txgbe_adapter *adapter = NULL;
 	struct txgbe_hw *hw = NULL;
+	static int cards_found;
 	int err, pci_using_dac, expected_gts;
 	u16 offset = 0;
 	u16 eeprom_verh = 0, eeprom_verl = 0;
@@ -540,6 +994,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 		goto err_ioremap;
 	}
 
+	txgbe_assign_netdev_ops(netdev);
 	netdev->watchdog_timeo = 5 * HZ;
 	strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
 
@@ -559,7 +1014,13 @@ static int txgbe_probe(struct pci_dev *pdev,
 		goto err_sw_init;
 
 	err = TCALL(hw, mac.ops.reset_hw);
-	if (err) {
+	if (err == TXGBE_ERR_SFP_NOT_PRESENT) {
+		err = 0;
+	} else if (err == TXGBE_ERR_SFP_NOT_SUPPORTED) {
+		txgbe_dev_err("failed to load because an unsupported SFP+ module type was detected.\n");
+		txgbe_dev_err("Reload the driver after installing a supported module.\n");
+		goto err_sw_init;
+	} else if (err) {
 		txgbe_dev_err("HW Init failed: %d\n", err);
 		goto err_sw_init;
 	}
@@ -647,6 +1108,10 @@ static int txgbe_probe(struct pci_dev *pdev,
 	pci_set_drvdata(pdev, adapter);
 	adapter->netdev_registered = true;
 
+	if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) != TXGBE_NCSI_SUP)
+		/* power down the optics for SFP+ fiber */
+		TCALL(hw, mac.ops.disable_tx_laser);
+
 	/* carrier off reporting is important to ethtool even BEFORE open */
 	netif_carrier_off(netdev);
 
@@ -670,8 +1135,14 @@ static int txgbe_probe(struct pci_dev *pdev,
 	/* First try to read PBA as a string */
 	err = txgbe_read_pba_string(hw, part_str, TXGBE_PBANUM_LENGTH);
 	if (err)
-
 		strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH);
+	if (txgbe_is_sfp(hw) && hw->phy.sfp_type != txgbe_sfp_type_not_present)
+		txgbe_info(probe, "PHY: %d, SFP+: %d, PBA No: %s\n",
+			   hw->phy.type, hw->phy.sfp_type, part_str);
+	else
+		txgbe_info(probe, "PHY: %d, PBA No: %s\n",
+			   hw->phy.type, part_str);
+
 	txgbe_dev_info("%02x:%02x:%02x:%02x:%02x:%02x\n",
 		       netdev->dev_addr[0], netdev->dev_addr[1],
 		       netdev->dev_addr[2], netdev->dev_addr[3],
@@ -683,7 +1154,20 @@ static int txgbe_probe(struct pci_dev *pdev,
 	/* add san mac addr to netdev */
 	txgbe_add_sanmac_netdev(netdev);
 
+	txgbe_info(probe, "WangXun(R) 10 Gigabit Network Connection\n");
+	cards_found++;
+
+	/* setup link for SFP devices with MNG FW, else wait for TXGBE_UP */
+	if (txgbe_mng_present(hw) && txgbe_is_sfp(hw) &&
+	    ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
+		TCALL(hw, mac.ops.setup_link,
+		      TXGBE_LINK_SPEED_10GB_FULL | TXGBE_LINK_SPEED_1GB_FULL,
+		      true);
+
+	return 0;
+
 err_register:
+	txgbe_release_hw_control(adapter);
 err_sw_init:
 	kfree(adapter->mac_table);
 	iounmap(adapter->io_addr);
@@ -731,6 +1215,8 @@ static void txgbe_remove(struct pci_dev *pdev)
 		adapter->netdev_registered = false;
 	}
 
+	txgbe_release_hw_control(adapter);
+
 	iounmap(adapter->io_addr);
 	pci_release_selected_regions(pdev,
 				     pci_select_bars(pdev, IORESOURCE_MEM));
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
new file mode 100644
index 000000000000..bd4c5e117358
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
@@ -0,0 +1,401 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe_phy.h"
+
+/**
+ * txgbe_check_reset_blocked - check status of MNG FW veto bit
+ * @hw: pointer to the hardware structure
+ *
+ * This function checks the MNG_VETO bit in TXGBE_MIS_ST to see if there
+ * are any constraints on link from manageability.  For MACs that don't
+ * have this bit just return false since the link cannot be blocked
+ * via this method.
+ **/
+s32 txgbe_check_reset_blocked(struct txgbe_hw *hw)
+{
+	u32 mmngc;
+
+	mmngc = rd32(hw, TXGBE_MIS_ST);
+	if (mmngc & TXGBE_MIS_ST_MNG_VETO) {
+		ERROR_REPORT1(TXGBE_ERROR_SOFTWARE,
+			      "MNG_VETO bit detected.\n");
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ *  txgbe_identify_module - Identifies module type
+ *  @hw: pointer to hardware structure
+ *
+ *  Determines HW type and calls appropriate function.
+ **/
+s32 txgbe_identify_module(struct txgbe_hw *hw)
+{
+	s32 status = TXGBE_ERR_SFP_NOT_PRESENT;
+
+	switch (TCALL(hw, mac.ops.get_media_type)) {
+	case txgbe_media_type_fiber:
+		status = txgbe_identify_sfp_module(hw);
+		break;
+
+	default:
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		status = TXGBE_ERR_SFP_NOT_PRESENT;
+		break;
+	}
+
+	return status;
+}
+
+/**
+ *  txgbe_identify_sfp_module - Identifies SFP modules
+ *  @hw: pointer to hardware structure
+ *
+ *  Searches for and identifies the SFP module and assigns appropriate PHY type.
+ **/
+s32 txgbe_identify_sfp_module(struct txgbe_hw *hw)
+{
+	s32 status = TXGBE_ERR_PHY_ADDR_INVALID;
+	u32 vendor_oui = 0;
+	u8 identifier = 0;
+	u8 comp_codes_1g = 0;
+	u8 comp_codes_10g = 0;
+	u8 oui_bytes[3] = {0, 0, 0};
+	u8 cable_tech = 0;
+	u8 cable_spec = 0;
+
+	if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber) {
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		status = TXGBE_ERR_SFP_NOT_PRESENT;
+		goto out;
+	}
+
+	/* LAN ID is needed for I2C access */
+	txgbe_init_i2c(hw);
+	status = TCALL(hw, phy.ops.read_i2c_eeprom,
+		       TXGBE_SFF_IDENTIFIER,
+		       &identifier);
+
+	if (status != 0)
+		goto err_read_i2c_eeprom;
+
+	if (identifier != TXGBE_SFF_IDENTIFIER_SFP) {
+		hw->phy.type = txgbe_phy_sfp_unsupported;
+		status = TXGBE_ERR_SFP_NOT_SUPPORTED;
+	} else {
+		status = TCALL(hw, phy.ops.read_i2c_eeprom,
+			       TXGBE_SFF_1GBE_COMP_CODES,
+			       &comp_codes_1g);
+
+		if (status != 0)
+			goto err_read_i2c_eeprom;
+
+		status = TCALL(hw, phy.ops.read_i2c_eeprom,
+			       TXGBE_SFF_10GBE_COMP_CODES,
+			       &comp_codes_10g);
+
+		if (status != 0)
+			goto err_read_i2c_eeprom;
+		status = TCALL(hw, phy.ops.read_i2c_eeprom,
+			       TXGBE_SFF_CABLE_TECHNOLOGY,
+			       &cable_tech);
+
+		if (status != 0)
+			goto err_read_i2c_eeprom;
+
+		/* ID Module
+		 * =========
+		 * 0   SFP_DA_CU
+		 * 1   SFP_SR
+		 * 2   SFP_LR
+		 * 3   SFP_DA_CORE0
+		 * 4   SFP_DA_CORE1
+		 * 5   SFP_SR/LR_CORE0
+		 * 6   SFP_SR/LR_CORE1
+		 * 7   SFP_act_lmt_DA_CORE0
+		 * 8   SFP_act_lmt_DA_CORE1
+		 * 9   SFP_1g_cu_CORE0
+		 * 10  SFP_1g_cu_CORE1
+		 * 11  SFP_1g_sx_CORE0
+		 * 12  SFP_1g_sx_CORE1
+		 * 13  SFP_1g_lx_CORE0
+		 * 14  SFP_1g_lx_CORE1
+		 */
+		{
+			if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE) {
+				if (hw->bus.lan_id == 0)
+					hw->phy.sfp_type =
+						     txgbe_sfp_type_da_cu_core0;
+				else
+					hw->phy.sfp_type =
+						     txgbe_sfp_type_da_cu_core1;
+			} else if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE) {
+				TCALL(hw, phy.ops.read_i2c_eeprom,
+				      TXGBE_SFF_CABLE_SPEC_COMP,
+				      &cable_spec);
+				if (cable_spec &
+				    TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING) {
+					if (hw->bus.lan_id == 0)
+						hw->phy.sfp_type =
+						txgbe_sfp_type_da_act_lmt_core0;
+					else
+						hw->phy.sfp_type =
+						txgbe_sfp_type_da_act_lmt_core1;
+				} else {
+					hw->phy.sfp_type =
+							txgbe_sfp_type_unknown;
+				}
+			} else if (comp_codes_10g &
+				   (TXGBE_SFF_10GBASESR_CAPABLE |
+				    TXGBE_SFF_10GBASELR_CAPABLE)) {
+				if (hw->bus.lan_id == 0)
+					hw->phy.sfp_type =
+						      txgbe_sfp_type_srlr_core0;
+				else
+					hw->phy.sfp_type =
+						      txgbe_sfp_type_srlr_core1;
+			} else if (comp_codes_1g & TXGBE_SFF_1GBASET_CAPABLE) {
+				if (hw->bus.lan_id == 0)
+					hw->phy.sfp_type =
+						txgbe_sfp_type_1g_cu_core0;
+				else
+					hw->phy.sfp_type =
+						txgbe_sfp_type_1g_cu_core1;
+			} else if (comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) {
+				if (hw->bus.lan_id == 0)
+					hw->phy.sfp_type =
+						txgbe_sfp_type_1g_sx_core0;
+				else
+					hw->phy.sfp_type =
+						txgbe_sfp_type_1g_sx_core1;
+			} else if (comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) {
+				if (hw->bus.lan_id == 0)
+					hw->phy.sfp_type =
+						txgbe_sfp_type_1g_lx_core0;
+				else
+					hw->phy.sfp_type =
+						txgbe_sfp_type_1g_lx_core1;
+			} else {
+				hw->phy.sfp_type = txgbe_sfp_type_unknown;
+			}
+		}
+
+		/* Determine if the SFP+ PHY is dual speed or not. */
+		hw->phy.multispeed_fiber = false;
+		if (((comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) &&
+		     (comp_codes_10g & TXGBE_SFF_10GBASESR_CAPABLE)) ||
+		    ((comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) &&
+		     (comp_codes_10g & TXGBE_SFF_10GBASELR_CAPABLE)))
+			hw->phy.multispeed_fiber = true;
+
+		/* Determine PHY vendor */
+		if (hw->phy.type != txgbe_phy_nl) {
+			status = TCALL(hw, phy.ops.read_i2c_eeprom,
+				       TXGBE_SFF_VENDOR_OUI_BYTE0,
+				       &oui_bytes[0]);
+
+			if (status != 0)
+				goto err_read_i2c_eeprom;
+
+			status = TCALL(hw, phy.ops.read_i2c_eeprom,
+				       TXGBE_SFF_VENDOR_OUI_BYTE1,
+				       &oui_bytes[1]);
+
+			if (status != 0)
+				goto err_read_i2c_eeprom;
+
+			status = TCALL(hw, phy.ops.read_i2c_eeprom,
+				       TXGBE_SFF_VENDOR_OUI_BYTE2,
+				       &oui_bytes[2]);
+
+			if (status != 0)
+				goto err_read_i2c_eeprom;
+
+			vendor_oui =
+			  ((oui_bytes[0] << TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT) |
+			   (oui_bytes[1] << TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT) |
+			   (oui_bytes[2] << TXGBE_SFF_VENDOR_OUI_BYTE2_SHIFT));
+
+			switch (vendor_oui) {
+			case TXGBE_SFF_VENDOR_OUI_TYCO:
+				if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE)
+					hw->phy.type =
+						    txgbe_phy_sfp_passive_tyco;
+				break;
+			case TXGBE_SFF_VENDOR_OUI_FTL:
+				if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE)
+					hw->phy.type = txgbe_phy_sfp_ftl_active;
+				else
+					hw->phy.type = txgbe_phy_sfp_ftl;
+				break;
+			case TXGBE_SFF_VENDOR_OUI_AVAGO:
+				hw->phy.type = txgbe_phy_sfp_avago;
+				break;
+			case TXGBE_SFF_VENDOR_OUI_INTEL:
+				hw->phy.type = txgbe_phy_sfp_intel;
+				break;
+			default:
+				if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE)
+					hw->phy.type =
+						 txgbe_phy_sfp_passive_unknown;
+				else if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE)
+					hw->phy.type =
+						txgbe_phy_sfp_active_unknown;
+				else
+					hw->phy.type = txgbe_phy_sfp_unknown;
+				break;
+			}
+		}
+
+		/* Allow any DA cable vendor */
+		if (cable_tech & (TXGBE_SFF_DA_PASSIVE_CABLE |
+		    TXGBE_SFF_DA_ACTIVE_CABLE)) {
+			status = 0;
+			goto out;
+		}
+
+		/* Verify supported 1G SFP modules */
+		if (comp_codes_10g == 0 &&
+		    !(hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+		      hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+		      hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+		      hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+		      hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+		      hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1)) {
+			hw->phy.type = txgbe_phy_sfp_unsupported;
+			status = TXGBE_ERR_SFP_NOT_SUPPORTED;
+			goto out;
+		}
+	}
+
+out:
+	return status;
+
+err_read_i2c_eeprom:
+	hw->phy.sfp_type = txgbe_sfp_type_not_present;
+	if (hw->phy.type != txgbe_phy_nl)
+		hw->phy.type = txgbe_phy_unknown;
+
+	return TXGBE_ERR_SFP_NOT_PRESENT;
+}
+
+s32 txgbe_init_i2c(struct txgbe_hw *hw)
+{
+	wr32(hw, TXGBE_I2C_ENABLE, 0);
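+	/* keep the controller disabled while mode and timing are programmed */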
+
+	wr32(hw, TXGBE_I2C_CON,
+	     (TXGBE_I2C_CON_MASTER_MODE |
+	      TXGBE_I2C_CON_SPEED(1) |
+	      TXGBE_I2C_CON_RESTART_EN |
+	      TXGBE_I2C_CON_SLAVE_DISABLE));
+	/* default address is 0xA0; bit 0 selects read (1) or write (0) */
+	wr32(hw, TXGBE_I2C_TAR, TXGBE_I2C_SLAVE_ADDR);
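+	/* SCL high/low counts set the bus clock for standard-speed mode */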
+	wr32(hw, TXGBE_I2C_SS_SCL_HCNT, 600);
+	wr32(hw, TXGBE_I2C_SS_SCL_LCNT, 600);
+	wr32(hw, TXGBE_I2C_RX_TL, 0); /* raise rx-full after 1 byte */
+	wr32(hw, TXGBE_I2C_TX_TL, 4);
+	wr32(hw, TXGBE_I2C_SCL_STUCK_TIMEOUT, 0xFFFFFF);
+	wr32(hw, TXGBE_I2C_SDA_STUCK_TIMEOUT, 0xFFFFFF);
+
+	wr32(hw, TXGBE_I2C_INTR_MASK, 0);
+	wr32(hw, TXGBE_I2C_ENABLE, 1);
+	return 0;
+}
+
+/**
+ *  txgbe_read_i2c_eeprom - Reads 8 bit EEPROM word over I2C interface
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: EEPROM byte offset to read
+ *  @eeprom_data: value read
+ *
+ *  Performs byte read operation to SFP module's EEPROM over I2C interface.
+ **/
+s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+			  u8 *eeprom_data)
+{
+	return TCALL(hw, phy.ops.read_i2c_byte, byte_offset,
+		     TXGBE_I2C_EEPROM_DEV_ADDR,
+		     eeprom_data);
+}
+
+/**
+ *  txgbe_read_i2c_byte_int - Reads 8 bit word over I2C
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset to read
+ *  @dev_addr: device address (unused here, kept for interface symmetry)
+ *  @data: value read
+ *  @lock: true to take and release the semaphore
+ *
+ *  Performs byte read operation to SFP module's EEPROM over I2C interface at
+ *  a specified device address.
+ **/
+static s32 txgbe_read_i2c_byte_int(struct txgbe_hw *hw, u8 byte_offset,
+				   u8 __maybe_unused dev_addr, u8 *data, bool lock)
+{
+	s32 status = 0;
+	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+
+	if (lock && 0 != TCALL(hw, mac.ops.acquire_swfw_sync, swfw_mask))
+		return TXGBE_ERR_SWFW_SYNC;
+
+	/* wait tx empty */
+	status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT,
+		       TXGBE_I2C_INTR_STAT_TX_EMPTY,
+		       TXGBE_I2C_INTR_STAT_TX_EMPTY,
+		       TXGBE_I2C_TIMEOUT, 10);
+	if (status != 0)
+		goto out;
+
+	/* write the target byte offset, then issue a one-byte read */
+	wr32(hw, TXGBE_I2C_DATA_CMD,
+	     byte_offset | TXGBE_I2C_DATA_CMD_STOP);
+	wr32(hw, TXGBE_I2C_DATA_CMD, TXGBE_I2C_DATA_CMD_READ);
+
+	/* wait for read complete */
+	status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT,
+		       TXGBE_I2C_INTR_STAT_RX_FULL,
+		       TXGBE_I2C_INTR_STAT_RX_FULL,
+		       TXGBE_I2C_TIMEOUT, 10);
+	if (status != 0)
+		goto out;
+
+	*data = 0xFF & rd32(hw, TXGBE_I2C_DATA_CMD);
+
+out:
+	if (lock)
+		TCALL(hw, mac.ops.release_swfw_sync, swfw_mask);
+	return status;
+}
+
+/**
+ *  txgbe_switch_i2c_slave_addr - Switch I2C slave address
+ *  @hw: pointer to hardware structure
+ *  @dev_addr: slave address to switch to
+ **/
+s32 txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr)
+{
+	wr32(hw, TXGBE_I2C_ENABLE, 0);
+	wr32(hw, TXGBE_I2C_TAR, dev_addr >> 1);
+	wr32(hw, TXGBE_I2C_ENABLE, 1);
+	return 0;
+}
+
+/**
+ *  txgbe_read_i2c_byte - Reads 8 bit word over I2C
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset to read
+ *  @dev_addr: device address
+ *  @data: value read
+ *
+ *  Performs byte read operation to SFP module's EEPROM over I2C interface at
+ *  a specified device address.
+ **/
+s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+			u8 dev_addr, u8 *data)
+{
+	txgbe_switch_i2c_slave_addr(hw, dev_addr);
+
+	return txgbe_read_i2c_byte_int(hw, byte_offset, dev_addr,
+				       data, true);
+}
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
new file mode 100644
index 000000000000..4798623fb735
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_PHY_H_
+#define _TXGBE_PHY_H_
+
+#include "txgbe.h"
+
+#define TXGBE_I2C_EEPROM_DEV_ADDR       0xA0
+
+/* EEPROM byte offsets */
+#define TXGBE_SFF_IDENTIFIER            0x0
+#define TXGBE_SFF_IDENTIFIER_SFP        0x3
+#define TXGBE_SFF_VENDOR_OUI_BYTE0      0x25
+#define TXGBE_SFF_VENDOR_OUI_BYTE1      0x26
+#define TXGBE_SFF_VENDOR_OUI_BYTE2      0x27
+#define TXGBE_SFF_1GBE_COMP_CODES       0x6
+#define TXGBE_SFF_10GBE_COMP_CODES      0x3
+#define TXGBE_SFF_CABLE_TECHNOLOGY      0x8
+#define TXGBE_SFF_CABLE_SPEC_COMP       0x3C
+
+/* Bitmasks */
+#define TXGBE_SFF_DA_PASSIVE_CABLE      0x4
+#define TXGBE_SFF_DA_ACTIVE_CABLE       0x8
+#define TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING       0x4
+#define TXGBE_SFF_1GBASESX_CAPABLE      0x1
+#define TXGBE_SFF_1GBASELX_CAPABLE      0x2
+#define TXGBE_SFF_1GBASET_CAPABLE       0x8
+#define TXGBE_SFF_10GBASESR_CAPABLE     0x10
+#define TXGBE_SFF_10GBASELR_CAPABLE     0x20
+/* Bit-shift macros */
+#define TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT        24
+#define TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT        16
+#define TXGBE_SFF_VENDOR_OUI_BYTE2_SHIFT        8
+
+/* Vendor OUIs: format of OUI is 0x[byte0][byte1][byte2][00] */
+#define TXGBE_SFF_VENDOR_OUI_TYCO       0x00407600
+#define TXGBE_SFF_VENDOR_OUI_FTL        0x00906500
+#define TXGBE_SFF_VENDOR_OUI_AVAGO      0x00176A00
+#define TXGBE_SFF_VENDOR_OUI_INTEL      0x001B2100
+
+s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
+
+s32 txgbe_identify_module(struct txgbe_hw *hw);
+s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
+s32 txgbe_init_i2c(struct txgbe_hw *hw);
+s32 txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr);
+s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+			u8 dev_addr, u8 *data);
+
+s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+			  u8 *eeprom_data);
+
+#endif /* _TXGBE_PHY_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index e1b438e18edc..c068fa933ab6 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -72,6 +72,149 @@
 
 /* Revision ID */
 #define TXGBE_SP_MPW  1
+/* ETH PHY Registers */
+#define TXGBE_SR_XS_PCS_MMD_STATUS1             0x30001
+#define TXGBE_SR_PCS_CTL2                       0x30007
+#define TXGBE_SR_PMA_MMD_CTL1                   0x10000
+#define TXGBE_SR_MII_MMD_CTL                    0x1F0000
+#define TXGBE_SR_MII_MMD_DIGI_CTL               0x1F8000
+#define TXGBE_SR_MII_MMD_AN_CTL                 0x1F8001
+#define TXGBE_SR_MII_MMD_AN_ADV                 0x1F0004
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE(_v)       ((0x3 & (_v)) << 7)
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM       0x80
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM       0x100
+#define TXGBE_SR_MII_MMD_LP_BABL                0x1F0005
+#define TXGBE_SR_AN_MMD_CTL                     0x70000
+#define TXGBE_SR_AN_MMD_ADV_REG1                0x70010
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE(_v)      ((0x3 & (_v)) << 10)
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM      0x400
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM      0x800
+#define TXGBE_SR_AN_MMD_ADV_REG2                0x70011
+#define TXGBE_SR_AN_MMD_LP_ABL1                 0x70013
+#define TXGBE_VR_AN_KR_MODE_CL                  0x78003
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1        0x38000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS      0x38010
+#define TXGBE_PHY_MPLLA_CTL0                    0x18071
+#define TXGBE_PHY_MPLLA_CTL3                    0x18077
+#define TXGBE_PHY_MISC_CTL0                     0x18090
+#define TXGBE_PHY_VCO_CAL_LD0                   0x18092
+#define TXGBE_PHY_VCO_CAL_LD1                   0x18093
+#define TXGBE_PHY_VCO_CAL_LD2                   0x18094
+#define TXGBE_PHY_VCO_CAL_LD3                   0x18095
+#define TXGBE_PHY_VCO_CAL_REF0                  0x18096
+#define TXGBE_PHY_VCO_CAL_REF1                  0x18097
+#define TXGBE_PHY_RX_AD_ACK                     0x18098
+#define TXGBE_PHY_AFE_DFE_ENABLE                0x1805D
+#define TXGBE_PHY_DFE_TAP_CTL0                  0x1805E
+#define TXGBE_PHY_RX_EQ_ATT_LVL0                0x18057
+#define TXGBE_PHY_RX_EQ_CTL0                    0x18058
+#define TXGBE_PHY_RX_EQ_CTL                     0x1805C
+#define TXGBE_PHY_TX_EQ_CTL0                    0x18036
+#define TXGBE_PHY_TX_EQ_CTL1                    0x18037
+#define TXGBE_PHY_TX_RATE_CTL                   0x18034
+#define TXGBE_PHY_RX_RATE_CTL                   0x18054
+#define TXGBE_PHY_TX_GEN_CTL2                   0x18032
+#define TXGBE_PHY_RX_GEN_CTL2                   0x18052
+#define TXGBE_PHY_RX_GEN_CTL3                   0x18053
+#define TXGBE_PHY_MPLLA_CTL2                    0x18073
+#define TXGBE_PHY_RX_POWER_ST_CTL               0x18055
+#define TXGBE_PHY_TX_POWER_ST_CTL               0x18035
+#define TXGBE_PHY_TX_GENCTRL1                   0x18031
+
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_R        0x0
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X        0x1
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK     0x3
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G      0x0
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G     0x2000
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK    0x2000
+#define TXGBE_SR_PMA_MMD_CTL1_LB_EN             0x1
+#define TXGBE_SR_MII_MMD_CTL_AN_EN              0x1000
+#define TXGBE_SR_MII_MMD_CTL_RESTART_AN         0x0200
+#define TXGBE_SR_AN_MMD_CTL_RESTART_AN          0x0200
+#define TXGBE_SR_AN_MMD_CTL_ENABLE              0x1000
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX4    0x40
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX     0x20
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KR     0x80
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_MASK   0xFFFF
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_ENABLE 0x1000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST 0x8000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK            0x1C
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD      0x10
+
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX              32
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_10GBASER_KR             33
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER                   40
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_MASK                    0xFF
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX           0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR          0x7B
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER                0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_MASK                 0x7FF
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0                       0x1
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1                     0xE
+#define TXGBE_PHY_MISC_CTL0_RX_VREF_CTRL                        0x1F00
+#define TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX                        1344
+#define TXGBE_PHY_VCO_CAL_LD0_10GBASER_KR                       1353
+#define TXGBE_PHY_VCO_CAL_LD0_OTHER                             1360
+#define TXGBE_PHY_VCO_CAL_LD0_MASK                              0x1000
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX                   42
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_10GBASER_KR                  41
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_OTHER                        34
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_MASK                         0x3F
+#define TXGBE_PHY_AFE_DFE_ENABLE_DFE_EN0                        0x10
+#define TXGBE_PHY_AFE_DFE_ENABLE_AFE_EN0                        0x1
+#define TXGBE_PHY_AFE_DFE_ENABLE_MASK                           0xFF
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT0                         0x1
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT_MASK                     0xF
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_10GBASER_KR              0x0
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_RXAUI                    0x1
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX               0x3
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_OTHER                    0x2
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_OTHER                    0x20
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_OTHER                    0x200
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_OTHER                    0x2000
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_MASK                     0x7
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_MASK                     0x70
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_MASK                     0x700
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_MASK                     0x7000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_10GBASER_KR              0x0
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_RXAUI                    0x1
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX               0x3
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_OTHER                    0x2
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_OTHER                    0x20
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_OTHER                    0x200
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_OTHER                    0x2000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_MASK                     0x7
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_MASK                     0x70
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_MASK                     0x700
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_MASK                     0x7000
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR             0x200
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR_RXAUI       0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER                   0x100
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_MASK                    0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_OTHER                   0x400
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_MASK                    0xC00
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_OTHER                   0x1000
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_MASK                    0x3000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_OTHER                   0x4000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_MASK                    0xC000
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR             0x200
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR_RXAUI       0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER                   0x100
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_MASK                    0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_OTHER                   0x400
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_MASK                    0xC00
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_OTHER                   0x1000
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_MASK                    0x3000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_OTHER                   0x4000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_MASK                    0xC000
+
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_8                       0x100
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10                      0x200
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_16P5                    0x400
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_MASK                    0x700
+
+#define TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME  100
+#define TXGBE_PHY_INIT_DONE_POLLING_TIME        100
 
 /**************** Global Registers ****************************/
 /* chip control Registers */
@@ -1554,11 +1697,18 @@ enum TXGBE_MSCA_CMD_value {
 #define FW_CEM_RESP_STATUS_SUCCESS      0x1
 #define FW_READ_SHADOW_RAM_CMD          0x31
 #define FW_READ_SHADOW_RAM_LEN          0x6
+#define FW_WRITE_SHADOW_RAM_CMD         0x33
+#define FW_WRITE_SHADOW_RAM_LEN         0xA /* 8 plus 1 WORD to write */
 #define FW_DEFAULT_CHECKSUM             0xFF /* checksum always 0xFF */
 #define FW_NVM_DATA_OFFSET              3
 #define FW_MAX_READ_BUFFER_SIZE         244
 #define FW_RESET_CMD                    0xDF
 #define FW_RESET_LEN                    0x2
+#define FW_DW_OPEN_NOTIFY               0xE9
+#define FW_DW_CLOSE_NOTIFY              0xEA
+
+#define TXGBE_CHECKSUM_CAP_ST_PASS      0x80658383
+#define TXGBE_CHECKSUM_CAP_ST_FAIL      0x70657376
 
 /* Host Interface Command Structures */
 struct txgbe_hic_hdr {
@@ -1611,6 +1761,15 @@ struct txgbe_hic_read_shadow_ram {
 	u16 pad3;
 };
 
+struct txgbe_hic_write_shadow_ram {
+	union txgbe_hic_hdr2 hdr;
+	u32 address;
+	u16 length;
+	u16 pad2;
+	u16 data;
+	u16 pad3;
+};
+
 struct txgbe_hic_reset {
 	struct txgbe_hic_hdr hdr;
 	u16 lan_id;
@@ -1619,6 +1778,38 @@ struct txgbe_hic_reset {
 
 /* Number of 100 microseconds we wait for PCI Express master disable */
 #define TXGBE_PCI_MASTER_DISABLE_TIMEOUT        800
+
+/* Autonegotiation advertised speeds */
+typedef u32 txgbe_autoneg_advertised;
+/* Link speed */
+#define TXGBE_LINK_SPEED_UNKNOWN        0
+#define TXGBE_LINK_SPEED_100_FULL       1
+#define TXGBE_LINK_SPEED_1GB_FULL       2
+#define TXGBE_LINK_SPEED_10GB_FULL      4
+#define TXGBE_LINK_SPEED_10_FULL        8
+#define TXGBE_LINK_SPEED_AUTONEG  (TXGBE_LINK_SPEED_100_FULL | \
+				   TXGBE_LINK_SPEED_1GB_FULL | \
+				   TXGBE_LINK_SPEED_10GB_FULL | \
+				   TXGBE_LINK_SPEED_10_FULL)
+
+/* Physical layer type */
+typedef u32 txgbe_physical_layer;
+#define TXGBE_PHYSICAL_LAYER_UNKNOWN            0
+#define TXGBE_PHYSICAL_LAYER_10GBASE_T          0x0001
+#define TXGBE_PHYSICAL_LAYER_1000BASE_T         0x0002
+#define TXGBE_PHYSICAL_LAYER_100BASE_TX         0x0004
+#define TXGBE_PHYSICAL_LAYER_SFP_PLUS_CU        0x0008
+#define TXGBE_PHYSICAL_LAYER_10GBASE_LR         0x0010
+#define TXGBE_PHYSICAL_LAYER_10GBASE_LRM        0x0020
+#define TXGBE_PHYSICAL_LAYER_10GBASE_SR         0x0040
+#define TXGBE_PHYSICAL_LAYER_10GBASE_KX4        0x0080
+#define TXGBE_PHYSICAL_LAYER_1000BASE_KX        0x0200
+#define TXGBE_PHYSICAL_LAYER_1000BASE_BX        0x0400
+#define TXGBE_PHYSICAL_LAYER_10GBASE_KR         0x0800
+#define TXGBE_PHYSICAL_LAYER_10GBASE_XAUI       0x1000
+#define TXGBE_PHYSICAL_LAYER_SFP_ACTIVE_DA      0x2000
+#define TXGBE_PHYSICAL_LAYER_1000BASE_SX        0x4000
+
 enum txgbe_eeprom_type {
 	txgbe_eeprom_uninitialized = 0,
 	txgbe_eeprom_spi,
@@ -1626,6 +1817,66 @@ enum txgbe_eeprom_type {
 	txgbe_eeprom_none /* No NVM support */
 };
 
+enum txgbe_phy_type {
+	txgbe_phy_unknown = 0,
+	txgbe_phy_none,
+	txgbe_phy_tn,
+	txgbe_phy_aq,
+	txgbe_phy_cu_unknown,
+	txgbe_phy_qt,
+	txgbe_phy_xaui,
+	txgbe_phy_nl,
+	txgbe_phy_sfp_passive_tyco,
+	txgbe_phy_sfp_passive_unknown,
+	txgbe_phy_sfp_active_unknown,
+	txgbe_phy_sfp_avago,
+	txgbe_phy_sfp_ftl,
+	txgbe_phy_sfp_ftl_active,
+	txgbe_phy_sfp_unknown,
+	txgbe_phy_sfp_intel,
+	txgbe_phy_sfp_unsupported, /* enforce bit set with unsupported module */
+	txgbe_phy_generic
+};
+
+/* SFP+ module type IDs:
+ *
+ * ID   Module Type
+ * =============
+ * 0    SFP_DA_CU
+ * 1    SFP_SR
+ * 2    SFP_LR
+ * 3    SFP_DA_CU_CORE0
+ * 4    SFP_DA_CU_CORE1
+ * 5    SFP_SR/LR_CORE0
+ * 6    SFP_SR/LR_CORE1
+ * 7    SFP_act_lmt_DA_CORE0
+ * 8    SFP_act_lmt_DA_CORE1
+ * 9    SFP_1g_cu_CORE0
+ * 10   SFP_1g_cu_CORE1
+ * 11   SFP_1g_sx_CORE0
+ * 12   SFP_1g_sx_CORE1
+ * 13   SFP_1g_lx_CORE0
+ * 14   SFP_1g_lx_CORE1
+ */
+enum txgbe_sfp_type {
+	txgbe_sfp_type_da_cu = 0,
+	txgbe_sfp_type_sr = 1,
+	txgbe_sfp_type_lr = 2,
+	txgbe_sfp_type_da_cu_core0 = 3,
+	txgbe_sfp_type_da_cu_core1 = 4,
+	txgbe_sfp_type_srlr_core0 = 5,
+	txgbe_sfp_type_srlr_core1 = 6,
+	txgbe_sfp_type_da_act_lmt_core0 = 7,
+	txgbe_sfp_type_da_act_lmt_core1 = 8,
+	txgbe_sfp_type_1g_cu_core0 = 9,
+	txgbe_sfp_type_1g_cu_core1 = 10,
+	txgbe_sfp_type_1g_sx_core0 = 11,
+	txgbe_sfp_type_1g_sx_core1 = 12,
+	txgbe_sfp_type_1g_lx_core0 = 13,
+	txgbe_sfp_type_1g_lx_core1 = 14,
+	txgbe_sfp_type_not_present = 0xFFFE,
+	txgbe_sfp_type_unknown = 0xFFFF
+};
+
+enum txgbe_media_type {
+	txgbe_media_type_unknown = 0,
+	txgbe_media_type_fiber,
+	txgbe_media_type_copper,
+	txgbe_media_type_backplane,
+	txgbe_media_type_virtual
+};
 
 /* PCI bus types */
 enum txgbe_bus_type {
@@ -1698,6 +1949,7 @@ struct txgbe_mac_operations {
 	s32 (*init_hw)(struct txgbe_hw *hw);
 	s32 (*reset_hw)(struct txgbe_hw *hw);
 	s32 (*start_hw)(struct txgbe_hw *hw);
+	enum txgbe_media_type (*get_media_type)(struct txgbe_hw *hw);
 	s32 (*get_mac_addr)(struct txgbe_hw *hw, u8 *mac_addr);
 	s32 (*get_san_mac_addr)(struct txgbe_hw *hw, u8 *san_mac_addr);
 	s32 (*get_wwn_prefix)(struct txgbe_hw *hw, u16 *wwnn_prefix,
@@ -1708,6 +1960,24 @@ struct txgbe_mac_operations {
 	s32 (*acquire_swfw_sync)(struct txgbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct txgbe_hw *hw, u32 mask);
 
+	/* Link */
+	void (*disable_tx_laser)(struct txgbe_hw *hw);
+	void (*enable_tx_laser)(struct txgbe_hw *hw);
+	void (*flap_tx_laser)(struct txgbe_hw *hw);
+	s32 (*setup_link)(struct txgbe_hw *hw, u32 speed,
+			  bool autoneg_wait_to_complete);
+	s32 (*setup_mac_link)(struct txgbe_hw *hw, u32 speed,
+			      bool autoneg_wait_to_complete);
+	s32 (*check_link)(struct txgbe_hw *hw, u32 *speed,
+			  bool *link_up, bool link_up_wait_to_complete);
+	s32 (*get_link_capabilities)(struct txgbe_hw *hw, u32 *speed,
+				     bool *autoneg);
+	void (*set_rate_select_speed)(struct txgbe_hw *hw, u32 speed);
+
+	/* LED */
+	s32 (*led_on)(struct txgbe_hw *hw, u32 index);
+	s32 (*led_off)(struct txgbe_hw *hw, u32 index);
+
 	/* RAR */
 	s32 (*set_rar)(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
 		       u32 enable_addr);
@@ -1724,6 +1994,16 @@ struct txgbe_mac_operations {
 	void (*disable_rx)(struct txgbe_hw *hw);
 };
 
+struct txgbe_phy_operations {
+	s32 (*identify)(struct txgbe_hw *hw);
+	s32 (*identify_sfp)(struct txgbe_hw *hw);
+	s32 (*init)(struct txgbe_hw *hw);
+	s32 (*read_i2c_byte)(struct txgbe_hw *hw, u8 byte_offset,
+			     u8 dev_addr, u8 *data);
+	s32 (*read_i2c_eeprom)(struct txgbe_hw *hw, u8 byte_offset,
+			       u8 *eeprom_data);
+};
+
 struct txgbe_eeprom_info {
 	struct txgbe_eeprom_operations ops;
 	enum txgbe_eeprom_type type;
@@ -1746,23 +2026,47 @@ struct txgbe_mac_info {
 	u32 num_rar_entries;
 	u32 max_tx_queues;
 	u32 max_rx_queues;
+	u32 orig_sr_pcs_ctl2;
+	u32 orig_sr_pma_mmd_ctl1;
+	u32 orig_sr_an_mmd_ctl;
+	u32 orig_sr_an_mmd_adv_reg2;
+	u32 orig_vr_xs_or_pcs_mmd_digi_ctl1;
 	u8  san_mac_rar_index;
+	bool orig_link_settings_stored;
 	bool autotry_restart;
 	struct txgbe_thermal_sensor_data  thermal_sensor_data;
 	bool set_lben;
 };
 
+struct txgbe_phy_info {
+	struct txgbe_phy_operations ops;
+	enum txgbe_phy_type type;
+	enum txgbe_sfp_type sfp_type;
+	enum txgbe_media_type media_type;
+	u32 phy_semaphore_mask;
+	txgbe_autoneg_advertised autoneg_advertised;
+	bool multispeed_fiber;
+	txgbe_physical_layer link_mode;
+};
+
 enum txgbe_reset_type {
 	TXGBE_LAN_RESET = 0,
 	TXGBE_SW_RESET,
 	TXGBE_GLOBAL_RESET
 };
 
+enum txgbe_link_status {
+	TXGBE_LINK_STATUS_NONE = 0,
+	TXGBE_LINK_STATUS_KX,
+	TXGBE_LINK_STATUS_KX4
+};
+
 struct txgbe_hw {
 	u8 __iomem *hw_addr;
 	void *back;
 	struct txgbe_mac_info mac;
 	struct txgbe_addr_filter_info addr_ctrl;
+	struct txgbe_phy_info phy;
 	struct txgbe_eeprom_info eeprom;
 	struct txgbe_bus_info bus;
 	u16 device_id;
@@ -1773,6 +2077,7 @@ struct txgbe_hw {
 	bool adapter_stopped;
 	enum txgbe_reset_type reset_type;
 	bool force_full_reset;
+	enum txgbe_link_status link_status;
 	u16 subsystem_id;
 };
 
@@ -1923,6 +2228,33 @@ wr32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 field)
 	txgbe_wr32(base + reg, val);
 }
 
+/* poll a register until the masked field matches, or the budget expires */
+#define TXGBE_I2C_TIMEOUT  1000
+static inline s32
+po32m(struct txgbe_hw *hw, u32 reg, u32 mask,
+	u32 field, int usecs, int count)
+{
+	int loop;
+
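+	/* if a poll count is given, spread the time budget across it;
+	 * otherwise poll roughly every 10 microseconds
+	 */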
+	loop = (count ? count : (usecs + 9) / 10);
+	usecs = (loop ? (usecs + loop - 1) / loop : 0);
+
+	count = loop;
+	do {
+		u32 value = rd32(hw, reg);
+
+		if ((value & mask) == (field & mask))
+			break;
+
+		if (loop-- <= 0)
+			break;
+
+		udelay(usecs);
+	} while (true);
+
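+	/* loop only drops below zero when the wait timed out */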
+	return (count - loop <= count ? 0 : TXGBE_ERR_TIMEOUT);
+}
+
 #define TXGBE_WRITE_FLUSH(H) rd32(H, TXGBE_MIS_PWR)
 
 #endif /* _TXGBE_TYPE_H_ */
-- 
2.27.0




^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 05/14] net: txgbe: Add interrupt support
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (3 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 04/14] net: txgbe: Add PHY interface support Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 06/14] net: txgbe: Support to receive and tranmit packets Jiawen Wu
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Determine the proper interrupt scheme based on kernel support and
the hardware queue count. Allocate memory for the interrupt vectors,
and handle the different interrupt sources.
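
For reference, a minimal sketch of the usual selection order (hypothetical
pseudo-code built on the helpers this patch declares, not the literal driver
logic; see txgbe_set_interrupt_capability() below):

	/* try MSI-X first, then MSI, else fall back to legacy INTx */
	if (!txgbe_acquire_msix_vectors(adapter))
		return;					/* got MSI-X */
	if (!pci_enable_msi(adapter->pdev))
		adapter->flags |= TXGBE_FLAG_MSI_ENABLED;
	adapter->num_q_vectors = 1;			/* single vector */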

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   3 +-
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    | 116 +++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c |  35 +
 .../net/ethernet/wangxun/txgbe/txgbe_lib.c    | 442 +++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   | 736 ++++++++++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_phy.c    |  22 +
 .../net/ethernet/wangxun/txgbe/txgbe_phy.h    |   1 +
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |   2 +
 8 files changed, 1356 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index 7dc73b58fe75..aceb0d2e56ee 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -7,4 +7,5 @@
 obj-$(CONFIG_TXGBE) += txgbe.o
 
 txgbe-objs := txgbe_main.o \
-              txgbe_hw.o txgbe_phy.o
+              txgbe_hw.o txgbe_phy.o \
+              txgbe_lib.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 58a89d43046f..c5a02f1950ae 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -19,13 +19,66 @@
 #endif
 
 struct txgbe_ring {
+	struct txgbe_ring *next;        /* pointer to next ring in q_vector */
+	struct txgbe_q_vector *q_vector; /* backpointer to host q_vector */
+	struct net_device *netdev;      /* netdev ring belongs to */
+	struct device *dev;             /* device for DMA mapping */
+	u16 count;                      /* amount of descriptors */
+
+	u8 queue_index; /* needed for multiqueue queue management */
 	u8 reg_idx;
 } ____cacheline_internodealigned_in_smp;
 
 #define TXGBE_MAX_FDIR_INDICES          63
 
+#define MAX_RX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
 #define MAX_TX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
 
+struct txgbe_ring_container {
+	struct txgbe_ring *ring;        /* pointer to linked list of rings */
+	u16 work_limit;                 /* total work allowed per interrupt */
+	u8 count;                       /* total number of rings in vector */
+};
+
+/* iterator for handling rings in ring container */
+#define txgbe_for_each_ring(pos, head) \
+	for (pos = (head).ring; pos != NULL; pos = pos->next)
+
+/* MAX_MSIX_Q_VECTORS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+struct txgbe_q_vector {
+	struct txgbe_adapter *adapter;
+	int cpu;
+	u16 v_idx;      /* index of q_vector within array, also used for
+			 * finding the bit in EICR and friends that
+			 * represents the vector for this ring
+			 */
+	u16 itr;        /* Interrupt throttle rate written to EITR */
+	struct txgbe_ring_container rx, tx;
+
+	struct napi_struct napi;
+	cpumask_t affinity_mask;
+	int numa_node;
+	struct rcu_head rcu;    /* to avoid race with update stats on free */
+	char name[IFNAMSIZ + 17];
+
+	/* for dynamic allocation of rings associated with this q_vector */
+	struct txgbe_ring ring[] ____cacheline_internodealigned_in_smp;
+};
+
+/* Microsecond values for the various ITR rates, shifted left by 2 to fit
+ * the ITR register, whose first 3 bits are reserved as 0.
+ */
+#define TXGBE_100K_ITR          40
+#define TXGBE_20K_ITR           200
+#define TXGBE_16K_ITR           248
+#define TXGBE_12K_ITR           336
+
+#define TCP_TIMER_VECTOR        0
+#define OTHER_VECTOR    1
+#define NON_Q_VECTORS   (OTHER_VECTOR + TCP_TIMER_VECTOR)
+
 #define TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE       64
 
 struct txgbe_mac_addr {
@@ -38,6 +91,12 @@ struct txgbe_mac_addr {
 #define TXGBE_MAC_STATE_MODIFIED        0x2
 #define TXGBE_MAC_STATE_IN_USE          0x4
 
+#define MAX_MSIX_Q_VECTORS      TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE
+#define MAX_MSIX_COUNT          TXGBE_MAX_MSIX_VECTORS_SAPPHIRE
+
+#define MIN_MSIX_Q_VECTORS      1
+#define MIN_MSIX_COUNT          (MIN_MSIX_Q_VECTORS + NON_Q_VECTORS)
+
 /* default to trying for four seconds */
 #define TXGBE_TRY_LINK_TIMEOUT  (4 * HZ)
 #define TXGBE_SFP_POLL_JIFFIES  (2 * HZ)        /* SFP poll every 2 seconds */
@@ -47,12 +106,26 @@ struct txgbe_mac_addr {
  **/
 #define TXGBE_FLAG_NEED_LINK_UPDATE             ((u32)(1 << 0))
 #define TXGBE_FLAG_NEED_LINK_CONFIG             ((u32)(1 << 1))
+#define TXGBE_FLAG_MSI_ENABLED                  ((u32)(1 << 2))
+#define TXGBE_FLAG_MSIX_ENABLED                 ((u32)(1 << 3))
 
 /**
  * txgbe_adapter.flag2
  **/
 #define TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED     (1U << 0)
 #define TXGBE_FLAG2_SFP_NEEDS_RESET             (1U << 1)
+#define TXGBE_FLAG2_TEMP_SENSOR_EVENT           (1U << 2)
+#define TXGBE_FLAG2_PF_RESET_REQUESTED          (1U << 3)
+#define TXGBE_FLAG2_RESET_INTR_RECEIVED         (1U << 4)
+#define TXGBE_FLAG2_GLOBAL_RESET_REQUESTED      (1U << 5)
+
+enum txgbe_isb_idx {
+	TXGBE_ISB_HEADER,
+	TXGBE_ISB_MISC,
+	TXGBE_ISB_VEC0,
+	TXGBE_ISB_VEC1,
+	TXGBE_ISB_MAX
+};
 
 /* board specific private data structure */
 struct txgbe_adapter {
@@ -72,20 +145,33 @@ struct txgbe_adapter {
 	/* Tx fast path data */
 	int num_tx_queues;
 	u16 tx_itr_setting;
+	u16 tx_work_limit;
 
 	/* Rx fast path data */
 	int num_rx_queues;
 	u16 rx_itr_setting;
+	u16 rx_work_limit;
 
 	/* TX */
 	struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
 
+	u64 lsc_int;
+
+	/* RX */
+	struct txgbe_ring *rx_ring[MAX_RX_QUEUES];
+	struct txgbe_q_vector *q_vector[MAX_MSIX_Q_VECTORS];
+
+	int num_q_vectors;      /* current number of q_vectors for device */
 	int max_q_vectors;      /* upper limit of q_vectors for device */
+	struct msix_entry *msix_entries;
 
 	/* structs defined in txgbe_hw.h */
 	struct txgbe_hw hw;
 	u16 msg_enable;
 
+	unsigned int tx_ring_count;
+	unsigned int rx_ring_count;
+
 	u32 link_speed;
 	bool link_up;
 	unsigned long sfp_poll_time;
@@ -99,11 +185,30 @@ struct txgbe_adapter {
 
 	char eeprom_id[32];
 	bool netdev_registered;
+	u32 interrupt_event;
 
 	struct txgbe_mac_addr *mac_table;
 
+	/* misc interrupt status block */
+	dma_addr_t isb_dma;
+	u32 *isb_mem;
+	u32 isb_tag[TXGBE_ISB_MAX];
 };
 
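+/* Return one word of the interrupt status block that the hardware DMAs to
+ * host memory, recording the current header tag for this index.
+ */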
+static inline u32 txgbe_misc_isb(struct txgbe_adapter *adapter,
+				 enum txgbe_isb_idx idx)
+{
+	u32 cur_tag;
+
+	cur_tag = adapter->isb_mem[TXGBE_ISB_HEADER];
+	adapter->isb_tag[idx] = cur_tag;
+
+	return adapter->isb_mem[idx];
+}
+
 enum txgbe_state_t {
 	__TXGBE_TESTING,
 	__TXGBE_RESETTING,
@@ -118,7 +223,18 @@ enum txgbe_state_t {
 	__TXGBE_PTP_TX_IN_PROGRESS,
 };
 
+void txgbe_irq_disable(struct txgbe_adapter *adapter);
+void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush);
+int txgbe_open(struct net_device *netdev);
+int txgbe_close(struct net_device *netdev);
+void txgbe_up(struct txgbe_adapter *adapter);
 void txgbe_down(struct txgbe_adapter *adapter);
+int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter);
+void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter);
+void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter);
+void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter);
+void txgbe_write_eitr(struct txgbe_q_vector *q_vector);
+
 
 /**
  * interrupt masking operations. each bit in PX_ICn correspond to a interrupt.
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 0c1260ba7c72..1e30878f7c6f 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -57,6 +57,39 @@ void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data)
 	wr32(hw, offset, data);
 }
 
+/**
+ *  txgbe_get_pcie_msix_count - Gets MSI-X vector count
+ *  @hw: pointer to hardware structure
+ *
+ *  Read PCIe configuration space, and get the MSI-X vector count from
+ *  the capabilities table.
+ **/
+u16 txgbe_get_pcie_msix_count(struct txgbe_hw *hw)
+{
+	u16 msix_count = 1;
+	u16 max_msix_count;
+	u32 pos;
+
+	max_msix_count = TXGBE_MAX_MSIX_VECTORS_SAPPHIRE;
+	pos = pci_find_capability(((struct txgbe_adapter *)hw->back)->pdev, PCI_CAP_ID_MSIX);
+	if (!pos)
+		return msix_count;
+	pci_read_config_word(((struct txgbe_adapter *)hw->back)->pdev,
+			     pos + PCI_MSIX_FLAGS, &msix_count);
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		msix_count = 0;
+	msix_count &= TXGBE_PCIE_MSIX_TBL_SZ_MASK;
+
+	/* MSI-X count is zero-based in HW */
+	msix_count++;
+
+	if (msix_count > max_msix_count)
+		msix_count = max_msix_count;
+
+	return msix_count;
+}
+
 s32 txgbe_init_hw(struct txgbe_hw *hw)
 {
 	s32 status;
@@ -1542,6 +1575,7 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	phy->ops.read_i2c_byte = txgbe_read_i2c_byte;
 	phy->ops.read_i2c_eeprom = txgbe_read_i2c_eeprom;
 	phy->ops.identify_sfp = txgbe_identify_module;
+	phy->ops.check_overtemp = txgbe_check_overtemp;
 	phy->ops.identify = txgbe_identify_phy;
 	phy->ops.init = txgbe_init_phy_ops;
 
@@ -1576,6 +1610,7 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->num_rar_entries    = TXGBE_SP_RAR_ENTRIES;
 	mac->max_rx_queues      = TXGBE_SP_MAX_RX_QUEUES;
 	mac->max_tx_queues      = TXGBE_SP_MAX_TX_QUEUES;
+	mac->max_msix_vectors   = txgbe_get_pcie_msix_count(hw);
 
 	/* EEPROM */
 	eeprom->ops.init_params = txgbe_init_eeprom_params;
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
new file mode 100644
index 000000000000..514a38941488
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
@@ -0,0 +1,442 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe.h"
+
+/**
+ * txgbe_cache_ring_register - Descriptor ring to register mapping
+ * @adapter: board private structure to initialize
+ *
+ * Once we know the feature-set enabled for the device, we'll cache
+ * the register offset the descriptor ring is assigned to.
+ *
+ * Note, the order the various feature calls is important.  It must start with
+ * the "most" features enabled at the same time, then trickle down to the
+ * least amount of features turned on at once.
+ **/
+static void txgbe_cache_ring_register(struct txgbe_adapter *adapter)
+{
+	u16 i;
+
+	for (i = 0; i < adapter->num_rx_queues; i++)
+		adapter->rx_ring[i]->reg_idx = i;
+
+	for (i = 0; i < adapter->num_tx_queues; i++)
+		adapter->tx_ring[i]->reg_idx = i;
+}
+
+/**
+ * txgbe_set_num_queues: Allocate queues for device, feature dependent
+ * @adapter: board private structure to initialize
+ **/
+static void txgbe_set_num_queues(struct txgbe_adapter *adapter)
+{
+	/* Start with base case */
+	adapter->num_rx_queues = 1;
+	adapter->num_tx_queues = 1;
+}
+
+/**
+ * txgbe_acquire_msix_vectors - acquire MSI-X vectors
+ * @adapter: board private structure
+ *
+ * Attempts to acquire a suitable range of MSI-X vector interrupts. Will
+ * return a negative error code if unable to acquire MSI-X vectors for any
+ * reason.
+ */
+static int txgbe_acquire_msix_vectors(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i, vectors, vector_threshold;
+
+	/* We start by asking for one vector per queue pair */
+	vectors = max(adapter->num_rx_queues, adapter->num_tx_queues);
+
+	/* It is easy to be greedy for MSI-X vectors. However, it really
+	 * doesn't do much good if we have a lot more vectors than CPUs. We'll
+	 * be somewhat conservative and only ask for (roughly) the same number
+	 * of vectors as there are CPUs.
+	 */
+	vectors = min_t(int, vectors, num_online_cpus());
+
+	/* Some vectors are necessary for non-queue interrupts */
+	vectors += NON_Q_VECTORS;
+
+	/* Hardware can only support a maximum of hw.mac->max_msix_vectors.
+	 * With features such as RSS and VMDq, we can easily surpass the
+	 * number of Rx and Tx descriptor queues supported by our device.
+	 * Thus, we cap the maximum in the rare cases where the CPU count also
+	 * exceeds our vector limit.
+	 */
+	vectors = min_t(int, vectors, hw->mac.max_msix_vectors);
+
+	/* We want a minimum of two MSI-X vectors for (1) a TxQ[0] + RxQ[0]
+	 * handler, and (2) an Other (Link Status Change, etc.) handler.
+	 */
+	vector_threshold = MIN_MSIX_COUNT;
+
+	adapter->msix_entries = kcalloc(vectors,
+					sizeof(struct msix_entry),
+					GFP_KERNEL);
+	if (!adapter->msix_entries)
+		return -ENOMEM;
+
+	for (i = 0; i < vectors; i++)
+		adapter->msix_entries[i].entry = i;
+
+	vectors = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
+					vector_threshold, vectors);
+	if (vectors < 0) {
+		/* A negative count of allocated vectors indicates an error in
+		 * acquiring within the specified range of MSI-X vectors
+		 */
+		txgbe_dev_warn("Failed to allocate MSI-X interrupts. Err: %d\n",
+			       vectors);
+
+		adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+		kfree(adapter->msix_entries);
+		adapter->msix_entries = NULL;
+
+		return vectors;
+	}
+
+	/* we successfully allocated some number of vectors within our
+	 * requested range.
+	 */
+	adapter->flags |= TXGBE_FLAG_MSIX_ENABLED;
+
+	/* Adjust for only the vectors we'll use, which is minimum
+	 * of max_q_vectors, or the number of vectors we were allocated.
+	 */
+	vectors -= NON_Q_VECTORS;
+	adapter->num_q_vectors = min_t(int, vectors, adapter->max_q_vectors);
+
+	return 0;
+}
+
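
The request size computed above reduces to: one vector per queue pair, capped
at the online CPU count, plus the non-queue vector, clamped to the hardware
maximum. A standalone sketch of that sizing (NON_Q_VECTORS is 1, matching
OTHER_VECTOR + TCP_TIMER_VECTOR in txgbe.h):

#include <stdio.h>

#define NON_Q_VECTORS	1	/* the misc/"other" vector */

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Mirrors the sizing logic of txgbe_acquire_msix_vectors() */
static int msix_request_size(int rxq, int txq, int ncpus, int hw_max)
{
	int vectors = rxq > txq ? rxq : txq;	/* one vector per queue pair */

	vectors = min_int(vectors, ncpus);	/* don't outnumber the CPUs */
	vectors += NON_Q_VECTORS;		/* room for the misc vector */
	return min_int(vectors, hw_max);	/* honor the HW limit */
}

int main(void)
{
	printf("%d\n", msix_request_size(8, 8, 4, 64));	/* prints 5 */
	return 0;
}
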
+static void txgbe_add_ring(struct txgbe_ring *ring,
+			   struct txgbe_ring_container *head)
+{
+	ring->next = head->ring;
+	head->ring = ring;
+	head->count++;
+}
+
+/**
+ * txgbe_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @adapter: board private structure to initialize
+ * @v_count: q_vectors allocated on adapter, used for ring interleaving
+ * @v_idx: index of vector in adapter struct
+ * @txr_count: total number of Tx rings to allocate
+ * @txr_idx: index of first Tx ring to allocate
+ * @rxr_count: total number of Rx rings to allocate
+ * @rxr_idx: index of first Rx ring to allocate
+ *
+ * We allocate one q_vector.  If allocation fails we return -ENOMEM.
+ **/
+static int txgbe_alloc_q_vector(struct txgbe_adapter *adapter,
+				unsigned int v_count, unsigned int v_idx,
+				unsigned int txr_count, unsigned int txr_idx,
+				unsigned int rxr_count, unsigned int rxr_idx)
+{
+	struct txgbe_q_vector *q_vector;
+	struct txgbe_ring *ring;
+	int node = -1;
+	int cpu = -1;
+	int ring_count, size;
+
+	/* note this will allocate space for the ring structure as well! */
+	ring_count = txr_count + rxr_count;
+	size = sizeof(struct txgbe_q_vector) +
+	       (sizeof(struct txgbe_ring) * ring_count);
+
+	/* allocate q_vector and rings */
+	q_vector = kzalloc_node(size, GFP_KERNEL, node);
+	if (!q_vector)
+		q_vector = kzalloc(size, GFP_KERNEL);
+	if (!q_vector)
+		return -ENOMEM;
+
+	/* setup affinity mask and node */
+	if (cpu != -1)
+		cpumask_set_cpu(cpu, &q_vector->affinity_mask);
+	q_vector->numa_node = node;
+
+	/* initialize CPU for DCA */
+	q_vector->cpu = -1;
+
+	/* tie q_vector and adapter together */
+	adapter->q_vector[v_idx] = q_vector;
+	q_vector->adapter = adapter;
+	q_vector->v_idx = v_idx;
+
+	/* initialize work limits */
+	q_vector->tx.work_limit = adapter->tx_work_limit;
+	q_vector->rx.work_limit = adapter->rx_work_limit;
+
+	/* initialize pointer to rings */
+	ring = q_vector->ring;
+
+	/* initialize ITR */
+	if (txr_count && !rxr_count) {
+		/* tx only vector */
+		if (adapter->tx_itr_setting == 1)
+			q_vector->itr = TXGBE_12K_ITR;
+		else
+			q_vector->itr = adapter->tx_itr_setting;
+	} else {
+		/* rx or rx/tx vector */
+		if (adapter->rx_itr_setting == 1)
+			q_vector->itr = TXGBE_20K_ITR;
+		else
+			q_vector->itr = adapter->rx_itr_setting;
+	}
+
+	while (txr_count) {
+		/* assign generic ring traits */
+		ring->dev = pci_dev_to_dev(adapter->pdev);
+		ring->netdev = adapter->netdev;
+
+		/* configure backlink on ring */
+		ring->q_vector = q_vector;
+
+		/* update q_vector Tx values */
+		txgbe_add_ring(ring, &q_vector->tx);
+
+		/* apply Tx specific ring traits */
+		ring->count = adapter->tx_ring_count;
+		ring->queue_index = txr_idx;
+
+		/* assign ring to adapter */
+		adapter->tx_ring[txr_idx] = ring;
+
+		/* update count and index */
+		txr_count--;
+		txr_idx += v_count;
+
+		/* push pointer to next ring */
+		ring++;
+	}
+
+	while (rxr_count) {
+		/* assign generic ring traits */
+		ring->dev = pci_dev_to_dev(adapter->pdev);
+		ring->netdev = adapter->netdev;
+
+		/* configure backlink on ring */
+		ring->q_vector = q_vector;
+
+		/* update q_vector Rx values */
+		txgbe_add_ring(ring, &q_vector->rx);
+
+		/* apply Rx specific ring traits */
+		ring->count = adapter->rx_ring_count;
+		ring->queue_index = rxr_idx;
+
+		/* assign ring to adapter */
+		adapter->rx_ring[rxr_idx] = ring;
+
+		/* update count and index */
+		rxr_count--;
+		rxr_idx += v_count;
+
+		/* push pointer to next ring */
+		ring++;
+	}
+
+	return 0;
+}
+
+/**
+ * txgbe_free_q_vector - Free memory allocated for specific interrupt vector
+ * @adapter: board private structure to initialize
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector.  In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void txgbe_free_q_vector(struct txgbe_adapter *adapter, int v_idx)
+{
+	struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx];
+	struct txgbe_ring *ring;
+
+	txgbe_for_each_ring(ring, q_vector->tx)
+		adapter->tx_ring[ring->queue_index] = NULL;
+
+	txgbe_for_each_ring(ring, q_vector->rx)
+		adapter->rx_ring[ring->queue_index] = NULL;
+
+	adapter->q_vector[v_idx] = NULL;
+	kfree_rcu(q_vector, rcu);
+}
+
+/**
+ * txgbe_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @adapter: board private structure to initialize
+ *
+ * We allocate one q_vector per queue interrupt.  If allocation fails we
+ * return -ENOMEM.
+ **/
+static int txgbe_alloc_q_vectors(struct txgbe_adapter *adapter)
+{
+	unsigned int q_vectors = adapter->num_q_vectors;
+	unsigned int rxr_remaining = adapter->num_rx_queues;
+	unsigned int txr_remaining = adapter->num_tx_queues;
+	unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0;
+	int err;
+
+	if (q_vectors >= (rxr_remaining + txr_remaining)) {
+		for (; rxr_remaining; v_idx++) {
+			err = txgbe_alloc_q_vector(adapter, q_vectors, v_idx,
+						   0, 0, 1, rxr_idx);
+			if (err)
+				goto err_out;
+
+			/* update counts and index */
+			rxr_remaining--;
+			rxr_idx++;
+		}
+	}
+
+	for (; v_idx < q_vectors; v_idx++) {
+		int rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx);
+		int tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx);
+
+		err = txgbe_alloc_q_vector(adapter, q_vectors, v_idx,
+					   tqpv, txr_idx,
+					   rqpv, rxr_idx);
+
+		if (err)
+			goto err_out;
+
+		/* update counts and index */
+		rxr_remaining -= rqpv;
+		txr_remaining -= tqpv;
+		rxr_idx++;
+		txr_idx++;
+	}
+
+	return 0;
+
+err_out:
+	adapter->num_tx_queues = 0;
+	adapter->num_rx_queues = 0;
+	adapter->num_q_vectors = 0;
+
+	while (v_idx--)
+		txgbe_free_q_vector(adapter, v_idx);
+
+	return -ENOMEM;
+}
+
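
The second loop above hands each remaining vector
ceil(remaining rings / remaining vectors) rings, so an uneven ring count is
spread as evenly as possible. A standalone sketch of the distribution:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int rings = 10, vectors = 4, v;

	for (v = 0; v < vectors; v++) {
		unsigned int take = DIV_ROUND_UP(rings, vectors - v);

		printf("vector %u: %u rings\n", v, take);
		rings -= take;
	}
	return 0;	/* assigns 3, 3, 2, 2 */
}
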
+/**
+ * txgbe_free_q_vectors - Free memory allocated for interrupt vectors
+ * @adapter: board private structure to initialize
+ *
+ * This function frees the memory allocated to the q_vectors.  In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void txgbe_free_q_vectors(struct txgbe_adapter *adapter)
+{
+	int v_idx = adapter->num_q_vectors;
+
+	adapter->num_tx_queues = 0;
+	adapter->num_rx_queues = 0;
+	adapter->num_q_vectors = 0;
+
+	while (v_idx--)
+		txgbe_free_q_vector(adapter, v_idx);
+}
+
+void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter)
+{
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+		adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+		pci_disable_msix(adapter->pdev);
+		kfree(adapter->msix_entries);
+		adapter->msix_entries = NULL;
+	} else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) {
+		adapter->flags &= ~TXGBE_FLAG_MSI_ENABLED;
+		pci_disable_msi(adapter->pdev);
+	}
+}
+
+/**
+ * txgbe_set_interrupt_capability - set MSI-X or MSI if supported
+ * @adapter: board private structure to initialize
+ *
+ * Attempt to configure the interrupts using the best available
+ * capabilities of the hardware and the kernel.
+ **/
+void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter)
+{
+	int err;
+
+	/* We will try to get MSI-X interrupts first */
+	if (!txgbe_acquire_msix_vectors(adapter))
+		return;
+
+	/* recalculate number of queues now that many features have been
+	 * changed or disabled.
+	 */
+	txgbe_set_num_queues(adapter);
+	adapter->num_q_vectors = 1;
+
+	err = pci_enable_msi(adapter->pdev);
+	if (err)
+		txgbe_dev_warn("Failed to allocate MSI interrupt, falling back to legacy. Error: %d\n",
+			       err);
+	else
+		adapter->flags |= TXGBE_FLAG_MSI_ENABLED;
+}
+
+/**
+ * txgbe_init_interrupt_scheme - Determine proper interrupt scheme
+ * @adapter: board private structure to initialize
+ *
+ * We determine which interrupt scheme to use based on...
+ * - Kernel support (MSI, MSI-X)
+ *   - which can be user-defined (via MODULE_PARAM)
+ * - Hardware queue count (num_*_queues)
+ *   - defined by miscellaneous hardware support/features (RSS, etc.)
+ **/
+int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter)
+{
+	int err;
+
+	/* Number of supported queues */
+	txgbe_set_num_queues(adapter);
+
+	/* Set interrupt mode */
+	txgbe_set_interrupt_capability(adapter);
+
+	/* Allocate memory for queues */
+	err = txgbe_alloc_q_vectors(adapter);
+	if (err) {
+		txgbe_err(probe, "Unable to allocate memory for queue vectors\n");
+		txgbe_reset_interrupt_capability(adapter);
+		return err;
+	}
+
+	txgbe_cache_ring_register(adapter);
+
+	set_bit(__TXGBE_DOWN, &adapter->state);
+
+	return 0;
+}
+
+/**
+ * txgbe_clear_interrupt_scheme - Clear the current interrupt scheme settings
+ * @adapter: board private structure to clear interrupt scheme on
+ *
+ * We go through and clear interrupt specific resources and reset the structure
+ * to pre-load conditions
+ **/
+void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter)
+{
+	txgbe_free_q_vectors(adapter);
+	txgbe_reset_interrupt_capability(adapter);
+}
+
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index e49a31cefb67..385c0f35e82a 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -21,6 +21,11 @@ static const char txgbe_driver_string[] =
 
 static const char txgbe_copyright[] =
 	"Copyright (c) 2015 -2017 Beijing WangXun Technology Co., Ltd";
+static const char txgbe_overheat_msg[] =
+	"Network adapter has been stopped because it has overheated. "
+	"If the problem persists, restart the computer, or power off the system and replace the adapter";
+static const char txgbe_underheat_msg[] =
+	"Network adapter has been started again since the temperature has returned to normal";
 
 /* txgbe_pci_tbl - PCI Device ID Table
  *
@@ -140,6 +145,510 @@ static void txgbe_get_hw_control(struct txgbe_adapter *adapter)
 	      TXGBE_CFG_PORT_CTL_DRV_LOAD, TXGBE_CFG_PORT_CTL_DRV_LOAD);
 }
 
+/**
+ * txgbe_set_ivar - set the IVAR registers, mapping interrupt causes to vectors
+ * @adapter: pointer to adapter struct
+ * @direction: 0 for Rx, 1 for Tx, -1 for other causes
+ * @queue: queue to map the corresponding interrupt to
+ * @msix_vector: the vector to map to the corresponding queue
+ *
+ **/
+static void txgbe_set_ivar(struct txgbe_adapter *adapter, s8 direction,
+			   u16 queue, u16 msix_vector)
+{
+	u32 ivar, index;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (direction == -1) {
+		/* other causes */
+		msix_vector |= TXGBE_PX_IVAR_ALLOC_VAL;
+		index = 0;
+		ivar = rd32(&adapter->hw, TXGBE_PX_MISC_IVAR);
+		ivar &= ~(0xFF << index);
+		ivar |= (msix_vector << index);
+		wr32(&adapter->hw, TXGBE_PX_MISC_IVAR, ivar);
+	} else {
+		/* tx or rx causes */
+		msix_vector |= TXGBE_PX_IVAR_ALLOC_VAL;
+		index = ((16 * (queue & 1)) + (8 * direction));
+		ivar = rd32(hw, TXGBE_PX_IVAR(queue >> 1));
+		ivar &= ~(0xFF << index);
+		ivar |= (msix_vector << index);
+		wr32(hw, TXGBE_PX_IVAR(queue >> 1), ivar);
+	}
+}
+
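
Each 32-bit PX_IVAR register holds four 8-bit entries: the Rx and Tx causes
for an even/odd queue pair. The index math used by txgbe_set_ivar() above can
be checked with a standalone sketch:

#include <stdio.h>

static void ivar_slot(unsigned int queue, int direction /* 0 = Rx, 1 = Tx */)
{
	unsigned int reg = queue >> 1;				/* register number */
	unsigned int shift = 16 * (queue & 1) + 8 * direction;	/* bit offset */

	printf("queue %u %s -> IVAR(%u), bits [%u:%u]\n",
	       queue, direction ? "Tx" : "Rx", reg, shift + 7, shift);
}

int main(void)
{
	ivar_slot(0, 0);	/* IVAR(0), bits [7:0]   */
	ivar_slot(0, 1);	/* IVAR(0), bits [15:8]  */
	ivar_slot(1, 0);	/* IVAR(0), bits [23:16] */
	ivar_slot(1, 1);	/* IVAR(0), bits [31:24] */
	return 0;
}
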
+/**
+ * txgbe_configure_msix - Configure MSI-X hardware
+ * @adapter: board private structure
+ *
+ * txgbe_configure_msix sets up the hardware to properly generate MSI-X
+ * interrupts.
+ **/
+static void txgbe_configure_msix(struct txgbe_adapter *adapter)
+{
+	u16 v_idx;
+
+	/* Populate MSIX to EITR Select */
+	wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0);
+
+	/* Populate the IVAR table and set the ITR values to the
+	 * corresponding register.
+	 */
+	for (v_idx = 0; v_idx < adapter->num_q_vectors; v_idx++) {
+		struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx];
+		struct txgbe_ring *ring;
+
+		txgbe_for_each_ring(ring, q_vector->rx)
+			txgbe_set_ivar(adapter, 0, ring->reg_idx, v_idx);
+
+		txgbe_for_each_ring(ring, q_vector->tx)
+			txgbe_set_ivar(adapter, 1, ring->reg_idx, v_idx);
+
+		txgbe_write_eitr(q_vector);
+	}
+
+	txgbe_set_ivar(adapter, -1, 0, v_idx);
+
+	wr32(&adapter->hw, TXGBE_PX_ITR(v_idx), 1950);
+}
+
+/**
+ * txgbe_write_eitr - write EITR register in hardware specific way
+ * @q_vector: structure containing interrupt and ring information
+ *
+ * This function is made to be called by ethtool and by the driver
+ * when it needs to update EITR registers at runtime.  Hardware
+ * specific quirks/differences are taken care of here.
+ */
+void txgbe_write_eitr(struct txgbe_q_vector *q_vector)
+{
+	struct txgbe_adapter *adapter = q_vector->adapter;
+	struct txgbe_hw *hw = &adapter->hw;
+	int v_idx = q_vector->v_idx;
+	u32 itr_reg = q_vector->itr & TXGBE_MAX_EITR;
+
+	itr_reg |= TXGBE_PX_ITR_CNT_WDIS;
+
+	wr32(hw, TXGBE_PX_ITR(v_idx), itr_reg);
+}
+
+/**
+ * txgbe_check_overtemp_subtask - check for over temperature
+ * @adapter: pointer to adapter
+ **/
+static void txgbe_check_overtemp_subtask(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 eicr = adapter->interrupt_event;
+	s32 temp_state;
+
+	if (test_bit(__TXGBE_DOWN, &adapter->state))
+		return;
+	if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_EVENT))
+		return;
+
+	adapter->flags2 &= ~TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+
+	/* Since the warning interrupt is for both ports
+	 * we don't have to check if:
+	 *  - This interrupt wasn't for our port.
+	 *  - We may have missed the interrupt so always have to
+	 *    check if we got an LSC
+	 */
+	if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
+		return;
+
+	temp_state = TCALL(hw, phy.ops.check_overtemp);
+	if (!temp_state || temp_state == TXGBE_NOT_IMPLEMENTED)
+		return;
+
+	if (temp_state == TXGBE_ERR_UNDERTEMP &&
+	    test_bit(__TXGBE_HANGING, &adapter->state)) {
+		txgbe_crit(drv, "%s\n", txgbe_underheat_msg);
+		wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
+		      TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
+		netif_carrier_on(adapter->netdev);
+		clear_bit(__TXGBE_HANGING, &adapter->state);
+	} else if (temp_state == TXGBE_ERR_OVERTEMP &&
+		!test_and_set_bit(__TXGBE_HANGING, &adapter->state)) {
+		txgbe_crit(drv, "%s\n", txgbe_overheat_msg);
+		netif_carrier_off(adapter->netdev);
+		wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
+		      TXGBE_RDB_PB_CTL_RXEN, 0);
+	}
+
+	adapter->interrupt_event = 0;
+}
+
+static void txgbe_check_overtemp_event(struct txgbe_adapter *adapter, u32 eicr)
+{
+	if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
+		return;
+
+	if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+		adapter->interrupt_event = eicr;
+		adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+		txgbe_service_event_schedule(adapter);
+	}
+}
+
+static void txgbe_check_sfp_event(struct txgbe_adapter *adapter, u32 eicr)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 eicr_mask = TXGBE_PX_MISC_IC_GPIO;
+	u32 reg;
+
+	if (eicr & eicr_mask) {
+		if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+			wr32(hw, TXGBE_GPIO_INTMASK, 0xFF);
+			reg = rd32(hw, TXGBE_GPIO_INTSTATUS);
+			if (reg & TXGBE_GPIO_INTSTATUS_2) {
+				adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+				wr32(hw, TXGBE_GPIO_EOI,
+				     TXGBE_GPIO_EOI_2);
+				adapter->sfp_poll_time = 0;
+				txgbe_service_event_schedule(adapter);
+			}
+			if (reg & TXGBE_GPIO_INTSTATUS_3) {
+				adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+				wr32(hw, TXGBE_GPIO_EOI,
+				     TXGBE_GPIO_EOI_3);
+				txgbe_service_event_schedule(adapter);
+			}
+
+			if (reg & TXGBE_GPIO_INTSTATUS_6) {
+				wr32(hw, TXGBE_GPIO_EOI,
+				     TXGBE_GPIO_EOI_6);
+				adapter->flags |=
+					TXGBE_FLAG_NEED_LINK_CONFIG;
+				txgbe_service_event_schedule(adapter);
+			}
+			wr32(hw, TXGBE_GPIO_INTMASK, 0x0);
+		}
+	}
+}
+
+static void txgbe_check_lsc(struct txgbe_adapter *adapter)
+{
+	adapter->lsc_int++;
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+	adapter->link_check_timeout = jiffies;
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_service_event_schedule(adapter);
+}
+
+/**
+ * txgbe_irq_enable - Enable default interrupt generation settings
+ * @adapter: board private structure
+ * @queues: also unmask the queue interrupts
+ * @flush: flush the register writes when done
+ **/
+void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush)
+{
+	u32 mask = 0;
+	struct txgbe_hw *hw = &adapter->hw;
+	u8 device_type = hw->subsystem_id & 0xF0;
+
+	/* enable gpio interrupt */
+	if (device_type != TXGBE_ID_MAC_XAUI &&
+	    device_type != TXGBE_ID_MAC_SGMII) {
+		mask |= TXGBE_GPIO_INTEN_2;
+		mask |= TXGBE_GPIO_INTEN_3;
+		mask |= TXGBE_GPIO_INTEN_6;
+	}
+	wr32(&adapter->hw, TXGBE_GPIO_INTEN, mask);
+
+	if (device_type != TXGBE_ID_MAC_XAUI &&
+	    device_type != TXGBE_ID_MAC_SGMII) {
+		mask = TXGBE_GPIO_INTTYPE_LEVEL_2 | TXGBE_GPIO_INTTYPE_LEVEL_3 |
+			TXGBE_GPIO_INTTYPE_LEVEL_6;
+	}
+	wr32(&adapter->hw, TXGBE_GPIO_INTTYPE_LEVEL, mask);
+
+	/* enable misc interrupt */
+	mask = TXGBE_PX_MISC_IEN_MASK;
+
+	mask |= TXGBE_PX_MISC_IEN_OVER_HEAT;
+
+	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask);
+
+	/* unmask interrupt */
+	txgbe_intr_enable(&adapter->hw, TXGBE_INTR_MISC(adapter));
+	if (queues)
+		txgbe_intr_enable(&adapter->hw, TXGBE_INTR_QALL(adapter));
+
+	/* flush configuration */
+	if (flush)
+		TXGBE_WRITE_FLUSH(&adapter->hw);
+}
+
+static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
+{
+	struct txgbe_adapter *adapter = data;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 eicr;
+	u32 ecc;
+
+	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+
+	if (eicr & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
+		txgbe_check_lsc(adapter);
+
+	if (eicr & TXGBE_PX_MISC_IC_INT_ERR) {
+		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
+		ecc = rd32(hw, TXGBE_MIS_ST);
+		if (((ecc & TXGBE_MIS_ST_LAN0_ECC) && hw->bus.lan_id == 0) ||
+		    ((ecc & TXGBE_MIS_ST_LAN1_ECC) && hw->bus.lan_id == 1))
+			adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+
+		txgbe_service_event_schedule(adapter);
+	}
+	if (eicr & TXGBE_PX_MISC_IC_DEV_RST) {
+		adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
+		txgbe_service_event_schedule(adapter);
+	}
+	if ((eicr & TXGBE_PX_MISC_IC_STALL) ||
+	    (eicr & TXGBE_PX_MISC_IC_ETH_EVENT)) {
+		adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+		txgbe_service_event_schedule(adapter);
+	}
+
+	txgbe_check_sfp_event(adapter, eicr);
+	txgbe_check_overtemp_event(adapter, eicr);
+
+	/* re-enable the original interrupt state, no lsc, no queues */
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_irq_enable(adapter, false, false);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t txgbe_msix_clean_rings(int __always_unused irq, void *data)
+{
+	struct txgbe_q_vector *q_vector = data;
+
+	/* EIAM disabled interrupts (on this vector) for us */
+
+	if (q_vector->rx.ring || q_vector->tx.ring)
+		napi_schedule_irqoff(&q_vector->napi);
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * txgbe_request_msix_irqs - Initialize MSI-X interrupts
+ * @adapter: board private structure
+ *
+ * txgbe_request_msix_irqs allocates MSI-X vectors and requests
+ * interrupts from the kernel.
+ **/
+static int txgbe_request_msix_irqs(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	int vector, err;
+	int ri = 0, ti = 0;
+
+	for (vector = 0; vector < adapter->num_q_vectors; vector++) {
+		struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
+		struct msix_entry *entry = &adapter->msix_entries[vector];
+
+		if (q_vector->tx.ring && q_vector->rx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-TxRx-%d", netdev->name, ri++);
+			ti++;
+		} else if (q_vector->rx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-rx-%d", netdev->name, ri++);
+		} else if (q_vector->tx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-tx-%d", netdev->name, ti++);
+		} else {
+			/* skip this unused q_vector */
+			continue;
+		}
+		err = request_irq(entry->vector, &txgbe_msix_clean_rings, 0,
+				  q_vector->name, q_vector);
+		if (err) {
+			txgbe_err(probe, "request_irq failed for MSIX interrupt '%s' Error: %d\n",
+				  q_vector->name, err);
+			goto free_queue_irqs;
+		}
+	}
+
+	err = request_irq(adapter->msix_entries[vector].vector,
+			  txgbe_msix_other, 0, netdev->name, adapter);
+	if (err) {
+		txgbe_err(probe, "request_irq for msix_other failed: %d\n", err);
+		goto free_queue_irqs;
+	}
+
+	return 0;
+
+free_queue_irqs:
+	while (vector) {
+		vector--;
+		irq_set_affinity_hint(adapter->msix_entries[vector].vector,
+				      NULL);
+		free_irq(adapter->msix_entries[vector].vector,
+			 adapter->q_vector[vector]);
+	}
+	adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+	pci_disable_msix(adapter->pdev);
+	kfree(adapter->msix_entries);
+	adapter->msix_entries = NULL;
+	return err;
+}
+
+/**
+ * txgbe_intr - legacy mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a network interface device structure
+ **/
+static irqreturn_t txgbe_intr(int __always_unused irq, void *data)
+{
+	struct txgbe_adapter *adapter = data;
+	struct txgbe_q_vector *q_vector = adapter->q_vector[0];
+	u32 eicr;
+	u32 eicr_misc;
+
+	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
+	if (!eicr) {
+		/* shared interrupt alert! the interrupt was not ours,
+		 * so re-enable what was masked before the ISB read.
+		 */
+		if (!test_bit(__TXGBE_DOWN, &adapter->state))
+			txgbe_irq_enable(adapter, true, true);
+		return IRQ_NONE;        /* Not our interrupt */
+	}
+	adapter->isb_mem[TXGBE_ISB_VEC0] = 0;
+	if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED))
+		wr32(&adapter->hw, TXGBE_PX_INTA, 1);
+
+	eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+	if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
+		txgbe_check_lsc(adapter);
+
+	if (eicr_misc & TXGBE_PX_MISC_IC_INT_ERR) {
+		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
+		adapter->flags2 |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+		txgbe_service_event_schedule(adapter);
+	}
+
+	if (eicr_misc & TXGBE_PX_MISC_IC_DEV_RST) {
+		adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
+		txgbe_service_event_schedule(adapter);
+	}
+	txgbe_check_sfp_event(adapter, eicr_misc);
+	txgbe_check_overtemp_event(adapter, eicr_misc);
+
+	adapter->isb_mem[TXGBE_ISB_MISC] = 0;
+	/* would disable interrupts here but it is auto disabled */
+	napi_schedule_irqoff(&q_vector->napi);
+
+	/* re-enable link (maybe) and non-queue interrupts, no flush.
+	 * txgbe_poll will re-enable the queue interrupts
+	 */
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_irq_enable(adapter, false, false);
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * txgbe_request_irq - initialize interrupts
+ * @adapter: board private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+static int txgbe_request_irq(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	int err;
+
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+		err = txgbe_request_msix_irqs(adapter);
+	else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED)
+		err = request_irq(adapter->pdev->irq, &txgbe_intr, 0,
+				  netdev->name, adapter);
+	else
+		err = request_irq(adapter->pdev->irq, &txgbe_intr, IRQF_SHARED,
+				  netdev->name, adapter);
+
+	if (err)
+		txgbe_err(probe, "request_irq failed, Error %d\n", err);
+
+	return err;
+}
+
+static void txgbe_free_irq(struct txgbe_adapter *adapter)
+{
+	int vector;
+
+	if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) {
+		free_irq(adapter->pdev->irq, adapter);
+		return;
+	}
+
+	for (vector = 0; vector < adapter->num_q_vectors; vector++) {
+		struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
+		struct msix_entry *entry = &adapter->msix_entries[vector];
+
+		/* free only the irqs that were actually requested */
+		if (!q_vector->rx.ring && !q_vector->tx.ring)
+			continue;
+
+		/* clear the affinity_mask in the IRQ descriptor */
+		irq_set_affinity_hint(entry->vector, NULL);
+		free_irq(entry->vector, q_vector);
+	}
+
+	free_irq(adapter->msix_entries[vector++].vector, adapter);
+}
+
+/**
+ * txgbe_irq_disable - Mask off interrupt generation on the NIC
+ * @adapter: board private structure
+ **/
+void txgbe_irq_disable(struct txgbe_adapter *adapter)
+{
+	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, 0);
+	txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+
+	TXGBE_WRITE_FLUSH(&adapter->hw);
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+		int vector;
+
+		for (vector = 0; vector < adapter->num_q_vectors; vector++)
+			synchronize_irq(adapter->msix_entries[vector].vector);
+
+		synchronize_irq(adapter->msix_entries[vector++].vector);
+	} else {
+		synchronize_irq(adapter->pdev->irq);
+	}
+}
+
+/**
+ * txgbe_configure_msi_and_legacy - Initialize PIN (INTA...) and MSI interrupts
+ * @adapter: board private structure
+ **/
+static void txgbe_configure_msi_and_legacy(struct txgbe_adapter *adapter)
+{
+	struct txgbe_q_vector *q_vector = adapter->q_vector[0];
+	struct txgbe_ring *ring;
+
+	txgbe_write_eitr(q_vector);
+
+	txgbe_for_each_ring(ring, q_vector->rx)
+		txgbe_set_ivar(adapter, 0, ring->reg_idx, 0);
+
+	txgbe_for_each_ring(ring, q_vector->tx)
+		txgbe_set_ivar(adapter, 1, ring->reg_idx, 0);
+
+	txgbe_set_ivar(adapter, -1, 0, 1);
+
+	txgbe_info(hw, "Legacy interrupt IVAR setup done\n");
+}
+
 static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
@@ -191,6 +700,21 @@ static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
 	txgbe_sync_mac_table(adapter);
 }
 
+void txgbe_configure_isb(struct txgbe_adapter *adapter)
+{
+	/* set ISB Address */
+	struct txgbe_hw *hw = &adapter->hw;
+
+	wr32(hw, TXGBE_PX_ISB_ADDR_L,
+	     adapter->isb_dma & DMA_BIT_MASK(32));
+	wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32);
+}
+
+static void txgbe_configure(struct txgbe_adapter *adapter)
+{
+	txgbe_configure_isb(adapter);
+}
+
 static bool txgbe_is_sfp(struct txgbe_hw *hw)
 {
 	switch (TCALL(hw, mac.ops.get_media_type)) {
@@ -227,12 +751,29 @@ static void txgbe_sfp_link_config(struct txgbe_adapter *adapter)
 	adapter->sfp_poll_time = 0;
 }
 
+static void txgbe_setup_gpie(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 gpie = 0;
+
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+		gpie = TXGBE_PX_GPIE_MODEL;
+
+	wr32(hw, TXGBE_PX_GPIE, gpie);
+}
+
 static void txgbe_up_complete(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
 	u32 links_reg;
 
 	txgbe_get_hw_control(adapter);
+	txgbe_setup_gpie(adapter);
+
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+		txgbe_configure_msix(adapter);
+	else
+		txgbe_configure_msi_and_legacy(adapter);
 
 	/* enable the optics for SFP+ fiber */
 	TCALL(hw, mac.ops.enable_tx_laser);
@@ -262,6 +803,22 @@ static void txgbe_up_complete(struct txgbe_adapter *adapter)
 		}
 	}
 
+	/* clear any pending interrupts, may auto mask */
+	rd32(hw, TXGBE_PX_IC(0));
+	rd32(hw, TXGBE_PX_IC(1));
+	rd32(hw, TXGBE_PX_MISC_IC);
+	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI)
+		wr32(hw, TXGBE_GPIO_EOI, TXGBE_GPIO_EOI_6);
+	txgbe_irq_enable(adapter, true, true);
+
+	/* bring the link up in the watchdog, this could race with our first
+	 * link up interrupt but shouldn't be a problem
+	 */
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+	adapter->link_check_timeout = jiffies;
+
+	mod_timer(&adapter->service_timer, jiffies);
+
 	if (hw->bus.lan_id == 0) {
 		wr32m(hw, TXGBE_MIS_PRB_CTL,
 		      TXGBE_MIS_PRB_CTL_LAN0_UP, TXGBE_MIS_PRB_CTL_LAN0_UP);
@@ -278,6 +835,26 @@ static void txgbe_up_complete(struct txgbe_adapter *adapter)
 	      TXGBE_CFG_PORT_CTL_PFRSTD, TXGBE_CFG_PORT_CTL_PFRSTD);
 }
 
+void txgbe_reinit_locked(struct txgbe_adapter *adapter)
+{
+	/* put off any impending NetWatchDogTimeout */
+	netif_trans_update(adapter->netdev);
+
+	while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
+		usleep_range(1000, 2000);
+	txgbe_down(adapter);
+	txgbe_up(adapter);
+	clear_bit(__TXGBE_RESETTING, &adapter->state);
+}
+
+void txgbe_up(struct txgbe_adapter *adapter)
+{
+	/* hardware has been reset, we need to reload some things */
+	txgbe_configure(adapter);
+
+	txgbe_up_complete(adapter);
+}
+
 void txgbe_reset(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
@@ -336,6 +913,10 @@ void txgbe_disable_device(struct txgbe_adapter *adapter)
 	netif_carrier_off(netdev);
 	netif_tx_disable(netdev);
 
+	txgbe_irq_disable(adapter);
+
+	adapter->flags2 &= ~(TXGBE_FLAG2_PF_RESET_REQUESTED |
+			     TXGBE_FLAG2_GLOBAL_RESET_REQUESTED);
 	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
 
 	del_timer_sync(&adapter->service_timer);
@@ -451,6 +1032,41 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 	return err;
 }
 
+/**
+ * txgbe_setup_isb_resources - allocate interrupt status resources
+ * @adapter: board private structure
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_isb_resources(struct txgbe_adapter *adapter)
+{
+	struct device *dev = pci_dev_to_dev(adapter->pdev);
+
+	adapter->isb_mem = dma_alloc_coherent(dev,
+					      sizeof(u32) * TXGBE_ISB_MAX,
+					      &adapter->isb_dma,
+					      GFP_KERNEL);
+	if (!adapter->isb_mem)
+		return -ENOMEM;
+	memset(adapter->isb_mem, 0, sizeof(u32) * TXGBE_ISB_MAX);
+	return 0;
+}
+
+/**
+ * txgbe_free_isb_resources - free interrupt status resources
+ * @adapter: board private structure
+ **/
+static void txgbe_free_isb_resources(struct txgbe_adapter *adapter)
+{
+	struct device *dev = pci_dev_to_dev(adapter->pdev);
+
+	dma_free_coherent(dev, sizeof(u32) * TXGBE_ISB_MAX,
+			  adapter->isb_mem, adapter->isb_dma);
+	adapter->isb_mem = NULL;
+}
+
 /**
  * txgbe_open - Called when a network interface is made active
  * @netdev: network interface device structure
@@ -466,12 +1082,39 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 int txgbe_open(struct net_device *netdev)
 {
 	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	int err;
 
 	netif_carrier_off(netdev);
 
+	err = txgbe_setup_isb_resources(adapter);
+	if (err)
+		goto err_req_isb;
+
+	txgbe_configure(adapter);
+
+	err = txgbe_request_irq(adapter);
+	if (err)
+		goto err_req_irq;
+
+	/* Notify the stack of the actual queue counts. */
+	err = netif_set_real_num_tx_queues(netdev, adapter->num_tx_queues);
+	if (err)
+		goto err_set_queues;
+
+	err = netif_set_real_num_rx_queues(netdev, adapter->num_rx_queues);
+	if (err)
+		goto err_set_queues;
+
 	txgbe_up_complete(adapter);
 
 	return 0;
+
+err_set_queues:
+	txgbe_free_irq(adapter);
+err_req_irq:
+	txgbe_free_isb_resources(adapter);
+err_req_isb:
+	return err;
 }
 
 /**
@@ -488,6 +1131,9 @@ static void txgbe_close_suspend(struct txgbe_adapter *adapter)
 	txgbe_disable_device(adapter);
 	if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
 		TCALL(hw, mac.ops.disable_tx_laser);
+	txgbe_free_irq(adapter);
+
+	txgbe_free_isb_resources(adapter);
 }
 
 /**
@@ -506,6 +1152,9 @@ int txgbe_close(struct net_device *netdev)
 	struct txgbe_adapter *adapter = netdev_priv(netdev);
 
 	txgbe_down(adapter);
+	txgbe_free_irq(adapter);
+
+	txgbe_free_isb_resources(adapter);
 
 	txgbe_release_hw_control(adapter);
 
@@ -524,6 +1173,8 @@ static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 		txgbe_close_suspend(adapter);
 	rtnl_unlock();
 
+	txgbe_clear_interrupt_scheme(adapter);
+
 	txgbe_release_hw_control(adapter);
 
 	if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
@@ -811,6 +1462,83 @@ static void txgbe_service_timer(struct timer_list *t)
 	txgbe_service_event_schedule(adapter);
 }
 
+static void txgbe_reset_subtask(struct txgbe_adapter *adapter)
+{
+	u32 reset_flag = 0;
+	u32 value = 0;
+
+	if (!(adapter->flags2 & (TXGBE_FLAG2_PF_RESET_REQUESTED |
+				 TXGBE_FLAG2_GLOBAL_RESET_REQUESTED |
+				 TXGBE_FLAG2_RESET_INTR_RECEIVED)))
+		return;
+
+	/* If we're already down, just bail */
+	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+	    test_bit(__TXGBE_REMOVING, &adapter->state))
+		return;
+
+	netdev_err(adapter->netdev, "Reset adapter\n");
+
+	rtnl_lock();
+	if (adapter->flags2 & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
+		reset_flag |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+		adapter->flags2 &= ~TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+	}
+	if (adapter->flags2 & TXGBE_FLAG2_PF_RESET_REQUESTED) {
+		reset_flag |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+		adapter->flags2 &= ~TXGBE_FLAG2_PF_RESET_REQUESTED;
+	}
+
+	if (adapter->flags2 & TXGBE_FLAG2_RESET_INTR_RECEIVED) {
+		/* If there's a recovery already waiting, it takes
+		 * precedence before starting a new reset sequence.
+		 */
+		adapter->flags2 &= ~TXGBE_FLAG2_RESET_INTR_RECEIVED;
+		value = rd32m(&adapter->hw, TXGBE_MIS_RST_ST,
+			      TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK) >>
+			TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT;
+		if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST) {
+			adapter->hw.reset_type = TXGBE_SW_RESET;
+			/* errata 7 */
+			if (txgbe_mng_present(&adapter->hw) &&
+			    adapter->hw.revision_id == TXGBE_SP_MPW)
+				adapter->flags2 |=
+					TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED;
+		} else if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST) {
+			adapter->hw.reset_type = TXGBE_GLOBAL_RESET;
+		}
+		adapter->hw.force_full_reset = true;
+		txgbe_reinit_locked(adapter);
+		adapter->hw.force_full_reset = false;
+		goto unlock;
+	}
+
+	if (reset_flag & TXGBE_FLAG2_PF_RESET_REQUESTED) {
+		txgbe_reinit_locked(adapter);
+	} else if (reset_flag & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
+		/* Request a Global Reset
+		 *
+		 * This will start the chip's countdown to the actual full
+		 * chip reset event, and a warning interrupt to be sent
+		 * to all PFs, including the requestor.  Our handler
+		 * for the warning interrupt will deal with the shutdown
+		 * and recovery of the switch setup.
+		 */
+		pci_save_state(adapter->pdev);
+		if (txgbe_mng_present(&adapter->hw))
+			txgbe_reset_hostif(&adapter->hw);
+		else
+			wr32m(&adapter->hw, TXGBE_MIS_RST,
+			      TXGBE_MIS_RST_GLOBAL_RST,
+			      TXGBE_MIS_RST_GLOBAL_RST);
+	}
+
+unlock:
+	rtnl_unlock();
+}
+
 /**
  * txgbe_service_task - manages and runs subtasks
  * @work: pointer to work_struct containing our data
@@ -830,8 +1558,10 @@ static void txgbe_service_task(struct work_struct *work)
 		return;
 	}
 
+	txgbe_reset_subtask(adapter);
 	txgbe_sfp_detection_subtask(adapter);
 	txgbe_sfp_link_config_subtask(adapter);
+	txgbe_check_overtemp_subtask(adapter);
 	txgbe_watchdog_subtask(adapter);
 
 	txgbe_service_event_complete(adapter);
@@ -1053,6 +1783,10 @@ static int txgbe_probe(struct pci_dev *pdev,
 	set_bit(__TXGBE_SERVICE_INITED, &adapter->state);
 	clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
 
+	err = txgbe_init_interrupt_scheme(adapter);
+	if (err)
+		goto err_sw_init;
+
 	/* Save off EEPROM version number and Option Rom version which
 	 * together make a unique identify for the eeprom
 	 */
@@ -1167,6 +1901,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 	return 0;
 
 err_register:
+	txgbe_clear_interrupt_scheme(adapter);
 	txgbe_release_hw_control(adapter);
 err_sw_init:
 	kfree(adapter->mac_table);
@@ -1215,6 +1950,7 @@ static void txgbe_remove(struct pci_dev *pdev)
 		adapter->netdev_registered = false;
 	}
 
+	txgbe_clear_interrupt_scheme(adapter);
 	txgbe_release_hw_control(adapter);
 
 	iounmap(adapter->io_addr);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
index bd4c5e117358..1e3137506812 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
@@ -399,3 +399,25 @@ s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
 				       data, true);
 }
 
+/**
+ *  txgbe_check_overtemp - Checks if an overtemp occurred.
+ *  @hw: pointer to hardware structure
+ *
+ *  Checks if the LASI temp alarm status was triggered due to overtemp
+ **/
+s32 txgbe_check_overtemp(struct txgbe_hw *hw)
+{
+	s32 status = 0;
+	u32 ts_state;
+
+	/* Check that the LASI temp alarm status was triggered */
+	ts_state = rd32(hw, TXGBE_TS_ALARM_ST);
+
+	if (ts_state & TXGBE_TS_ALARM_ST_DALARM)
+		status = TXGBE_ERR_UNDERTEMP;
+	else if (ts_state & TXGBE_TS_ALARM_ST_ALARM)
+		status = TXGBE_ERR_OVERTEMP;
+
+	return status;
+}
+
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
index 4798623fb735..3efb4abef03a 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
@@ -43,6 +43,7 @@ s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
 
 s32 txgbe_identify_module(struct txgbe_hw *hw);
 s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
+s32 txgbe_check_overtemp(struct txgbe_hw *hw);
 s32 txgbe_init_i2c(struct txgbe_hw *hw);
 s32 txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr);
 s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index c068fa933ab6..ff2b7a4f028b 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -2002,6 +2002,7 @@ struct txgbe_phy_operations {
 			     u8 dev_addr, u8 *data);
 	s32 (*read_i2c_eeprom)(struct txgbe_hw *hw, u8 byte_offset,
 			       u8 *eeprom_data);
+	s32 (*check_overtemp)(struct txgbe_hw *hw);
 };
 
 struct txgbe_eeprom_info {
@@ -2032,6 +2033,7 @@ struct txgbe_mac_info {
 	u32 orig_sr_an_mmd_adv_reg2;
 	u32 orig_vr_xs_or_pcs_mmd_digi_ctl1;
 	u8  san_mac_rar_index;
+	u16 max_msix_vectors;
 	bool orig_link_settings_stored;
 	bool autotry_restart;
 	struct txgbe_thermal_sensor_data  thermal_sensor_data;
-- 
2.27.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 06/14] net: txgbe: Support to receive and transmit packets
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (4 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 05/14] net: txgbe: Add interrupt support Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 07/14] net: txgbe: Support flow control Jiawen Wu
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support for passing traffic. RSS is enabled on Rx queues, and the
Tx path supports TSO, checksum offload, etc.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  308 +
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c |  751 ++-
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   75 +
 .../net/ethernet/wangxun/txgbe/txgbe_lib.c    |   62 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   | 6003 ++++++++++++++---
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |  372 +-
 6 files changed, 6510 insertions(+), 1061 deletions(-)

diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index c5a02f1950ae..487e38007ccc 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -7,6 +7,8 @@
 #include <net/ip.h>
 #include <linux/pci.h>
 #include <linux/vmalloc.h>
+#include <linux/if_vlan.h>
+#include <linux/sctp.h>
 #include <linux/etherdevice.h>
 #include <linux/timecounter.h>
 #include <linux/clocksource.h>
@@ -18,26 +20,238 @@
 #define MAX_REQUEST_SIZE 256
 #endif
 
+/* Ether Types */
+#define TXGBE_ETH_P_CNM                         0x22E7
+
+/* TX/RX descriptor defines */
+#define TXGBE_DEFAULT_TXD               512
+#define TXGBE_DEFAULT_TX_WORK   256
+#define TXGBE_MAX_TXD                   8192
+#define TXGBE_MIN_TXD                   128
+
+#if (PAGE_SIZE < 8192)
+#define TXGBE_DEFAULT_RXD               512
+#define TXGBE_DEFAULT_RX_WORK   256
+#else
+#define TXGBE_DEFAULT_RXD               256
+#define TXGBE_DEFAULT_RX_WORK   128
+#endif
+
+#define TXGBE_MAX_RXD                   8192
+#define TXGBE_MIN_RXD                   128
+
+/* Supported Rx Buffer Sizes */
+#define TXGBE_RXBUFFER_256       256  /* Used for skb receive header */
+#define TXGBE_RXBUFFER_2K       2048
+#define TXGBE_RXBUFFER_3K       3072
+#define TXGBE_RXBUFFER_4K       4096
+#define TXGBE_MAX_RXBUFFER      16384  /* largest size for single descriptor */
+
+/* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
+ * reserve 64 more, and skb_shared_info adds an additional 320 bytes more,
+ * this adds up to 448 bytes of extra data.
+ *
+ * Since netdev_alloc_skb now allocates a page fragment we can use a value
+ * of 256 and the resultant skb will have a truesize of 960 or less.
+ */
+#define TXGBE_RX_HDR_SIZE       TXGBE_RXBUFFER_256
+
+/* How many Rx Buffers do we bundle into one write to the hardware ? */
+#define TXGBE_RX_BUFFER_WRITE   16      /* Must be power of 2 */
+#define TXGBE_RX_DMA_ATTR \
+	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
+enum txgbe_tx_flags {
+	/* cmd_type flags */
+	TXGBE_TX_FLAGS_HW_VLAN  = 0x01,
+	TXGBE_TX_FLAGS_TSO      = 0x02,
+	TXGBE_TX_FLAGS_TSTAMP   = 0x04,
+
+	/* olinfo flags */
+	TXGBE_TX_FLAGS_CC       = 0x08,
+	TXGBE_TX_FLAGS_IPV4     = 0x10,
+	TXGBE_TX_FLAGS_CSUM     = 0x20,
+	TXGBE_TX_FLAGS_OUTER_IPV4 = 0x100,
+	TXGBE_TX_FLAGS_LINKSEC	= 0x200,
+	TXGBE_TX_FLAGS_IPSEC    = 0x400,
+
+	/* software defined flags */
+	TXGBE_TX_FLAGS_SW_VLAN  = 0x40,
+	TXGBE_TX_FLAGS_FCOE     = 0x80,
+};
+
+/* VLAN info */
+#define TXGBE_TX_FLAGS_VLAN_MASK        0xffff0000
+#define TXGBE_TX_FLAGS_VLAN_PRIO_MASK   0xe0000000
+#define TXGBE_TX_FLAGS_VLAN_PRIO_SHIFT  29
+#define TXGBE_TX_FLAGS_VLAN_SHIFT       16
+
+#define TXGBE_MAX_RX_DESC_POLL          10
+
+#define TXGBE_MAX_PF_MACVLANS           15
+
+#define TXGBE_MAX_TXD_PWR       14
+#define TXGBE_MAX_DATA_PER_TXD  (1 << TXGBE_MAX_TXD_PWR)
+
+/* Tx Descriptors needed, worst case */
+#define TXD_USE_COUNT(S)        DIV_ROUND_UP((S), TXGBE_MAX_DATA_PER_TXD)
+#ifndef MAX_SKB_FRAGS
+#define DESC_NEEDED     4
+#elif (MAX_SKB_FRAGS < 16)
+#define DESC_NEEDED     ((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
+#else
+#define DESC_NEEDED     (MAX_SKB_FRAGS + 4)
+#endif
+
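
Since one data descriptor carries at most 2^TXGBE_MAX_TXD_PWR (16 KB) of
payload, TXD_USE_COUNT() is a ceiling division and DESC_NEEDED adds slack for
context descriptors. A quick standalone check of the arithmetic:

#include <stdio.h>

#define MAX_TXD_PWR		14
#define MAX_DATA_PER_TXD	(1 << MAX_TXD_PWR)	/* 16 KB per descriptor */
#define TXD_USE_COUNT(S)	(((S) + MAX_DATA_PER_TXD - 1) / MAX_DATA_PER_TXD)

int main(void)
{
	printf("%d\n", TXD_USE_COUNT(60));	/* small frame -> 1 */
	printf("%d\n", TXD_USE_COUNT(65536));	/* 64 KB TSO payload -> 4 */
	return 0;
}
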
+/* wrapper around a pointer to a socket buffer,
+ * so a DMA handle can be stored along with the buffer
+ */
+struct txgbe_tx_buffer {
+	union txgbe_tx_desc *next_to_watch;
+	struct sk_buff *skb;
+	unsigned int bytecount;
+	unsigned short gso_segs;
+	__be16 protocol;
+	DEFINE_DMA_UNMAP_ADDR(dma);
+	DEFINE_DMA_UNMAP_LEN(len);
+	u32 tx_flags;
+};
+
+struct txgbe_rx_buffer {
+	struct sk_buff *skb;
+	dma_addr_t dma;
+	dma_addr_t page_dma;
+	struct page *page;
+	unsigned int page_offset;
+};
+
+struct txgbe_queue_stats {
+	u64 packets;
+	u64 bytes;
+};
+
+struct txgbe_tx_queue_stats {
+	u64 restart_queue;
+	u64 tx_busy;
+	u64 tx_done_old;
+};
+
+struct txgbe_rx_queue_stats {
+	u64 rsc_count;
+	u64 rsc_flush;
+	u64 non_eop_descs;
+	u64 alloc_rx_page_failed;
+	u64 alloc_rx_buff_failed;
+	u64 csum_good_cnt;
+	u64 csum_err;
+};
+
+enum txgbe_ring_state_t {
+	__TXGBE_RX_BUILD_SKB_ENABLED,
+	__TXGBE_TX_XPS_INIT_DONE,
+	__TXGBE_TX_DETECT_HANG,
+	__TXGBE_HANG_CHECK_ARMED,
+	__TXGBE_RX_RSC_ENABLED,
+};
+
+struct txgbe_fwd_adapter {
+	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+	struct txgbe_adapter *adapter;
+};
+
+#define ring_uses_build_skb(ring) \
+	test_bit(__TXGBE_RX_BUILD_SKB_ENABLED, &(ring)->state)
+
+#define check_for_tx_hang(ring) \
+	test_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define set_check_for_tx_hang(ring) \
+	set_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define clear_check_for_tx_hang(ring) \
+	clear_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define ring_is_rsc_enabled(ring) \
+	test_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+#define set_ring_rsc_enabled(ring) \
+	set_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+#define clear_ring_rsc_enabled(ring) \
+	clear_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+
 struct txgbe_ring {
 	struct txgbe_ring *next;        /* pointer to next ring in q_vector */
 	struct txgbe_q_vector *q_vector; /* backpointer to host q_vector */
 	struct net_device *netdev;      /* netdev ring belongs to */
 	struct device *dev;             /* device for DMA mapping */
+	struct txgbe_fwd_adapter *accel;
+	void *desc;                     /* descriptor ring memory */
+	union {
+		struct txgbe_tx_buffer *tx_buffer_info;
+		struct txgbe_rx_buffer *rx_buffer_info;
+	};
+	unsigned long state;
+	u8 __iomem *tail;
+	dma_addr_t dma;                 /* phys. address of descriptor ring */
+	unsigned int size;              /* length in bytes */
+
 	u16 count;                      /* amount of descriptors */
 
 	u8 queue_index; /* needed for multiqueue queue management */
 	u8 reg_idx;
+	u16 next_to_use;
+	u16 next_to_clean;
+	u16 rx_buf_len;
+	u16 next_to_alloc;
+	struct txgbe_queue_stats stats;
+	struct u64_stats_sync syncp;
+
+	union {
+		struct txgbe_tx_queue_stats tx_stats;
+		struct txgbe_rx_queue_stats rx_stats;
+	};
 } ____cacheline_internodealigned_in_smp;
 
+enum txgbe_ring_f_enum {
+	RING_F_NONE = 0,
+	RING_F_RSS,
+	RING_F_ARRAY_SIZE  /* must be last in enum set */
+};
+
+#define TXGBE_MAX_RSS_INDICES           63
 #define TXGBE_MAX_FDIR_INDICES          63
 
 #define MAX_RX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
 #define MAX_TX_QUEUES   (TXGBE_MAX_FDIR_INDICES + 1)
 
+#define TXGBE_MAX_MACVLANS      32
+
+struct txgbe_ring_feature {
+	u16 limit;      /* upper limit on feature indices */
+	u16 indices;    /* current value of indices */
+	u16 mask;       /* Mask used for feature to ring mapping */
+	u16 offset;     /* offset to start of feature */
+};
+
+static inline unsigned int txgbe_rx_bufsz(struct txgbe_ring __maybe_unused *ring)
+{
+#if MAX_SKB_FRAGS < 8
+	return ALIGN(TXGBE_MAX_RXBUFFER / MAX_SKB_FRAGS, 1024);
+#else
+	return TXGBE_RXBUFFER_2K;
+#endif
+}
+
+static inline unsigned int txgbe_rx_pg_order(struct txgbe_ring __maybe_unused *ring)
+{
+	return 0;
+}
+
+#define txgbe_rx_pg_size(_ring) (PAGE_SIZE << txgbe_rx_pg_order(_ring))
+
 struct txgbe_ring_container {
 	struct txgbe_ring *ring;        /* pointer to linked list of rings */
+	unsigned int total_bytes;       /* total bytes processed this int */
+	unsigned int total_packets;     /* total packets processed this int */
 	u16 work_limit;                 /* total work allowed per interrupt */
 	u8 count;                       /* total number of rings in vector */
+	u8 itr;                         /* current ITR setting for ring */
 };
 
 /* iterator for handling rings in ring container */
@@ -70,11 +284,37 @@ struct txgbe_q_vector {
 /* microsecond values for various ITR rates shifted by 2 to fit itr register
  * with the first 3 bits reserved 0
  */
+#define TXGBE_MIN_RSC_ITR       24
 #define TXGBE_100K_ITR          40
 #define TXGBE_20K_ITR           200
 #define TXGBE_16K_ITR           248
 #define TXGBE_12K_ITR           336
 
+/* txgbe_test_staterr - tests bits in Rx descriptor status and error fields */
+static inline __le32 txgbe_test_staterr(union txgbe_rx_desc *rx_desc,
+					const u32 stat_err_bits)
+{
+	return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits);
+}
+
+/* txgbe_desc_unused - calculate if we have unused descriptors */
+static inline u16 txgbe_desc_unused(struct txgbe_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
+}
+
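
txgbe_desc_unused() relies on the usual one-slot-empty convention:
next_to_use == next_to_clean means the ring is empty, so at most count - 1
descriptors are ever in flight. A standalone sketch of the wrap-around
arithmetic:

#include <stdint.h>
#include <stdio.h>

static uint16_t desc_unused(uint16_t count, uint16_t ntc, uint16_t ntu)
{
	return ((ntc > ntu) ? 0 : count) + ntc - ntu - 1;
}

int main(void)
{
	printf("%u\n", desc_unused(512, 0, 0));		/* empty ring: 511 */
	printf("%u\n", desc_unused(512, 10, 9));	/* full ring: 0 */
	return 0;
}
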
+#define TXGBE_RX_DESC(R, i)     \
+	(&(((union txgbe_rx_desc *)((R)->desc))[i]))
+#define TXGBE_TX_DESC(R, i)     \
+	(&(((union txgbe_tx_desc *)((R)->desc))[i]))
+#define TXGBE_TX_CTXTDESC(R, i) \
+	(&(((struct txgbe_tx_context_desc *)((R)->desc))[i]))
+
+#define TXGBE_MAX_JUMBO_FRAME_SIZE      9432 /* max payload 9414 */
+
 #define TCP_TIMER_VECTOR        0
 #define OTHER_VECTOR    1
 #define NON_Q_VECTORS   (OTHER_VECTOR + TCP_TIMER_VECTOR)
@@ -108,6 +348,8 @@ struct txgbe_mac_addr {
 #define TXGBE_FLAG_NEED_LINK_CONFIG             ((u32)(1 << 1))
 #define TXGBE_FLAG_MSI_ENABLED                  ((u32)(1 << 2))
 #define TXGBE_FLAG_MSIX_ENABLED                 ((u32)(1 << 3))
+#define TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE        ((u32)(1 << 4))
+#define TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE         ((u32)(1 << 5))
 
 /**
  * txgbe_adapter.flag2
@@ -118,6 +360,16 @@ struct txgbe_mac_addr {
 #define TXGBE_FLAG2_PF_RESET_REQUESTED          (1U << 3)
 #define TXGBE_FLAG2_RESET_INTR_RECEIVED         (1U << 4)
 #define TXGBE_FLAG2_GLOBAL_RESET_REQUESTED      (1U << 5)
+#define TXGBE_FLAG2_RSC_CAPABLE                 (1U << 6)
+#define TXGBE_FLAG2_RSC_ENABLED                 (1U << 7)
+#define TXGBE_FLAG2_RSS_FIELD_IPV4_UDP          (1U << 8)
+#define TXGBE_FLAG2_RSS_FIELD_IPV6_UDP          (1U << 9)
+#define TXGBE_FLAG2_RSS_ENABLED                 (1U << 10)
+
+#define TXGBE_SET_FLAG(_input, _flag, _result) \
+	((_flag <= _result) ? \
+	 ((u32)(_input & _flag) * (_result / _flag)) : \
+	 ((u32)(_input & _flag) / (_flag / _result)))
 
 enum txgbe_isb_idx {
 	TXGBE_ISB_HEADER,
@@ -129,6 +381,7 @@ enum txgbe_isb_idx {
 
 /* board specific private data structure */
 struct txgbe_adapter {
+	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
 	/* OS defined structs */
 	struct net_device *netdev;
 	struct pci_dev *pdev;
@@ -155,20 +408,34 @@ struct txgbe_adapter {
 	/* TX */
 	struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
 
+	u64 restart_queue;
 	u64 lsc_int;
+	u32 tx_timeout_count;
 
 	/* RX */
 	struct txgbe_ring *rx_ring[MAX_RX_QUEUES];
+	u64 hw_csum_rx_error;
+	u64 hw_csum_rx_good;
+	u64 hw_rx_no_dma_resources;
+	u64 rsc_total_count;
+	u64 rsc_total_flush;
+	u64 non_eop_descs;
+	u32 alloc_rx_page_failed;
+	u32 alloc_rx_buff_failed;
+
 	struct txgbe_q_vector *q_vector[MAX_MSIX_Q_VECTORS];
 
 	int num_q_vectors;      /* current number of q_vectors for device */
 	int max_q_vectors;      /* upper limit of q_vectors for device */
+	struct txgbe_ring_feature ring_feature[RING_F_ARRAY_SIZE];
 	struct msix_entry *msix_entries;
 
 	/* structs defined in txgbe_hw.h */
 	struct txgbe_hw hw;
 	u16 msg_enable;
+	struct txgbe_hw_stats stats;
 
+	u64 tx_busy;
 	unsigned int tx_ring_count;
 	unsigned int rx_ring_count;
 
@@ -189,6 +456,14 @@ struct txgbe_adapter {
 
 	struct txgbe_mac_addr *mac_table;
 
+	__le16 vxlan_port;
+	unsigned long fwd_bitmask; /* bitmask indicating in use pools */
+
+#define TXGBE_MAX_RETA_ENTRIES 128
+	u8 rss_indir_tbl[TXGBE_MAX_RETA_ENTRIES];
+#define TXGBE_RSS_KEY_SIZE     40
+	u32 rss_key[TXGBE_RSS_KEY_SIZE / sizeof(u32)];
+
 	/* misc interrupt status block */
 	dma_addr_t isb_dma;
 	u32 *isb_mem;
@@ -223,18 +498,49 @@ enum txgbe_state_t {
 	__TXGBE_PTP_TX_IN_PROGRESS,
 };
 
+struct txgbe_cb {
+	dma_addr_t dma;
+	u16     append_cnt;      /* number of skb's appended */
+	bool    page_released;
+	bool    dma_released;
+};
+
+#define TXGBE_CB(skb) ((struct txgbe_cb *)(skb)->cb)
+
 void txgbe_irq_disable(struct txgbe_adapter *adapter);
 void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush);
 int txgbe_open(struct net_device *netdev);
 int txgbe_close(struct net_device *netdev);
 void txgbe_up(struct txgbe_adapter *adapter);
 void txgbe_down(struct txgbe_adapter *adapter);
+int txgbe_setup_rx_resources(struct txgbe_ring *rx_ring);
+int txgbe_setup_tx_resources(struct txgbe_ring *tx_ring);
+void txgbe_free_rx_resources(struct txgbe_ring *rx_ring);
+void txgbe_free_tx_resources(struct txgbe_ring *tx_ring);
+void txgbe_configure_rx_ring(struct txgbe_adapter *adapter,
+			     struct txgbe_ring *ring);
+void txgbe_configure_tx_ring(struct txgbe_adapter *adapter,
+			     struct txgbe_ring *ring);
 int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter);
 void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter);
 void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter);
 void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter);
+netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
+				  struct txgbe_adapter *adapter,
+				  struct txgbe_ring *tx_ring);
+void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *ring,
+				      struct txgbe_tx_buffer *tx_buffer);
+void txgbe_alloc_rx_buffers(struct txgbe_ring *rx_ring, u16 cleaned_count);
+void txgbe_set_rx_mode(struct net_device *netdev);
+void txgbe_tx_ctxtdesc(struct txgbe_ring *tx_ring, u32 vlan_macip_lens,
+		       u32 fcoe_sof_eof, u32 type_tucmd, u32 mss_l4len_idx);
 void txgbe_write_eitr(struct txgbe_q_vector *q_vector);
+int txgbe_poll(struct napi_struct *napi, int budget);
 
+static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring)
+{
+	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
+}
 
 /**
  * Interrupt masking operations. Each bit in PX_ICn corresponds to an interrupt.
@@ -274,6 +580,8 @@ static inline void txgbe_intr_disable(struct txgbe_hw *hw, u64 qmask)
 	/* skip the flush */
 }
 
+#define TXGBE_RING_SIZE(R) ((R)->count < TXGBE_MAX_TXD ? (R)->count / 128 : 0)
+
 #define TXGBE_CPU_TO_BE16(_x) cpu_to_be16(_x)
 #define TXGBE_BE16_TO_CPU(_x) be16_to_cpu(_x)
 #define TXGBE_CPU_TO_BE32(_x) cpu_to_be32(_x)
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 1e30878f7c6f..d3ea9cb82690 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -9,6 +9,9 @@
 #define TXGBE_SP_MAX_TX_QUEUES  128
 #define TXGBE_SP_MAX_RX_QUEUES  128
 #define TXGBE_SP_RAR_ENTRIES    128
+#define TXGBE_SP_MC_TBL_SIZE    128
+#define TXGBE_SP_VFT_TBL_SIZE   128
+#define TXGBE_SP_RX_PB_SIZE     512
 
 static s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw);
 static void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw);
@@ -105,6 +108,56 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_clear_hw_cntrs - Generic clear hardware counters
+ *  @hw: pointer to hardware structure
+ *
+ *  Clears all hardware statistics counters by reading them from the hardware.
+ *  Statistics counters are clear on read.
+ **/
+s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw)
+{
+	u16 i = 0;
+
+	rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW);
+	for (i = 0; i < 8; i++)
+		rd32(hw, TXGBE_RDB_MPCNT(i));
+
+	rd32(hw, TXGBE_RX_LEN_ERROR_FRAMES_LOW);
+	rd32(hw, TXGBE_RDB_LXONTXC);
+	rd32(hw, TXGBE_RDB_LXOFFTXC);
+	rd32(hw, TXGBE_MAC_LXONRXC);
+	rd32(hw, TXGBE_MAC_LXOFFRXC);
+
+	for (i = 0; i < 8; i++) {
+		rd32(hw, TXGBE_RDB_PXONTXC(i));
+		rd32(hw, TXGBE_RDB_PXOFFTXC(i));
+		rd32(hw, TXGBE_MAC_PXONRXC(i));
+		wr32m(hw, TXGBE_MMC_CONTROL,
+		      TXGBE_MMC_CONTROL_UP, i << 16);
+		rd32(hw, TXGBE_MAC_PXOFFRXC);
+	}
+	for (i = 0; i < 8; i++)
+		rd32(hw, TXGBE_RDB_PXON2OFFCNT(i));
+	for (i = 0; i < 128; i++)
+		wr32(hw, TXGBE_PX_MPRC(i), 0);
+
+	rd32(hw, TXGBE_PX_GPRC);
+	rd32(hw, TXGBE_PX_GPTC);
+	rd32(hw, TXGBE_PX_GORC_MSB);
+	rd32(hw, TXGBE_PX_GOTC_MSB);
+
+	rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW);
+	rd32(hw, TXGBE_RX_UNDERSIZE_FRAMES_GOOD);
+	rd32(hw, TXGBE_RX_OVERSIZE_FRAMES_GOOD);
+	rd32(hw, TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW);
+	rd32(hw, TXGBE_TX_FRAME_CNT_GOOD_BAD_LOW);
+	rd32(hw, TXGBE_TX_MC_FRAMES_GOOD_LOW);
+	rd32(hw, TXGBE_TX_BC_FRAMES_GOOD_LOW);
+	rd32(hw, TXGBE_RDM_DRP_PKT);
+	return 0;
+}
+
 /**
  *  txgbe_read_pba_string - Reads part number string from EEPROM
  *  @hw: pointer to hardware structure
@@ -673,6 +726,130 @@ s32 txgbe_init_rx_addrs(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_mta_vector - Determines bit-vector in multicast table to set
+ *  @hw: pointer to hardware structure
+ *  @mc_addr: the multicast address
+ *
+ *  Extracts the 12 bits from a multicast address that determine which
+ *  bit-vector to set in the multicast table. The hardware uses 12 bits of
+ *  incoming Rx multicast addresses to determine the bit-vector to check in
+ *  the MTA. Which of the 4 combinations of 12 bits the hardware uses is set
+ *  by the MO field of the MCSTCTRL. The MO field is set during initialization
+ *  to mc_filter_type.
+ **/
+static s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr)
+{
+	u32 vector = 0;
+
+	switch (hw->mac.mc_filter_type) {
+	case 0:   /* use bits [47:36] of the address */
+		vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4));
+		break;
+	case 1:   /* use bits [46:35] of the address */
+		vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5));
+		break;
+	case 2:   /* use bits [45:34] of the address */
+		vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6));
+		break;
+	case 3:   /* use bits [43:32] of the address */
+		vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8));
+		break;
+	default:  /* Invalid mc_filter_type */
+		DEBUGOUT("MC filter type param set incorrectly\n");
+		break;
+	}
+
+	/* vector can only be 12-bits or boundary will be exceeded */
+	vector &= 0xFFF;
+	return vector;
+}
+
+/**
+ *  txgbe_set_mta - Set bit-vector in multicast table
+ *  @hw: pointer to hardware structure
+ *  @mc_addr: the multicast address to set in the table
+ *
+ *  Sets the bit-vector in the multicast table.
+ **/
+void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr)
+{
+	u32 vector;
+	u32 vector_bit;
+	u32 vector_reg;
+
+	hw->addr_ctrl.mta_in_use++;
+
+	vector = txgbe_mta_vector(hw, mc_addr);
+	DEBUGOUT1(" bit-vector = 0x%03X\n", vector);
+
+	/* The MTA is a register array of 128 32-bit registers. It is treated
+	 * like an array of 4096 bits.  We want to set bit
+	 * BitArray[vector_value]. So we figure out what register the bit is
+	 * in, read it, OR in the new bit, then write back the new value.  The
+	 * register is determined by the upper 7 bits of the vector value and
+	 * the bit within that register is determined by the lower 5 bits of
+	 * the value.
+	 */
+	vector_reg = (vector >> 5) & 0x7F;
+	vector_bit = vector & 0x1F;
+	hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+}
+
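For a concrete feel of the hash and the index split described above, a minimal
standalone sketch (any C compiler; the group address is illustrative, and
mc_filter_type 0, i.e. bits [47:36], is assumed):

	#include <stdio.h>
	#include <stdint.h>

	/* mc_filter_type == 0 hash, then the MTA split: the upper 7 bits of
	 * the 12-bit vector pick one of 128 32-bit registers, the lower 5
	 * bits pick a bit within that register.
	 */
	int main(void)
	{
		/* example group address 01:00:5e:00:00:01 (IPv4 all-hosts) */
		uint8_t mc_addr[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };
		uint32_t vector = ((mc_addr[4] >> 4) |
				   ((uint16_t)mc_addr[5] << 4)) & 0xFFF;

		printf("vector 0x%03X -> register %u, bit %u\n",
		       vector, (vector >> 5) & 0x7F, vector & 0x1F);
		/* prints: vector 0x010 -> register 0, bit 16 */
		return 0;
	}
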
+/**
+ *  txgbe_update_mc_addr_list - Updates MAC list of multicast addresses
+ *  @hw: pointer to hardware structure
+ *  @mc_addr_list: the list of new multicast addresses
+ *  @mc_addr_count: number of addresses
+ *  @next: iterator function to walk the multicast address list
+ *  @clear: flag, when set clears the table beforehand
+ *
+ *  When the clear flag is set, the given list replaces any existing list.
+ *  Hashes the given addresses into the multicast table.
+ **/
+s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
+			      u32 mc_addr_count, txgbe_mc_addr_itr next,
+			      bool clear)
+{
+	u32 i;
+	u32 vmdq;
+	u32 psrctl;
+
+	/* Set the new number of MC addresses that we are being requested to
+	 * use.
+	 */
+	hw->addr_ctrl.num_mc_addrs = mc_addr_count;
+	hw->addr_ctrl.mta_in_use = 0;
+
+	/* Clear mta_shadow */
+	if (clear) {
+		DEBUGOUT(" Clearing MTA\n");
+		memset(&hw->mac.mta_shadow, 0, sizeof(hw->mac.mta_shadow));
+	}
+
+	/* Update mta_shadow */
+	for (i = 0; i < mc_addr_count; i++) {
+		DEBUGOUT(" Adding the multicast addresses:\n");
+		txgbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq));
+	}
+
+	/* Enable mta */
+	for (i = 0; i < hw->mac.mcft_size; i++)
+		wr32a(hw, TXGBE_PSR_MC_TBL(0), i,
+		      hw->mac.mta_shadow[i]);
+
+	if (hw->addr_ctrl.mta_in_use > 0) {
+		psrctl = rd32(hw, TXGBE_PSR_CTL);
+		psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE);
+		psrctl |= TXGBE_PSR_CTL_MFE |
+			(hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT);
+		wr32(hw, TXGBE_PSR_CTL, psrctl);
+	}
+
+	DEBUGOUT("txgbe update mc addr list Complete\n");
+	return 0;
+}
+
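The next() callback is constrained here only by its call site,
next(hw, &mc_addr_list, &vmdq). A minimal sketch of a matching iterator over a
packed array of 6-byte addresses follows; the txgbe_mc_addr_itr typedef lives
elsewhere in the series, so the exact signature is an assumption:

	/* Hypothetical iterator: return the current address, advance the
	 * cursor by one 6-byte entry, report no VMDq pool.
	 */
	static u8 *txgbe_addr_list_itr(struct txgbe_hw *hw, u8 **mc_addr_ptr,
				       u32 *vmdq)
	{
		u8 *addr = *mc_addr_ptr;

		*vmdq = 0;
		*mc_addr_ptr = addr + ETH_ALEN;	/* ETH_ALEN == 6 */
		return addr;
	}
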
 /**
  *  txgbe_disable_pcie_master - Disable PCI-express master access
  *  @hw: pointer to hardware structure
@@ -772,6 +949,52 @@ void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask)
 	txgbe_release_eeprom_semaphore(hw);
 }
 
+/**
+ *  txgbe_disable_sec_rx_path - Stops the receive data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Stops the receive data path and waits for the HW to internally empty
+ *  the Rx security block
+ **/
+s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw)
+{
+#define TXGBE_MAX_SECRX_POLL 40
+
+	int i;
+	int secrxreg;
+
+	wr32m(hw, TXGBE_RSC_CTL,
+	      TXGBE_RSC_CTL_RX_DIS, TXGBE_RSC_CTL_RX_DIS);
+	for (i = 0; i < TXGBE_MAX_SECRX_POLL; i++) {
+		secrxreg = rd32(hw, TXGBE_RSC_ST);
+		if (!(secrxreg & TXGBE_RSC_ST_RSEC_RDY))
+			/* Use interrupt-safe sleep just in case */
+			usec_delay(1000);
+		else
+			break;
+	}
+
+	/* For informational purposes only */
+	if (i >= TXGBE_MAX_SECRX_POLL)
+		DEBUGOUT("Rx unit being enabled before security path fully disabled. Continuing with init.\n");
+
+	return 0;
+}
+
+/**
+ *  txgbe_enable_sec_rx_path - Enables the receive data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Enables the receive data path.
+ **/
+s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw)
+{
+	wr32m(hw, TXGBE_RSC_CTL, TXGBE_RSC_CTL_RX_DIS, 0);
+	TXGBE_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
 /**
  *  txgbe_get_san_mac_addr_offset - Get SAN MAC address offset from the EEPROM
  *  @hw: pointer to hardware structure
@@ -925,6 +1148,82 @@ s32 txgbe_init_uta_tables(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_set_vfta - Set VLAN filter table
+ *  @hw: pointer to hardware structure
+ *  @vlan: VLAN id to write to VLAN filter
+ *  @vind: VMDq output index that maps the queue to the VLAN id
+ *  @vlan_on: boolean flag to turn the VLAN on or off in the filter table
+ *
+ *  Turn on/off specified VLAN in the VLAN filter table.
+ **/
+s32 txgbe_set_vfta(struct txgbe_hw *hw, u32 vlan, u32 vind,
+		   bool vlan_on)
+{
+	s32 regindex;
+	u32 bitindex;
+	u32 vfta;
+	u32 targetbit;
+	bool vfta_changed = false;
+
+	if (vlan > 4095)
+		return TXGBE_ERR_PARAM;
+
+	/* The VFTA is a bitstring made up of 128 32-bit registers
+	 * that enable the particular VLAN id, much like the MTA:
+	 *    bits[11-5]: which register
+	 *    bits[4-0]:  which bit in the register
+	 */
+	regindex = (vlan >> 5) & 0x7F;
+	bitindex = vlan & 0x1F;
+	targetbit = (1 << bitindex);
+	/* errata 5 */
+	vfta = hw->mac.vft_shadow[regindex];
+	if (vlan_on) {
+		if (!(vfta & targetbit)) {
+			vfta |= targetbit;
+			vfta_changed = true;
+		}
+	} else {
+		if ((vfta & targetbit)) {
+			vfta &= ~targetbit;
+			vfta_changed = true;
+		}
+	}
+
+	if (vfta_changed)
+		wr32(hw, TXGBE_PSR_VLAN_TBL(regindex), vfta);
+	/* errata 5 */
+	hw->mac.vft_shadow[regindex] = vfta;
+	return 0;
+}
+
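A standalone check of the register/bit arithmetic above, with VLAN 1234 as an
assumed example:

	#include <stdio.h>
	#include <stdint.h>

	/* VLAN 1234 lands in register 1234 >> 5 = 38, bit 1234 & 0x1F = 18;
	 * turning it on or off touches only that one shadow word.
	 */
	int main(void)
	{
		uint32_t vft_shadow[128] = { 0 };
		uint32_t vlan = 1234;
		uint32_t regindex = (vlan >> 5) & 0x7F;
		uint32_t targetbit = 1u << (vlan & 0x1F);

		vft_shadow[regindex] |= targetbit;	/* vlan on */
		printf("on:  vft_shadow[%u] = 0x%08X\n",
		       regindex, vft_shadow[regindex]);
		vft_shadow[regindex] &= ~targetbit;	/* vlan off */
		printf("off: vft_shadow[%u] = 0x%08X\n",
		       regindex, vft_shadow[regindex]);
		return 0;
	}
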
+/**
+ *  txgbe_clear_vfta - Clear VLAN filter table
+ *  @hw: pointer to hardware structure
+ *
+ *  Clears the VLAN filter table, and the VMDq index associated with the filter
+ **/
+s32 txgbe_clear_vfta(struct txgbe_hw *hw)
+{
+	u32 offset;
+
+	for (offset = 0; offset < hw->mac.vft_size; offset++) {
+		wr32(hw, TXGBE_PSR_VLAN_TBL(offset), 0);
+		/* errata 5 */
+		hw->mac.vft_shadow[offset] = 0;
+	}
+
+	for (offset = 0; offset < TXGBE_PSR_VLAN_SWC_ENTRIES; offset++) {
+		wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, offset);
+		wr32(hw, TXGBE_PSR_VLAN_SWC, 0);
+		wr32(hw, TXGBE_PSR_VLAN_SWC_VM_L, 0);
+		wr32(hw, TXGBE_PSR_VLAN_SWC_VM_H, 0);
+	}
+
+	return 0;
+}
+
 /**
  *  Get alternative WWNN/WWPN prefix from the EEPROM
  *  @hw: pointer to hardware structure
@@ -1244,6 +1543,66 @@ s32 txgbe_reset_hostif(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ * txgbe_set_rxpba - Initialize Rx packet buffer
+ * @hw: pointer to hardware structure
+ * @num_pb: number of packet buffers to allocate
+ * @headroom: reserve n KB of headroom
+ * @strategy: packet buffer allocation strategy
+ **/
+void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom,
+		     int strategy)
+{
+	u32 pbsize = hw->mac.rx_pb_size;
+	int i = 0;
+	u32 rxpktsize, txpktsize, txpbthresh;
+
+	/* Reserve headroom */
+	pbsize -= headroom;
+
+	if (!num_pb)
+		num_pb = 1;
+
+	/* Divide remaining packet buffer space amongst the number of packet
+	 * buffers requested using supplied strategy.
+	 */
+	switch (strategy) {
+	case PBA_STRATEGY_WEIGHTED:
+		/* txgbe_dcb_pba_80_48 strategy weight first half of packet
+		 * buffer with 5/8 of the packet buffer space.
+		 */
+		rxpktsize = (pbsize * 5) / (num_pb * 4);
+		pbsize -= rxpktsize * (num_pb / 2);
+		rxpktsize <<= TXGBE_RDB_PB_SZ_SHIFT;
+		for (; i < (num_pb / 2); i++)
+			wr32(hw, TXGBE_RDB_PB_SZ(i), rxpktsize);
+		/* Fall through to configure remaining packet buffers */
+		fallthrough;
+	case PBA_STRATEGY_EQUAL:
+		rxpktsize = (pbsize / (num_pb - i)) << TXGBE_RDB_PB_SZ_SHIFT;
+		for (; i < num_pb; i++)
+			wr32(hw, TXGBE_RDB_PB_SZ(i), rxpktsize);
+		break;
+	default:
+		break;
+	}
+
+	/* Only support an equally distributed Tx packet buffer strategy. */
+	txpktsize = TXGBE_TDB_PB_SZ_MAX / num_pb;
+	txpbthresh = (txpktsize / 1024) - TXGBE_TXPKT_SIZE_MAX;
+	for (i = 0; i < num_pb; i++) {
+		wr32(hw, TXGBE_TDB_PB_SZ(i), txpktsize);
+		wr32(hw, TXGBE_TDM_PB_THRE(i), txpbthresh);
+	}
+
+	/* Clear unused TCs, if any, to zero buffer size */
+	for (; i < TXGBE_MAX_PB; i++) {
+		wr32(hw, TXGBE_RDB_PB_SZ(i), 0);
+		wr32(hw, TXGBE_TDB_PB_SZ(i), 0);
+		wr32(hw, TXGBE_TDM_PB_THRE(i), 0);
+	}
+}
+
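The weighted split works out exactly for an assumed 512 KB packet buffer
spread over 8 TCs (num_pb = 8, headroom = 0); a standalone arithmetic check:

	#include <stdio.h>

	/* Mirrors PBA_STRATEGY_WEIGHTED above: the first num_pb/2 buffers
	 * share 5/8 of the space, the rest split the remainder equally.
	 */
	int main(void)
	{
		int pbsize = 512, num_pb = 8, i = 0, rxpktsize;

		rxpktsize = (pbsize * 5) / (num_pb * 4);	/* 80 KB each */
		pbsize -= rxpktsize * (num_pb / 2);		/* 192 KB left */
		for (; i < num_pb / 2; i++)
			printf("PB%d: %d KB\n", i, rxpktsize);
		rxpktsize = pbsize / (num_pb - i);		/* 48 KB each */
		for (; i < num_pb; i++)
			printf("PB%d: %d KB\n", i, rxpktsize);
		/* 4 * 80 KB = 320 KB, i.e. exactly 5/8 of the 512 KB total */
		return 0;
	}
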
 /**
  *  txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
  *  @hw: pointer to hardware structure
@@ -1303,6 +1662,25 @@ void txgbe_disable_rx(struct txgbe_hw *hw)
 	}
 }
 
+void txgbe_enable_rx(struct txgbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
+	/* enable mac receiver */
+	wr32m(hw, TXGBE_MAC_RX_CFG,
+	      TXGBE_MAC_RX_CFG_RE, TXGBE_MAC_RX_CFG_RE);
+
+	wr32m(hw, TXGBE_RDB_PB_CTL,
+	      TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
+
+	if (hw->mac.set_lben) {
+		pfdtxgswc = rd32(hw, TXGBE_PSR_CTL);
+		pfdtxgswc |= TXGBE_PSR_CTL_SW_EN;
+		wr32(hw, TXGBE_PSR_CTL, pfdtxgswc);
+		hw->mac.set_lben = false;
+	}
+}
+
 /**
  * txgbe_mng_present - returns true when management capability is present
  * @hw: pointer to hardware structure
@@ -1501,6 +1879,328 @@ int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 	return err;
 }
 
+/* The txgbe_ptype_lookup is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT txgbe_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF txgbe_ptype_lookup[ptype].mac == TXGBE_DEC_PTYPE_MAC_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum txgbe_l2_ptypes to decode the packet type
+ * ENDIF
+ */
+
+/* macro to make the table lines short */
+#define TXGBE_PTT(ptype, mac, ip, etype, eip, proto, layer)\
+	{       ptype, \
+		1, \
+		/* mac     */ TXGBE_DEC_PTYPE_MAC_##mac, \
+		/* ip      */ TXGBE_DEC_PTYPE_IP_##ip, \
+		/* etype   */ TXGBE_DEC_PTYPE_ETYPE_##etype, \
+		/* eip     */ TXGBE_DEC_PTYPE_IP_##eip, \
+		/* proto   */ TXGBE_DEC_PTYPE_PROT_##proto, \
+		/* layer   */ TXGBE_DEC_PTYPE_LAYER_##layer }
+
+#define TXGBE_UKN(ptype) \
+		{ ptype, 0, 0, 0, 0, 0, 0, 0 }
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct txgbe_dptype txgbe_ptype_lookup[256] = {
+	TXGBE_UKN(0x00),
+	TXGBE_UKN(0x01),
+	TXGBE_UKN(0x02),
+	TXGBE_UKN(0x03),
+	TXGBE_UKN(0x04),
+	TXGBE_UKN(0x05),
+	TXGBE_UKN(0x06),
+	TXGBE_UKN(0x07),
+	TXGBE_UKN(0x08),
+	TXGBE_UKN(0x09),
+	TXGBE_UKN(0x0A),
+	TXGBE_UKN(0x0B),
+	TXGBE_UKN(0x0C),
+	TXGBE_UKN(0x0D),
+	TXGBE_UKN(0x0E),
+	TXGBE_UKN(0x0F),
+
+	/* L2: mac */
+	TXGBE_UKN(0x10),
+	TXGBE_PTT(0x11, L2, NONE, NONE, NONE, NONE, PAY2),
+	TXGBE_PTT(0x12, L2, NONE, NONE, NONE, TS,   PAY2),
+	TXGBE_PTT(0x13, L2, NONE, NONE, NONE, NONE, PAY2),
+	TXGBE_PTT(0x14, L2, NONE, NONE, NONE, NONE, PAY2),
+	TXGBE_PTT(0x15, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x16, L2, NONE, NONE, NONE, NONE, PAY2),
+	TXGBE_PTT(0x17, L2, NONE, NONE, NONE, NONE, NONE),
+
+	/* L2: ethertype filter */
+	TXGBE_PTT(0x18, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x19, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x1A, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x1B, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x1C, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x1D, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x1E, L2, NONE, NONE, NONE, NONE, NONE),
+	TXGBE_PTT(0x1F, L2, NONE, NONE, NONE, NONE, NONE),
+
+	/* L3: ip non-tunnel */
+	TXGBE_UKN(0x20),
+	TXGBE_PTT(0x21, IP, FGV4, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x22, IP, IPV4, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x23, IP, IPV4, NONE, NONE, UDP,  PAY4),
+	TXGBE_PTT(0x24, IP, IPV4, NONE, NONE, TCP,  PAY4),
+	TXGBE_PTT(0x25, IP, IPV4, NONE, NONE, SCTP, PAY4),
+	TXGBE_UKN(0x26),
+	TXGBE_UKN(0x27),
+	TXGBE_UKN(0x28),
+	TXGBE_PTT(0x29, IP, FGV6, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x2A, IP, IPV6, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x2B, IP, IPV6, NONE, NONE, UDP,  PAY3),
+	TXGBE_PTT(0x2C, IP, IPV6, NONE, NONE, TCP,  PAY4),
+	TXGBE_PTT(0x2D, IP, IPV6, NONE, NONE, SCTP, PAY4),
+	TXGBE_UKN(0x2E),
+	TXGBE_UKN(0x2F),
+
+	/* L2: fcoe */
+	TXGBE_PTT(0x30, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x31, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x32, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x33, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x34, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_UKN(0x35),
+	TXGBE_UKN(0x36),
+	TXGBE_UKN(0x37),
+	TXGBE_PTT(0x38, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x39, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x3A, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x3B, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_PTT(0x3C, FCOE, NONE, NONE, NONE, NONE, PAY3),
+	TXGBE_UKN(0x3D),
+	TXGBE_UKN(0x3E),
+	TXGBE_UKN(0x3F),
+
+	TXGBE_UKN(0x40),
+	TXGBE_UKN(0x41),
+	TXGBE_UKN(0x42),
+	TXGBE_UKN(0x43),
+	TXGBE_UKN(0x44),
+	TXGBE_UKN(0x45),
+	TXGBE_UKN(0x46),
+	TXGBE_UKN(0x47),
+	TXGBE_UKN(0x48),
+	TXGBE_UKN(0x49),
+	TXGBE_UKN(0x4A),
+	TXGBE_UKN(0x4B),
+	TXGBE_UKN(0x4C),
+	TXGBE_UKN(0x4D),
+	TXGBE_UKN(0x4E),
+	TXGBE_UKN(0x4F),
+	TXGBE_UKN(0x50),
+	TXGBE_UKN(0x51),
+	TXGBE_UKN(0x52),
+	TXGBE_UKN(0x53),
+	TXGBE_UKN(0x54),
+	TXGBE_UKN(0x55),
+	TXGBE_UKN(0x56),
+	TXGBE_UKN(0x57),
+	TXGBE_UKN(0x58),
+	TXGBE_UKN(0x59),
+	TXGBE_UKN(0x5A),
+	TXGBE_UKN(0x5B),
+	TXGBE_UKN(0x5C),
+	TXGBE_UKN(0x5D),
+	TXGBE_UKN(0x5E),
+	TXGBE_UKN(0x5F),
+	TXGBE_UKN(0x60),
+	TXGBE_UKN(0x61),
+	TXGBE_UKN(0x62),
+	TXGBE_UKN(0x63),
+	TXGBE_UKN(0x64),
+	TXGBE_UKN(0x65),
+	TXGBE_UKN(0x66),
+	TXGBE_UKN(0x67),
+	TXGBE_UKN(0x68),
+	TXGBE_UKN(0x69),
+	TXGBE_UKN(0x6A),
+	TXGBE_UKN(0x6B),
+	TXGBE_UKN(0x6C),
+	TXGBE_UKN(0x6D),
+	TXGBE_UKN(0x6E),
+	TXGBE_UKN(0x6F),
+	TXGBE_UKN(0x70),
+	TXGBE_UKN(0x71),
+	TXGBE_UKN(0x72),
+	TXGBE_UKN(0x73),
+	TXGBE_UKN(0x74),
+	TXGBE_UKN(0x75),
+	TXGBE_UKN(0x76),
+	TXGBE_UKN(0x77),
+	TXGBE_UKN(0x78),
+	TXGBE_UKN(0x79),
+	TXGBE_UKN(0x7A),
+	TXGBE_UKN(0x7B),
+	TXGBE_UKN(0x7C),
+	TXGBE_UKN(0x7D),
+	TXGBE_UKN(0x7E),
+	TXGBE_UKN(0x7F),
+
+	/* IPv4 --> IPv4/IPv6 */
+	TXGBE_UKN(0x80),
+	TXGBE_PTT(0x81, IP, IPV4, IPIP, FGV4, NONE, PAY3),
+	TXGBE_PTT(0x82, IP, IPV4, IPIP, IPV4, NONE, PAY3),
+	TXGBE_PTT(0x83, IP, IPV4, IPIP, IPV4, UDP,  PAY4),
+	TXGBE_PTT(0x84, IP, IPV4, IPIP, IPV4, TCP,  PAY4),
+	TXGBE_PTT(0x85, IP, IPV4, IPIP, IPV4, SCTP, PAY4),
+	TXGBE_UKN(0x86),
+	TXGBE_UKN(0x87),
+	TXGBE_UKN(0x88),
+	TXGBE_PTT(0x89, IP, IPV4, IPIP, FGV6, NONE, PAY3),
+	TXGBE_PTT(0x8A, IP, IPV4, IPIP, IPV6, NONE, PAY3),
+	TXGBE_PTT(0x8B, IP, IPV4, IPIP, IPV6, UDP,  PAY4),
+	TXGBE_PTT(0x8C, IP, IPV4, IPIP, IPV6, TCP,  PAY4),
+	TXGBE_PTT(0x8D, IP, IPV4, IPIP, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0x8E),
+	TXGBE_UKN(0x8F),
+
+	/* IPv4 --> GRE/NAT --> NONE/IPv4/IPv6 */
+	TXGBE_PTT(0x90, IP, IPV4, IG, NONE, NONE, PAY3),
+	TXGBE_PTT(0x91, IP, IPV4, IG, FGV4, NONE, PAY3),
+	TXGBE_PTT(0x92, IP, IPV4, IG, IPV4, NONE, PAY3),
+	TXGBE_PTT(0x93, IP, IPV4, IG, IPV4, UDP,  PAY4),
+	TXGBE_PTT(0x94, IP, IPV4, IG, IPV4, TCP,  PAY4),
+	TXGBE_PTT(0x95, IP, IPV4, IG, IPV4, SCTP, PAY4),
+	TXGBE_UKN(0x96),
+	TXGBE_UKN(0x97),
+	TXGBE_UKN(0x98),
+	TXGBE_PTT(0x99, IP, IPV4, IG, FGV6, NONE, PAY3),
+	TXGBE_PTT(0x9A, IP, IPV4, IG, IPV6, NONE, PAY3),
+	TXGBE_PTT(0x9B, IP, IPV4, IG, IPV6, UDP,  PAY4),
+	TXGBE_PTT(0x9C, IP, IPV4, IG, IPV6, TCP,  PAY4),
+	TXGBE_PTT(0x9D, IP, IPV4, IG, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0x9E),
+	TXGBE_UKN(0x9F),
+
+	/* IPv4 --> GRE/NAT --> MAC --> NONE/IPv4/IPv6 */
+	TXGBE_PTT(0xA0, IP, IPV4, IGM, NONE, NONE, PAY3),
+	TXGBE_PTT(0xA1, IP, IPV4, IGM, FGV4, NONE, PAY3),
+	TXGBE_PTT(0xA2, IP, IPV4, IGM, IPV4, NONE, PAY3),
+	TXGBE_PTT(0xA3, IP, IPV4, IGM, IPV4, UDP,  PAY4),
+	TXGBE_PTT(0xA4, IP, IPV4, IGM, IPV4, TCP,  PAY4),
+	TXGBE_PTT(0xA5, IP, IPV4, IGM, IPV4, SCTP, PAY4),
+	TXGBE_UKN(0xA6),
+	TXGBE_UKN(0xA7),
+	TXGBE_UKN(0xA8),
+	TXGBE_PTT(0xA9, IP, IPV4, IGM, FGV6, NONE, PAY3),
+	TXGBE_PTT(0xAA, IP, IPV4, IGM, IPV6, NONE, PAY3),
+	TXGBE_PTT(0xAB, IP, IPV4, IGM, IPV6, UDP,  PAY4),
+	TXGBE_PTT(0xAC, IP, IPV4, IGM, IPV6, TCP,  PAY4),
+	TXGBE_PTT(0xAD, IP, IPV4, IGM, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xAE),
+	TXGBE_UKN(0xAF),
+
+	/* IPv4 --> GRE/NAT --> MAC+VLAN --> NONE/IPv4/IPv6 */
+	TXGBE_PTT(0xB0, IP, IPV4, IGMV, NONE, NONE, PAY3),
+	TXGBE_PTT(0xB1, IP, IPV4, IGMV, FGV4, NONE, PAY3),
+	TXGBE_PTT(0xB2, IP, IPV4, IGMV, IPV4, NONE, PAY3),
+	TXGBE_PTT(0xB3, IP, IPV4, IGMV, IPV4, UDP,  PAY4),
+	TXGBE_PTT(0xB4, IP, IPV4, IGMV, IPV4, TCP,  PAY4),
+	TXGBE_PTT(0xB5, IP, IPV4, IGMV, IPV4, SCTP, PAY4),
+	TXGBE_UKN(0xB6),
+	TXGBE_UKN(0xB7),
+	TXGBE_UKN(0xB8),
+	TXGBE_PTT(0xB9, IP, IPV4, IGMV, FGV6, NONE, PAY3),
+	TXGBE_PTT(0xBA, IP, IPV4, IGMV, IPV6, NONE, PAY3),
+	TXGBE_PTT(0xBB, IP, IPV4, IGMV, IPV6, UDP,  PAY4),
+	TXGBE_PTT(0xBC, IP, IPV4, IGMV, IPV6, TCP,  PAY4),
+	TXGBE_PTT(0xBD, IP, IPV4, IGMV, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xBE),
+	TXGBE_UKN(0xBF),
+
+	/* IPv6 --> IPv4/IPv6 */
+	TXGBE_UKN(0xC0),
+	TXGBE_PTT(0xC1, IP, IPV6, IPIP, FGV4, NONE, PAY3),
+	TXGBE_PTT(0xC2, IP, IPV6, IPIP, IPV4, NONE, PAY3),
+	TXGBE_PTT(0xC3, IP, IPV6, IPIP, IPV4, UDP,  PAY4),
+	TXGBE_PTT(0xC4, IP, IPV6, IPIP, IPV4, TCP,  PAY4),
+	TXGBE_PTT(0xC5, IP, IPV6, IPIP, IPV4, SCTP, PAY4),
+	TXGBE_UKN(0xC6),
+	TXGBE_UKN(0xC7),
+	TXGBE_UKN(0xC8),
+	TXGBE_PTT(0xC9, IP, IPV6, IPIP, FGV6, NONE, PAY3),
+	TXGBE_PTT(0xCA, IP, IPV6, IPIP, IPV6, NONE, PAY3),
+	TXGBE_PTT(0xCB, IP, IPV6, IPIP, IPV6, UDP,  PAY4),
+	TXGBE_PTT(0xCC, IP, IPV6, IPIP, IPV6, TCP,  PAY4),
+	TXGBE_PTT(0xCD, IP, IPV6, IPIP, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xCE),
+	TXGBE_UKN(0xCF),
+
+	/* IPv6 --> GRE/NAT --> NONE/IPv4/IPv6 */
+	TXGBE_PTT(0xD0, IP, IPV6, IG,   NONE, NONE, PAY3),
+	TXGBE_PTT(0xD1, IP, IPV6, IG,   FGV4, NONE, PAY3),
+	TXGBE_PTT(0xD2, IP, IPV6, IG,   IPV4, NONE, PAY3),
+	TXGBE_PTT(0xD3, IP, IPV6, IG,   IPV4, UDP,  PAY4),
+	TXGBE_PTT(0xD4, IP, IPV6, IG,   IPV4, TCP,  PAY4),
+	TXGBE_PTT(0xD5, IP, IPV6, IG,   IPV4, SCTP, PAY4),
+	TXGBE_UKN(0xD6),
+	TXGBE_UKN(0xD7),
+	TXGBE_UKN(0xD8),
+	TXGBE_PTT(0xD9, IP, IPV6, IG,   FGV6, NONE, PAY3),
+	TXGBE_PTT(0xDA, IP, IPV6, IG,   IPV6, NONE, PAY3),
+	TXGBE_PTT(0xDB, IP, IPV6, IG,   IPV6, UDP,  PAY4),
+	TXGBE_PTT(0xDC, IP, IPV6, IG,   IPV6, TCP,  PAY4),
+	TXGBE_PTT(0xDD, IP, IPV6, IG,   IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xDE),
+	TXGBE_UKN(0xDF),
+
+	/* IPv6 --> GRE/NAT --> MAC --> NONE/IPv4/IPv6 */
+	TXGBE_PTT(0xE0, IP, IPV6, IGM,  NONE, NONE, PAY3),
+	TXGBE_PTT(0xE1, IP, IPV6, IGM,  FGV4, NONE, PAY3),
+	TXGBE_PTT(0xE2, IP, IPV6, IGM,  IPV4, NONE, PAY3),
+	TXGBE_PTT(0xE3, IP, IPV6, IGM,  IPV4, UDP,  PAY4),
+	TXGBE_PTT(0xE4, IP, IPV6, IGM,  IPV4, TCP,  PAY4),
+	TXGBE_PTT(0xE5, IP, IPV6, IGM,  IPV4, SCTP, PAY4),
+	TXGBE_UKN(0xE6),
+	TXGBE_UKN(0xE7),
+	TXGBE_UKN(0xE8),
+	TXGBE_PTT(0xE9, IP, IPV6, IGM,  FGV6, NONE, PAY3),
+	TXGBE_PTT(0xEA, IP, IPV6, IGM,  IPV6, NONE, PAY3),
+	TXGBE_PTT(0xEB, IP, IPV6, IGM,  IPV6, UDP,  PAY4),
+	TXGBE_PTT(0xEC, IP, IPV6, IGM,  IPV6, TCP,  PAY4),
+	TXGBE_PTT(0xED, IP, IPV6, IGM,  IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xEE),
+	TXGBE_UKN(0xEF),
+
+	/* IPv6 --> GRE/NAT --> MAC+VLAN --> NONE/IPv4/IPv6 */
+	TXGBE_PTT(0xF0, IP, IPV6, IGMV, NONE, NONE, PAY3),
+	TXGBE_PTT(0xF1, IP, IPV6, IGMV, FGV4, NONE, PAY3),
+	TXGBE_PTT(0xF2, IP, IPV6, IGMV, IPV4, NONE, PAY3),
+	TXGBE_PTT(0xF3, IP, IPV6, IGMV, IPV4, UDP,  PAY4),
+	TXGBE_PTT(0xF4, IP, IPV6, IGMV, IPV4, TCP,  PAY4),
+	TXGBE_PTT(0xF5, IP, IPV6, IGMV, IPV4, SCTP, PAY4),
+	TXGBE_UKN(0xF6),
+	TXGBE_UKN(0xF7),
+	TXGBE_UKN(0xF8),
+	TXGBE_PTT(0xF9, IP, IPV6, IGMV, FGV6, NONE, PAY3),
+	TXGBE_PTT(0xFA, IP, IPV6, IGMV, IPV6, NONE, PAY3),
+	TXGBE_PTT(0xFB, IP, IPV6, IGMV, IPV6, UDP,  PAY4),
+	TXGBE_PTT(0xFC, IP, IPV6, IGMV, IPV6, TCP,  PAY4),
+	TXGBE_PTT(0xFD, IP, IPV6, IGMV, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xFE),
+	TXGBE_UKN(0xFF),
+};
+
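The decode flow sketched in the comment above the table, written out against
the declarations this patch adds to txgbe_hw.h (a sketch only; the L2 branch
is omitted since enum txgbe_l2_ptypes is defined elsewhere):

	/* Return true when the outer MAC carries IP and the ptype reports an
	 * encapsulation (IPIP, GRE, GRE+MAC, or GRE+MAC+VLAN).
	 */
	static bool txgbe_ptype_is_tunneled_ip(u8 ptype)
	{
		struct txgbe_dptype dptype = txgbe_ptype_lookup[ptype];

		if (!dptype.known)
			return false;	/* packet type is unknown */
		if (dptype.mac != TXGBE_DEC_PTYPE_MAC_IP)
			return false;	/* decode via the L2 ptypes instead */
		return dptype.etype != TXGBE_DEC_PTYPE_ETYPE_NONE;
	}
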
 void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
 {
 	struct txgbe_mac_info *mac = &hw->mac;
@@ -1581,6 +2281,7 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 
 	/* MAC */
 	mac->ops.init_hw = txgbe_init_hw;
+	mac->ops.clear_hw_cntrs = txgbe_clear_hw_cntrs;
 	mac->ops.get_mac_addr = txgbe_get_mac_addr;
 	mac->ops.stop_adapter = txgbe_stop_adapter;
 	mac->ops.get_bus_info = txgbe_get_bus_info;
@@ -1589,6 +2290,9 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->ops.release_swfw_sync = txgbe_release_swfw_sync;
 	mac->ops.reset_hw = txgbe_reset_hw;
 	mac->ops.get_media_type = txgbe_get_media_type;
+	mac->ops.disable_sec_rx_path = txgbe_disable_sec_rx_path;
+	mac->ops.enable_sec_rx_path = txgbe_enable_sec_rx_path;
+	mac->ops.enable_rx_dma = txgbe_enable_rx_dma;
 	mac->ops.start_hw = txgbe_start_hw;
 	mac->ops.get_san_mac_addr = txgbe_get_san_mac_addr;
 	mac->ops.get_wwn_prefix = txgbe_get_wwn_prefix;
@@ -1597,17 +2301,27 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->ops.led_on = txgbe_led_on;
 	mac->ops.led_off = txgbe_led_off;
 
-	/* RAR */
+	/* RAR, Multicast, VLAN */
 	mac->ops.set_rar = txgbe_set_rar;
 	mac->ops.clear_rar = txgbe_clear_rar;
 	mac->ops.init_rx_addrs = txgbe_init_rx_addrs;
+	mac->ops.update_mc_addr_list = txgbe_update_mc_addr_list;
+	mac->ops.enable_rx = txgbe_enable_rx;
+	mac->ops.disable_rx = txgbe_disable_rx;
+	mac->ops.set_vmdq_san_mac = txgbe_set_vmdq_san_mac;
+	mac->ops.set_vfta = txgbe_set_vfta;
+	mac->ops.clear_vfta = txgbe_clear_vfta;
 	mac->ops.init_uta_tables = txgbe_init_uta_tables;
 
 
 	/* Link */
 	mac->ops.get_link_capabilities = txgbe_get_link_capabilities;
 	mac->ops.check_link = txgbe_check_mac_link;
+	mac->ops.setup_rxpba = txgbe_set_rxpba;
+	mac->mcft_size          = TXGBE_SP_MC_TBL_SIZE;
+	mac->vft_size           = TXGBE_SP_VFT_TBL_SIZE;
 	mac->num_rar_entries    = TXGBE_SP_RAR_ENTRIES;
+	mac->rx_pb_size         = TXGBE_SP_RX_PB_SIZE;
 	mac->max_rx_queues      = TXGBE_SP_MAX_RX_QUEUES;
 	mac->max_tx_queues      = TXGBE_SP_MAX_TX_QUEUES;
 	mac->max_msix_vectors   = txgbe_get_pcie_msix_count(hw);
@@ -3004,6 +3718,14 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
 	/* Set the media type */
 	hw->phy.media_type = TCALL(hw, mac.ops.get_media_type);
 
+	/* Clear the VLAN filter table */
+	TCALL(hw, mac.ops.clear_vfta);
+
+	/* Clear statistics registers */
+	TCALL(hw, mac.ops.clear_hw_cntrs);
+
+	TXGBE_WRITE_FLUSH(hw);
+
 	/* Clear the rate limiters */
 	for (i = 0; i < hw->mac.max_tx_queues; i++) {
 		wr32(hw, TXGBE_TDM_RP_IDX, i);
@@ -3052,6 +3774,33 @@ s32 txgbe_identify_phy(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_enable_rx_dma - Enable the Rx DMA unit on sapphire
+ *  @hw: pointer to hardware structure
+ *  @regval: register value to write to RXCTRL
+ *
+ *  Enables the Rx DMA unit for sapphire
+ **/
+s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval)
+{
+	/* Workaround for sapphire silicon errata when enabling the Rx datapath.
+	 * If traffic is incoming before we enable the Rx unit, it could hang
+	 * the Rx DMA unit.  Therefore, make sure the security engine is
+	 * completely disabled prior to enabling the Rx unit.
+	 */
+
+	TCALL(hw, mac.ops.disable_sec_rx_path);
+
+	if (regval & TXGBE_RDB_PB_CTL_RXEN)
+		TCALL(hw, mac.ops.enable_rx);
+	else
+		TCALL(hw, mac.ops.disable_rx);
+
+	TCALL(hw, mac.ops.enable_sec_rx_path);
+
+	return 0;
+}
+
 /**
  *  txgbe_init_eeprom_params - Initialize EEPROM params
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index 2273afaea4e2..e400b0333cc4 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -4,8 +4,71 @@
 #ifndef _TXGBE_HW_H_
 #define _TXGBE_HW_H_
 
+/**
+ * Packet Type decoding
+ **/
+/* txgbe_dptype.mac: outer mac */
+enum txgbe_dec_ptype_mac {
+	TXGBE_DEC_PTYPE_MAC_IP = 0,
+	TXGBE_DEC_PTYPE_MAC_L2 = 2,
+	TXGBE_DEC_PTYPE_MAC_FCOE = 3,
+};
+
+/* txgbe_dptype.[e]ip: outer&encaped ip */
+#define TXGBE_DEC_PTYPE_IP_FRAG (0x4)
+enum txgbe_dec_ptype_ip {
+	TXGBE_DEC_PTYPE_IP_NONE = 0,
+	TXGBE_DEC_PTYPE_IP_IPV4 = 1,
+	TXGBE_DEC_PTYPE_IP_IPV6 = 2,
+	TXGBE_DEC_PTYPE_IP_FGV4 =
+		(TXGBE_DEC_PTYPE_IP_FRAG | TXGBE_DEC_PTYPE_IP_IPV4),
+	TXGBE_DEC_PTYPE_IP_FGV6 =
+		(TXGBE_DEC_PTYPE_IP_FRAG | TXGBE_DEC_PTYPE_IP_IPV6),
+};
+
+/* txgbe_dptype.etype: encaped type */
+enum txgbe_dec_ptype_etype {
+	TXGBE_DEC_PTYPE_ETYPE_NONE = 0,
+	TXGBE_DEC_PTYPE_ETYPE_IPIP = 1, /* IP+IP */
+	TXGBE_DEC_PTYPE_ETYPE_IG = 2, /* IP+GRE */
+	TXGBE_DEC_PTYPE_ETYPE_IGM = 3, /* IP+GRE+MAC */
+	TXGBE_DEC_PTYPE_ETYPE_IGMV = 4, /* IP+GRE+MAC+VLAN */
+};
+
+/* txgbe_dptype.proto: payload proto */
+enum txgbe_dec_ptype_prot {
+	TXGBE_DEC_PTYPE_PROT_NONE = 0,
+	TXGBE_DEC_PTYPE_PROT_UDP = 1,
+	TXGBE_DEC_PTYPE_PROT_TCP = 2,
+	TXGBE_DEC_PTYPE_PROT_SCTP = 3,
+	TXGBE_DEC_PTYPE_PROT_ICMP = 4,
+	TXGBE_DEC_PTYPE_PROT_TS = 5, /* time sync */
+};
+
+/* txgbe_dptype.layer: payload layer */
+enum txgbe_dec_ptype_layer {
+	TXGBE_DEC_PTYPE_LAYER_NONE = 0,
+	TXGBE_DEC_PTYPE_LAYER_PAY2 = 1,
+	TXGBE_DEC_PTYPE_LAYER_PAY3 = 2,
+	TXGBE_DEC_PTYPE_LAYER_PAY4 = 3,
+};
+
+struct txgbe_dptype {
+	u32 ptype:8;
+	u32 known:1;
+	u32 mac:2; /* outer mac */
+	u32 ip:3; /* outer ip */
+	u32 etype:3; /* encaped type */
+	u32 eip:3; /* encaped ip */
+	u32 prot:4; /* payload proto */
+	u32 layer:3; /* payload layer */
+};
+
+extern struct txgbe_dptype txgbe_ptype_lookup[256];
+
 s32 txgbe_init_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw(struct txgbe_hw *hw);
+s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw);
 s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num,
 			  u32 pba_num_size);
 s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr);
@@ -21,6 +84,11 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
 		  u32 enable_addr);
 s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index);
 s32 txgbe_init_rx_addrs(struct txgbe_hw *hw);
+s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
+			      u32 mc_addr_count,
+			      txgbe_mc_addr_itr func, bool clear);
+s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw);
+s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
 
 s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask);
 void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask);
@@ -31,10 +99,15 @@ s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
 s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq);
 s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq);
 s32 txgbe_init_uta_tables(struct txgbe_hw *hw);
+s32 txgbe_set_vfta(struct txgbe_hw *hw, u32 vlan,
+		   u32 vind, bool vlan_on);
+s32 txgbe_clear_vfta(struct txgbe_hw *hw);
 
 s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
 			 u16 *wwpn_prefix);
 
+void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom,
+		     int strategy);
 s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min,
 			 u8 build, u8 ver);
 s32 txgbe_reset_hostif(struct txgbe_hw *hw);
@@ -46,6 +119,7 @@ bool txgbe_mng_present(struct txgbe_hw *hw);
 bool txgbe_check_mng_access(struct txgbe_hw *hw);
 
 s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw);
+void txgbe_enable_rx(struct txgbe_hw *hw);
 void txgbe_disable_rx(struct txgbe_hw *hw);
 s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
 					  u32 speed,
@@ -68,6 +142,7 @@ int txgbe_reset_misc(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_identify_phy(struct txgbe_hw *hw);
 s32 txgbe_init_phy_ops(struct txgbe_hw *hw);
+s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval);
 s32 txgbe_init_ops(struct txgbe_hw *hw);
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
index 514a38941488..c7eafce35e87 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
@@ -25,6 +25,39 @@ static void txgbe_cache_ring_register(struct txgbe_adapter *adapter)
 		adapter->tx_ring[i]->reg_idx = i;
 }
 
+#define TXGBE_RSS_64Q_MASK      0x3F
+#define TXGBE_RSS_16Q_MASK      0xF
+#define TXGBE_RSS_8Q_MASK       0x7
+#define TXGBE_RSS_4Q_MASK       0x3
+#define TXGBE_RSS_2Q_MASK       0x1
+#define TXGBE_RSS_DISABLED_MASK 0x0
+
+/**
+ * txgbe_set_rss_queues: Allocate queues for RSS
+ * @adapter: board private structure to initialize
+ *
+ * This is our "base" multiqueue mode.  RSS (Receive Side Scaling) will try
+ * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU.
+ **/
+static bool txgbe_set_rss_queues(struct txgbe_adapter *adapter)
+{
+	struct txgbe_ring_feature *f;
+	u16 rss_i;
+
+	/* set mask for 64 queue limit of RSS */
+	f = &adapter->ring_feature[RING_F_RSS];
+	rss_i = f->limit;
+
+	f->indices = rss_i;
+	f->mask = TXGBE_RSS_64Q_MASK;
+
+	adapter->num_rx_queues = rss_i;
+	adapter->num_tx_queues = rss_i;
+
+	return true;
+}
+
 /**
  * txgbe_set_num_queues: Allocate queues for device, feature dependent
  * @adapter: board private structure to initialize
@@ -34,6 +67,8 @@ static void txgbe_set_num_queues(struct txgbe_adapter *adapter)
 	/* Start with base case */
 	adapter->num_rx_queues = 1;
 	adapter->num_tx_queues = 1;
+
+	txgbe_set_rss_queues(adapter);
 }
 
 /**
@@ -165,6 +200,10 @@ static int txgbe_alloc_q_vector(struct txgbe_adapter *adapter,
 	/* initialize CPU for DCA */
 	q_vector->cpu = -1;
 
+	/* initialize NAPI */
+	netif_napi_add(adapter->netdev, &q_vector->napi,
+		       txgbe_poll, 64);
+
 	/* tie q_vector and adapter together */
 	adapter->q_vector[v_idx] = q_vector;
 	q_vector->adapter = adapter;
@@ -268,6 +307,7 @@ static void txgbe_free_q_vector(struct txgbe_adapter *adapter, int v_idx)
 		adapter->rx_ring[ring->queue_index] = NULL;
 
 	adapter->q_vector[v_idx] = NULL;
+	netif_napi_del(&q_vector->napi);
 	kfree_rcu(q_vector, rcu);
 }
 
@@ -378,6 +418,10 @@ void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter)
 	if (!txgbe_acquire_msix_vectors(adapter))
 		return;
 
+	/* MSI-X allocation failed: fall back to a single queue, disable RSS */
+	txgbe_dev_warn("Disabling RSS support\n");
+	adapter->ring_feature[RING_F_RSS].limit = 1;
+
 	/* recalculate number of queues now that many features have been
 	 * changed or disabled.
 	 */
@@ -440,3 +484,21 @@ void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter)
 	txgbe_reset_interrupt_capability(adapter);
 }
 
+void txgbe_tx_ctxtdesc(struct txgbe_ring *tx_ring, u32 vlan_macip_lens,
+		       u32 fcoe_sof_eof, u32 type_tucmd, u32 mss_l4len_idx)
+{
+	struct txgbe_tx_context_desc *context_desc;
+	u16 i = tx_ring->next_to_use;
+
+	context_desc = TXGBE_TX_CTXTDESC(tx_ring, i);
+
+	i++;
+	tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
+
+	/* set bits to identify this as an advanced context descriptor */
+	type_tucmd |= TXGBE_TXD_DTYP_CTXT;
+	context_desc->vlan_macip_lens   = cpu_to_le32(vlan_macip_lens);
+	context_desc->seqnum_seed       = cpu_to_le32(fcoe_sof_eof);
+	context_desc->type_tucmd_mlhl   = cpu_to_le32(type_tucmd);
+	context_desc->mss_l4len_idx     = cpu_to_le32(mss_l4len_idx);
+}
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 385c0f35e82a..550af406de8d 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -8,7 +8,15 @@
 #include <linux/vmalloc.h>
 #include <linux/highmem.h>
 #include <linux/string.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/pkt_sched.h>
+#include <linux/ipv6.h>
 #include <linux/aer.h>
+#include <net/checksum.h>
+#include <net/ip6_checksum.h>
+#include <net/vxlan.h>
 #include <linux/etherdevice.h>
 
 #include "txgbe.h"
@@ -53,6 +61,21 @@ static struct workqueue_struct *txgbe_wq;
 
 static bool txgbe_is_sfp(struct txgbe_hw *hw);
 static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev);
+static void txgbe_clean_rx_ring(struct txgbe_ring *rx_ring);
+static void txgbe_clean_tx_ring(struct txgbe_ring *tx_ring);
+static void txgbe_napi_enable_all(struct txgbe_adapter *adapter);
+static void txgbe_napi_disable_all(struct txgbe_adapter *adapter);
+
+static inline struct txgbe_dptype txgbe_decode_ptype(const u8 ptype)
+{
+	return txgbe_ptype_lookup[ptype];
+}
+
+static inline struct txgbe_dptype
+decode_rx_desc_ptype(const union txgbe_rx_desc *rx_desc)
+{
+	return txgbe_decode_ptype(TXGBE_RXD_PKTTYPE(rx_desc));
+}
 
 static void txgbe_check_minimum_link(struct txgbe_adapter *adapter,
 				     int expected_gts)
@@ -178,1393 +201,5043 @@ static void txgbe_set_ivar(struct txgbe_adapter *adapter, s8 direction,
 	}
 }
 
-/**
- * txgbe_configure_msix - Configure MSI-X hardware
- * @adapter: board private structure
- *
- * txgbe_configure_msix sets up the hardware to properly generate MSI-X
- * interrupts.
- **/
-static void txgbe_configure_msix(struct txgbe_adapter *adapter)
+void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *ring,
+				      struct txgbe_tx_buffer *tx_buffer)
 {
-	u16 v_idx;
+	if (tx_buffer->skb) {
+		dev_kfree_skb_any(tx_buffer->skb);
+		if (dma_unmap_len(tx_buffer, len))
+			dma_unmap_single(ring->dev,
+					 dma_unmap_addr(tx_buffer, dma),
+					 dma_unmap_len(tx_buffer, len),
+					 DMA_TO_DEVICE);
+	} else if (dma_unmap_len(tx_buffer, len)) {
+		dma_unmap_page(ring->dev,
+			       dma_unmap_addr(tx_buffer, dma),
+			       dma_unmap_len(tx_buffer, len),
+			       DMA_TO_DEVICE);
+	}
+	tx_buffer->next_to_watch = NULL;
+	tx_buffer->skb = NULL;
+	dma_unmap_len_set(tx_buffer, len, 0);
+	/* tx_buffer must be completely set up in the transmit path */
+}
 
-	/* Populate MSIX to EITR Select */
-	wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0);
+static u64 txgbe_get_tx_completed(struct txgbe_ring *ring)
+{
+	return ring->stats.packets;
+}
 
-	/* Populate the IVAR table and set the ITR values to the
-	 * corresponding register.
-	 */
-	for (v_idx = 0; v_idx < adapter->num_q_vectors; v_idx++) {
-		struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx];
-		struct txgbe_ring *ring;
+static u64 txgbe_get_tx_pending(struct txgbe_ring *ring)
+{
+	struct txgbe_adapter *adapter;
+	struct txgbe_hw *hw;
+	u32 head, tail;
 
-		txgbe_for_each_ring(ring, q_vector->rx)
-			txgbe_set_ivar(adapter, 0, ring->reg_idx, v_idx);
+	if (ring->accel)
+		adapter = ring->accel->adapter;
+	else
+		adapter = ring->q_vector->adapter;
 
-		txgbe_for_each_ring(ring, q_vector->tx)
-			txgbe_set_ivar(adapter, 1, ring->reg_idx, v_idx);
+	hw = &adapter->hw;
+	head = rd32(hw, TXGBE_PX_TR_RP(ring->reg_idx));
+	tail = rd32(hw, TXGBE_PX_TR_WP(ring->reg_idx));
 
-		txgbe_write_eitr(q_vector);
-	}
+	return ((head <= tail) ? tail : tail + ring->count) - head;
+}
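
The return expression accounts for the software write pointer wrapping past
the end of the ring; a standalone check with assumed pointer values:

	#include <stdio.h>

	/* 512-entry ring, hw read pointer at 500, sw write pointer wrapped
	 * to 10: 12 descriptors at the end plus 10 at the front = 22 pending.
	 */
	int main(void)
	{
		unsigned int count = 512, head = 500, tail = 10;
		unsigned int pending =
			((head <= tail) ? tail : tail + count) - head;

		printf("pending = %u\n", pending);	/* pending = 22 */
		return 0;
	}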
 
-	txgbe_set_ivar(adapter, -1, 0, v_idx);
+static inline bool txgbe_check_tx_hang(struct txgbe_ring *tx_ring)
+{
+	u64 tx_done = txgbe_get_tx_completed(tx_ring);
+	u64 tx_done_old = tx_ring->tx_stats.tx_done_old;
+	u64 tx_pending = txgbe_get_tx_pending(tx_ring);
+
+	clear_check_for_tx_hang(tx_ring);
+
+	/* Check for a hung queue, but be thorough. This verifies
+	 * that a transmit has been completed since the previous
+	 * check AND there is at least one packet pending. The
+	 * ARMED bit is set to indicate a potential hang. The
+	 * bit is cleared if a pause frame is received to remove
+	 * false hang detection due to PFC or 802.3x frames. By
+	 * requiring this to fail twice we avoid races with
+	 * pfc clearing the ARMED bit and conditions where we
+	 * run the check_tx_hang logic with a transmit completion
+	 * pending but without time to complete it yet.
+	 */
+	if (tx_done_old == tx_done && tx_pending)
+		/* make sure it is true for two checks in a row */
+		return test_and_set_bit(__TXGBE_HANG_CHECK_ARMED,
+					&tx_ring->state);
+	/* update completed stats and continue */
+	tx_ring->tx_stats.tx_done_old = tx_done;
+	/* reset the countdown */
+	clear_bit(__TXGBE_HANG_CHECK_ARMED, &tx_ring->state);
 
-	wr32(&adapter->hw, TXGBE_PX_ITR(v_idx), 1950);
+	return false;
 }
 
 /**
- * txgbe_write_eitr - write EITR register in hardware specific way
- * @q_vector: structure containing interrupt and ring information
- *
- * This function is made to be called by ethtool and by the driver
- * when it needs to update EITR registers at runtime.  Hardware
- * specific quirks/differences are taken care of here.
- */
-void txgbe_write_eitr(struct txgbe_q_vector *q_vector)
+ * txgbe_tx_timeout - Respond to a Tx Hang
+ * @netdev: network interface device structure
+ **/
+static void txgbe_tx_timeout(struct net_device *netdev, unsigned int txqueue)
 {
-	struct txgbe_adapter *adapter = q_vector->adapter;
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
 	struct txgbe_hw *hw = &adapter->hw;
-	int v_idx = q_vector->v_idx;
-	u32 itr_reg = q_vector->itr & TXGBE_MAX_EITR;
+	bool real_tx_hang = false;
+	int i;
+	u16 value = 0;
+	u32 value2 = 0, value3 = 0;
+	u32 head, tail;
 
-	itr_reg |= TXGBE_PX_ITR_CNT_WDIS;
+#define TX_TIMEO_LIMIT 16000
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct txgbe_ring *tx_ring = adapter->tx_ring[i];
 
-	wr32(hw, TXGBE_PX_ITR(v_idx), itr_reg);
+		if (check_for_tx_hang(tx_ring) && txgbe_check_tx_hang(tx_ring))
+			real_tx_hang = true;
+	}
+
+	/* Dump the relevant registers to determine the cause of a timeout event. */
+	pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci vendor id is 0x%x\n", value);
+	pci_read_config_word(adapter->pdev, PCI_COMMAND, &value);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci command reg is 0x%x.\n", value);
+
+	value2 = rd32(&adapter->hw, 0x10000);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "reg 0x10000 value is 0x%08x\n", value2);
+	value2 = rd32(&adapter->hw, 0x180d0);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "reg 0x180d0 value is 0x%08x\n", value2);
+	value2 = rd32(&adapter->hw, 0x180d4);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "reg 0x180d4 value is 0x%08x\n", value2);
+	value2 = rd32(&adapter->hw, 0x180d8);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "reg 0x180d8 value is 0x%08x\n", value2);
+	value2 = rd32(&adapter->hw, 0x180dc);
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "reg 0x180dc value is 0x%08x\n", value2);
+
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		head = rd32(&adapter->hw, TXGBE_PX_TR_RP(adapter->tx_ring[i]->reg_idx));
+		tail = rd32(&adapter->hw, TXGBE_PX_TR_WP(adapter->tx_ring[i]->reg_idx));
+
+		ERROR_REPORT1(TXGBE_ERROR_POLLING,
+			      "tx ring %d next_to_use is %d, next_to_clean is %d\n",
+			      i, adapter->tx_ring[i]->next_to_use,
+			      adapter->tx_ring[i]->next_to_clean);
+		ERROR_REPORT1(TXGBE_ERROR_POLLING,
+			      "tx ring %d hw rp is 0x%x, wp is 0x%x\n",
+			      i, head, tail);
+	}
+
+	value2 = rd32(&adapter->hw, TXGBE_PX_IMS(0));
+	value3 = rd32(&adapter->hw, TXGBE_PX_IMS(1));
+	ERROR_REPORT1(TXGBE_ERROR_POLLING,
+		      "PX_IMS0 value is 0x%08x, PX_IMS1 value is 0x%08x\n",
+		      value2, value3);
+
+	if (value2 || value3) {
+		ERROR_REPORT1(TXGBE_ERROR_POLLING, "clear interrupt mask.\n");
+		wr32(&adapter->hw, TXGBE_PX_ICS(0), value2);
+		wr32(&adapter->hw, TXGBE_PX_IMC(0), value2);
+		wr32(&adapter->hw, TXGBE_PX_ICS(1), value3);
+		wr32(&adapter->hw, TXGBE_PX_IMC(1), value3);
+	}
 }
 
 /**
- * txgbe_check_overtemp_subtask - check for over temperature
- * @adapter: pointer to adapter
+ * txgbe_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
  **/
-static void txgbe_check_overtemp_subtask(struct txgbe_adapter *adapter)
+static bool txgbe_clean_tx_irq(struct txgbe_q_vector *q_vector,
+			       struct txgbe_ring *tx_ring)
 {
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 eicr = adapter->interrupt_event;
-	s32 temp_state;
+	struct txgbe_adapter *adapter = q_vector->adapter;
+	struct txgbe_tx_buffer *tx_buffer;
+	union txgbe_tx_desc *tx_desc;
+	unsigned int total_bytes = 0, total_packets = 0;
+	unsigned int budget = q_vector->tx.work_limit;
+	unsigned int i = tx_ring->next_to_clean;
 
 	if (test_bit(__TXGBE_DOWN, &adapter->state))
-		return;
-	if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_EVENT))
-		return;
+		return true;
 
-	adapter->flags2 &= ~TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+	tx_buffer = &tx_ring->tx_buffer_info[i];
+	tx_desc = TXGBE_TX_DESC(tx_ring, i);
+	i -= tx_ring->count;
+
+	do {
+		union txgbe_tx_desc *eop_desc = tx_buffer->next_to_watch;
+
+		/* if next_to_watch is not set then there is no work pending */
+		if (!eop_desc)
+			break;
+
+		/* prevent any other reads prior to eop_desc */
+		smp_rmb();
+
+		/* if DD is not set pending work has not been completed */
+		if (!(eop_desc->wb.status & cpu_to_le32(TXGBE_TXD_STAT_DD)))
+			break;
+
+		/* clear next_to_watch to prevent false hangs */
+		tx_buffer->next_to_watch = NULL;
+
+		/* update the statistics for this packet */
+		total_bytes += tx_buffer->bytecount;
+		total_packets += tx_buffer->gso_segs;
+
+		/* free the skb */
+		dev_consume_skb_any(tx_buffer->skb);
+
+		/* unmap skb header data */
+		dma_unmap_single(tx_ring->dev,
+				 dma_unmap_addr(tx_buffer, dma),
+				 dma_unmap_len(tx_buffer, len),
+				 DMA_TO_DEVICE);
+
+		/* clear tx_buffer data */
+		tx_buffer->skb = NULL;
+		dma_unmap_len_set(tx_buffer, len, 0);
+
+		/* unmap remaining buffers */
+		while (tx_desc != eop_desc) {
+			tx_buffer++;
+			tx_desc++;
+			i++;
+			if (unlikely(!i)) {
+				i -= tx_ring->count;
+				tx_buffer = tx_ring->tx_buffer_info;
+				tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+			}
 
-	/* Since the warning interrupt is for both ports
-	 * we don't have to check if:
-	 *  - This interrupt wasn't for our port.
-	 *  - We may have missed the interrupt so always have to
-	 *    check if we  got a LSC
-	 */
-	if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
-		return;
+			/* unmap any remaining paged data */
+			if (dma_unmap_len(tx_buffer, len)) {
+				dma_unmap_page(tx_ring->dev,
+					       dma_unmap_addr(tx_buffer, dma),
+					       dma_unmap_len(tx_buffer, len),
+					       DMA_TO_DEVICE);
+				dma_unmap_len_set(tx_buffer, len, 0);
+			}
+		}
 
-	temp_state = TCALL(hw, phy.ops.check_overtemp);
-	if (!temp_state || temp_state == TXGBE_NOT_IMPLEMENTED)
-		return;
+		/* move us one more past the eop_desc for start of next pkt */
+		tx_buffer++;
+		tx_desc++;
+		i++;
+		if (unlikely(!i)) {
+			i -= tx_ring->count;
+			tx_buffer = tx_ring->tx_buffer_info;
+			tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+		}
 
-	if (temp_state == TXGBE_ERR_UNDERTEMP &&
-	    test_bit(__TXGBE_HANGING, &adapter->state)) {
-		txgbe_crit(drv, "%s\n", txgbe_underheat_msg);
-		wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
-		      TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
-		netif_carrier_on(adapter->netdev);
-		clear_bit(__TXGBE_HANGING, &adapter->state);
-	} else if (temp_state == TXGBE_ERR_OVERTEMP &&
-		!test_and_set_bit(__TXGBE_HANGING, &adapter->state)) {
-		txgbe_crit(drv, "%s\n", txgbe_overheat_msg);
-		netif_carrier_off(adapter->netdev);
-		wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
-		      TXGBE_RDB_PB_CTL_RXEN, 0);
+		/* issue prefetch for next Tx descriptor */
+		prefetch(tx_desc);
+
+		/* update budget accounting */
+		budget--;
+	} while (likely(budget));
+
+	i += tx_ring->count;
+	tx_ring->next_to_clean = i;
+	u64_stats_update_begin(&tx_ring->syncp);
+	tx_ring->stats.bytes += total_bytes;
+	tx_ring->stats.packets += total_packets;
+	u64_stats_update_end(&tx_ring->syncp);
+	q_vector->tx.total_bytes += total_bytes;
+	q_vector->tx.total_packets += total_packets;
+
+	if (check_for_tx_hang(tx_ring) && txgbe_check_tx_hang(tx_ring)) {
+		/* schedule immediate reset if we believe we hung */
+		struct txgbe_hw *hw = &adapter->hw;
+		u16 value = 0;
+
+		txgbe_err(drv, "Detected Tx Unit Hang\n"
+			"  Tx Queue             <%d>\n"
+			"  TDH, TDT             <%x>, <%x>\n"
+			"  next_to_use          <%x>\n"
+			"  next_to_clean        <%x>\n"
+			"tx_buffer_info[next_to_clean]\n"
+			"  jiffies              <%lx>\n",
+			tx_ring->queue_index,
+			rd32(hw, TXGBE_PX_TR_RP(tx_ring->reg_idx)),
+			rd32(hw, TXGBE_PX_TR_WP(tx_ring->reg_idx)),
+			tx_ring->next_to_use, i,
+			jiffies);
+
+		pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value);
+		if (value == TXGBE_FAILED_READ_CFG_WORD)
+			txgbe_info(hw, "pcie link has been lost.\n");
+
+		netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+		txgbe_info(probe,
+			   "tx hang %d detected on queue %d, resetting adapter\n",
+			   adapter->tx_timeout_count + 1, tx_ring->queue_index);
+
+		/* the adapter is about to reset, no point in enabling stuff */
+		return true;
 	}
 
-	adapter->interrupt_event = 0;
-}
-
-static void txgbe_check_overtemp_event(struct txgbe_adapter *adapter, u32 eicr)
-{
-	if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
-		return;
+	netdev_tx_completed_queue(txring_txq(tx_ring),
+				  total_packets, total_bytes);
 
-	if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
-		adapter->interrupt_event = eicr;
-		adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_EVENT;
-		txgbe_service_event_schedule(adapter);
+#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
+	if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+		     (txgbe_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
+		/* Make sure that anybody stopping the queue after this
+		 * sees the new next_to_clean.
+		 */
+		smp_mb();
+
+		if (__netif_subqueue_stopped(tx_ring->netdev,
+					     tx_ring->queue_index) &&
+		    !test_bit(__TXGBE_DOWN, &adapter->state)) {
+			netif_wake_subqueue(tx_ring->netdev,
+					    tx_ring->queue_index);
+			++tx_ring->tx_stats.restart_queue;
+		}
 	}
+
+	return !!budget;
 }
 
-static void txgbe_check_sfp_event(struct txgbe_adapter *adapter, u32 eicr)
+#define TXGBE_RSS_L4_TYPES_MASK \
+	((1ul << TXGBE_RXD_RSSTYPE_IPV4_TCP) | \
+	 (1ul << TXGBE_RXD_RSSTYPE_IPV4_UDP) | \
+	 (1ul << TXGBE_RXD_RSSTYPE_IPV4_SCTP) | \
+	 (1ul << TXGBE_RXD_RSSTYPE_IPV6_TCP) | \
+	 (1ul << TXGBE_RXD_RSSTYPE_IPV6_UDP) | \
+	 (1ul << TXGBE_RXD_RSSTYPE_IPV6_SCTP))
+
+static inline void txgbe_rx_hash(struct txgbe_ring *ring,
+				 union txgbe_rx_desc *rx_desc,
+				 struct sk_buff *skb)
 {
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 eicr_mask = TXGBE_PX_MISC_IC_GPIO;
-	u32 reg;
+	u16 rss_type;
 
-	if (eicr & eicr_mask) {
-		if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
-			wr32(hw, TXGBE_GPIO_INTMASK, 0xFF);
-			reg = rd32(hw, TXGBE_GPIO_INTSTATUS);
-			if (reg & TXGBE_GPIO_INTSTATUS_2) {
-				adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
-				wr32(hw, TXGBE_GPIO_EOI,
-				     TXGBE_GPIO_EOI_2);
-				adapter->sfp_poll_time = 0;
-				txgbe_service_event_schedule(adapter);
-			}
-			if (reg & TXGBE_GPIO_INTSTATUS_3) {
-				adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
-				wr32(hw, TXGBE_GPIO_EOI,
-				     TXGBE_GPIO_EOI_3);
-				txgbe_service_event_schedule(adapter);
-			}
+	if (!(ring->netdev->features & NETIF_F_RXHASH))
+		return;
 
-			if (reg & TXGBE_GPIO_INTSTATUS_6) {
-				wr32(hw, TXGBE_GPIO_EOI,
-				     TXGBE_GPIO_EOI_6);
-				adapter->flags |=
-					TXGBE_FLAG_NEED_LINK_CONFIG;
-				txgbe_service_event_schedule(adapter);
-			}
-			wr32(hw, TXGBE_GPIO_INTMASK, 0x0);
-		}
-	}
-}
+	rss_type = le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.pkt_info) &
+		   TXGBE_RXD_RSSTYPE_MASK;
 
-static void txgbe_check_lsc(struct txgbe_adapter *adapter)
-{
-	adapter->lsc_int++;
-	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
-	adapter->link_check_timeout = jiffies;
-	if (!test_bit(__TXGBE_DOWN, &adapter->state))
-		txgbe_service_event_schedule(adapter);
+	if (!rss_type)
+		return;
+
+	skb_set_hash(skb, le32_to_cpu(rx_desc->wb.lower.hi_dword.rss),
+		     (TXGBE_RSS_L4_TYPES_MASK & (1ul << rss_type)) ?
+		     PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
 }
 
 /**
- * txgbe_irq_enable - Enable default interrupt generation settings
- * @adapter: board private structure
+ * txgbe_rx_checksum - indicate in skb if hw indicated a good cksum
+ * @ring: structure containing ring specific data
+ * @rx_desc: current Rx descriptor being processed
+ * @skb: skb currently being received and modified
  **/
-void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush)
+static inline void txgbe_rx_checksum(struct txgbe_ring *ring,
+				     union txgbe_rx_desc *rx_desc,
+				     struct sk_buff *skb)
 {
-	u32 mask = 0;
-	struct txgbe_hw *hw = &adapter->hw;
-	u8 device_type = hw->subsystem_id & 0xF0;
+	struct txgbe_dptype dptype = decode_rx_desc_ptype(rx_desc);
 
-	/* enable gpio interrupt */
-	if (device_type != TXGBE_ID_MAC_XAUI &&
-	    device_type != TXGBE_ID_MAC_SGMII) {
-		mask |= TXGBE_GPIO_INTEN_2;
-		mask |= TXGBE_GPIO_INTEN_3;
-		mask |= TXGBE_GPIO_INTEN_6;
-	}
-	wr32(&adapter->hw, TXGBE_GPIO_INTEN, mask);
+	skb->ip_summed = CHECKSUM_NONE;
 
-	if (device_type != TXGBE_ID_MAC_XAUI &&
-	    device_type != TXGBE_ID_MAC_SGMII) {
-		mask = TXGBE_GPIO_INTTYPE_LEVEL_2 | TXGBE_GPIO_INTTYPE_LEVEL_3 |
-			TXGBE_GPIO_INTTYPE_LEVEL_6;
-	}
-	wr32(&adapter->hw, TXGBE_GPIO_INTTYPE_LEVEL, mask);
+	skb_checksum_none_assert(skb);
 
-	/* enable misc interrupt */
-	mask = TXGBE_PX_MISC_IEN_MASK;
+	/* Rx csum disabled */
+	if (!(ring->netdev->features & NETIF_F_RXCSUM))
+		return;
 
-	mask |= TXGBE_PX_MISC_IEN_OVER_HEAT;
+	/* if IPv4 header checksum error */
+	if ((txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_IPCS) &&
+	     txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_IPE)) ||
+	    (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_OUTERIPCS) &&
+	     txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_OUTERIPER))) {
+		ring->rx_stats.csum_err++;
+		return;
+	}
 
-	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask);
+	/* L4 checksum offload flag must set for the below code to work */
+	if (!txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_L4CS))
+		return;
 
-	/* unmask interrupt */
-	txgbe_intr_enable(&adapter->hw, TXGBE_INTR_MISC(adapter));
-	if (queues)
-		txgbe_intr_enable(&adapter->hw, TXGBE_INTR_QALL(adapter));
+	/* likely incorrect csum if an IPv6 destination header is found */
+	if (dptype.prot != TXGBE_DEC_PTYPE_PROT_SCTP && TXGBE_RXD_IPV6EX(rx_desc))
+		return;
 
-	/* flush configuration */
-	if (flush)
-		TXGBE_WRITE_FLUSH(&adapter->hw);
+	/* if L4 checksum error */
+	if (txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_TCPE)) {
+		ring->rx_stats.csum_err++;
+		return;
+	}
+	/* If there is an outer header present that might contain a checksum
+	 * we need to bump the checksum level by 1 to reflect the fact that
+	 * we are indicating we validated the inner checksum.
+	 */
+	if (dptype.etype >= TXGBE_DEC_PTYPE_ETYPE_IG) {
+		skb->csum_level = 1;
+		/* FIXME: can skb->csum_level and skb->encapsulation both be set? */
+		skb->encapsulation = 1;
+	}
+
+	/* It must be a TCP or UDP or SCTP packet with a valid checksum */
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+	ring->rx_stats.csum_good_cnt++;
 }
 
-static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
+static bool txgbe_alloc_mapped_page(struct txgbe_ring *rx_ring,
+				    struct txgbe_rx_buffer *bi)
 {
-	struct txgbe_adapter *adapter = data;
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 eicr;
-	u32 ecc;
+	struct page *page = bi->page;
+	dma_addr_t dma;
 
-	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+	/* since we are recycling buffers we should seldom need to alloc */
+	if (likely(page))
+		return true;
 
-	if (eicr & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
-		txgbe_check_lsc(adapter);
+	/* alloc new page for storage */
+	page = dev_alloc_pages(txgbe_rx_pg_order(rx_ring));
+	if (unlikely(!page)) {
+		rx_ring->rx_stats.alloc_rx_page_failed++;
+		return false;
+	}
 
-	if (eicr & TXGBE_PX_MISC_IC_INT_ERR) {
-		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
-		ecc = rd32(hw, TXGBE_MIS_ST);
-		if (((ecc & TXGBE_MIS_ST_LAN0_ECC) && hw->bus.lan_id == 0) ||
-		    ((ecc & TXGBE_MIS_ST_LAN1_ECC) && hw->bus.lan_id == 1))
-			adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+	/* map page for use */
+	dma = dma_map_page(rx_ring->dev, page, 0,
+			   txgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
 
-		txgbe_service_event_schedule(adapter);
-	}
-	if (eicr & TXGBE_PX_MISC_IC_DEV_RST) {
-		adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
-		txgbe_service_event_schedule(adapter);
-	}
-	if ((eicr & TXGBE_PX_MISC_IC_STALL) ||
-	    (eicr & TXGBE_PX_MISC_IC_ETH_EVENT)) {
-		adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
-		txgbe_service_event_schedule(adapter);
-	}
+	/* if mapping failed free memory back to system since
+	 * there isn't much point in holding memory we can't use
+	 */
+	if (dma_mapping_error(rx_ring->dev, dma)) {
+		__free_pages(page, txgbe_rx_pg_order(rx_ring));
 
-	txgbe_check_sfp_event(adapter, eicr);
-	txgbe_check_overtemp_event(adapter, eicr);
+		rx_ring->rx_stats.alloc_rx_page_failed++;
+		return false;
+	}
 
-	/* re-enable the original interrupt state, no lsc, no queues */
-	if (!test_bit(__TXGBE_DOWN, &adapter->state))
-		txgbe_irq_enable(adapter, false, false);
+	bi->page_dma = dma;
+	bi->page = page;
+	bi->page_offset = 0;
 
-	return IRQ_HANDLED;
+	return true;
 }
 
-static irqreturn_t txgbe_msix_clean_rings(int __always_unused irq, void *data)
+/**
+ * txgbe_alloc_rx_buffers - Replace used receive buffers
+ * @rx_ring: ring to place buffers on
+ * @cleaned_count: number of buffers to replace
+ **/
+void txgbe_alloc_rx_buffers(struct txgbe_ring *rx_ring, u16 cleaned_count)
 {
-	struct txgbe_q_vector *q_vector = data;
+	union txgbe_rx_desc *rx_desc;
+	struct txgbe_rx_buffer *bi;
+	u16 i = rx_ring->next_to_use;
 
-	/* EIAM disabled interrupts (on this vector) for us */
+	/* nothing to do */
+	if (!cleaned_count)
+		return;
 
-	if (q_vector->rx.ring || q_vector->tx.ring)
-		napi_schedule_irqoff(&q_vector->napi);
+	rx_desc = TXGBE_RX_DESC(rx_ring, i);
+	bi = &rx_ring->rx_buffer_info[i];
+	i -= rx_ring->count;
+
+	do {
+		if (!txgbe_alloc_mapped_page(rx_ring, bi))
+			break;
+		rx_desc->read.pkt_addr =
+			cpu_to_le64(bi->page_dma + bi->page_offset);
+
+		rx_desc++;
+		bi++;
+		i++;
+		if (unlikely(!i)) {
+			rx_desc = TXGBE_RX_DESC(rx_ring, 0);
+			bi = rx_ring->rx_buffer_info;
+			i -= rx_ring->count;
+		}
 
-	return IRQ_HANDLED;
-}
+		/* clear the status bits for the next_to_use descriptor */
+		rx_desc->wb.upper.status_error = 0;
 
-/**
- * txgbe_request_msix_irqs - Initialize MSI-X interrupts
- * @adapter: board private structure
- *
- * txgbe_request_msix_irqs allocates MSI-X vectors and requests
- * interrupts from the kernel.
- **/
-static int txgbe_request_msix_irqs(struct txgbe_adapter *adapter)
-{
-	struct net_device *netdev = adapter->netdev;
-	int vector, err;
-	int ri = 0, ti = 0;
+		cleaned_count--;
+	} while (cleaned_count);
 
-	for (vector = 0; vector < adapter->num_q_vectors; vector++) {
-		struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
-		struct msix_entry *entry = &adapter->msix_entries[vector];
+	i += rx_ring->count;
 
-		if (q_vector->tx.ring && q_vector->rx.ring) {
-			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
-				 "%s-TxRx-%d", netdev->name, ri++);
-			ti++;
-		} else if (q_vector->rx.ring) {
-			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
-				 "%s-rx-%d", netdev->name, ri++);
-		} else if (q_vector->tx.ring) {
-			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
-				 "%s-tx-%d", netdev->name, ti++);
-		} else {
-			/* skip this unused q_vector */
-			continue;
-		}
-		err = request_irq(entry->vector, &txgbe_msix_clean_rings, 0,
-				  q_vector->name, q_vector);
-		if (err) {
-			txgbe_err(probe, "request_irq failed for MSIX interrupt '%s' Error: %d\n",
-				  q_vector->name, err);
-			goto free_queue_irqs;
-		}
-	}
+	if (rx_ring->next_to_use != i) {
+		rx_ring->next_to_use = i;
+		/* update next to alloc since we have filled the ring */
+		rx_ring->next_to_alloc = i;
 
-	err = request_irq(adapter->msix_entries[vector].vector,
-			  txgbe_msix_other, 0, netdev->name, adapter);
-	if (err) {
-		txgbe_err(probe, "request_irq for msix_other failed: %d\n", err);
-		goto free_queue_irqs;
+		/* Force memory writes to complete before letting h/w
+		 * know there are new descriptors to fetch.  (Only
+		 * applicable for weak-ordered memory model archs,
+		 * such as IA-64).
+		 */
+		wmb();
+		writel(i, rx_ring->tail);
 	}
+}
 
-	return 0;
+static void txgbe_set_rsc_gso_size(struct txgbe_ring __maybe_unused *ring,
+				   struct sk_buff *skb)
+{
+	u16 hdr_len = eth_get_headlen(skb->dev, skb->data, skb_headlen(skb));
 
-free_queue_irqs:
-	while (vector) {
-		vector--;
-		irq_set_affinity_hint(adapter->msix_entries[vector].vector,
-				      NULL);
-		free_irq(adapter->msix_entries[vector].vector,
-			 adapter->q_vector[vector]);
-	}
-	adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
-	pci_disable_msix(adapter->pdev);
-	kfree(adapter->msix_entries);
-	adapter->msix_entries = NULL;
-	return err;
+	/* set gso_size to avoid messing up TCP MSS */
+	skb_shinfo(skb)->gso_size = DIV_ROUND_UP((skb->len - hdr_len),
+						 TXGBE_CB(skb)->append_cnt);
+	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
 }
 
-/**
- * txgbe_intr - legacy mode Interrupt Handler
- * @irq: interrupt number
- * @data: pointer to a network interface device structure
- **/
-static irqreturn_t txgbe_intr(int __always_unused irq, void *data)
+static void txgbe_update_rsc_stats(struct txgbe_ring *rx_ring,
+				   struct sk_buff *skb)
 {
-	struct txgbe_adapter *adapter = data;
-	struct txgbe_q_vector *q_vector = adapter->q_vector[0];
-	u32 eicr;
-	u32 eicr_misc;
+	/* if append_cnt is 0 then frame is not RSC */
+	if (!TXGBE_CB(skb)->append_cnt)
+		return;
 
-	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
-	if (!eicr) {
-		/* shared interrupt alert!
-		 * the interrupt that we masked before the EICR read.
-		 */
-		if (!test_bit(__TXGBE_DOWN, &adapter->state))
-			txgbe_irq_enable(adapter, true, true);
-		return IRQ_NONE;        /* Not our interrupt */
-	}
-	adapter->isb_mem[TXGBE_ISB_VEC0] = 0;
-	if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED))
-		wr32(&adapter->hw, TXGBE_PX_INTA, 1);
+	rx_ring->rx_stats.rsc_count += TXGBE_CB(skb)->append_cnt;
+	rx_ring->rx_stats.rsc_flush++;
 
-	eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
-	if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
-		txgbe_check_lsc(adapter);
+	txgbe_set_rsc_gso_size(rx_ring, skb);
 
-	if (eicr_misc & TXGBE_PX_MISC_IC_INT_ERR) {
-		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
-		adapter->flags2 |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
-		txgbe_service_event_schedule(adapter);
-	}
+	/* gso_size is computed using append_cnt so always clear it last */
+	TXGBE_CB(skb)->append_cnt = 0;
+}
 
-	if (eicr_misc & TXGBE_PX_MISC_IC_DEV_RST) {
-		adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
-		txgbe_service_event_schedule(adapter);
+static void txgbe_rx_vlan(struct txgbe_ring *ring,
+			  union txgbe_rx_desc *rx_desc,
+			  struct sk_buff *skb)
+{
+	u8 idx = 0;
+	u16 ethertype;
+
+	if ((ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+	    txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_VP)) {
+		idx = (le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.pkt_info) &
+		       TXGBE_RXD_TPID_MASK) >> TXGBE_RXD_TPID_SHIFT;
+		ethertype = ring->q_vector->adapter->hw.tpid[idx];
+		__vlan_hwaccel_put_tag(skb,
+				       htons(ethertype),
+				       le16_to_cpu(rx_desc->wb.upper.vlan));
 	}
-	txgbe_check_sfp_event(adapter, eicr_misc);
-	txgbe_check_overtemp_event(adapter, eicr_misc);
+}
 
-	adapter->isb_mem[TXGBE_ISB_MISC] = 0;
-	/* would disable interrupts here but it is auto disabled */
-	napi_schedule_irqoff(&q_vector->napi);
+/**
+ * txgbe_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
+ **/
+static void txgbe_process_skb_fields(struct txgbe_ring *rx_ring,
+				     union txgbe_rx_desc *rx_desc,
+				     struct sk_buff *skb)
+{
+	txgbe_update_rsc_stats(rx_ring, skb);
+	txgbe_rx_hash(rx_ring, rx_desc, skb);
+	txgbe_rx_checksum(rx_ring, rx_desc, skb);
 
-	/* re-enable link(maybe) and non-queue interrupts, no flush.
-	 * txgbe_poll will re-enable the queue interrupts
-	 */
-	if (!test_bit(__TXGBE_DOWN, &adapter->state))
-		txgbe_irq_enable(adapter, false, false);
+	txgbe_rx_vlan(rx_ring, rx_desc, skb);
 
-	return IRQ_HANDLED;
+	skb_record_rx_queue(skb, rx_ring->queue_index);
+
+	skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+}
+
+static void txgbe_rx_skb(struct txgbe_q_vector *q_vector,
+			 struct sk_buff *skb)
+{
+	napi_gro_receive(&q_vector->napi, skb);
 }
 
 /**
- * txgbe_request_irq - initialize interrupts
- * @adapter: board private structure
+ * txgbe_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
  *
- * Attempts to configure interrupts using the best available
- * capabilities of the hardware and kernel.
+ * This function updates next to clean.  If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
  **/
-static int txgbe_request_irq(struct txgbe_adapter *adapter)
+static bool txgbe_is_non_eop(struct txgbe_ring *rx_ring,
+			     union txgbe_rx_desc *rx_desc,
+			     struct sk_buff *skb)
 {
-	struct net_device *netdev = adapter->netdev;
-	int err;
+	u32 ntc = rx_ring->next_to_clean + 1;
 
-	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
-		err = txgbe_request_msix_irqs(adapter);
-	else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED)
-		err = request_irq(adapter->pdev->irq, &txgbe_intr, 0,
-				  netdev->name, adapter);
-	else
-		err = request_irq(adapter->pdev->irq, &txgbe_intr, IRQF_SHARED,
-				  netdev->name, adapter);
+	/* fetch, update, and store next to clean */
+	ntc = (ntc < rx_ring->count) ? ntc : 0;
+	rx_ring->next_to_clean = ntc;
 
-	if (err)
-		txgbe_err(probe, "request_irq failed, Error %d\n", err);
+	prefetch(TXGBE_RX_DESC(rx_ring, ntc));
 
-	return err;
+	/* update RSC append count if present */
+	if (ring_is_rsc_enabled(rx_ring)) {
+		__le32 rsc_enabled = rx_desc->wb.lower.lo_dword.data &
+				     cpu_to_le32(TXGBE_RXD_RSCCNT_MASK);
+
+		if (unlikely(rsc_enabled)) {
+			u32 rsc_cnt = le32_to_cpu(rsc_enabled);
+
+			rsc_cnt >>= TXGBE_RXD_RSCCNT_SHIFT;
+			TXGBE_CB(skb)->append_cnt += rsc_cnt - 1;
+
+			/* update ntc based on RSC value */
+			ntc = le32_to_cpu(rx_desc->wb.upper.status_error);
+			ntc &= TXGBE_RXD_NEXTP_MASK;
+			ntc >>= TXGBE_RXD_NEXTP_SHIFT;
+		}
+	}
+
+	/* if we are the last buffer then there is nothing else to do */
+	if (likely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)))
+		return false;
+
+	rx_ring->rx_buffer_info[ntc].skb = skb;
+	rx_ring->rx_stats.non_eop_descs++;
+
+	return true;
 }
 
-static void txgbe_free_irq(struct txgbe_adapter *adapter)
+static void txgbe_pull_tail(struct sk_buff *skb)
 {
-	int vector;
+	skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+	unsigned char *va;
+	unsigned int pull_len;
 
-	if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) {
-		free_irq(adapter->pdev->irq, adapter);
-		return;
-	}
+	/* it is valid to use page_address instead of kmap since we are
+	 * working with pages allocated out of the lomem pool per
+	 * alloc_page(GFP_ATOMIC)
+	 */
+	va = skb_frag_address(frag);
 
-	for (vector = 0; vector < adapter->num_q_vectors; vector++) {
-		struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
-		struct msix_entry *entry = &adapter->msix_entries[vector];
+	/* we need the header to contain the greater of either ETH_HLEN or
+	 * 60 bytes if the skb->len is less than 60 for skb_pad.
+	 */
+	pull_len = eth_get_headlen(skb->dev, va, TXGBE_RX_HDR_SIZE);
 
-		/* free only the irqs that were actually requested */
-		if (!q_vector->rx.ring && !q_vector->tx.ring)
-			continue;
+	/* align pull length to size of long to optimize memcpy performance */
+	skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
 
-		/* clear the affinity_mask in the IRQ descriptor */
-		irq_set_affinity_hint(entry->vector, NULL);
-		free_irq(entry->vector, q_vector);
+	/* update all of the pointers */
+	skb_frag_size_sub(frag, pull_len);
+	skb_frag_off_add(frag, pull_len);
+	skb->data_len -= pull_len;
+	skb->tail += pull_len;
+}
+
+/**
+ * txgbe_dma_sync_frag - perform DMA sync for first frag of SKB
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being updated
+ *
+ * This function provides a basic DMA sync up for the first fragment of an
+ * skb.  The reason for doing this is that the first fragment cannot be
+ * unmapped until we have reached the end of packet descriptor for a buffer
+ * chain.
+ */
+static void txgbe_dma_sync_frag(struct txgbe_ring *rx_ring,
+				struct sk_buff *skb)
+{
+	if (ring_uses_build_skb(rx_ring)) {
+		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
+
+		dma_sync_single_range_for_cpu(rx_ring->dev,
+					      TXGBE_CB(skb)->dma,
+					      offset,
+					      skb_headlen(skb),
+					      DMA_FROM_DEVICE);
+	} else {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+
+		dma_sync_single_range_for_cpu(rx_ring->dev,
+					      TXGBE_CB(skb)->dma,
+					      skb_frag_off(frag),
+					      skb_frag_size(frag),
+					      DMA_FROM_DEVICE);
 	}
 
-	free_irq(adapter->msix_entries[vector++].vector, adapter);
+	/* If the page was released, just unmap it. */
+	if (unlikely(TXGBE_CB(skb)->page_released)) {
+		dma_unmap_page_attrs(rx_ring->dev, TXGBE_CB(skb)->dma,
+				     txgbe_rx_pg_size(rx_ring),
+				     DMA_FROM_DEVICE,
+				     TXGBE_RX_DMA_ATTR);
+	}
 }
 
 /**
- * txgbe_irq_disable - Mask off interrupt generation on the NIC
- * @adapter: board private structure
+ * txgbe_cleanup_headers - Correct corrupted or empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being fixed
+ *
+ * Check for corrupted packet headers caused by senders on the local L2
+ * embedded NIC switch not setting up their Tx Descriptors right.  These
+ * should be very rare.
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
  **/
-void txgbe_irq_disable(struct txgbe_adapter *adapter)
+static bool txgbe_cleanup_headers(struct txgbe_ring *rx_ring,
+				  union txgbe_rx_desc *rx_desc,
+				  struct sk_buff *skb)
 {
-	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, 0);
-	txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+	struct net_device *netdev = rx_ring->netdev;
 
-	TXGBE_WRITE_FLUSH(&adapter->hw);
-	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
-		int vector;
+	/* verify that the packet does not have any known errors */
+	if (unlikely(txgbe_test_staterr(rx_desc,
+					TXGBE_RXD_ERR_FRAME_ERR_MASK) &&
+		     !(netdev->features & NETIF_F_RXALL))) {
+		dev_kfree_skb_any(skb);
+		return true;
+	}
 
-		for (vector = 0; vector < adapter->num_q_vectors; vector++)
-			synchronize_irq(adapter->msix_entries[vector].vector);
+	/* place header in linear portion of buffer */
+	if (skb_is_nonlinear(skb) && !skb_headlen(skb))
+		txgbe_pull_tail(skb);
 
-		synchronize_irq(adapter->msix_entries[vector++].vector);
-	} else {
-		synchronize_irq(adapter->pdev->irq);
-	}
+	/* if eth_skb_pad returns an error the skb was freed */
+	if (eth_skb_pad(skb))
+		return true;
+
+	return false;
 }
 
 /**
- * txgbe_configure_msi_and_legacy - Initialize PIN (INTA...) and MSI interrupts
+ * txgbe_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
  *
+ * Synchronizes page for reuse by the adapter
  **/
-static void txgbe_configure_msi_and_legacy(struct txgbe_adapter *adapter)
+static void txgbe_reuse_rx_page(struct txgbe_ring *rx_ring,
+				struct txgbe_rx_buffer *old_buff)
 {
-	struct txgbe_q_vector *q_vector = adapter->q_vector[0];
-	struct txgbe_ring *ring;
-
-	txgbe_write_eitr(q_vector);
+	struct txgbe_rx_buffer *new_buff;
+	u16 nta = rx_ring->next_to_alloc;
 
-	txgbe_for_each_ring(ring, q_vector->rx)
-		txgbe_set_ivar(adapter, 0, ring->reg_idx, 0);
+	new_buff = &rx_ring->rx_buffer_info[nta];
 
-	txgbe_for_each_ring(ring, q_vector->tx)
-		txgbe_set_ivar(adapter, 1, ring->reg_idx, 0);
+	/* update, and store next to alloc */
+	nta++;
+	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
 
-	txgbe_set_ivar(adapter, -1, 0, 1);
+	/* transfer page from old buffer to new buffer */
+	new_buff->page_dma = old_buff->page_dma;
+	new_buff->page = old_buff->page;
+	new_buff->page_offset = old_buff->page_offset;
 
-	txgbe_info(hw, "Legacy interrupt IVAR setup done\n");
+	/* sync the buffer for use by the device */
+	dma_sync_single_range_for_device(rx_ring->dev, new_buff->page_dma,
+					 new_buff->page_offset,
+					 txgbe_rx_bufsz(rx_ring),
+					 DMA_FROM_DEVICE);
 }
 
-static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
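+/* a page cannot be recycled if it came from a remote NUMA node or from
+ * the pfmemalloc emergency reserves
+ */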
+static inline bool txgbe_page_is_reserved(struct page *page)
 {
-	struct txgbe_hw *hw = &adapter->hw;
-	int i;
-
-	for (i = 0; i < hw->mac.num_rar_entries; i++) {
-		if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) {
-			if (adapter->mac_table[i].state &
-					TXGBE_MAC_STATE_IN_USE) {
-				TCALL(hw, mac.ops.set_rar, i,
-				      adapter->mac_table[i].addr,
-				      adapter->mac_table[i].pools,
-				      TXGBE_PSR_MAC_SWC_AD_H_AV);
-			} else {
-				TCALL(hw, mac.ops.clear_rar, i);
-			}
-			adapter->mac_table[i].state &=
-				~(TXGBE_MAC_STATE_MODIFIED);
-		}
-	}
+	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
-/* this function destroys the first RAR entry */
-static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter,
-					 u8 *addr)
+/**
+ * txgbe_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the adapter.
+ **/
+static bool txgbe_add_rx_frag(struct txgbe_ring *rx_ring,
+			      struct txgbe_rx_buffer *rx_buffer,
+			      union txgbe_rx_desc *rx_desc,
+			      struct sk_buff *skb)
 {
-	struct txgbe_hw *hw = &adapter->hw;
+	struct page *page = rx_buffer->page;
+	unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);
+#if (PAGE_SIZE < 8192)
+	unsigned int truesize = txgbe_rx_bufsz(rx_ring);
+#else
+	unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+	unsigned int last_offset = txgbe_rx_pg_size(rx_ring) -
+				   txgbe_rx_bufsz(rx_ring);
+#endif
+
+	if (size <= TXGBE_RX_HDR_SIZE && !skb_is_nonlinear(skb)) {
+		unsigned char *va = page_address(page) + rx_buffer->page_offset;
+
+		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+
+		/* page is not reserved, we can reuse buffer as-is */
+		if (likely(!txgbe_page_is_reserved(page)))
+			return true;
+
+		/* this page cannot be reused so discard it */
+		__free_pages(page, txgbe_rx_pg_order(rx_ring));
+		return false;
+	}
 
-	memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
-	adapter->mac_table[0].pools = 1ULL;
-	adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT |
-				       TXGBE_MAC_STATE_IN_USE);
-	TCALL(hw, mac.ops.set_rar, 0, adapter->mac_table[0].addr,
-	      adapter->mac_table[0].pools,
-	      TXGBE_PSR_MAC_SWC_AD_H_AV);
+	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+			rx_buffer->page_offset, size, truesize);
+
+	/* avoid re-using remote pages */
+	if (unlikely(txgbe_page_is_reserved(page)))
+		return false;
+
+#if (PAGE_SIZE < 8192)
+	/* if we are only owner of page we can reuse it */
+	if (unlikely(page_count(page) != 1))
+		return false;
+
+	/* flip page offset to other buffer */
+	rx_buffer->page_offset ^= truesize;
+#else
+	/* move offset up to the next cache line */
+	rx_buffer->page_offset += truesize;
+
+	if (rx_buffer->page_offset > last_offset)
+		return false;
+#endif
+
+	/* Even if we own the page, we are not allowed to use atomic_set()
+	 * This would break get_page_unless_zero() users.
+	 */
+	page_ref_inc(page);
+
+	return true;
 }
 
-static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
+static struct sk_buff *txgbe_fetch_rx_buffer(struct txgbe_ring *rx_ring,
+					     union txgbe_rx_desc *rx_desc)
 {
-	u32 i;
-	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_rx_buffer *rx_buffer;
+	struct sk_buff *skb;
+	struct page *page;
+
+	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+	page = rx_buffer->page;
+	prefetchw(page);
+
+	skb = rx_buffer->skb;
+
+	if (likely(!skb)) {
+		void *page_addr = page_address(page) +
+				  rx_buffer->page_offset;
+
+		/* prefetch first cache line of first page */
+		prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+		prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+		/* allocate a skb to store the frags */
+		skb = netdev_alloc_skb_ip_align(rx_ring->netdev,
+						TXGBE_RX_HDR_SIZE);
+		if (unlikely(!skb)) {
+			rx_ring->rx_stats.alloc_rx_buff_failed++;
+			return NULL;
+		}
 
-	for (i = 0; i < hw->mac.num_rar_entries; i++) {
-		adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
-		adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
-		memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
-		adapter->mac_table[i].pools = 0;
-	}
-	txgbe_sync_mac_table(adapter);
-}
-
-void txgbe_configure_isb(struct txgbe_adapter *adapter)
-{
-	/* set ISB Address */
-	struct txgbe_hw *hw = &adapter->hw;
-
-	wr32(hw, TXGBE_PX_ISB_ADDR_L,
-	     adapter->isb_dma & DMA_BIT_MASK(32));
-	wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32);
-}
+		/* we will be copying header into skb->data in
+		 * pskb_may_pull so it is in our interest to prefetch
+		 * it now to avoid a possible cache miss
+		 */
+		prefetchw(skb->data);
 
-static void txgbe_configure(struct txgbe_adapter *adapter)
-{
-	txgbe_configure_isb(adapter);
-}
+		/* Delay unmapping of the first packet. It carries the
+		 * header information, HW may still access the header
+		 * after the writeback.  Only unmap it when EOP is
+		 * reached
+		 */
+		if (likely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)))
+			goto dma_sync;
 
-static bool txgbe_is_sfp(struct txgbe_hw *hw)
-{
-	switch (TCALL(hw, mac.ops.get_media_type)) {
-	case txgbe_media_type_fiber:
-		return true;
-	default:
-		return false;
+		TXGBE_CB(skb)->dma = rx_buffer->page_dma;
+	} else {
+		if (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP))
+			txgbe_dma_sync_frag(rx_ring, skb);
+
+dma_sync:
+		/* we are reusing so sync this buffer for CPU use */
+		dma_sync_single_range_for_cpu(rx_ring->dev,
+					      rx_buffer->page_dma,
+					      rx_buffer->page_offset,
+					      txgbe_rx_bufsz(rx_ring),
+					      DMA_FROM_DEVICE);
+
+		rx_buffer->skb = NULL;
 	}
-}
 
-static bool txgbe_is_backplane(struct txgbe_hw *hw)
-{
-	switch (TCALL(hw, mac.ops.get_media_type)) {
-	case txgbe_media_type_backplane:
-		return true;
-	default:
-		return false;
+	/* pull page into skb */
+	if (txgbe_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+		/* hand second half of page back to the ring */
+		txgbe_reuse_rx_page(rx_ring, rx_buffer);
+	} else if (TXGBE_CB(skb)->dma == rx_buffer->page_dma) {
+		/* the page has been released from the ring */
+		TXGBE_CB(skb)->page_released = true;
+	} else {
+		/* we are not reusing the buffer so unmap it */
+		dma_unmap_page(rx_ring->dev, rx_buffer->page_dma,
+			       txgbe_rx_pg_size(rx_ring),
+			       DMA_FROM_DEVICE);
 	}
+
+	/* clear contents of buffer_info */
+	rx_buffer->page = NULL;
+
+	return skb;
 }
 
 /**
- * txgbe_sfp_link_config - set up SFP+ link
- * @adapter: pointer to private adapter struct
+ * txgbe_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @q_vector: structure containing interrupt and ring information
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing.  The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed.
  **/
-static void txgbe_sfp_link_config(struct txgbe_adapter *adapter)
+static int txgbe_clean_rx_irq(struct txgbe_q_vector *q_vector,
+			      struct txgbe_ring *rx_ring,
+			      int budget)
 {
-	/* We are assuming the worst case scenerio here, and that
-	 * is that an SFP was inserted/removed after the reset
-	 * but before SFP detection was enabled.  As such the best
-	 * solution is to just start searching as soon as we start
-	 */
+	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+	u16 cleaned_count = txgbe_desc_unused(rx_ring);
 
-	adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
-	adapter->sfp_poll_time = 0;
-}
+	do {
+		union txgbe_rx_desc *rx_desc;
+		struct sk_buff *skb;
 
-static void txgbe_setup_gpie(struct txgbe_adapter *adapter)
-{
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 gpie = 0;
+		/* return some buffers to hardware, one at a time is too slow */
+		if (cleaned_count >= TXGBE_RX_BUFFER_WRITE) {
+			txgbe_alloc_rx_buffers(rx_ring, cleaned_count);
+			cleaned_count = 0;
+		}
 
-	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
-		gpie = TXGBE_PX_GPIE_MODEL;
+		rx_desc = TXGBE_RX_DESC(rx_ring, rx_ring->next_to_clean);
 
-	wr32(hw, TXGBE_PX_GPIE, gpie);
-}
+		if (!txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_DD))
+			break;
 
-static void txgbe_up_complete(struct txgbe_adapter *adapter)
-{
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 links_reg;
+		/* This memory barrier is needed to keep us from reading
+		 * any other fields out of the rx_desc until we know the
+		 * descriptor has been written back
+		 */
+		dma_rmb();
 
-	txgbe_get_hw_control(adapter);
-	txgbe_setup_gpie(adapter);
+		/* retrieve a buffer from the ring */
+		skb = txgbe_fetch_rx_buffer(rx_ring, rx_desc);
 
-	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
-		txgbe_configure_msix(adapter);
-	else
-		txgbe_configure_msi_and_legacy(adapter);
+		/* exit if we failed to retrieve a buffer */
+		if (!skb)
+			break;
 
-	/* enable the optics for SFP+ fiber */
-	TCALL(hw, mac.ops.enable_tx_laser);
+		cleaned_count++;
 
-	smp_mb__before_atomic();
-	clear_bit(__TXGBE_DOWN, &adapter->state);
+		/* place incomplete frames back on ring for completion */
+		if (txgbe_is_non_eop(rx_ring, rx_desc, skb))
+			continue;
 
-	if (txgbe_is_sfp(hw)) {
-		txgbe_sfp_link_config(adapter);
-	} else if (txgbe_is_backplane(hw)) {
-		adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
-		txgbe_service_event_schedule(adapter);
-	}
+		/* verify the packet layout is correct */
+		if (txgbe_cleanup_headers(rx_ring, rx_desc, skb))
+			continue;
 
-	links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
-	if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
-		if (links_reg & TXGBE_CFG_PORT_ST_LINK_10G) {
-			wr32(hw, TXGBE_MAC_TX_CFG,
-			     (rd32(hw, TXGBE_MAC_TX_CFG) &
-			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
-			     TXGBE_MAC_TX_CFG_SPEED_10G);
-		} else if (links_reg & (TXGBE_CFG_PORT_ST_LINK_1G | TXGBE_CFG_PORT_ST_LINK_100M)) {
-			wr32(hw, TXGBE_MAC_TX_CFG,
-			     (rd32(hw, TXGBE_MAC_TX_CFG) &
-			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
-			     TXGBE_MAC_TX_CFG_SPEED_1G);
-		}
-	}
+		/* probably a little skewed due to removing CRC */
+		total_rx_bytes += skb->len;
 
-	/* clear any pending interrupts, may auto mask */
-	rd32(hw, TXGBE_PX_IC(0));
-	rd32(hw, TXGBE_PX_IC(1));
-	rd32(hw, TXGBE_PX_MISC_IC);
-	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI)
-		wr32(hw, TXGBE_GPIO_EOI, TXGBE_GPIO_EOI_6);
-	txgbe_irq_enable(adapter, true, true);
+		/* populate checksum, timestamp, VLAN, and protocol */
+		txgbe_process_skb_fields(rx_ring, rx_desc, skb);
 
-	/* bring the link up in the watchdog, this could race with our first
-	 * link up interrupt but shouldn't be a problem
-	 */
-	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
-	adapter->link_check_timeout = jiffies;
+		txgbe_rx_skb(q_vector, skb);
 
-	mod_timer(&adapter->service_timer, jiffies);
+		/* update budget accounting */
+		total_rx_packets++;
+	} while (likely(total_rx_packets < budget));
 
-	if (hw->bus.lan_id == 0) {
-		wr32m(hw, TXGBE_MIS_PRB_CTL,
-		      TXGBE_MIS_PRB_CTL_LAN0_UP, TXGBE_MIS_PRB_CTL_LAN0_UP);
-	} else if (hw->bus.lan_id == 1) {
-		wr32m(hw, TXGBE_MIS_PRB_CTL,
-		      TXGBE_MIS_PRB_CTL_LAN1_UP, TXGBE_MIS_PRB_CTL_LAN1_UP);
-	} else {
-		txgbe_err(probe, "%s:invalid bus lan id %d\n",
-			  __func__, hw->bus.lan_id);
-	}
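+	/* u64_stats sync protects the 64-bit counters on 32-bit hosts */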
+	u64_stats_update_begin(&rx_ring->syncp);
+	rx_ring->stats.packets += total_rx_packets;
+	rx_ring->stats.bytes += total_rx_bytes;
+	u64_stats_update_end(&rx_ring->syncp);
+	q_vector->rx.total_packets += total_rx_packets;
+	q_vector->rx.total_bytes += total_rx_bytes;
 
-	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
-	wr32m(hw, TXGBE_CFG_PORT_CTL,
-	      TXGBE_CFG_PORT_CTL_PFRSTD, TXGBE_CFG_PORT_CTL_PFRSTD);
+	return total_rx_packets;
 }
 
-void txgbe_reinit_locked(struct txgbe_adapter *adapter)
+/**
+ * txgbe_configure_msix - Configure MSI-X hardware
+ * @adapter: board private structure
+ *
+ * txgbe_configure_msix sets up the hardware to properly generate MSI-X
+ * interrupts.
+ **/
+static void txgbe_configure_msix(struct txgbe_adapter *adapter)
 {
-	/* put off any impending NetWatchDogTimeout */
-	netif_trans_update(adapter->netdev);
+	u16 v_idx;
 
-	while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
-		usleep_range(1000, 2000);
-	txgbe_down(adapter);
-	txgbe_up(adapter);
-	clear_bit(__TXGBE_RESETTING, &adapter->state);
-}
+	/* Populate MSIX to EITR Select */
+	wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0);
 
-void txgbe_up(struct txgbe_adapter *adapter)
-{
-	/* hardware has been reset, we need to reload some things */
-	txgbe_configure(adapter);
+	/* Populate the IVAR table and set the ITR values to the
+	 * corresponding register.
+	 */
+	for (v_idx = 0; v_idx < adapter->num_q_vectors; v_idx++) {
+		struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx];
+		struct txgbe_ring *ring;
 
-	txgbe_up_complete(adapter);
+		txgbe_for_each_ring(ring, q_vector->rx)
+			txgbe_set_ivar(adapter, 0, ring->reg_idx, v_idx);
+
+		txgbe_for_each_ring(ring, q_vector->tx)
+			txgbe_set_ivar(adapter, 1, ring->reg_idx, v_idx);
+
+		txgbe_write_eitr(q_vector);
+	}
+
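+	/* map the misc (non-queue) interrupt causes to the last vector */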
+	txgbe_set_ivar(adapter, -1, 0, v_idx);
+
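+	/* give the misc vector a default interrupt throttle rate */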
+	wr32(&adapter->hw, TXGBE_PX_ITR(v_idx), 1950);
 }
 
-void txgbe_reset(struct txgbe_adapter *adapter)
+enum latency_range {
+	lowest_latency = 0,
+	low_latency = 1,
+	bulk_latency = 2,
+	latency_invalid = 255
+};
+
+/**
+ * txgbe_update_itr - update the dynamic ITR value based on statistics
+ * @q_vector: structure containing interrupt and ring information
+ * @ring_container: structure containing ring performance data
+ *
+ * Stores a new ITR value based on packets and byte
+ * counts during the last interrupt.  The advantage of per interrupt
+ * computation is faster updates and more accurate ITR for the current
+ * traffic pattern.  Constants in this function were computed
+ * based on theoretical maximum wire speed and thresholds were set based
+ * on testing data as well as attempting to minimize response time
+ * while increasing bulk throughput.
+ **/
+static void txgbe_update_itr(struct txgbe_q_vector *q_vector,
+			     struct txgbe_ring_container *ring_container)
 {
-	struct txgbe_hw *hw = &adapter->hw;
-	struct net_device *netdev = adapter->netdev;
-	int err;
-	u8 old_addr[ETH_ALEN];
+	int bytes = ring_container->total_bytes;
+	int packets = ring_container->total_packets;
+	u32 timepassed_us;
+	u64 bytes_perint;
+	u8 itr_setting = ring_container->itr;
 
-	if (TXGBE_REMOVED(hw->hw_addr))
+	if (packets == 0)
 		return;
-	/* lock SFP init bit to prevent race conditions with the watchdog */
-	while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
-		usleep_range(1000, 2000);
 
-	/* clear all SFP and link config related flags while holding SFP_INIT */
-	adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
-	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+	/* simple throttlerate management
+	 *   0-10MB/s   lowest (100000 ints/s)
+	 *  10-20MB/s   low    (20000 ints/s)
+	 *  20-1249MB/s bulk   (12000 ints/s)
+	 */
+	/* what was last interrupt timeslice? */
+	timepassed_us = q_vector->itr >> 2;
+	if (timepassed_us == 0)
+		return;
+	bytes_perint = bytes / timepassed_us; /* bytes/usec */
 
-	err = TCALL(hw, mac.ops.init_hw);
-	switch (err) {
-	case 0:
-	case TXGBE_ERR_SFP_NOT_PRESENT:
-	case TXGBE_ERR_SFP_NOT_SUPPORTED:
+	switch (itr_setting) {
+	case lowest_latency:
+		if (bytes_perint > 10)
+			itr_setting = low_latency;
 		break;
-	case TXGBE_ERR_MASTER_REQUESTS_PENDING:
-		txgbe_dev_err("master disable timed out\n");
+	case low_latency:
+		if (bytes_perint > 20)
+			itr_setting = bulk_latency;
+		else if (bytes_perint <= 10)
+			itr_setting = lowest_latency;
+		break;
+	case bulk_latency:
+		if (bytes_perint <= 20)
+			itr_setting = low_latency;
 		break;
-	default:
-		txgbe_dev_err("Hardware Error: %d\n", err);
 	}
 
-	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
-	/* do not flush user set addresses */
-	memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
-	txgbe_flush_sw_mac_table(adapter);
-	txgbe_mac_set_default_filter(adapter, old_addr);
+	/* clear work counters since we have the values we need */
+	ring_container->total_bytes = 0;
+	ring_container->total_packets = 0;
 
-	/* update SAN MAC vmdq pool selection */
-	TCALL(hw, mac.ops.set_vmdq_san_mac, 0);
+	/* write updated itr to ring container */
+	ring_container->itr = itr_setting;
 }
 
-void txgbe_disable_device(struct txgbe_adapter *adapter)
+/**
+ * txgbe_write_eitr - write EITR register in hardware specific way
+ * @q_vector: structure containing interrupt and ring information
+ *
+ * This function is made to be called by ethtool and by the driver
+ * when it needs to update EITR registers at runtime.  Hardware
+ * specific quirks/differences are taken care of here.
+ */
+void txgbe_write_eitr(struct txgbe_q_vector *q_vector)
 {
-	struct net_device *netdev = adapter->netdev;
+	struct txgbe_adapter *adapter = q_vector->adapter;
 	struct txgbe_hw *hw = &adapter->hw;
-	u32 i;
+	int v_idx = q_vector->v_idx;
+	u32 itr_reg = q_vector->itr & TXGBE_MAX_EITR;
 
-	/* signal that we are down to the interrupt handler */
-	if (test_and_set_bit(__TXGBE_DOWN, &adapter->state))
-		return; /* do nothing if already down */
+	itr_reg |= TXGBE_PX_ITR_CNT_WDIS;
 
-	txgbe_disable_pcie_master(hw);
-	/* disable receives */
-	TCALL(hw, mac.ops.disable_rx);
+	wr32(hw, TXGBE_PX_ITR(v_idx), itr_reg);
+}
 
-	/* call carrier off first to avoid false dev_watchdog timeouts */
-	netif_carrier_off(netdev);
-	netif_tx_disable(netdev);
+static void txgbe_set_itr(struct txgbe_q_vector *q_vector)
+{
+	u16 new_itr = q_vector->itr;
+	u8 current_itr;
 
-	txgbe_irq_disable(adapter);
+	txgbe_update_itr(q_vector, &q_vector->tx);
+	txgbe_update_itr(q_vector, &q_vector->rx);
 
-	adapter->flags2 &= ~(TXGBE_FLAG2_PF_RESET_REQUESTED |
-			     TXGBE_FLAG2_GLOBAL_RESET_REQUESTED);
-	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+	current_itr = max(q_vector->rx.itr, q_vector->tx.itr);
 
-	del_timer_sync(&adapter->service_timer);
+	switch (current_itr) {
+	/* counts and packets in update_itr are dependent on these numbers */
+	case lowest_latency:
+		new_itr = TXGBE_100K_ITR;
+		break;
+	case low_latency:
+		new_itr = TXGBE_20K_ITR;
+		break;
+	case bulk_latency:
+		new_itr = TXGBE_12K_ITR;
+		break;
+	default:
+		break;
+	}
 
-	if (hw->bus.lan_id == 0)
-		wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN0_UP, 0);
-	else if (hw->bus.lan_id == 1)
-		wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN1_UP, 0);
-	else
-		txgbe_dev_err("%s: invalid bus lan id %d\n", __func__,
-			      hw->bus.lan_id);
+	if (new_itr != q_vector->itr) {
+		/* smooth towards new_itr: weighted harmonic mean, 90% old value */
+		new_itr = (10 * new_itr * q_vector->itr) /
+			  ((9 * new_itr) + q_vector->itr);
 
-	if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
-	      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
-		/* disable mac transmiter */
-		wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, 0);
-	}
-	/* disable transmits in the hardware now that interrupts are off */
-	for (i = 0; i < adapter->num_tx_queues; i++) {
-		u8 reg_idx = adapter->tx_ring[i]->reg_idx;
+		/* save the algorithm value here */
+		q_vector->itr = new_itr;
 
-		wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH);
+		txgbe_write_eitr(q_vector);
 	}
-
-	/* Disable the Tx DMA engine */
-	wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
 }
 
-void txgbe_down(struct txgbe_adapter *adapter)
+/**
+ * txgbe_check_overtemp_subtask - check for over temperature
+ * @adapter: pointer to adapter
+ **/
+static void txgbe_check_overtemp_subtask(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
+	u32 eicr = adapter->interrupt_event;
+	s32 temp_state;
 
-	txgbe_disable_device(adapter);
-	txgbe_reset(adapter);
+	if (test_bit(__TXGBE_DOWN, &adapter->state))
+		return;
+	if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_EVENT))
+		return;
 
-	if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)))
-		/* power down the optics for SFP+ fiber */
-		TCALL(&adapter->hw, mac.ops.disable_tx_laser);
+	adapter->flags2 &= ~TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+
+	/* Since the warning interrupt concerns both ports, we don't have
+	 * to check whether it was meant for our port; and because the
+	 * interrupt may have been missed, always query the sensor state.
+	 */
+	if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
+		return;
+
+	temp_state = TCALL(hw, phy.ops.check_overtemp);
+	if (!temp_state || temp_state == TXGBE_NOT_IMPLEMENTED)
+		return;
+
+	if (temp_state == TXGBE_ERR_UNDERTEMP &&
+	    test_bit(__TXGBE_HANGING, &adapter->state)) {
+		txgbe_crit(drv, "%s\n", txgbe_underheat_msg);
+		wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
+		      TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
+		netif_carrier_on(adapter->netdev);
+		clear_bit(__TXGBE_HANGING, &adapter->state);
+	} else if (temp_state == TXGBE_ERR_OVERTEMP &&
+		   !test_and_set_bit(__TXGBE_HANGING, &adapter->state)) {
+		txgbe_crit(drv, "%s\n", txgbe_overheat_msg);
+		netif_carrier_off(adapter->netdev);
+		wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
+		      TXGBE_RDB_PB_CTL_RXEN, 0);
+	}
+
+	adapter->interrupt_event = 0;
 }
 
-/**
- *  txgbe_init_shared_code - Initialize the shared code
- *  @hw: pointer to hardware structure
- *
- *  This will assign function pointers and assign the MAC type and PHY code.
- **/
-s32 txgbe_init_shared_code(struct txgbe_hw *hw)
+static void txgbe_check_overtemp_event(struct txgbe_adapter *adapter, u32 eicr)
 {
-	s32 status;
+	if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
+		return;
 
-	status = txgbe_init_ops(hw);
-	return status;
+	if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+		adapter->interrupt_event = eicr;
+		adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+		txgbe_service_event_schedule(adapter);
+	}
 }
 
-/**
- * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter)
- * @adapter: board private structure to initialize
- *
- * txgbe_sw_init initializes the Adapter private data structure.
- * Fields are initialized based on PCI device information and
- * OS network device settings (MTU size).
- **/
-static int txgbe_sw_init(struct txgbe_adapter *adapter)
+static void txgbe_check_sfp_event(struct txgbe_adapter *adapter, u32 eicr)
 {
 	struct txgbe_hw *hw = &adapter->hw;
-	struct pci_dev *pdev = adapter->pdev;
-	int err;
-
-	/* PCI config space info */
-	hw->vendor_id = pdev->vendor;
-	hw->device_id = pdev->device;
-	pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
-	if (hw->revision_id == TXGBE_FAILED_READ_CFG_BYTE &&
-	    txgbe_check_cfg_remove(hw, pdev)) {
-		txgbe_err(probe, "read of revision id failed\n");
-		err = -ENODEV;
-		goto out;
-	}
-	hw->subsystem_vendor_id = pdev->subsystem_vendor;
-	hw->subsystem_device_id = pdev->subsystem_device;
+	u32 eicr_mask = TXGBE_PX_MISC_IC_GPIO;
+	u32 reg;
 
-	pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &hw->subsystem_id);
-	if (hw->subsystem_id == TXGBE_FAILED_READ_CFG_WORD) {
-		txgbe_err(probe, "read of subsystem id failed\n");
-		err = -ENODEV;
-		goto out;
-	}
+	if (eicr & eicr_mask) {
+		if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+			wr32(hw, TXGBE_GPIO_INTMASK, 0xFF);
+			reg = rd32(hw, TXGBE_GPIO_INTSTATUS);
+			if (reg & TXGBE_GPIO_INTSTATUS_2) {
+				adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+				wr32(hw, TXGBE_GPIO_EOI,
+				     TXGBE_GPIO_EOI_2);
+				adapter->sfp_poll_time = 0;
+				txgbe_service_event_schedule(adapter);
+			}
+			if (reg & TXGBE_GPIO_INTSTATUS_3) {
+				adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+				wr32(hw, TXGBE_GPIO_EOI,
+				     TXGBE_GPIO_EOI_3);
+				txgbe_service_event_schedule(adapter);
+			}
 
-	err = txgbe_init_shared_code(hw);
-	if (err) {
-		txgbe_err(probe, "init_shared_code failed: %d\n", err);
-		goto out;
-	}
-	adapter->mac_table = kzalloc(sizeof(*adapter->mac_table) *
-				     hw->mac.num_rar_entries,
-				     GFP_ATOMIC);
-	if (!adapter->mac_table) {
-		err = TXGBE_ERR_OUT_OF_MEM;
-		txgbe_err(probe, "mac_table allocation failed: %d\n", err);
-		goto out;
+			if (reg & TXGBE_GPIO_INTSTATUS_6) {
+				wr32(hw, TXGBE_GPIO_EOI,
+				     TXGBE_GPIO_EOI_6);
+				adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+				txgbe_service_event_schedule(adapter);
+			}
+			wr32(hw, TXGBE_GPIO_INTMASK, 0x0);
+		}
 	}
-
-	/* enable itr by default in dynamic mode */
-	adapter->rx_itr_setting = 1;
-	adapter->tx_itr_setting = 1;
-
-	adapter->atr_sample_rate = 20;
-
-	adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE;
-
-	set_bit(__TXGBE_DOWN, &adapter->state);
-out:
-	return err;
 }
 
-/**
- * txgbe_setup_isb_resources - allocate interrupt status resources
- * @adapter: board private structure
- *
- * Return 0 on success, negative on failure
- **/
-static int txgbe_setup_isb_resources(struct txgbe_adapter *adapter)
+static void txgbe_check_lsc(struct txgbe_adapter *adapter)
 {
-	struct device *dev = pci_dev_to_dev(adapter->pdev);
-
-	adapter->isb_mem = dma_alloc_coherent(dev,
-					      sizeof(u32) * TXGBE_ISB_MAX,
-					      &adapter->isb_dma,
-					      GFP_KERNEL);
-	if (!adapter->isb_mem)
-		return -ENOMEM;
-	memset(adapter->isb_mem, 0, sizeof(u32) * TXGBE_ISB_MAX);
-	return 0;
+	adapter->lsc_int++;
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+	adapter->link_check_timeout = jiffies;
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_service_event_schedule(adapter);
 }
 
 /**
- * txgbe_free_isb_resources - allocate all queues Rx resources
+ * txgbe_irq_enable - Enable default interrupt generation settings
  * @adapter: board private structure
- *
- * Return 0 on success, negative on failure
  **/
-static void txgbe_free_isb_resources(struct txgbe_adapter *adapter)
+void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush)
 {
-	struct device *dev = pci_dev_to_dev(adapter->pdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u8 device_type = hw->subsystem_id & 0xF0;
+	u32 mask = 0;
 
-	dma_free_coherent(dev, sizeof(u32) * TXGBE_ISB_MAX,
-			  adapter->isb_mem, adapter->isb_dma);
-	adapter->isb_mem = NULL;
-}
+	/* enable gpio interrupt */
+	if (device_type != TXGBE_ID_MAC_XAUI &&
+	    device_type != TXGBE_ID_MAC_SGMII) {
+		mask |= TXGBE_GPIO_INTEN_2;
+		mask |= TXGBE_GPIO_INTEN_3;
+		mask |= TXGBE_GPIO_INTEN_6;
+	}
+	wr32(&adapter->hw, TXGBE_GPIO_INTEN, mask);
 
-/**
- * txgbe_open - Called when a network interface is made active
- * @netdev: network interface device structure
- *
- * Returns 0 on success, negative value on failure
- *
- * The open entry point is called when a network interface is made
- * active by the system (IFF_UP).  At this point all resources needed
- * for transmit and receive operations are allocated, the interrupt
- * handler is registered with the OS, the watchdog timer is started,
- * and the stack is notified that the interface is ready.
- **/
-int txgbe_open(struct net_device *netdev)
-{
-	struct txgbe_adapter *adapter = netdev_priv(netdev);
-	int err;
+	if (device_type != TXGBE_ID_MAC_XAUI &&
+	    device_type != TXGBE_ID_MAC_SGMII) {
+		mask = TXGBE_GPIO_INTTYPE_LEVEL_2 | TXGBE_GPIO_INTTYPE_LEVEL_3 |
+			TXGBE_GPIO_INTTYPE_LEVEL_6;
+	}
+	wr32(&adapter->hw, TXGBE_GPIO_INTTYPE_LEVEL, mask);
 
-	netif_carrier_off(netdev);
+	/* enable misc interrupt */
+	mask = TXGBE_PX_MISC_IEN_MASK;
 
-	err = txgbe_setup_isb_resources(adapter);
-	if (err)
-		goto err_req_isb;
+	mask |= TXGBE_PX_MISC_IEN_OVER_HEAT;
 
-	txgbe_configure(adapter);
+	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask);
 
-	err = txgbe_request_irq(adapter);
-	if (err)
-		goto err_req_irq;
+	/* unmask interrupt */
+	txgbe_intr_enable(&adapter->hw, TXGBE_INTR_MISC(adapter));
+	if (queues)
+		txgbe_intr_enable(&adapter->hw, TXGBE_INTR_QALL(adapter));
 
-	/* Notify the stack of the actual queue counts. */
-	err = netif_set_real_num_tx_queues(netdev, adapter->num_tx_queues);
-	if (err)
-		goto err_set_queues;
+	/* flush configuration */
+	if (flush)
+		TXGBE_WRITE_FLUSH(&adapter->hw);
+}
 
-	err = netif_set_real_num_rx_queues(netdev, adapter->num_rx_queues);
-	if (err)
-		goto err_set_queues;
+static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
+{
+	struct txgbe_adapter *adapter = data;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 eicr;
+	u32 ecc;
+
+	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+
+	if (eicr & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
+		txgbe_check_lsc(adapter);
+
+	if (eicr & TXGBE_PX_MISC_IC_INT_ERR) {
+		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
+		ecc = rd32(hw, TXGBE_MIS_ST);
+		if (((ecc & TXGBE_MIS_ST_LAN0_ECC) && hw->bus.lan_id == 0) ||
+		    ((ecc & TXGBE_MIS_ST_LAN1_ECC) && hw->bus.lan_id == 1))
+			adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+
+		txgbe_service_event_schedule(adapter);
+	}
+	if (eicr & TXGBE_PX_MISC_IC_DEV_RST) {
+		adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
+		txgbe_service_event_schedule(adapter);
+	}
+	if ((eicr & TXGBE_PX_MISC_IC_STALL) ||
+	    (eicr & TXGBE_PX_MISC_IC_ETH_EVENT)) {
+		adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+		txgbe_service_event_schedule(adapter);
+	}
+
+	txgbe_check_sfp_event(adapter, eicr);
+	txgbe_check_overtemp_event(adapter, eicr);
+
+	/* re-enable the original interrupt state, no lsc, no queues */
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_irq_enable(adapter, false, false);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t txgbe_msix_clean_rings(int __always_unused irq, void *data)
+{
+	struct txgbe_q_vector *q_vector = data;
+
+	/* EIAM disabled interrupts (on this vector) for us */
+
+	if (q_vector->rx.ring || q_vector->tx.ring)
+		napi_schedule_irqoff(&q_vector->napi);
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * txgbe_poll - NAPI polling RX/TX cleanup routine
+ * @napi: napi struct with our devices info in it
+ * @budget: amount of work driver is allowed to do this pass, in packets
+ *
+ * This function will clean all queues associated with a q_vector.
+ **/
+int txgbe_poll(struct napi_struct *napi, int budget)
+{
+	struct txgbe_q_vector *q_vector =
+			       container_of(napi, struct txgbe_q_vector, napi);
+	struct txgbe_adapter *adapter = q_vector->adapter;
+	struct txgbe_ring *ring;
+	int per_ring_budget;
+	bool clean_complete = true;
+
+	txgbe_for_each_ring(ring, q_vector->tx) {
+		if (!txgbe_clean_tx_irq(q_vector, ring))
+			clean_complete = false;
+	}
+
+	/* Exit if we are called by netpoll */
+	if (budget <= 0)
+		return budget;
+
+	/* attempt to distribute budget to each queue fairly, but don't allow
+	 * the budget to go below 1 because we'll exit polling
+	 */
+	if (q_vector->rx.count > 1)
+		per_ring_budget = max(budget / q_vector->rx.count, 1);
+	else
+		per_ring_budget = budget;
+
+	txgbe_for_each_ring(ring, q_vector->rx) {
+		int cleaned = txgbe_clean_rx_irq(q_vector, ring,
+						 per_ring_budget);
+
+		if (cleaned >= per_ring_budget)
+			clean_complete = false;
+	}
+
+	/* If all work not completed, return budget and keep polling */
+	if (!clean_complete)
+		return budget;
+
+	/* all work done, exit the polling mode */
+	napi_complete(napi);
+	if (adapter->rx_itr_setting == 1)
+		txgbe_set_itr(q_vector);
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_intr_enable(&adapter->hw,
+				  TXGBE_INTR_Q(q_vector->v_idx));
+
+	return 0;
+}
+
+/**
+ * txgbe_request_msix_irqs - Initialize MSI-X interrupts
+ * @adapter: board private structure
+ *
+ * txgbe_request_msix_irqs allocates MSI-X vectors and requests
+ * interrupts from the kernel.
+ **/
+static int txgbe_request_msix_irqs(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	int vector, err;
+	int ri = 0, ti = 0;
+
+	for (vector = 0; vector < adapter->num_q_vectors; vector++) {
+		struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
+		struct msix_entry *entry = &adapter->msix_entries[vector];
+
+		if (q_vector->tx.ring && q_vector->rx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-TxRx-%d", netdev->name, ri++);
+			ti++;
+		} else if (q_vector->rx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-rx-%d", netdev->name, ri++);
+		} else if (q_vector->tx.ring) {
+			snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+				 "%s-tx-%d", netdev->name, ti++);
+		} else {
+			/* skip this unused q_vector */
+			continue;
+		}
+		err = request_irq(entry->vector, &txgbe_msix_clean_rings, 0,
+				  q_vector->name, q_vector);
+		if (err) {
+			txgbe_err(probe, "request_irq failed for MSIX interrupt '%s' Error: %d\n",
+				  q_vector->name, err);
+			goto free_queue_irqs;
+		}
+	}
+
+	err = request_irq(adapter->msix_entries[vector].vector,
+			  txgbe_msix_other, 0, netdev->name, adapter);
+	if (err) {
+		txgbe_err(probe, "request_irq for msix_other failed: %d\n", err);
+		goto free_queue_irqs;
+	}
+
+	return 0;
+
+free_queue_irqs:
+	while (vector) {
+		vector--;
+		irq_set_affinity_hint(adapter->msix_entries[vector].vector,
+				      NULL);
+		free_irq(adapter->msix_entries[vector].vector,
+			 adapter->q_vector[vector]);
+	}
+	adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+	pci_disable_msix(adapter->pdev);
+	kfree(adapter->msix_entries);
+	adapter->msix_entries = NULL;
+	return err;
+}
+
+/**
+ * txgbe_intr - legacy mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a network interface device structure
+ **/
+static irqreturn_t txgbe_intr(int __always_unused irq, void *data)
+{
+	struct txgbe_adapter *adapter = data;
+	struct txgbe_q_vector *q_vector = adapter->q_vector[0];
+	u32 eicr;
+	u32 eicr_misc;
+
+	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
+	if (!eicr) {
+		/* shared interrupt alert!
+		 * re-enable the interrupts that were masked before the
+		 * ISB read and report the interrupt as not ours.
+		 */
+		if (!test_bit(__TXGBE_DOWN, &adapter->state))
+			txgbe_irq_enable(adapter, true, true);
+		return IRQ_NONE;        /* Not our interrupt */
+	}
+	adapter->isb_mem[TXGBE_ISB_VEC0] = 0;
+	if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED))
+		wr32(&adapter->hw, TXGBE_PX_INTA, 1);
+
+	eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+	if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
+		txgbe_check_lsc(adapter);
+
+	if (eicr_misc & TXGBE_PX_MISC_IC_INT_ERR) {
+		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
+		adapter->flags2 |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+		txgbe_service_event_schedule(adapter);
+	}
+
+	if (eicr_misc & TXGBE_PX_MISC_IC_DEV_RST) {
+		adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
+		txgbe_service_event_schedule(adapter);
+	}
+	txgbe_check_sfp_event(adapter, eicr_misc);
+	txgbe_check_overtemp_event(adapter, eicr_misc);
+
+	adapter->isb_mem[TXGBE_ISB_MISC] = 0;
+	/* would disable interrupts here but it is auto disabled */
+	napi_schedule_irqoff(&q_vector->napi);
+
+	/* re-enable link(maybe) and non-queue interrupts, no flush.
+	 * txgbe_poll will re-enable the queue interrupts
+	 */
+	if (!test_bit(__TXGBE_DOWN, &adapter->state))
+		txgbe_irq_enable(adapter, false, false);
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * txgbe_request_irq - initialize interrupts
+ * @adapter: board private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+static int txgbe_request_irq(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	int err;
+
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+		err = txgbe_request_msix_irqs(adapter);
+	else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED)
+		err = request_irq(adapter->pdev->irq, &txgbe_intr, 0,
+				  netdev->name, adapter);
+	else
+		err = request_irq(adapter->pdev->irq, &txgbe_intr, IRQF_SHARED,
+				  netdev->name, adapter);
+
+	if (err)
+		txgbe_err(probe, "request_irq failed, Error %d\n", err);
+
+	return err;
+}
+
+static void txgbe_free_irq(struct txgbe_adapter *adapter)
+{
+	int vector;
+
+	if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) {
+		free_irq(adapter->pdev->irq, adapter);
+		return;
+	}
+
+	for (vector = 0; vector < adapter->num_q_vectors; vector++) {
+		struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
+		struct msix_entry *entry = &adapter->msix_entries[vector];
+
+		/* free only the irqs that were actually requested */
+		if (!q_vector->rx.ring && !q_vector->tx.ring)
+			continue;
+
+		/* clear the affinity_mask in the IRQ descriptor */
+		irq_set_affinity_hint(entry->vector, NULL);
+		free_irq(entry->vector, q_vector);
+	}
+
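+	/* the final vector carries the misc/other interrupt */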
+	free_irq(adapter->msix_entries[vector++].vector, adapter);
+}
+
+/**
+ * txgbe_irq_disable - Mask off interrupt generation on the NIC
+ * @adapter: board private structure
+ **/
+void txgbe_irq_disable(struct txgbe_adapter *adapter)
+{
+	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, 0);
+	txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+
+	TXGBE_WRITE_FLUSH(&adapter->hw);
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+		int vector;
+
+		for (vector = 0; vector < adapter->num_q_vectors; vector++)
+			synchronize_irq(adapter->msix_entries[vector].vector);
+
+		synchronize_irq(adapter->msix_entries[vector++].vector);
+	} else {
+		synchronize_irq(adapter->pdev->irq);
+	}
+}
+
+/**
+ * txgbe_configure_msi_and_legacy - Initialize PIN (INTA...) and MSI interrupts
+ * @adapter: board private structure
+ **/
+static void txgbe_configure_msi_and_legacy(struct txgbe_adapter *adapter)
+{
+	struct txgbe_q_vector *q_vector = adapter->q_vector[0];
+	struct txgbe_ring *ring;
+
+	txgbe_write_eitr(q_vector);
+
+	txgbe_for_each_ring(ring, q_vector->rx)
+		txgbe_set_ivar(adapter, 0, ring->reg_idx, 0);
+
+	txgbe_for_each_ring(ring, q_vector->tx)
+		txgbe_set_ivar(adapter, 1, ring->reg_idx, 0);
+
+	txgbe_set_ivar(adapter, -1, 0, 1);
+
+	txgbe_info(hw, "Legacy interrupt IVAR setup done\n");
+}
+
+/**
+ * txgbe_configure_tx_ring - Configure Tx ring after Reset
+ * @adapter: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Tx descriptor ring after a reset.
+ **/
+void txgbe_configure_tx_ring(struct txgbe_adapter *adapter,
+			     struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u64 tdba = ring->dma;
+	int wait_loop = 10;
+	u32 txdctl = TXGBE_PX_TR_CFG_ENABLE;
+	u8 reg_idx = ring->reg_idx;
+
+	/* disable queue to avoid issues while updating state */
+	wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH);
+	TXGBE_WRITE_FLUSH(hw);
+
+	wr32(hw, TXGBE_PX_TR_BAL(reg_idx), tdba & DMA_BIT_MASK(32));
+	wr32(hw, TXGBE_PX_TR_BAH(reg_idx), tdba >> 32);
+
+	/* reset head and tail pointers */
+	wr32(hw, TXGBE_PX_TR_RP(reg_idx), 0);
+	wr32(hw, TXGBE_PX_TR_WP(reg_idx), 0);
+	ring->tail = adapter->io_addr + TXGBE_PX_TR_WP(reg_idx);
+
+	/* reset ntu and ntc to place SW in sync with hardware */
+	ring->next_to_clean = 0;
+	ring->next_to_use = 0;
+
+	txdctl |= TXGBE_RING_SIZE(ring) << TXGBE_PX_TR_CFG_TR_SIZE_SHIFT;
+
+	/* set WTHRESH to encourage burst writeback, it should not be set
+	 * higher than 1 when:
+	 * - ITR is 0 as it could cause false TX hangs
+	 * - ITR is set to > 100k int/sec and BQL is enabled
+	 *
+	 * In order to avoid issues WTHRESH + PTHRESH should always be equal
+	 * to or less than the number of on chip descriptors, which is
+	 * currently 40.
+	 */
+
+	txdctl |= 0x20 << TXGBE_PX_TR_CFG_WTHRESH_SHIFT;
+
+	/* initialize XPS */
+	if (!test_and_set_bit(__TXGBE_TX_XPS_INIT_DONE, &ring->state)) {
+		struct txgbe_q_vector *q_vector = ring->q_vector;
+
+		if (q_vector)
+			netif_set_xps_queue(adapter->netdev,
+					    &q_vector->affinity_mask,
+					    ring->queue_index);
+	}
+
+	clear_bit(__TXGBE_HANG_CHECK_ARMED, &ring->state);
+
+	/* enable queue */
+	wr32(hw, TXGBE_PX_TR_CFG(reg_idx), txdctl);
+
+	/* poll to verify queue is enabled */
+	do {
+		usleep_range(1000, 2000);
+		txdctl = rd32(hw, TXGBE_PX_TR_CFG(reg_idx));
+	} while (--wait_loop && !(txdctl & TXGBE_PX_TR_CFG_ENABLE));
+	if (!wait_loop)
+		txgbe_err(drv, "Could not enable Tx Queue %d\n", reg_idx);
+}
+
+/**
+ * txgbe_configure_tx - Configure Transmit Unit after Reset
+ * @adapter: board private structure
+ *
+ * Configure the Tx unit of the MAC after a reset.
+ **/
+static void txgbe_configure_tx(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i;
+
+	/* TDM_CTL.TE must be set before any Tx queue is enabled */
+	wr32m(hw, TXGBE_TDM_CTL,
+	      TXGBE_TDM_CTL_TE, TXGBE_TDM_CTL_TE);
+
+	/* Setup the HW Tx Head and Tail descriptor pointers */
+	for (i = 0; i < adapter->num_tx_queues; i++)
+		txgbe_configure_tx_ring(adapter, adapter->tx_ring[i]);
+
+	wr32m(hw, TXGBE_TSC_BUF_AE, 0x3FF, 0x10);
+	/* enable mac transmitter */
+	wr32m(hw, TXGBE_MAC_TX_CFG,
+	      TXGBE_MAC_TX_CFG_TE, TXGBE_MAC_TX_CFG_TE);
+}
+
+static void txgbe_enable_rx_drop(struct txgbe_adapter *adapter,
+				 struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 reg_idx = ring->reg_idx;
+
+	u32 srrctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+
+	srrctl |= TXGBE_PX_RR_CFG_DROP_EN;
+
+	wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl);
+}
+
+static void txgbe_disable_rx_drop(struct txgbe_adapter *adapter,
+				  struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 reg_idx = ring->reg_idx;
+
+	u32 srrctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+
+	srrctl &= ~TXGBE_PX_RR_CFG_DROP_EN;
+
+	wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl);
+}
+
+void txgbe_set_rx_drop_en(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	/* We should set the drop enable bit if:
+	 *  Number of Rx queues > 1
+	 *
+	 *  This allows us to avoid head of line blocking for security
+	 *  and performance reasons.
+	 */
+	if (adapter->num_rx_queues > 1) {
+		for (i = 0; i < adapter->num_rx_queues; i++)
+			txgbe_enable_rx_drop(adapter, adapter->rx_ring[i]);
+	} else {
+		for (i = 0; i < adapter->num_rx_queues; i++)
+			txgbe_disable_rx_drop(adapter, adapter->rx_ring[i]);
+	}
+}
+
+static void txgbe_configure_srrctl(struct txgbe_adapter *adapter,
+				   struct txgbe_ring *rx_ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 srrctl;
+	u16 reg_idx = rx_ring->reg_idx;
+
+	srrctl = rd32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+		       ~(TXGBE_PX_RR_CFG_RR_HDR_SZ |
+		       TXGBE_PX_RR_CFG_RR_BUF_SZ |
+		       TXGBE_PX_RR_CFG_SPLIT_MODE));
+	/* configure header buffer length, needed for RSC */
+	srrctl |= TXGBE_RX_HDR_SIZE << TXGBE_PX_RR_CFG_BSIZEHDRSIZE_SHIFT;
+
+	/* configure the packet buffer length */
+	srrctl |= txgbe_rx_bufsz(rx_ring) >> TXGBE_PX_RR_CFG_BSIZEPKT_SHIFT;
+
+	wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl);
+}
+
+/**
+ * txgbe_store_reta - write the RETA table to HW
+ * @adapter: device handle
+ *
+ * Write the RSS redirection table stored in adapter.rss_indir_tbl[] to HW.
+ */
+void txgbe_store_reta(struct txgbe_adapter *adapter)
+{
+	u32 i, reta_entries = 128;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 reta = 0;
+	u8 *indir_tbl = adapter->rss_indir_tbl;
+
+	/* Fill out the redirection table as follows:
+	 *  - 8 bit wide entries containing a 4 bit RSS index,
+	 *    packed four entries per 32-bit RSSTBL register
+	 */
+
+	/* Write redirection table to HW */
+	for (i = 0; i < reta_entries; i++) {
+		reta |= indir_tbl[i] << ((i & 0x3) * 8);
+		if ((i & 3) == 3) {
+			wr32(hw, TXGBE_RDB_RSSTBL(i >> 2), reta);
+			reta = 0;
+		}
+	}
+}
+
+static void txgbe_setup_reta(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i, j;
+	u32 reta_entries = 128;
+	u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
+
+	/* Fill out hash function seeds */
+	for (i = 0; i < 10; i++)
+		wr32(hw, TXGBE_RDB_RSSRK(i), adapter->rss_key[i]);
+
+	/* Fill out redirection table */
+	memset(adapter->rss_indir_tbl, 0, sizeof(adapter->rss_indir_tbl));
+
+	for (i = 0, j = 0; i < reta_entries; i++, j++) {
+		if (j == rss_i)
+			j = 0;
+
+		adapter->rss_indir_tbl[i] = j;
+	}
+
+	txgbe_store_reta(adapter);
+}
+
+static void txgbe_setup_mrqc(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 rss_field = 0;
+
+	/* Disable checksum indication in the descriptor; enable RSS hashing */
+	wr32m(hw, TXGBE_PSR_CTL,
+	      TXGBE_PSR_CTL_PCSD, TXGBE_PSR_CTL_PCSD);
+
+	/* Perform hash on these packet types */
+	rss_field = TXGBE_RDB_RA_CTL_RSS_IPV4 |
+		    TXGBE_RDB_RA_CTL_RSS_IPV4_TCP |
+		    TXGBE_RDB_RA_CTL_RSS_IPV6 |
+		    TXGBE_RDB_RA_CTL_RSS_IPV6_TCP;
+
+	if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP)
+		rss_field |= TXGBE_RDB_RA_CTL_RSS_IPV4_UDP;
+	if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+		rss_field |= TXGBE_RDB_RA_CTL_RSS_IPV6_UDP;
+
+	netdev_rss_key_fill(adapter->rss_key, sizeof(adapter->rss_key));
+
+	txgbe_setup_reta(adapter);
+
+	if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)
+		rss_field |= TXGBE_RDB_RA_CTL_RSS_EN;
+
+	wr32(hw, TXGBE_RDB_RA_CTL, rss_field);
+}
+
+/**
+ * txgbe_configure_rscctl - enable RSC for the indicated ring
+ * @adapter:    address of board private structure
+ * @ring: structure containing ring specific data
+ **/
+void txgbe_configure_rscctl(struct txgbe_adapter *adapter,
+			    struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 rscctrl;
+	u8 reg_idx = ring->reg_idx;
+
+	if (!ring_is_rsc_enabled(ring))
+		return;
+
+	rscctrl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+	rscctrl |= TXGBE_PX_RR_CFG_RSC;
+	/* we must limit the number of descriptors so that the
+	 * total size of max desc * buf_len is not greater
+	 * than 65536
+	 */
+#if (MAX_SKB_FRAGS >= 16)
+	rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_16;
+#elif (MAX_SKB_FRAGS >= 8)
+	rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_8;
+#elif (MAX_SKB_FRAGS >= 4)
+	rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_4;
+#else
+	rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_1;
+#endif
+	wr32(hw, TXGBE_PX_RR_CFG(reg_idx), rscctrl);
+}
+
+static void txgbe_rx_desc_queue_enable(struct txgbe_adapter *adapter,
+				       struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int wait_loop = TXGBE_MAX_RX_DESC_POLL;
+	u32 rxdctl;
+	u8 reg_idx = ring->reg_idx;
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		return;
+
+	do {
+		usleep_range(1000, 2000);
+		rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+	} while (--wait_loop && !(rxdctl & TXGBE_PX_RR_CFG_RR_EN));
+
+	if (!wait_loop)
+		txgbe_err(drv,
+			  "RXDCTL.ENABLE on Rx queue %d not set within the polling period\n",
+			  reg_idx);
+}
+
+/* disable the specified tx ring/queue */
+void txgbe_disable_tx_queue(struct txgbe_adapter *adapter,
+			    struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int wait_loop = TXGBE_MAX_RX_DESC_POLL;
+	u32 txdctl, reg_offset, enable_mask;
+	u8 reg_idx = ring->reg_idx;
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		return;
+
+	reg_offset = TXGBE_PX_TR_CFG(reg_idx);
+	enable_mask = TXGBE_PX_TR_CFG_ENABLE;
+
+	/* write value back with TDCFG.ENABLE bit cleared */
+	wr32m(hw, reg_offset, enable_mask, 0);
+
+	/* the hardware may take up to 100us to really disable the tx queue */
+	do {
+		udelay(10);
+		txdctl = rd32(hw, reg_offset);
+	} while (--wait_loop && (txdctl & enable_mask));
+
+	if (!wait_loop)
+		txgbe_err(drv,
+			  "TDCFG.ENABLE on Tx queue %d not cleared within the polling period\n",
+			  reg_idx);
+}
+
+/* disable the specified rx ring/queue */
+void txgbe_disable_rx_queue(struct txgbe_adapter *adapter,
+			    struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int wait_loop = TXGBE_MAX_RX_DESC_POLL;
+	u32 rxdctl;
+	u8 reg_idx = ring->reg_idx;
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		return;
+
+	/* write value back with RXDCTL.ENABLE bit cleared */
+	wr32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+	      TXGBE_PX_RR_CFG_RR_EN, 0);
+
+	/* the hardware may take up to 100us to really disable the rx queue */
+	do {
+		udelay(10);
+		rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+	} while (--wait_loop && (rxdctl & TXGBE_PX_RR_CFG_RR_EN));
+
+	if (!wait_loop) {
+		txgbe_err(drv,
+			  "RXDCTL.ENABLE on Rx queue %d not cleared within the polling period\n",
+			  reg_idx);
+	}
+}
+
+void txgbe_configure_rx_ring(struct txgbe_adapter *adapter,
+			     struct txgbe_ring *ring)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u64 rdba = ring->dma;
+	u32 rxdctl;
+	u16 reg_idx = ring->reg_idx;
+
+	/* disable queue to avoid issues while updating state */
+	rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+	txgbe_disable_rx_queue(adapter, ring);
+
+	wr32(hw, TXGBE_PX_RR_BAL(reg_idx), rdba & DMA_BIT_MASK(32));
+	wr32(hw, TXGBE_PX_RR_BAH(reg_idx), rdba >> 32);
+
+	if (ring->count == TXGBE_MAX_RXD)
+		rxdctl |= 0 << TXGBE_PX_RR_CFG_RR_SIZE_SHIFT;
+	else
+		rxdctl |= (ring->count / 128) << TXGBE_PX_RR_CFG_RR_SIZE_SHIFT;
+
+	rxdctl |= 0x1 << TXGBE_PX_RR_CFG_RR_THER_SHIFT;
+	wr32(hw, TXGBE_PX_RR_CFG(reg_idx), rxdctl);
+
+	/* reset head and tail pointers */
+	wr32(hw, TXGBE_PX_RR_RP(reg_idx), 0);
+	wr32(hw, TXGBE_PX_RR_WP(reg_idx), 0);
+	ring->tail = adapter->io_addr + TXGBE_PX_RR_WP(reg_idx);
+
+	/* reset ntu and ntc to place SW in sync with hardware */
+	ring->next_to_clean = 0;
+	ring->next_to_use = 0;
+	ring->next_to_alloc = 0;
+
+	txgbe_configure_srrctl(adapter, ring);
+	/* In ESX, RSCCTL configuration is done on demand */
+	txgbe_configure_rscctl(adapter, ring);
+
+	/* enable receive descriptor ring */
+	wr32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+	      TXGBE_PX_RR_CFG_RR_EN, TXGBE_PX_RR_CFG_RR_EN);
+
+	txgbe_rx_desc_queue_enable(adapter, ring);
+	txgbe_alloc_rx_buffers(ring, txgbe_desc_unused(ring));
+}
+
+static void txgbe_setup_psrtype(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int rss_i = adapter->ring_feature[RING_F_RSS].indices;
+	int pool;
+
+	/* PSRTYPE must be initialized for each pool in use */
+	u32 psrtype = TXGBE_RDB_PL_CFG_L4HDR |
+		      TXGBE_RDB_PL_CFG_L3HDR |
+		      TXGBE_RDB_PL_CFG_L2HDR |
+		      TXGBE_RDB_PL_CFG_TUN_OUTER_L2HDR |
+		      TXGBE_RDB_PL_CFG_TUN_TUNHDR;
+
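+	/* bits 30:29 of PL_CFG appear to encode the RSS pool size as a power
+	 * of two: 2 for four or more queues, 1 for two or three
+	 */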
+	if (rss_i > 3)
+		psrtype |= 2 << 29;
+	else if (rss_i > 1)
+		psrtype |= 1 << 29;
+
+	for_each_set_bit(pool, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS)
+		wr32(hw, TXGBE_RDB_PL_CFG(pool), psrtype);
+}
+
+static void txgbe_set_rx_buffer_len(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct net_device *netdev = adapter->netdev;
+	u32 max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
+	struct txgbe_ring *rx_ring;
+	int i;
+	u32 mhadd;
+
+	/* adjust max frame to be at least the size of a standard frame */
+	if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN))
+		max_frame = (ETH_FRAME_LEN + ETH_FCS_LEN);
+
+	mhadd = rd32(hw, TXGBE_PSR_MAX_SZ);
+	if (max_frame != mhadd)
+		wr32(hw, TXGBE_PSR_MAX_SZ, max_frame);
+
+	/* propagate the RSC enable setting to each Rx ring */
+	for (i = 0; i < adapter->num_rx_queues; i++) {
+		rx_ring = adapter->rx_ring[i];
+
+		if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
+			set_ring_rsc_enabled(rx_ring);
+		else
+			clear_ring_rsc_enabled(rx_ring);
+	}
+}
+
+/**
+ * txgbe_configure_rx - Configure Receive Unit after Reset
+ * @adapter: board private structure
+ *
+ * Configure the Rx unit of the MAC after a reset.
+ **/
+static void txgbe_configure_rx(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i;
+	u32 rxctrl, psrctl;
+
+	/* disable receives while setting up the descriptors */
+	TCALL(hw, mac.ops.disable_rx);
+
+	txgbe_setup_psrtype(adapter);
+
+	/* enable hw crc stripping */
+	wr32m(hw, TXGBE_RSC_CTL,
+	      TXGBE_RSC_CTL_CRC_STRIP, TXGBE_RSC_CTL_CRC_STRIP);
+
+	/* RSC Setup */
+	psrctl = rd32m(hw, TXGBE_PSR_CTL, ~TXGBE_PSR_CTL_RSC_DIS);
+	psrctl |= TXGBE_PSR_CTL_RSC_ACK; /* Disable RSC for ACK packets */
+	if (!(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED))
+		psrctl |= TXGBE_PSR_CTL_RSC_DIS;
+	wr32(hw, TXGBE_PSR_CTL, psrctl);
+
+	/* Program registers for the distribution of queues */
+	txgbe_setup_mrqc(adapter);
+
+	/* set_rx_buffer_len must be called before ring initialization */
+	txgbe_set_rx_buffer_len(adapter);
+
+	/* Setup the HW Rx Head and Tail Descriptor Pointers and
+	 * the Base and Length of the Rx Descriptor Ring
+	 */
+	for (i = 0; i < adapter->num_rx_queues; i++)
+		txgbe_configure_rx_ring(adapter, adapter->rx_ring[i]);
+
+	rxctrl = rd32(hw, TXGBE_RDB_PB_CTL);
+
+	/* enable all receives */
+	rxctrl |= TXGBE_RDB_PB_CTL_RXEN;
+	TCALL(hw, mac.ops.enable_rx_dma, rxctrl);
+}
+
+static int txgbe_vlan_rx_add_vid(struct net_device *netdev,
+				 __be16 proto, u16 vid)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	/* add VID to filter table */
+	if (hw->mac.ops.set_vfta)
+		TCALL(hw, mac.ops.set_vfta, vid, 0, true);
+
+	set_bit(vid, adapter->active_vlans);
+
+	return 0;
+}
+
+static int txgbe_vlan_rx_kill_vid(struct net_device *netdev,
+				  __be16 proto, u16 vid)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	/* remove VID from filter table */
+	if (hw->mac.ops.set_vfta)
+		TCALL(hw, mac.ops.set_vfta, vid, 0, false);
+
+	clear_bit(vid, adapter->active_vlans);
+
+	return 0;
+}
+
+/**
+ * txgbe_vlan_strip_disable - helper to disable vlan tag stripping
+ * @adapter: driver data
+ */
+void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i, j;
+
+	for (i = 0; i < adapter->num_rx_queues; i++) {
+		struct txgbe_ring *ring = adapter->rx_ring[i];
+
+		if (ring->accel)
+			continue;
+		j = ring->reg_idx;
+		wr32m(hw, TXGBE_PX_RR_CFG(j),
+		      TXGBE_PX_RR_CFG_VLAN, 0);
+	}
+}
+
+/**
+ * txgbe_vlan_strip_enable - helper to enable vlan tag stripping
+ * @adapter: driver data
+ */
+void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i, j;
+
+	for (i = 0; i < adapter->num_rx_queues; i++) {
+		struct txgbe_ring *ring = adapter->rx_ring[i];
+
+		if (ring->accel)
+			continue;
+		j = ring->reg_idx;
+		wr32m(hw, TXGBE_PX_RR_CFG(j),
+		      TXGBE_PX_RR_CFG_VLAN, TXGBE_PX_RR_CFG_VLAN);
+	}
+}
+
+void txgbe_vlan_mode(struct net_device *netdev, u32 features)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	bool enable;
+
+	enable = !!(features & (NETIF_F_HW_VLAN_CTAG_RX));
+
+	if (enable)
+		/* enable VLAN tag insert/strip */
+		txgbe_vlan_strip_enable(adapter);
+	else
+		/* disable VLAN tag insert/strip */
+		txgbe_vlan_strip_disable(adapter);
+}
+
+static void txgbe_restore_vlan(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	u16 vid;
+
+	txgbe_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), 0);
+	txgbe_vlan_mode(netdev, netdev->features);
+
+	for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID)
+		txgbe_vlan_rx_add_vid(netdev, htons(ETH_P_8021Q), vid);
+}
+
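+/* address iterator handed to mac.ops.update_mc_addr_list: returns the
+ * current multicast address and advances *mc_addr_ptr to the next entry,
+ * or to NULL at the end of the netdev list
+ */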
+static u8 *txgbe_addr_list_itr(struct txgbe_hw __maybe_unused *hw,
+			       u8 **mc_addr_ptr, u32 *vmdq)
+{
+	struct netdev_hw_addr *mc_ptr;
+	u8 *addr = *mc_addr_ptr;
+
+	*vmdq = 0;
+
+	mc_ptr = container_of(addr, struct netdev_hw_addr, addr[0]);
+	if (mc_ptr->list.next) {
+		struct netdev_hw_addr *ha;
+
+		ha = list_entry(mc_ptr->list.next, struct netdev_hw_addr, list);
+		*mc_addr_ptr = ha->addr;
+	} else {
+		*mc_addr_ptr = NULL;
+	}
+
+	return addr;
+}
+
+/**
+ * txgbe_write_mc_addr_list - write multicast addresses to MTA
+ * @netdev: network interface device structure
+ *
+ * Writes multicast address list to the MTA hash table.
+ * Returns: -ENOMEM on failure
+ *          0 on no addresses written
+ *          X on writing X addresses to MTA
+ **/
+int txgbe_write_mc_addr_list(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	struct netdev_hw_addr *ha;
+	u8  *addr_list = NULL;
+	int addr_count = 0;
+
+	if (!hw->mac.ops.update_mc_addr_list)
+		return -ENOMEM;
+
+	if (!netif_running(netdev))
+		return 0;
+
+	if (netdev_mc_empty(netdev)) {
+		TCALL(hw, mac.ops.update_mc_addr_list, NULL, 0,
+		      txgbe_addr_list_itr, true);
+	} else {
+		ha = list_first_entry(&netdev->mc.list,
+				      struct netdev_hw_addr, list);
+		addr_list = ha->addr;
+		addr_count = netdev_mc_count(netdev);
+
+		TCALL(hw, mac.ops.update_mc_addr_list, addr_list, addr_count,
+		      txgbe_addr_list_itr, true);
+	}
+
+	return addr_count;
+}
+
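+/* flush software mac_table entries marked MODIFIED out to the hardware
+ * RAR registers, clearing entries that are no longer in use
+ */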
+static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i;
+
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) {
+			if (adapter->mac_table[i].state &
+					TXGBE_MAC_STATE_IN_USE) {
+				TCALL(hw, mac.ops.set_rar, i,
+				      adapter->mac_table[i].addr,
+				      adapter->mac_table[i].pools,
+				      TXGBE_PSR_MAC_SWC_AD_H_AV);
+			} else {
+				TCALL(hw, mac.ops.clear_rar, i);
+			}
+			adapter->mac_table[i].state &=
+				~(TXGBE_MAC_STATE_MODIFIED);
+		}
+	}
+}
+
+int txgbe_available_rars(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i, count = 0;
+
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		if (adapter->mac_table[i].state == 0)
+			count++;
+	}
+	return count;
+}
+
+/* this function destroys the first RAR entry */
+static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter,
+					 u8 *addr)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
+	adapter->mac_table[0].pools = 1ULL;
+	adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT |
+				       TXGBE_MAC_STATE_IN_USE);
+	TCALL(hw, mac.ops.set_rar, 0, adapter->mac_table[0].addr,
+	      adapter->mac_table[0].pools,
+	      TXGBE_PSR_MAC_SWC_AD_H_AV);
+}
+
+int txgbe_add_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i;
+
+	if (is_zero_ether_addr(addr))
+		return -EINVAL;
+
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) {
+			if (ether_addr_equal(addr, adapter->mac_table[i].addr)) {
+				if (adapter->mac_table[i].pools != (1ULL << pool)) {
+					memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN);
+					adapter->mac_table[i].pools |= (1ULL << pool);
+					txgbe_sync_mac_table(adapter);
+					return i;
+				}
+			}
+		}
+
+		if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE)
+			continue;
+		adapter->mac_table[i].state |= (TXGBE_MAC_STATE_MODIFIED |
+						TXGBE_MAC_STATE_IN_USE);
+		memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN);
+		adapter->mac_table[i].pools |= (1ULL << pool);
+		txgbe_sync_mac_table(adapter);
+		return i;
+	}
+	return -ENOMEM;
+}
+
+static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
+{
+	u32 i;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
+		adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
+		memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
+		adapter->mac_table[i].pools = 0;
+	}
+	txgbe_sync_mac_table(adapter);
+}
+
+int txgbe_del_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool)
+{
+	/* search table for addr, if found, set to 0 and sync */
+	u32 i;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (is_zero_ether_addr(addr))
+		return -EINVAL;
+
+	for (i = 0; i < hw->mac.num_rar_entries; i++) {
+		if (ether_addr_equal(addr, adapter->mac_table[i].addr)) {
+			if (adapter->mac_table[i].pools & (1ULL << pool)) {
+				adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
+				adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
+				adapter->mac_table[i].pools &= ~(1ULL << pool);
+				txgbe_sync_mac_table(adapter);
+			}
+			return 0;
+		}
+
+		if (adapter->mac_table[i].pools != (1ULL << pool))
+			continue;
+		if (!ether_addr_equal(addr, adapter->mac_table[i].addr))
+			continue;
+
+		adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
+		adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
+		memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
+		adapter->mac_table[i].pools = 0;
+		txgbe_sync_mac_table(adapter);
+		return 0;
+	}
+	return -ENOMEM;
+}
+
+/**
+ * txgbe_write_uc_addr_list - write unicast addresses to RAR table
+ * @netdev: network interface device structure
+ *
+ * Writes unicast address list to the RAR table.
+ * Returns: -ENOMEM on failure/insufficient address space
+ *          0 on no addresses written
+ *          X on writing X addresses to the RAR table
+ **/
+int txgbe_write_uc_addr_list(struct net_device *netdev, int pool)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	int count = 0;
+
+	/* return ENOMEM indicating insufficient memory for addresses */
+	if (netdev_uc_count(netdev) > txgbe_available_rars(adapter))
+		return -ENOMEM;
+
+	if (!netdev_uc_empty(netdev)) {
+		struct netdev_hw_addr *ha;
+
+		netdev_for_each_uc_addr(ha, netdev) {
+			txgbe_del_mac_filter(adapter, ha->addr, pool);
+			txgbe_add_mac_filter(adapter, ha->addr, pool);
+			count++;
+		}
+	}
+	return count;
+}
+
+/**
+ * txgbe_set_rx_mode - Unicast, Multicast and Promiscuous mode set
+ * @netdev: network interface device structure
+ *
+ * The set_rx_method entry point is called whenever the unicast/multicast
+ * address list or the network interface flags are updated.  This routine is
+ * responsible for configuring the hardware for proper unicast, multicast and
+ * promiscuous mode.
+ **/
+void txgbe_set_rx_mode(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 fctrl, vmolr, vlnctrl;
+	int count;
+
+	/* Check for Promiscuous and All Multicast modes */
+	fctrl = rd32m(hw, TXGBE_PSR_CTL,
+		      ~(TXGBE_PSR_CTL_UPE | TXGBE_PSR_CTL_MPE));
+	vmolr = rd32m(hw, TXGBE_PSR_VM_L2CTL(0),
+		      ~(TXGBE_PSR_VM_L2CTL_UPE |
+			TXGBE_PSR_VM_L2CTL_MPE |
+			TXGBE_PSR_VM_L2CTL_ROPE |
+			TXGBE_PSR_VM_L2CTL_ROMPE));
+	vlnctrl = rd32m(hw, TXGBE_PSR_VLAN_CTL,
+			~(TXGBE_PSR_VLAN_CTL_VFE |
+			  TXGBE_PSR_VLAN_CTL_CFIEN));
+
+	/* set all bits that we expect to always be set */
+	fctrl |= TXGBE_PSR_CTL_BAM | TXGBE_PSR_CTL_MFE;
+	vmolr |= TXGBE_PSR_VM_L2CTL_BAM |
+		 TXGBE_PSR_VM_L2CTL_AUPE |
+		 TXGBE_PSR_VM_L2CTL_VACC;
+	vlnctrl |= TXGBE_PSR_VLAN_CTL_VFE;
+
+	hw->addr_ctrl.user_set_promisc = false;
+	if (netdev->flags & IFF_PROMISC) {
+		hw->addr_ctrl.user_set_promisc = true;
+		fctrl |= (TXGBE_PSR_CTL_UPE | TXGBE_PSR_CTL_MPE);
+		/* the PF doesn't want packets routed to the VFs, so leave UPE clear */
+		vmolr |= TXGBE_PSR_VM_L2CTL_MPE;
+		vlnctrl &= ~TXGBE_PSR_VLAN_CTL_VFE;
+	}
+
+	if (netdev->flags & IFF_ALLMULTI) {
+		fctrl |= TXGBE_PSR_CTL_MPE;
+		vmolr |= TXGBE_PSR_VM_L2CTL_MPE;
+	}
+
+	/* This is useful for sniffing bad packets. */
+	if (netdev->features & NETIF_F_RXALL) {
+		vmolr |= (TXGBE_PSR_VM_L2CTL_UPE | TXGBE_PSR_VM_L2CTL_MPE);
+		vlnctrl &= ~TXGBE_PSR_VLAN_CTL_VFE;
+		/* receive bad packets */
+		wr32m(hw, TXGBE_RSC_CTL,
+		      TXGBE_RSC_CTL_SAVE_MAC_ERR,
+		      TXGBE_RSC_CTL_SAVE_MAC_ERR);
+	} else {
+		vmolr |= TXGBE_PSR_VM_L2CTL_ROPE | TXGBE_PSR_VM_L2CTL_ROMPE;
+	}
+
+	/* Write addresses to available RAR registers, if there is not
+	 * sufficient space to store all the addresses then enable
+	 * unicast promiscuous mode
+	 */
+	count = txgbe_write_uc_addr_list(netdev, 0);
+	if (count < 0) {
+		vmolr &= ~TXGBE_PSR_VM_L2CTL_ROPE;
+		vmolr |= TXGBE_PSR_VM_L2CTL_UPE;
+	}
+
+	/* Write addresses to the MTA, if the attempt fails
+	 * then we should just turn on promiscuous mode so
+	 * that we can at least receive multicast traffic
+	 */
+	count = txgbe_write_mc_addr_list(netdev);
+	if (count < 0) {
+		vmolr &= ~TXGBE_PSR_VM_L2CTL_ROMPE;
+		vmolr |= TXGBE_PSR_VM_L2CTL_MPE;
+	}
+
+	wr32(hw, TXGBE_PSR_VLAN_CTL, vlnctrl);
+	wr32(hw, TXGBE_PSR_CTL, fctrl);
+	wr32(hw, TXGBE_PSR_VM_L2CTL(0), vmolr);
+
+	if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
+		txgbe_vlan_strip_enable(adapter);
+	else
+		txgbe_vlan_strip_disable(adapter);
+}
+
+static void txgbe_napi_enable_all(struct txgbe_adapter *adapter)
+{
+	struct txgbe_q_vector *q_vector;
+	int q_idx;
+
+	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
+		q_vector = adapter->q_vector[q_idx];
+		napi_enable(&q_vector->napi);
+	}
+}
+
+static void txgbe_napi_disable_all(struct txgbe_adapter *adapter)
+{
+	struct txgbe_q_vector *q_vector;
+	int q_idx;
+
+	for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
+		q_vector = adapter->q_vector[q_idx];
+		napi_disable(&q_vector->napi);
+	}
+}
+
+void txgbe_clear_vxlan_port(struct txgbe_adapter *adapter)
+{
+	adapter->vxlan_port = 0;
+	if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE))
+		return;
+	wr32(&adapter->hw, TXGBE_CFG_VXLAN, 0);
+}
+
+#define TXGBE_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+				    NETIF_F_GSO_GRE_CSUM | \
+				    NETIF_F_GSO_IPXIP4 | \
+				    NETIF_F_GSO_IPXIP6 | \
+				    NETIF_F_GSO_UDP_TUNNEL | \
+				    NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
+static void txgbe_configure_pb(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	TCALL(hw, mac.ops.setup_rxpba, 0, 0, PBA_STRATEGY_EQUAL);
+}
+
+void txgbe_configure_isb(struct txgbe_adapter *adapter)
+{
+	/* set ISB Address */
+	struct txgbe_hw *hw = &adapter->hw;
+
+	wr32(hw, TXGBE_PX_ISB_ADDR_L,
+	     adapter->isb_dma & DMA_BIT_MASK(32));
+	wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32);
+}
+
+void txgbe_configure_port(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 value, i;
+
+	value = TXGBE_CFG_PORT_CTL_D_VLAN | TXGBE_CFG_PORT_CTL_QINQ;
+	wr32m(hw, TXGBE_CFG_PORT_CTL,
+	      TXGBE_CFG_PORT_CTL_D_VLAN |
+	      TXGBE_CFG_PORT_CTL_QINQ,
+	      value);
+
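+	/* each TAG_TPID register packs two 16-bit tag protocol IDs */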
+	wr32(hw, TXGBE_CFG_TAG_TPID(0),
+	     ETH_P_8021Q | (ETH_P_8021AD << 16));
+	adapter->hw.tpid[0] = ETH_P_8021Q;
+	adapter->hw.tpid[1] = ETH_P_8021AD;
+	for (i = 1; i < 4; i++)
+		wr32(hw, TXGBE_CFG_TAG_TPID(i),
+		     ETH_P_8021Q | (ETH_P_8021Q << 16));
+	for (i = 2; i < 8; i++)
+		adapter->hw.tpid[i] = ETH_P_8021Q;
+}
+
+static void txgbe_configure(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	txgbe_configure_pb(adapter);
+
+	txgbe_configure_port(adapter);
+
+	txgbe_set_rx_mode(adapter->netdev);
+	txgbe_restore_vlan(adapter);
+
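+	/* bounce the security Rx path; presumably a quiesce point for the
+	 * filter configuration added by later patches in this series
+	 */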
+	TCALL(hw, mac.ops.disable_sec_rx_path);
+
+	TCALL(hw, mac.ops.enable_sec_rx_path);
+
+	txgbe_configure_tx(adapter);
+	txgbe_configure_rx(adapter);
+	txgbe_configure_isb(adapter);
+}
+
+static bool txgbe_is_sfp(struct txgbe_hw *hw)
+{
+	switch (TCALL(hw, mac.ops.get_media_type)) {
+	case txgbe_media_type_fiber:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static bool txgbe_is_backplane(struct txgbe_hw *hw)
+{
+	switch (TCALL(hw, mac.ops.get_media_type)) {
+	case txgbe_media_type_backplane:
+		return true;
+	default:
+		return false;
+	}
+}
+
+/**
+ * txgbe_sfp_link_config - set up SFP+ link
+ * @adapter: pointer to private adapter struct
+ **/
+static void txgbe_sfp_link_config(struct txgbe_adapter *adapter)
+{
+	/* We are assuming the worst case scenario here, and that is that an
+	 * SFP was inserted/removed after the reset but before SFP detection
+	 * was enabled.  As such the best solution is to just start searching
+	 * as soon as we start up.
+	 */
+
+	adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+	adapter->sfp_poll_time = 0;
+}
+
+static void txgbe_setup_gpie(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 gpie = 0;
+
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+		gpie = TXGBE_PX_GPIE_MODEL;
+
+	wr32(hw, TXGBE_PX_GPIE, gpie);
+}
+
+static void txgbe_up_complete(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 links_reg;
+
+	txgbe_get_hw_control(adapter);
+	txgbe_setup_gpie(adapter);
+
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+		txgbe_configure_msix(adapter);
+	else
+		txgbe_configure_msi_and_legacy(adapter);
+
+	/* enable the optics for SFP+ fiber */
+	TCALL(hw, mac.ops.enable_tx_laser);
+
+	smp_mb__before_atomic();
+	clear_bit(__TXGBE_DOWN, &adapter->state);
+	txgbe_napi_enable_all(adapter);
+
+	if (txgbe_is_sfp(hw)) {
+		txgbe_sfp_link_config(adapter);
+	} else if (txgbe_is_backplane(hw)) {
+		adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+		txgbe_service_event_schedule(adapter);
+	}
+
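+	/* if the port already reports link, sync the MAC Tx speed field with
+	 * the current link speed
+	 */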
+	links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
+	if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
+		if (links_reg & TXGBE_CFG_PORT_ST_LINK_10G) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
+			     TXGBE_MAC_TX_CFG_SPEED_10G);
+		} else if (links_reg & (TXGBE_CFG_PORT_ST_LINK_1G | TXGBE_CFG_PORT_ST_LINK_100M)) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
+			     TXGBE_MAC_TX_CFG_SPEED_1G);
+		}
+	}
+
+	/* clear any pending interrupts, may auto mask */
+	rd32(hw, TXGBE_PX_IC(0));
+	rd32(hw, TXGBE_PX_IC(1));
+	rd32(hw, TXGBE_PX_MISC_IC);
+	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI)
+		wr32(hw, TXGBE_GPIO_EOI, TXGBE_GPIO_EOI_6);
+	txgbe_irq_enable(adapter, true, true);
+
+	/* enable transmits */
+	netif_tx_start_all_queues(adapter->netdev);
+
+	/* bring the link up in the watchdog, this could race with our first
+	 * link up interrupt but shouldn't be a problem
+	 */
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+	adapter->link_check_timeout = jiffies;
+
+	mod_timer(&adapter->service_timer, jiffies);
+
+	if (hw->bus.lan_id == 0) {
+		wr32m(hw, TXGBE_MIS_PRB_CTL,
+		      TXGBE_MIS_PRB_CTL_LAN0_UP, TXGBE_MIS_PRB_CTL_LAN0_UP);
+	} else if (hw->bus.lan_id == 1) {
+		wr32m(hw, TXGBE_MIS_PRB_CTL,
+		      TXGBE_MIS_PRB_CTL_LAN1_UP, TXGBE_MIS_PRB_CTL_LAN1_UP);
+	} else {
+		txgbe_err(probe, "%s:invalid bus lan id %d\n",
+			  __func__, hw->bus.lan_id);
+	}
+
+	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
+	wr32m(hw, TXGBE_CFG_PORT_CTL,
+	      TXGBE_CFG_PORT_CTL_PFRSTD, TXGBE_CFG_PORT_CTL_PFRSTD);
+}
+
+void txgbe_reinit_locked(struct txgbe_adapter *adapter)
+{
+	/* put off any impending NetWatchDogTimeout */
+	netif_trans_update(adapter->netdev);
+
+	while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
+		usleep_range(1000, 2000);
+	txgbe_down(adapter);
+	txgbe_up(adapter);
+	clear_bit(__TXGBE_RESETTING, &adapter->state);
+}
+
+void txgbe_up(struct txgbe_adapter *adapter)
+{
+	/* hardware has been reset, we need to reload some things */
+	txgbe_configure(adapter);
+
+	txgbe_up_complete(adapter);
+}
+
+void txgbe_reset(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct net_device *netdev = adapter->netdev;
+	int err;
+	u8 old_addr[ETH_ALEN];
+
+	if (TXGBE_REMOVED(hw->hw_addr))
+		return;
+	/* lock SFP init bit to prevent race conditions with the watchdog */
+	while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+		usleep_range(1000, 2000);
+
+	/* clear all SFP and link config related flags while holding SFP_INIT */
+	adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
+	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+
+	err = TCALL(hw, mac.ops.init_hw);
+	switch (err) {
+	case 0:
+	case TXGBE_ERR_SFP_NOT_PRESENT:
+	case TXGBE_ERR_SFP_NOT_SUPPORTED:
+		break;
+	case TXGBE_ERR_MASTER_REQUESTS_PENDING:
+		txgbe_dev_err("master disable timed out\n");
+		break;
+	default:
+		txgbe_dev_err("Hardware Error: %d\n", err);
+	}
+
+	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+	/* do not flush user set addresses */
+	memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
+	txgbe_flush_sw_mac_table(adapter);
+	txgbe_mac_set_default_filter(adapter, old_addr);
+
+	/* update SAN MAC vmdq pool selection */
+	TCALL(hw, mac.ops.set_vmdq_san_mac, 0);
+}
+
+/**
+ * txgbe_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+static void txgbe_clean_rx_ring(struct txgbe_ring *rx_ring)
+{
+	struct device *dev = rx_ring->dev;
+	unsigned long size;
+	u16 i;
+
+	/* ring already cleared, nothing to do */
+	if (!rx_ring->rx_buffer_info)
+		return;
+
+	/* Free all the Rx ring sk_buffs */
+	for (i = 0; i < rx_ring->count; i++) {
+		struct txgbe_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i];
+
+		if (rx_buffer->dma) {
+			dma_unmap_single(dev,
+					 rx_buffer->dma,
+					 rx_ring->rx_buf_len,
+					 DMA_FROM_DEVICE);
+			rx_buffer->dma = 0;
+		}
+
+		if (rx_buffer->skb) {
+			struct sk_buff *skb = rx_buffer->skb;
+
+			if (TXGBE_CB(skb)->dma_released) {
+				dma_unmap_single(dev,
+						 TXGBE_CB(skb)->dma,
+						 rx_ring->rx_buf_len,
+						 DMA_FROM_DEVICE);
+				TXGBE_CB(skb)->dma = 0;
+				TXGBE_CB(skb)->dma_released = false;
+			}
+
+			if (TXGBE_CB(skb)->page_released)
+				dma_unmap_page(dev,
+					       TXGBE_CB(skb)->dma,
+					       txgbe_rx_bufsz(rx_ring),
+					       DMA_FROM_DEVICE);
+			dev_kfree_skb(skb);
+			rx_buffer->skb = NULL;
+		}
+
+		if (!rx_buffer->page)
+			continue;
+
+		dma_unmap_page(dev, rx_buffer->page_dma,
+			       txgbe_rx_pg_size(rx_ring),
+			       DMA_FROM_DEVICE);
+
+		__free_pages(rx_buffer->page,
+			     txgbe_rx_pg_order(rx_ring));
+		rx_buffer->page = NULL;
+	}
+
+	size = sizeof(struct txgbe_rx_buffer) * rx_ring->count;
+	memset(rx_ring->rx_buffer_info, 0, size);
+
+	/* Zero out the descriptor ring */
+	memset(rx_ring->desc, 0, rx_ring->size);
+
+	rx_ring->next_to_alloc = 0;
+	rx_ring->next_to_clean = 0;
+	rx_ring->next_to_use = 0;
+}
+
+/**
+ * txgbe_clean_tx_ring - Free Tx Buffers
+ * @tx_ring: ring to be cleaned
+ **/
+static void txgbe_clean_tx_ring(struct txgbe_ring *tx_ring)
+{
+	struct txgbe_tx_buffer *tx_buffer_info;
+	unsigned long size;
+	u16 i;
+
+	/* ring already cleared, nothing to do */
+	if (!tx_ring->tx_buffer_info)
+		return;
+
+	/* Free all the Tx ring sk_buffs */
+	for (i = 0; i < tx_ring->count; i++) {
+		tx_buffer_info = &tx_ring->tx_buffer_info[i];
+		txgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer_info);
+	}
+
+	netdev_tx_reset_queue(txring_txq(tx_ring));
+
+	size = sizeof(struct txgbe_tx_buffer) * tx_ring->count;
+	memset(tx_ring->tx_buffer_info, 0, size);
+
+	/* Zero out the descriptor ring */
+	memset(tx_ring->desc, 0, tx_ring->size);
+}
+
+/**
+ * txgbe_clean_all_rx_rings - Free Rx Buffers for all queues
+ * @adapter: board private structure
+ **/
+static void txgbe_clean_all_rx_rings(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_rx_queues; i++)
+		txgbe_clean_rx_ring(adapter->rx_ring[i]);
+}
+
+/**
+ * txgbe_clean_all_tx_rings - Free Tx Buffers for all queues
+ * @adapter: board private structure
+ **/
+static void txgbe_clean_all_tx_rings(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_tx_queues; i++)
+		txgbe_clean_tx_ring(adapter->tx_ring[i]);
+}
+
+void txgbe_disable_device(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i;
+
+	/* signal that we are down to the interrupt handler */
+	if (test_and_set_bit(__TXGBE_DOWN, &adapter->state))
+		return; /* do nothing if already down */
+
+	txgbe_disable_pcie_master(hw);
+	/* disable receives */
+	TCALL(hw, mac.ops.disable_rx);
+
+	/* disable all enabled rx queues */
+	for (i = 0; i < adapter->num_rx_queues; i++)
+		/* this call also flushes the previous write */
+		txgbe_disable_rx_queue(adapter, adapter->rx_ring[i]);
+
+	netif_tx_stop_all_queues(netdev);
+
+	/* call carrier off first to avoid false dev_watchdog timeouts */
+	netif_carrier_off(netdev);
+	netif_tx_disable(netdev);
+
+	txgbe_irq_disable(adapter);
+
+	txgbe_napi_disable_all(adapter);
+
+	adapter->flags2 &= ~(TXGBE_FLAG2_PF_RESET_REQUESTED |
+			     TXGBE_FLAG2_GLOBAL_RESET_REQUESTED);
+	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+
+	del_timer_sync(&adapter->service_timer);
+
+	if (hw->bus.lan_id == 0)
+		wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN0_UP, 0);
+	else if (hw->bus.lan_id == 1)
+		wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN1_UP, 0);
+	else
+		txgbe_dev_err("%s: invalid bus lan id %d\n", __func__,
+			      hw->bus.lan_id);
+
+	if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+	      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+		/* disable mac transmitter */
+		wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, 0);
+	}
+	/* disable transmits in the hardware now that interrupts are off */
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		u8 reg_idx = adapter->tx_ring[i]->reg_idx;
+
+		wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH);
+	}
+
+	/* Disable the Tx DMA engine */
+	wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
+}
+
+void txgbe_down(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	txgbe_disable_device(adapter);
+	txgbe_reset(adapter);
+
+	if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) != TXGBE_NCSI_SUP)
+		/* power down the optics for SFP+ fiber */
+		TCALL(&adapter->hw, mac.ops.disable_tx_laser);
+
+	txgbe_clean_all_tx_rings(adapter);
+	txgbe_clean_all_rx_rings(adapter);
+}
+
+/**
+ *  txgbe_init_shared_code - Initialize the shared code
+ *  @hw: pointer to hardware structure
+ *
+ *  This will assign function pointers and assign the MAC type and PHY code.
+ **/
+s32 txgbe_init_shared_code(struct txgbe_hw *hw)
+{
+	s32 status;
+
+	status = txgbe_init_ops(hw);
+	return status;
+}
+
+static const u32 def_rss_key[10] = {
+	0xE291D73D, 0x1805EC6C, 0x2A94B30D,
+	0xA54F2BEC, 0xEA49AF7C, 0xE214AD3D, 0xB855AABE,
+	0x6A3E67EA, 0x14364D17, 0x3BED200D
+};
+
+/**
+ * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter)
+ * @adapter: board private structure to initialize
+ *
+ * txgbe_sw_init initializes the Adapter private data structure.
+ * Fields are initialized based on PCI device information and
+ * OS network device settings (MTU size).
+ **/
+
+static int txgbe_sw_init(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct pci_dev *pdev = adapter->pdev;
+	int err;
+	unsigned int rss;
+
+	/* PCI config space info */
+	hw->vendor_id = pdev->vendor;
+	hw->device_id = pdev->device;
+	pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
+	if (hw->revision_id == TXGBE_FAILED_READ_CFG_BYTE &&
+	    txgbe_check_cfg_remove(hw, pdev)) {
+		txgbe_err(probe, "read of revision id failed\n");
+		err = -ENODEV;
+		goto out;
+	}
+	hw->subsystem_vendor_id = pdev->subsystem_vendor;
+	hw->subsystem_device_id = pdev->subsystem_device;
+
+	pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &hw->subsystem_id);
+	if (hw->subsystem_id == TXGBE_FAILED_READ_CFG_WORD) {
+		txgbe_err(probe, "read of subsystem id failed\n");
+		err = -ENODEV;
+		goto out;
+	}
+
+	err = txgbe_init_shared_code(hw);
+	if (err) {
+		txgbe_err(probe, "init_shared_code failed: %d\n", err);
+		goto out;
+	}
+	adapter->mac_table = kcalloc(hw->mac.num_rar_entries,
+				     sizeof(*adapter->mac_table),
+				     GFP_ATOMIC);
+	if (!adapter->mac_table) {
+		err = TXGBE_ERR_OUT_OF_MEM;
+		txgbe_err(probe, "mac_table allocation failed: %d\n", err);
+		goto out;
+	}
+
+	memcpy(adapter->rss_key, def_rss_key, sizeof(def_rss_key));
+
+	/* Set common capability flags and settings */
+	rss = min_t(int, TXGBE_MAX_RSS_INDICES,
+		    num_online_cpus());
+	adapter->ring_feature[RING_F_RSS].limit = rss;
+	adapter->flags2 |= TXGBE_FLAG2_RSS_ENABLED;
+
+	/* enable itr by default in dynamic mode */
+	adapter->rx_itr_setting = 1;
+	adapter->tx_itr_setting = 1;
+
+	adapter->atr_sample_rate = 20;
+
+	adapter->flags |= TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE;
+	adapter->flags2 |= TXGBE_FLAG2_RSC_CAPABLE;
+
+	adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE;
+
+	/* set default ring sizes */
+	adapter->tx_ring_count = TXGBE_DEFAULT_TXD;
+	adapter->rx_ring_count = TXGBE_DEFAULT_RXD;
+
+	/* set default work limits */
+	adapter->tx_work_limit = TXGBE_DEFAULT_TX_WORK;
+	adapter->rx_work_limit = TXGBE_DEFAULT_RX_WORK;
+
+	set_bit(0, &adapter->fwd_bitmask);
+	set_bit(__TXGBE_DOWN, &adapter->state);
+out:
+	return err;
+}
+
+/**
+ * txgbe_setup_tx_resources - allocate Tx resources (Descriptors)
+ * @tx_ring:    tx descriptor ring (for a specific queue) to setup
+ *
+ * Return 0 on success, negative on failure
+ **/
+int txgbe_setup_tx_resources(struct txgbe_ring *tx_ring)
+{
+	struct device *dev = tx_ring->dev;
+	int orig_node = dev_to_node(dev);
+	int numa_node = -1;
+	int size;
+
+	size = sizeof(struct txgbe_tx_buffer) * tx_ring->count;
+
+	if (tx_ring->q_vector)
+		numa_node = tx_ring->q_vector->numa_node;
+
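+	/* prefer the queue vector's NUMA node, fall back to any node below */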
+	tx_ring->tx_buffer_info = vzalloc_node(size, numa_node);
+	if (!tx_ring->tx_buffer_info)
+		tx_ring->tx_buffer_info = vzalloc(size);
+	if (!tx_ring->tx_buffer_info)
+		goto err;
+
+	/* round up to nearest 4K */
+	tx_ring->size = tx_ring->count * sizeof(union txgbe_tx_desc);
+	tx_ring->size = ALIGN(tx_ring->size, 4096);
+
+	set_dev_node(dev, numa_node);
+	tx_ring->desc = dma_alloc_coherent(dev,
+					   tx_ring->size,
+					   &tx_ring->dma,
+					   GFP_KERNEL);
+	set_dev_node(dev, orig_node);
+	if (!tx_ring->desc)
+		tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+						   &tx_ring->dma, GFP_KERNEL);
+	if (!tx_ring->desc)
+		goto err;
+
+	return 0;
+
+err:
+	vfree(tx_ring->tx_buffer_info);
+	tx_ring->tx_buffer_info = NULL;
+	dev_err(dev, "Unable to allocate memory for the Tx descriptor ring\n");
+	return -ENOMEM;
+}
+
+/**
+ * txgbe_setup_all_tx_resources - allocate all queues Tx resources
+ * @adapter: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not).  It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_all_tx_resources(struct txgbe_adapter *adapter)
+{
+	int i, err = 0;
+
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		err = txgbe_setup_tx_resources(adapter->tx_ring[i]);
+		if (!err)
+			continue;
+
+		txgbe_err(probe, "Allocation for Tx Queue %u failed\n", i);
+		goto err_setup_tx;
+	}
+
+	return 0;
+err_setup_tx:
+	/* rewind the index freeing the rings as we go */
+	while (i--)
+		txgbe_free_tx_resources(adapter->tx_ring[i]);
+	return err;
+}
+
+/**
+ * txgbe_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring:    rx descriptor ring (for a specific queue) to setup
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int txgbe_setup_rx_resources(struct txgbe_ring *rx_ring)
+{
+	struct device *dev = rx_ring->dev;
+	int orig_node = dev_to_node(dev);
+	int numa_node = -1;
+	int size;
+
+	size = sizeof(struct txgbe_rx_buffer) * rx_ring->count;
+
+	if (rx_ring->q_vector)
+		numa_node = rx_ring->q_vector->numa_node;
+
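+	/* same NUMA-node-first allocation policy as the Tx ring setup */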
+	rx_ring->rx_buffer_info = vzalloc_node(size, numa_node);
+	if (!rx_ring->rx_buffer_info)
+		rx_ring->rx_buffer_info = vzalloc(size);
+	if (!rx_ring->rx_buffer_info)
+		goto err;
+
+	/* Round up to nearest 4K */
+	rx_ring->size = rx_ring->count * sizeof(union txgbe_rx_desc);
+	rx_ring->size = ALIGN(rx_ring->size, 4096);
+
+	set_dev_node(dev, numa_node);
+	rx_ring->desc = dma_alloc_coherent(dev,
+					   rx_ring->size,
+					   &rx_ring->dma,
+					   GFP_KERNEL);
+	set_dev_node(dev, orig_node);
+	if (!rx_ring->desc)
+		rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+						   &rx_ring->dma, GFP_KERNEL);
+	if (!rx_ring->desc)
+		goto err;
+
+	return 0;
+err:
+	vfree(rx_ring->rx_buffer_info);
+	rx_ring->rx_buffer_info = NULL;
+	dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n");
+	return -ENOMEM;
+}
+
+/**
+ * txgbe_setup_all_rx_resources - allocate all queues Rx resources
+ * @adapter: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not).  It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_all_rx_resources(struct txgbe_adapter *adapter)
+{
+	int i, err = 0;
+
+	for (i = 0; i < adapter->num_rx_queues; i++) {
+		err = txgbe_setup_rx_resources(adapter->rx_ring[i]);
+		if (!err)
+			continue;
+
+		txgbe_err(probe, "Allocation for Rx Queue %u failed\n", i);
+		goto err_setup_rx;
+	}
+
+	return 0;
+err_setup_rx:
+	/* rewind the index freeing the rings as we go */
+	while (i--)
+		txgbe_free_rx_resources(adapter->rx_ring[i]);
+	return err;
+}
+
+/**
+ * txgbe_setup_isb_resources - allocate interrupt status resources
+ * @adapter: board private structure
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_isb_resources(struct txgbe_adapter *adapter)
+{
+	struct device *dev = pci_dev_to_dev(adapter->pdev);
+
+	adapter->isb_mem = dma_alloc_coherent(dev,
+					      sizeof(u32) * TXGBE_ISB_MAX,
+					      &adapter->isb_dma,
+					      GFP_KERNEL);
+	if (!adapter->isb_mem)
+		return -ENOMEM;
+	memset(adapter->isb_mem, 0, sizeof(u32) * TXGBE_ISB_MAX);
+	return 0;
+}
+
+/**
+ * txgbe_free_isb_resources - free interrupt status resources
+ * @adapter: board private structure
+ **/
+static void txgbe_free_isb_resources(struct txgbe_adapter *adapter)
+{
+	struct device *dev = pci_dev_to_dev(adapter->pdev);
+
+	dma_free_coherent(dev, sizeof(u32) * TXGBE_ISB_MAX,
+			  adapter->isb_mem, adapter->isb_dma);
+	adapter->isb_mem = NULL;
+}
+
+/**
+ * txgbe_free_tx_resources - Free Tx Resources per Queue
+ * @tx_ring: Tx descriptor ring for a specific queue
+ *
+ * Free all transmit software resources
+ **/
+void txgbe_free_tx_resources(struct txgbe_ring *tx_ring)
+{
+	txgbe_clean_tx_ring(tx_ring);
+
+	vfree(tx_ring->tx_buffer_info);
+	tx_ring->tx_buffer_info = NULL;
+
+	/* if not set, then don't free */
+	if (!tx_ring->desc)
+		return;
+
+	dma_free_coherent(tx_ring->dev, tx_ring->size,
+			  tx_ring->desc, tx_ring->dma);
+	tx_ring->desc = NULL;
+}
+
+/**
+ * txgbe_free_all_tx_resources - Free Tx Resources for All Queues
+ * @adapter: board private structure
+ *
+ * Free all transmit software resources
+ **/
+static void txgbe_free_all_tx_resources(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_tx_queues; i++)
+		txgbe_free_tx_resources(adapter->tx_ring[i]);
+}
+
+/**
+ * txgbe_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+void txgbe_free_rx_resources(struct txgbe_ring *rx_ring)
+{
+	txgbe_clean_rx_ring(rx_ring);
+
+	vfree(rx_ring->rx_buffer_info);
+	rx_ring->rx_buffer_info = NULL;
+
+	/* if not set, then don't free */
+	if (!rx_ring->desc)
+		return;
+
+	dma_free_coherent(rx_ring->dev, rx_ring->size,
+			  rx_ring->desc, rx_ring->dma);
+
+	rx_ring->desc = NULL;
+}
+
+/**
+ * txgbe_free_all_rx_resources - Free Rx Resources for All Queues
+ * @adapter: board private structure
+ *
+ * Free all receive software resources
+ **/
+static void txgbe_free_all_rx_resources(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_rx_queues; i++)
+		txgbe_free_rx_resources(adapter->rx_ring[i]);
+}
+
+/**
+ * txgbe_change_mtu - Change the Maximum Transfer Unit
+ * @netdev: network interface device structure
+ * @new_mtu: new value for maximum frame size
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int txgbe_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
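+	/* 68 is the historical minimum IPv4 MTU; 9414 is assumed to be the
+	 * largest payload the hardware supports once Ethernet overhead is added
+	 */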
+	if (new_mtu < 68 || new_mtu > 9414)
+		return -EINVAL;
+
+	txgbe_info(probe, "changing MTU from %d to %d\n", netdev->mtu, new_mtu);
+
+	/* must set new MTU before calling down or up */
+	netdev->mtu = new_mtu;
+
+	if (netif_running(netdev))
+		txgbe_reinit_locked(adapter);
+
+	return 0;
+}
+
+/**
+ * txgbe_open - Called when a network interface is made active
+ * @netdev: network interface device structure
+ *
+ * Returns 0 on success, negative value on failure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP).  At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the watchdog timer is started,
+ * and the stack is notified that the interface is ready.
+ **/
+int txgbe_open(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	int err;
+
+	netif_carrier_off(netdev);
+
+	/* allocate transmit descriptors */
+	err = txgbe_setup_all_tx_resources(adapter);
+	if (err)
+		goto err_setup_tx;
+
+	/* allocate receive descriptors */
+	err = txgbe_setup_all_rx_resources(adapter);
+	if (err)
+		goto err_setup_rx;
+
+	err = txgbe_setup_isb_resources(adapter);
+	if (err)
+		goto err_req_isb;
+
+	txgbe_configure(adapter);
+
+	err = txgbe_request_irq(adapter);
+	if (err)
+		goto err_req_irq;
+
+	/* Notify the stack of the actual queue counts. */
+	err = netif_set_real_num_tx_queues(netdev, adapter->num_tx_queues);
+	if (err)
+		goto err_set_queues;
+
+	err = netif_set_real_num_rx_queues(netdev, adapter->num_rx_queues);
+	if (err)
+		goto err_set_queues;
+
+	txgbe_up_complete(adapter);
+
+	txgbe_clear_vxlan_port(adapter);
+	udp_tunnel_get_rx_info(netdev);
+
+	return 0;
+
+err_set_queues:
+	txgbe_free_irq(adapter);
+err_req_irq:
+	txgbe_free_isb_resources(adapter);
+err_req_isb:
+	txgbe_free_all_rx_resources(adapter);
+
+err_setup_rx:
+	txgbe_free_all_tx_resources(adapter);
+err_setup_tx:
+	txgbe_reset(adapter);
+
+	return err;
+}
+
+/**
+ * txgbe_close_suspend - actions necessary to both suspend and close flows
+ * @adapter: the private adapter struct
+ *
+ * This function should contain the necessary work common to both suspending
+ * and closing of the device.
+ */
+static void txgbe_close_suspend(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	txgbe_disable_device(adapter);
+	if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
+		TCALL(hw, mac.ops.disable_tx_laser);
+	txgbe_clean_all_tx_rings(adapter);
+	txgbe_clean_all_rx_rings(adapter);
+
+	txgbe_free_irq(adapter);
+
+	txgbe_free_isb_resources(adapter);
+	txgbe_free_all_rx_resources(adapter);
+	txgbe_free_all_tx_resources(adapter);
+}
+
+/**
+ * txgbe_close - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * Returns 0, this is not allowed to fail
+ *
+ * The close entry point is called when an interface is de-activated
+ * by the OS.  The hardware is still under the drivers control, but
+ * needs to be disabled.  A global MAC reset is issued to stop the
+ * hardware, and all transmit and receive resources are freed.
+ **/
+int txgbe_close(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	txgbe_down(adapter);
+	txgbe_free_irq(adapter);
+
+	txgbe_free_isb_resources(adapter);
+	txgbe_free_all_rx_resources(adapter);
+	txgbe_free_all_tx_resources(adapter);
+
+	txgbe_release_hw_control(adapter);
+
+	return 0;
+}
+
+static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
+{
+	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+	struct net_device *netdev = adapter->netdev;
+
+	/* wake-on-LAN is not wired up here; report no wake so shutdown
+	 * powers down cleanly
+	 */
+	*enable_wake = false;
+
+	netif_device_detach(netdev);
+
+	rtnl_lock();
+	if (netif_running(netdev))
+		txgbe_close_suspend(adapter);
+	rtnl_unlock();
+
+	txgbe_clear_interrupt_scheme(adapter);
+
+	txgbe_release_hw_control(adapter);
+
+	if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
+		pci_disable_device(pdev);
+
+	return 0;
+}
+
+static void txgbe_shutdown(struct pci_dev *pdev)
+{
+	bool wake;
+
+	__txgbe_shutdown(pdev, &wake);
+
+	if (system_state == SYSTEM_POWER_OFF) {
+		pci_wake_from_d3(pdev, wake);
+		pci_set_power_state(pdev, PCI_D3hot);
+	}
+}
+
+/**
+ * txgbe_get_stats64 - Get System Network Statistics
+ * @netdev: network interface device structure
+ * @stats: storage space for 64bit statistics
+ *
+ * Returns 64bit statistics, for use in the ndo_get_stats64 callback.
+ */
+static void txgbe_get_stats64(struct net_device *netdev,
+			      struct rtnl_link_stats64 *stats)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	int i;
+
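+	/* rings may be replaced during a reset, hence READ_ONCE() under RCU;
+	 * per-ring counters are sampled with the u64_stats retry loop
+	 */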
+	rcu_read_lock();
+	for (i = 0; i < adapter->num_rx_queues; i++) {
+		struct txgbe_ring *ring = READ_ONCE(adapter->rx_ring[i]);
+		u64 bytes, packets;
+		unsigned int start;
+
+		if (ring) {
+			do {
+				start = u64_stats_fetch_begin_irq(&ring->syncp);
+				packets = ring->stats.packets;
+				bytes   = ring->stats.bytes;
+			} while (u64_stats_fetch_retry_irq(&ring->syncp,
+				 start));
+			stats->rx_packets += packets;
+			stats->rx_bytes   += bytes;
+		}
+	}
+
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct txgbe_ring *ring = READ_ONCE(adapter->tx_ring[i]);
+		u64 bytes, packets;
+		unsigned int start;
+
+		if (ring) {
+			do {
+				start = u64_stats_fetch_begin_irq(&ring->syncp);
+				packets = ring->stats.packets;
+				bytes   = ring->stats.bytes;
+			} while (u64_stats_fetch_retry_irq(&ring->syncp,
+				 start));
+			stats->tx_packets += packets;
+			stats->tx_bytes   += bytes;
+		}
+	}
+	rcu_read_unlock();
+	/* following stats updated by txgbe_watchdog_subtask() */
+	stats->multicast        = netdev->stats.multicast;
+	stats->rx_errors        = netdev->stats.rx_errors;
+	stats->rx_length_errors = netdev->stats.rx_length_errors;
+	stats->rx_crc_errors    = netdev->stats.rx_crc_errors;
+	stats->rx_missed_errors = netdev->stats.rx_missed_errors;
+}
+
+/**
+ * txgbe_update_stats - Update the board statistics counters.
+ * @adapter: board private structure
+ **/
+void txgbe_update_stats(struct txgbe_adapter *adapter)
+{
+	struct net_device_stats *net_stats = &adapter->netdev->stats;
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_hw_stats *hwstats = &adapter->stats;
+	u64 total_mpc = 0;
+	u32 i, missed_rx = 0, mpc, bprc, lxon, lxoff;
+	u64 non_eop_descs = 0, restart_queue = 0, tx_busy = 0;
+	u64 alloc_rx_page_failed = 0, alloc_rx_buff_failed = 0;
+	u64 bytes = 0, packets = 0, hw_csum_rx_error = 0;
+	u64 hw_csum_rx_good = 0;
+
+	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+	    test_bit(__TXGBE_RESETTING, &adapter->state))
+		return;
+
+	if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) {
+		u64 rsc_count = 0;
+		u64 rsc_flush = 0;
+
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			rsc_count += adapter->rx_ring[i]->rx_stats.rsc_count;
+			rsc_flush += adapter->rx_ring[i]->rx_stats.rsc_flush;
+		}
+		adapter->rsc_total_count = rsc_count;
+		adapter->rsc_total_flush = rsc_flush;
+	}
+
+	for (i = 0; i < adapter->num_rx_queues; i++) {
+		struct txgbe_ring *rx_ring = adapter->rx_ring[i];
+
+		non_eop_descs += rx_ring->rx_stats.non_eop_descs;
+		alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed;
+		alloc_rx_buff_failed += rx_ring->rx_stats.alloc_rx_buff_failed;
+		hw_csum_rx_error += rx_ring->rx_stats.csum_err;
+		hw_csum_rx_good += rx_ring->rx_stats.csum_good_cnt;
+		bytes += rx_ring->stats.bytes;
+		packets += rx_ring->stats.packets;
+	}
+	adapter->non_eop_descs = non_eop_descs;
+	adapter->alloc_rx_page_failed = alloc_rx_page_failed;
+	adapter->alloc_rx_buff_failed = alloc_rx_buff_failed;
+	adapter->hw_csum_rx_error = hw_csum_rx_error;
+	adapter->hw_csum_rx_good = hw_csum_rx_good;
+	net_stats->rx_bytes = bytes;
+	net_stats->rx_packets = packets;
+
+	bytes = 0;
+	packets = 0;
+	/* gather some stats to the adapter struct that are per queue */
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct txgbe_ring *tx_ring = adapter->tx_ring[i];
+
+		restart_queue += tx_ring->tx_stats.restart_queue;
+		tx_busy += tx_ring->tx_stats.tx_busy;
+		bytes += tx_ring->stats.bytes;
+		packets += tx_ring->stats.packets;
+	}
+	adapter->restart_queue = restart_queue;
+	adapter->tx_busy = tx_busy;
+	net_stats->tx_bytes = bytes;
+	net_stats->tx_packets = packets;
+
+	hwstats->crcerrs += rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW);
+
+	/* 8 register reads */
+	for (i = 0; i < 8; i++) {
+		/* for packet buffers not used, the register should read 0 */
+		mpc = rd32(hw, TXGBE_RDB_MPCNT(i));
+		missed_rx += mpc;
+		hwstats->mpc[i] += mpc;
+		total_mpc += hwstats->mpc[i];
+		hwstats->pxontxc[i] += rd32(hw, TXGBE_RDB_PXONTXC(i));
+		hwstats->pxofftxc[i] +=
+				rd32(hw, TXGBE_RDB_PXOFFTXC(i));
+		hwstats->pxonrxc[i] += rd32(hw, TXGBE_MAC_PXONRXC(i));
+	}
+
+	hwstats->gprc += rd32(hw, TXGBE_PX_GPRC);
+
+	hwstats->o2bgptc += rd32(hw, TXGBE_TDM_OS2BMC_CNT);
+	if (txgbe_check_mng_access(&adapter->hw)) {
+		hwstats->o2bspc += rd32(hw, TXGBE_MNG_OS2BMC_CNT);
+		hwstats->b2ospc += rd32(hw, TXGBE_MNG_BMC2OS_CNT);
+	}
+	hwstats->b2ogprc += rd32(hw, TXGBE_RDM_BMC2OS_CNT);
+	hwstats->gorc += rd32(hw, TXGBE_PX_GORC_LSB);
+	hwstats->gorc += (u64)rd32(hw, TXGBE_PX_GORC_MSB) << 32;
+
+	hwstats->gotc += rd32(hw, TXGBE_PX_GOTC_LSB);
+	hwstats->gotc += (u64)rd32(hw, TXGBE_PX_GOTC_MSB) << 32;
+
+	adapter->hw_rx_no_dma_resources +=
+				     rd32(hw, TXGBE_RDM_DRP_PKT);
+	hwstats->lxonrxc += rd32(hw, TXGBE_MAC_LXONRXC);
+
+	bprc = rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW);
+	hwstats->bprc += bprc;
+	hwstats->mprc = 0;
+
+	for (i = 0; i < 128; i++)
+		hwstats->mprc += rd32(hw, TXGBE_PX_MPRC(i));
+
+	hwstats->roc += rd32(hw, TXGBE_RX_OVERSIZE_FRAMES_GOOD);
+	hwstats->rlec += rd32(hw, TXGBE_RX_LEN_ERROR_FRAMES_LOW);
+	lxon = rd32(hw, TXGBE_RDB_LXONTXC);
+	hwstats->lxontxc += lxon;
+	lxoff = rd32(hw, TXGBE_RDB_LXOFFTXC);
+	hwstats->lxofftxc += lxoff;
+
+	hwstats->gptc += rd32(hw, TXGBE_PX_GPTC);
+	hwstats->mptc += rd32(hw, TXGBE_TX_MC_FRAMES_GOOD_LOW);
+	hwstats->ruc += rd32(hw, TXGBE_RX_UNDERSIZE_FRAMES_GOOD);
+	hwstats->tpr += rd32(hw, TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW);
+	hwstats->bptc += rd32(hw, TXGBE_TX_BC_FRAMES_GOOD_LOW);
+	/* Fill out the OS statistics structure */
+	net_stats->multicast = hwstats->mprc;
+
+	/* Rx Errors */
+	net_stats->rx_errors = hwstats->crcerrs +
+				       hwstats->rlec;
+	net_stats->rx_dropped = 0;
+	net_stats->rx_length_errors = hwstats->rlec;
+	net_stats->rx_crc_errors = hwstats->crcerrs;
+	net_stats->rx_missed_errors = total_mpc;
+}
+
+/**
+ * txgbe_check_hang_subtask - check for hung queues and dropped interrupts
+ * @adapter: pointer to the device adapter structure
+ *
+ * This function serves two purposes.  First it strobes the interrupt lines
+ * in order to make certain interrupts are occurring.  Secondly it sets the
+ * bits needed to check for TX hangs.  As a result we should immediately
+ * determine if a hang has occurred.
+ */
+static void txgbe_check_hang_subtask(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	/* If we're down or resetting, just bail */
+	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+	    test_bit(__TXGBE_REMOVING, &adapter->state) ||
+	    test_bit(__TXGBE_RESETTING, &adapter->state))
+		return;
+
+	/* Force detection of hung controller */
+	if (netif_carrier_ok(adapter->netdev)) {
+		for (i = 0; i < adapter->num_tx_queues; i++)
+			set_check_for_tx_hang(adapter->tx_ring[i]);
+	}
+}
+
+/**
+ * txgbe_watchdog_update_link - update the link status
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 link_speed = adapter->link_speed;
+	bool link_up = adapter->link_up;
+	u32 reg;
+	u32 i;
+
+	if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE))
+		return;
+
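+	/* start from optimistic defaults; check_link overwrites them when
+	 * the MAC op is implemented
+	 */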
+	link_speed = TXGBE_LINK_SPEED_10GB_FULL;
+	link_up = true;
+	TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+
+	if (link_up || time_after(jiffies, adapter->link_check_timeout +
+				  TXGBE_TRY_LINK_TIMEOUT))
+		adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+
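+	/* re-read the link a few times so a transient glitch does not
+	 * leave a stale status behind
+	 */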
+	for (i = 0; i < 3; i++) {
+		TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+		msleep(1);
+	}
+
+	adapter->link_up = link_up;
+	adapter->link_speed = link_speed;
+
+	if (link_up) {
+		txgbe_set_rx_drop_en(adapter);
+
+		if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
+			     TXGBE_MAC_TX_CFG_SPEED_10G);
+		} else if (link_speed & (TXGBE_LINK_SPEED_1GB_FULL |
+			   TXGBE_LINK_SPEED_100_FULL | TXGBE_LINK_SPEED_10_FULL)) {
+			wr32(hw, TXGBE_MAC_TX_CFG,
+			     (rd32(hw, TXGBE_MAC_TX_CFG) &
+			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
+			     TXGBE_MAC_TX_CFG_SPEED_1G);
+		}
+
+		/* Reconfigure MAC Rx */
+		reg = rd32(hw, TXGBE_MAC_RX_CFG);
+		wr32(hw, TXGBE_MAC_RX_CFG, reg);
+		wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
+		reg = rd32(hw, TXGBE_MAC_WDG_TIMEOUT);
+		wr32(hw, TXGBE_MAC_WDG_TIMEOUT, reg);
+	}
+}
+
+/**
+ * txgbe_watchdog_link_is_up - update netif_carrier status and
+ *                             print link up message
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_link_is_up(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 link_speed = adapter->link_speed;
+	bool flow_rx, flow_tx;
+
+	/* only continue if link was previously down */
+	if (netif_carrier_ok(netdev))
+		return;
+
+	/* flow_rx, flow_tx report link flow control status */
+	flow_rx = (rd32(hw, TXGBE_MAC_RX_FLOW_CTRL) & 0x101) == 0x1;
+	flow_tx = !!(TXGBE_RDB_RFCC_RFCE_802_3X &
+		     rd32(hw, TXGBE_RDB_RFCC));
+
+	txgbe_info(drv, "NIC Link is Up %s, Flow Control: %s\n",
+		   (link_speed == TXGBE_LINK_SPEED_10GB_FULL ?
+		    "10 Gbps" :
+		    (link_speed == TXGBE_LINK_SPEED_1GB_FULL ?
+		     "1 Gbps" :
+		     (link_speed == TXGBE_LINK_SPEED_100_FULL ?
+		      "100 Mbps" :
+		      (link_speed == TXGBE_LINK_SPEED_10_FULL ?
+		       "10 Mbps" :
+		       "unknown speed")))),
+		  ((flow_rx && flow_tx) ? "RX/TX" :
+		   (flow_rx ? "RX" :
+		    (flow_tx ? "TX" : "None"))));
+
+	netif_carrier_on(netdev);
+	netif_tx_wake_all_queues(netdev);
+}
+
+/**
+ * txgbe_watchdog_link_is_down - update netif_carrier status and
+ *                               print link down message
+ * @adapter: pointer to the adapter structure
+ **/
+static void txgbe_watchdog_link_is_down(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+
+	adapter->link_up = false;
+	adapter->link_speed = 0;
+
+	/* only continue if link was up previously */
+	if (!netif_carrier_ok(netdev))
+		return;
+
+	txgbe_info(drv, "NIC Link is Down\n");
+	netif_carrier_off(netdev);
+	netif_tx_stop_all_queues(netdev);
+}
+
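+/**
+ * txgbe_ring_tx_pending - check whether any Tx ring still has work queued
+ * @adapter: pointer to the device adapter structure
+ **/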
+static bool txgbe_ring_tx_pending(struct txgbe_adapter *adapter)
+{
+	int i;
+
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct txgbe_ring *tx_ring = adapter->tx_ring[i];
+
+		if (tx_ring->next_to_use != tx_ring->next_to_clean)
+			return true;
+	}
+
+	return false;
+}
+
+/**
+ * txgbe_watchdog_flush_tx - flush queues on link down
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_flush_tx(struct txgbe_adapter *adapter)
+{
+	if (!netif_carrier_ok(adapter->netdev)) {
+		if (txgbe_ring_tx_pending(adapter)) {
+			/* We've lost link, so the controller stops DMA,
+			 * but we've got queued Tx work that's never going
+			 * to get done, so reset controller to flush Tx.
+			 * (Do the reset outside of interrupt context).
+			 */
+			txgbe_warn(drv,
+				   "initiating reset due to lost link with pending Tx work\n");
+			adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+		}
+	}
+}
+
+/**
+ * txgbe_watchdog_subtask - check and bring link up
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_subtask(struct txgbe_adapter *adapter)
+{
+	/* if interface is down do nothing */
+	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+	    test_bit(__TXGBE_REMOVING, &adapter->state) ||
+	    test_bit(__TXGBE_RESETTING, &adapter->state))
+		return;
+
+	txgbe_watchdog_update_link(adapter);
+
+	if (adapter->link_up)
+		txgbe_watchdog_link_is_up(adapter);
+	else
+		txgbe_watchdog_link_is_down(adapter);
+
+	txgbe_update_stats(adapter);
+
+	txgbe_watchdog_flush_tx(adapter);
+}
+
+/**
+ * txgbe_sfp_detection_subtask - poll for SFP+ cable
+ * @adapter: the txgbe adapter structure
+ **/
+static void txgbe_sfp_detection_subtask(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_mac_info *mac = &hw->mac;
+	s32 err;
+
+	/* not searching for SFP so there is nothing to do here */
+	if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
+		return;
+
+	if (adapter->sfp_poll_time &&
+	    time_after(adapter->sfp_poll_time, jiffies))
+		return; /* If not yet time to poll for SFP */
+
+	/* someone else is in init, wait until next service event */
+	if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+		return;
+
+	adapter->sfp_poll_time = jiffies + TXGBE_SFP_POLL_JIFFIES - 1;
+
+	err = TCALL(hw, phy.ops.identify_sfp);
+	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		goto sfp_out;
+
+	if (err == TXGBE_ERR_SFP_NOT_PRESENT) {
+		/* If no cable is present, then we need to reset
+		 * the next time we find a good cable.
+		 */
+		adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+	}
+
+	/* exit on error */
+	if (err)
+		goto sfp_out;
+
+	/* exit if reset not needed */
+	if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
+		goto sfp_out;
+
+	adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
+
+	if (hw->phy.multispeed_fiber) {
+		/* Set up dual speed SFP+ support */
+		mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
+		mac->ops.setup_mac_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+	} else {
+		mac->ops.setup_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+		hw->phy.autoneg_advertised = 0;
+	}
+
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+	txgbe_info(probe, "detected SFP+: %d\n", hw->phy.sfp_type);
+
+sfp_out:
+	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+
+	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED && adapter->netdev_registered)
+		txgbe_dev_err("failed to initialize because an unsupported SFP+ module type was detected.\n");
+}
+
+/**
+ * txgbe_sfp_link_config_subtask - set up link SFP after module install
+ * @adapter: the txgbe adapter structure
+ **/
+static void txgbe_sfp_link_config_subtask(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 speed;
+	bool autoneg = false;
+	u8 device_type = hw->subsystem_id & 0xF0;
+
+	if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_CONFIG))
+		return;
+
+	/* someone else is in init, wait until next service event */
+	if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+		return;
+
+	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+
+	if (device_type == TXGBE_ID_MAC_SGMII) {
+		speed = TXGBE_LINK_SPEED_1GB_FULL;
+	} else {
+		speed = hw->phy.autoneg_advertised;
+		if (!speed && hw->mac.ops.get_link_capabilities) {
+			TCALL(hw, mac.ops.get_link_capabilities, &speed, &autoneg);
+			/* set up the highest supported speed when autoneg is off */
+			if (!autoneg) {
+				if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+					speed = TXGBE_LINK_SPEED_10GB_FULL;
+			}
+		}
+	}
+
+	TCALL(hw, mac.ops.setup_link, speed, false);
+
+	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+	adapter->link_check_timeout = jiffies;
+	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+}
+
+/**
+ * txgbe_service_timer - timer callback
+ * @t: pointer to the service timer_list
+ **/
+static void txgbe_service_timer(struct timer_list *t)
+{
+	struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer);
+	unsigned long next_event_offset;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	/* poll faster when waiting for link */
+	if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
+		if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
+			next_event_offset = HZ;
+		else
+			next_event_offset = HZ / 10;
+	} else {
+		next_event_offset = HZ * 2;
+	}
+
+	/* Reset the timer */
+	mod_timer(&adapter->service_timer, next_event_offset + jiffies);
+
+	txgbe_service_event_schedule(adapter);
+}
+
+static void txgbe_reset_subtask(struct txgbe_adapter *adapter)
+{
+	u32 reset_flag = 0;
+	u32 value = 0;
+
+	if (!(adapter->flags2 & (TXGBE_FLAG2_PF_RESET_REQUESTED |
+				 TXGBE_FLAG2_GLOBAL_RESET_REQUESTED |
+				 TXGBE_FLAG2_RESET_INTR_RECEIVED)))
+		return;
+
+	/* If we're already down, just bail */
+	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+	    test_bit(__TXGBE_REMOVING, &adapter->state))
+		return;
+
+	netdev_err(adapter->netdev, "Reset adapter\n");
+	adapter->tx_timeout_count++;
+
+	rtnl_lock();
+	if (adapter->flags2 & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
+		reset_flag |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+		adapter->flags2 &= ~TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+	}
+	if (adapter->flags2 & TXGBE_FLAG2_PF_RESET_REQUESTED) {
+		reset_flag |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+		adapter->flags2 &= ~TXGBE_FLAG2_PF_RESET_REQUESTED;
+	}
+
+	if (adapter->flags2 & TXGBE_FLAG2_RESET_INTR_RECEIVED) {
+		/* If there's a recovery already waiting, it takes
+		 * precedence over starting a new reset sequence.
+		 */
+		adapter->flags2 &= ~TXGBE_FLAG2_RESET_INTR_RECEIVED;
+		value = rd32m(&adapter->hw, TXGBE_MIS_RST_ST,
+			      TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK) >>
+			TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT;
+		if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST) {
+			adapter->hw.reset_type = TXGBE_SW_RESET;
+			/* errata 7 */
+			if (txgbe_mng_present(&adapter->hw) &&
+			    adapter->hw.revision_id == TXGBE_SP_MPW)
+				adapter->flags2 |=
+					TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED;
+		} else if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST) {
+			adapter->hw.reset_type = TXGBE_GLOBAL_RESET;
+		}
+		adapter->hw.force_full_reset = true;
+		txgbe_reinit_locked(adapter);
+		adapter->hw.force_full_reset = false;
+		goto unlock;
+	}
+
+	if (reset_flag & TXGBE_FLAG2_PF_RESET_REQUESTED) {
+		txgbe_reinit_locked(adapter);
+	} else if (reset_flag & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
+		/* Request a Global Reset
+		 *
+		 * This will start the chip's countdown to the actual full
+		 * chip reset event, and a warning interrupt to be sent
+		 * to all PFs, including the requestor.  Our handler
+		 * for the warning interrupt will deal with the shutdown
+		 * and recovery of the switch setup.
+		 */
+		pci_save_state(adapter->pdev);
+		if (txgbe_mng_present(&adapter->hw))
+			txgbe_reset_hostif(&adapter->hw);
+		else
+			wr32m(&adapter->hw, TXGBE_MIS_RST,
+			      TXGBE_MIS_RST_GLOBAL_RST,
+			      TXGBE_MIS_RST_GLOBAL_RST);
+	}
+
+unlock:
+	rtnl_unlock();
+}
+
+/**
+ * txgbe_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void txgbe_service_task(struct work_struct *work)
+{
+	struct txgbe_adapter *adapter = container_of(work,
+						     struct txgbe_adapter,
+						     service_task);
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+			rtnl_lock();
+			txgbe_down(adapter);
+			rtnl_unlock();
+		}
+		txgbe_service_event_complete(adapter);
+		return;
+	}
+
+	txgbe_reset_subtask(adapter);
+	txgbe_sfp_detection_subtask(adapter);
+	txgbe_sfp_link_config_subtask(adapter);
+	txgbe_check_overtemp_subtask(adapter);
+	txgbe_watchdog_subtask(adapter);
+	txgbe_check_hang_subtask(adapter);
+
+	txgbe_service_event_complete(adapter);
+}
+
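+/* Walk the IPv6 extension header chain starting at @offset and return
+ * the upper-layer protocol (the last nexthdr reached).
+ */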
+static u8 get_ipv6_proto(struct sk_buff *skb, int offset)
+{
+	struct ipv6hdr *hdr = (struct ipv6hdr *)(skb->data + offset);
+	u8 nexthdr = hdr->nexthdr;
 
-	txgbe_up_complete(adapter);
+	offset += sizeof(struct ipv6hdr);
 
-	return 0;
+	while (ipv6_ext_hdr(nexthdr)) {
+		struct ipv6_opt_hdr _hdr, *hp;
 
-err_set_queues:
-	txgbe_free_irq(adapter);
-err_req_irq:
-	txgbe_free_isb_resources(adapter);
-err_req_isb:
-	return err;
-}
+		if (nexthdr == NEXTHDR_NONE)
+			break;
 
-/**
- * txgbe_close_suspend - actions necessary to both suspend and close flows
- * @adapter: the private adapter struct
- *
- * This function should contain the necessary work common to both suspending
- * and closing of the device.
- */
-static void txgbe_close_suspend(struct txgbe_adapter *adapter)
-{
-	struct txgbe_hw *hw = &adapter->hw;
+		hp = skb_header_pointer(skb, offset, sizeof(_hdr), &_hdr);
+		if (!hp)
+			break;
 
-	txgbe_disable_device(adapter);
-	if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
-		TCALL(hw, mac.ops.disable_tx_laser);
-	txgbe_free_irq(adapter);
+		if (nexthdr == NEXTHDR_FRAGMENT)
+			break;
+		else if (nexthdr == NEXTHDR_AUTH)
+			offset += ipv6_authlen(hp);
+		else
+			offset += ipv6_optlen(hp);
 
-	txgbe_free_isb_resources(adapter);
+		nexthdr = hp->nexthdr;
+	}
+
+	return nexthdr;
 }
 
-/**
- * txgbe_close - Disables a network interface
- * @netdev: network interface device structure
- *
- * Returns 0, this is not allowed to fail
- *
- * The close entry point is called when an interface is de-activated
- * by the OS.  The hardware is still under the drivers control, but
- * needs to be disabled.  A global MAC reset is issued to stop the
- * hardware, and all transmit and receive resources are freed.
- **/
-int txgbe_close(struct net_device *netdev)
+union network_header {
+	struct iphdr *ipv4;
+	struct ipv6hdr *ipv6;
+	void *raw;
+};
+
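+/* Derive the hardware packet-type (ptype) encoding for the Tx descriptor
+ * from the skb headers, handling both plain and encapsulated (IP-in-IP,
+ * UDP and GRE tunnel) frames.
+ */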
+static struct txgbe_dptype encode_tx_desc_ptype(const struct txgbe_tx_buffer *first)
 {
-	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct sk_buff *skb = first->skb;
+	u8 tun_prot = 0;
+	u8 l4_prot = 0;
+	u8 ptype = 0;
+
+	if (skb->encapsulation) {
+		union network_header hdr;
+
+		switch (first->protocol) {
+		case htons(ETH_P_IP):
+			tun_prot = ip_hdr(skb)->protocol;
+			if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET))
+				goto encap_frag;
+			ptype = TXGBE_PTYPE_TUN_IPV4;
+			break;
+		case htons(ETH_P_IPV6):
+			tun_prot = get_ipv6_proto(skb, skb_network_offset(skb));
+			if (tun_prot == NEXTHDR_FRAGMENT)
+				goto encap_frag;
+			ptype = TXGBE_PTYPE_TUN_IPV6;
+			break;
+		default:
+			goto exit;
+		}
 
-	txgbe_down(adapter);
-	txgbe_free_irq(adapter);
+		if (tun_prot == IPPROTO_IPIP) {
+			hdr.raw = (void *)inner_ip_hdr(skb);
+			ptype |= TXGBE_PTYPE_PKT_IPIP;
+		} else if (tun_prot == IPPROTO_UDP) {
+			hdr.raw = (void *)inner_ip_hdr(skb);
+			if (skb->inner_protocol_type != ENCAP_TYPE_ETHER ||
+			    skb->inner_protocol != htons(ETH_P_TEB)) {
+				ptype |= TXGBE_PTYPE_PKT_IG;
+			} else {
+				if (((struct ethhdr *)
+					skb_inner_mac_header(skb))->h_proto
+					== htons(ETH_P_8021Q)) {
+					ptype |= TXGBE_PTYPE_PKT_IGMV;
+				} else {
+					ptype |= TXGBE_PTYPE_PKT_IGM;
+				}
+			}
 
-	txgbe_free_isb_resources(adapter);
+		} else if (tun_prot == IPPROTO_GRE) {
+			hdr.raw = (void *)inner_ip_hdr(skb);
+			if (skb->inner_protocol == htons(ETH_P_IP) ||
+			    skb->inner_protocol == htons(ETH_P_IPV6)) {
+				ptype |= TXGBE_PTYPE_PKT_IG;
+			} else {
+				if (((struct ethhdr *)
+					skb_inner_mac_header(skb))->h_proto
+					== htons(ETH_P_8021Q)) {
+					ptype |= TXGBE_PTYPE_PKT_IGMV;
+				} else {
+					ptype |= TXGBE_PTYPE_PKT_IGM;
+				}
+			}
+		} else {
+			goto exit;
+		}
 
-	txgbe_release_hw_control(adapter);
+		switch (hdr.ipv4->version) {
+		case IPVERSION:
+			l4_prot = hdr.ipv4->protocol;
+			if (hdr.ipv4->frag_off & htons(IP_MF | IP_OFFSET)) {
+				ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+				goto exit;
+			}
+			break;
+		case 6:
+			l4_prot = get_ipv6_proto(skb,
+						 skb_inner_network_offset(skb));
+			ptype |= TXGBE_PTYPE_PKT_IPV6;
+			if (l4_prot == NEXTHDR_FRAGMENT) {
+				ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+				goto exit;
+			}
+			break;
+		default:
+			goto exit;
+		}
+	} else {
+encap_frag:
+
+		switch (first->protocol) {
+		case htons(ETH_P_IP):
+			l4_prot = ip_hdr(skb)->protocol;
+			ptype = TXGBE_PTYPE_PKT_IP;
+			if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) {
+				ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+				goto exit;
+			}
+			break;
+#ifdef NETIF_F_IPV6_CSUM
+		case htons(ETH_P_IPV6):
+			l4_prot = get_ipv6_proto(skb, skb_network_offset(skb));
+			ptype = TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6;
+			if (l4_prot == NEXTHDR_FRAGMENT) {
+				ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+				goto exit;
+			}
+			break;
+#endif /* NETIF_F_IPV6_CSUM */
+		case htons(ETH_P_1588):
+			ptype = TXGBE_PTYPE_L2_TS;
+			goto exit;
+		case htons(ETH_P_FIP):
+			ptype = TXGBE_PTYPE_L2_FIP;
+			goto exit;
+		case htons(ETH_P_LLDP):
+			ptype = TXGBE_PTYPE_L2_LLDP;
+			goto exit;
+		case htons(TXGBE_ETH_P_CNM):
+			ptype = TXGBE_PTYPE_L2_CNM;
+			goto exit;
+		case htons(ETH_P_PAE):
+			ptype = TXGBE_PTYPE_L2_EAPOL;
+			goto exit;
+		case htons(ETH_P_ARP):
+			ptype = TXGBE_PTYPE_L2_ARP;
+			goto exit;
+		default:
+			ptype = TXGBE_PTYPE_L2_MAC;
+			goto exit;
+		}
+	}
 
-	return 0;
+	switch (l4_prot) {
+	case IPPROTO_TCP:
+		ptype |= TXGBE_PTYPE_TYP_TCP;
+		break;
+	case IPPROTO_UDP:
+		ptype |= TXGBE_PTYPE_TYP_UDP;
+		break;
+	case IPPROTO_SCTP:
+		ptype |= TXGBE_PTYPE_TYP_SCTP;
+		break;
+	default:
+		ptype |= TXGBE_PTYPE_TYP_IP;
+		break;
+	}
+
+exit:
+	return txgbe_decode_ptype(ptype);
 }
 
-static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
+static int txgbe_tso(struct txgbe_ring *tx_ring,
+		     struct txgbe_tx_buffer *first,
+		     u8 *hdr_len, struct txgbe_dptype dptype)
 {
-	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
-	struct net_device *netdev = adapter->netdev;
+	struct sk_buff *skb = first->skb;
+	u32 vlan_macip_lens, type_tucmd;
+	u32 mss_l4len_idx, l4len;
+	struct tcphdr *tcph;
+	struct iphdr *iph;
+	u32 tunhdr_eiplen_tunlen = 0;
 
-	netif_device_detach(netdev);
+	u8 tun_prot = 0;
+	bool enc = skb->encapsulation;
 
-	rtnl_lock();
-	if (netif_running(netdev))
-		txgbe_close_suspend(adapter);
-	rtnl_unlock();
+	struct ipv6hdr *ipv6h;
 
-	txgbe_clear_interrupt_scheme(adapter);
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
 
-	txgbe_release_hw_control(adapter);
+	if (!skb_is_gso(skb))
+		return 0;
 
-	if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
-		pci_disable_device(pdev);
+	if (skb_header_cloned(skb)) {
+		int err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
 
-	return 0;
-}
+		if (err)
+			return err;
+	}
 
-static void txgbe_shutdown(struct pci_dev *pdev)
-{
-	bool wake;
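+	/* use the inner headers for encapsulated TSO, the outer otherwise */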
+	iph = enc ? inner_ip_hdr(skb) : ip_hdr(skb);
+
+	if (iph->version == 4) {
+		tcph = enc ? inner_tcp_hdr(skb) : tcp_hdr(skb);
+		iph->tot_len = 0;
+		iph->check = 0;
+		tcph->check = ~csum_tcpudp_magic(iph->saddr,
+						iph->daddr, 0,
+						IPPROTO_TCP,
+						0);
+		first->tx_flags |= TXGBE_TX_FLAGS_TSO |
+				   TXGBE_TX_FLAGS_CSUM |
+				   TXGBE_TX_FLAGS_IPV4 |
+				   TXGBE_TX_FLAGS_CC;
+	} else if (iph->version == 6 && skb_is_gso_v6(skb)) {
+		ipv6h = enc ? inner_ipv6_hdr(skb) : ipv6_hdr(skb);
+		tcph = enc ? inner_tcp_hdr(skb) : tcp_hdr(skb);
+		ipv6h->payload_len = 0;
+		tcph->check = ~csum_ipv6_magic(&ipv6h->saddr,
+					       &ipv6h->daddr,
+					       0, IPPROTO_TCP, 0);
+		first->tx_flags |= TXGBE_TX_FLAGS_TSO |
+				   TXGBE_TX_FLAGS_CSUM |
+				   TXGBE_TX_FLAGS_CC;
+	}
 
-	__txgbe_shutdown(pdev, &wake);
+	/* compute header lengths */
+	l4len = enc ? inner_tcp_hdrlen(skb) : tcp_hdrlen(skb);
+	*hdr_len = enc ? (skb_inner_transport_header(skb) - skb->data)
+		       : skb_transport_offset(skb);
+	*hdr_len += l4len;
+
+	/* update gso size and bytecount with header size */
+	first->gso_segs = skb_shinfo(skb)->gso_segs;
+	first->bytecount += (first->gso_segs - 1) * *hdr_len;
+
+	/* mss_l4len_id: use 0 as index for TSO */
+	mss_l4len_idx = l4len << TXGBE_TXD_L4LEN_SHIFT;
+	mss_l4len_idx |= skb_shinfo(skb)->gso_size << TXGBE_TXD_MSS_SHIFT;
+
+	/* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
+
+	if (enc) {
+		switch (first->protocol) {
+		case htons(ETH_P_IP):
+			tun_prot = ip_hdr(skb)->protocol;
+			first->tx_flags |= TXGBE_TX_FLAGS_OUTER_IPV4;
+			break;
+		case htons(ETH_P_IPV6):
+			tun_prot = ipv6_hdr(skb)->nexthdr;
+			break;
+		default:
+			break;
+		}
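+		/* tunhdr_eiplen_tunlen packs the tunnel type together with
+		 * the outer IP header length (in dwords) and the tunnel
+		 * header length (in words) for the context descriptor
+		 */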
+		switch (tun_prot) {
+		case IPPROTO_UDP:
+			tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_UDP;
+			tunhdr_eiplen_tunlen |=
+					((skb_network_header_len(skb) >> 2) <<
+					 TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+					(((skb_inner_mac_header(skb) -
+					   skb_transport_header(skb)) >> 1) <<
+					 TXGBE_TXD_TUNNEL_LEN_SHIFT);
+			break;
+		case IPPROTO_GRE:
+			tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_GRE;
+			tunhdr_eiplen_tunlen |=
+					((skb_network_header_len(skb) >> 2) <<
+					 TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+					(((skb_inner_mac_header(skb) -
+					   skb_transport_header(skb)) >> 1) <<
+					 TXGBE_TXD_TUNNEL_LEN_SHIFT);
+			break;
+		case IPPROTO_IPIP:
+			tunhdr_eiplen_tunlen = (((char *)inner_ip_hdr(skb) -
+						 (char *)ip_hdr(skb)) >> 2) <<
+						TXGBE_TXD_OUTER_IPLEN_SHIFT;
+			break;
+		default:
+			break;
+		}
 
-	if (system_state == SYSTEM_POWER_OFF) {
-		pci_wake_from_d3(pdev, wake);
-		pci_set_power_state(pdev, PCI_D3hot);
+		vlan_macip_lens = skb_inner_network_header_len(skb) >> 1;
+	} else {
+		vlan_macip_lens = skb_network_header_len(skb) >> 1;
 	}
+
+	vlan_macip_lens |= skb_network_offset(skb) << TXGBE_TXD_MACLEN_SHIFT;
+	vlan_macip_lens |= first->tx_flags & TXGBE_TX_FLAGS_VLAN_MASK;
+
+	type_tucmd = dptype.ptype << 24;
+	txgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, tunhdr_eiplen_tunlen,
+			  type_tucmd, mss_l4len_idx);
+
+	return 1;
 }
 
-/**
- * txgbe_watchdog_update_link - update the link status
- * @adapter - pointer to the device adapter structure
- * @link_speed - pointer to a u32 to store the link_speed
- **/
-static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter)
+static void txgbe_tx_csum(struct txgbe_ring *tx_ring,
+			  struct txgbe_tx_buffer *first,
+			  struct txgbe_dptype dptype)
 {
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 link_speed = adapter->link_speed;
-	bool link_up = adapter->link_up;
-	u32 reg;
-	u32 i = 1;
+	struct sk_buff *skb = first->skb;
+	u32 vlan_macip_lens = 0;
+	u32 mss_l4len_idx = 0;
+	u32 tunhdr_eiplen_tunlen = 0;
 
-	if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE))
-		return;
+	u8 tun_prot = 0;
 
-	link_speed = TXGBE_LINK_SPEED_10GB_FULL;
-	link_up = true;
-	TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+	u32 type_tucmd;
 
-	if (link_up || time_after(jiffies, (adapter->link_check_timeout +
-		TXGBE_TRY_LINK_TIMEOUT))) {
-		adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
-	}
+	if (skb->ip_summed != CHECKSUM_PARTIAL) {
+		if (!(first->tx_flags & TXGBE_TX_FLAGS_HW_VLAN) &&
+		    !(first->tx_flags & TXGBE_TX_FLAGS_CC))
+			return;
+		vlan_macip_lens = skb_network_offset(skb) <<
+				  TXGBE_TXD_MACLEN_SHIFT;
+	} else {
+		u8 l4_prot = 0;
+
+		union {
+			struct iphdr *ipv4;
+			struct ipv6hdr *ipv6;
+			u8 *raw;
+		} network_hdr;
+		union {
+			struct tcphdr *tcphdr;
+			u8 *raw;
+		} transport_hdr;
+
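+		/* point at the headers to be checksummed: the inner ones
+		 * for encapsulated frames, the outer ones otherwise
+		 */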
+		if (skb->encapsulation) {
+			network_hdr.raw = skb_inner_network_header(skb);
+			transport_hdr.raw = skb_inner_transport_header(skb);
+			vlan_macip_lens = skb_network_offset(skb) <<
+					  TXGBE_TXD_MACLEN_SHIFT;
+			switch (first->protocol) {
+			case htons(ETH_P_IP):
+				tun_prot = ip_hdr(skb)->protocol;
+				break;
+			case htons(ETH_P_IPV6):
+				tun_prot = ipv6_hdr(skb)->nexthdr;
+				break;
+			default:
+				if (unlikely(net_ratelimit())) {
+					dev_warn(tx_ring->dev,
+						 "partial checksum but version=%d\n",
+						 network_hdr.ipv4->version);
+				}
+				return;
+			}
+			switch (tun_prot) {
+			case IPPROTO_UDP:
+				tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_UDP;
+				tunhdr_eiplen_tunlen |=
+					((skb_network_header_len(skb) >> 2) <<
+					TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+					(((skb_inner_mac_header(skb) -
+					skb_transport_header(skb)) >> 1) <<
+					TXGBE_TXD_TUNNEL_LEN_SHIFT);
+				break;
+			case IPPROTO_GRE:
+				tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_GRE;
+				tunhdr_eiplen_tunlen |=
+					((skb_network_header_len(skb) >> 2) <<
+					TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+					(((skb_inner_mac_header(skb) -
+					skb_transport_header(skb)) >> 1) <<
+					TXGBE_TXD_TUNNEL_LEN_SHIFT);
+				break;
+			case IPPROTO_IPIP:
+				tunhdr_eiplen_tunlen =
+					(((char *)inner_ip_hdr(skb) -
+					(char *)ip_hdr(skb)) >> 2) <<
+					TXGBE_TXD_OUTER_IPLEN_SHIFT;
+				break;
+			default:
+				break;
+			}
 
-	for (i = 0; i < 3; i++) {
-		TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
-		msleep(1);
-	}
+		} else {
+			network_hdr.raw = skb_network_header(skb);
+			transport_hdr.raw = skb_transport_header(skb);
+			vlan_macip_lens = skb_network_offset(skb) <<
+					  TXGBE_TXD_MACLEN_SHIFT;
+		}
 
-	adapter->link_up = link_up;
-	adapter->link_speed = link_speed;
+		switch (network_hdr.ipv4->version) {
+		case IPVERSION:
+			vlan_macip_lens |=
+				(transport_hdr.raw - network_hdr.raw) >> 1;
+			l4_prot = network_hdr.ipv4->protocol;
+			break;
+		case 6:
+			vlan_macip_lens |=
+				(transport_hdr.raw - network_hdr.raw) >> 1;
+			l4_prot = network_hdr.ipv6->nexthdr;
+			break;
+		default:
+			break;
+		}
 
-	if (link_up) {
-		if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) {
-			wr32(hw, TXGBE_MAC_TX_CFG,
-			     (rd32(hw, TXGBE_MAC_TX_CFG) &
-			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
-			     TXGBE_MAC_TX_CFG_SPEED_10G);
-		} else if (link_speed & (TXGBE_LINK_SPEED_1GB_FULL |
-			   TXGBE_LINK_SPEED_100_FULL | TXGBE_LINK_SPEED_10_FULL)) {
-			wr32(hw, TXGBE_MAC_TX_CFG,
-			     (rd32(hw, TXGBE_MAC_TX_CFG) &
-			      ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
-			     TXGBE_MAC_TX_CFG_SPEED_1G);
+		switch (l4_prot) {
+		case IPPROTO_TCP:
+			mss_l4len_idx = (transport_hdr.tcphdr->doff * 4) <<
+					TXGBE_TXD_L4LEN_SHIFT;
+			break;
+		case IPPROTO_SCTP:
+			mss_l4len_idx = sizeof(struct sctphdr) <<
+					TXGBE_TXD_L4LEN_SHIFT;
+			break;
+		case IPPROTO_UDP:
+			mss_l4len_idx = sizeof(struct udphdr) <<
+					TXGBE_TXD_L4LEN_SHIFT;
+			break;
+		default:
+			break;
 		}
 
-		/* Re configure MAC RX */
-		reg = rd32(hw, TXGBE_MAC_RX_CFG);
-		wr32(hw, TXGBE_MAC_RX_CFG, reg);
-		wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
-		reg = rd32(hw, TXGBE_MAC_WDG_TIMEOUT);
-		wr32(hw, TXGBE_MAC_WDG_TIMEOUT, reg);
+		/* update TX checksum flag */
+		first->tx_flags |= TXGBE_TX_FLAGS_CSUM;
 	}
+	first->tx_flags |= TXGBE_TX_FLAGS_CC;
+	/* vlan_macip_lens: MACLEN, VLAN tag */
+	vlan_macip_lens |= first->tx_flags & TXGBE_TX_FLAGS_VLAN_MASK;
+
+	type_tucmd = dptype.ptype << 24;
+	txgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, tunhdr_eiplen_tunlen,
+			  type_tucmd, mss_l4len_idx);
 }
 
-/**
- * txgbe_watchdog_link_is_up - update netif_carrier status and
- *                             print link up message
- * @adapter - pointer to the device adapter structure
- **/
-static void txgbe_watchdog_link_is_up(struct txgbe_adapter *adapter)
+static u32 txgbe_tx_cmd_type(u32 tx_flags)
 {
-	struct net_device *netdev = adapter->netdev;
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 link_speed = adapter->link_speed;
-	bool flow_rx, flow_tx;
+	/* set type for advanced descriptor with frame checksum insertion */
+	u32 cmd_type = TXGBE_TXD_DTYP_DATA |
+		       TXGBE_TXD_IFCS;
 
-	/* only continue if link was previously down */
-	if (netif_carrier_ok(netdev))
-		return;
+	/* set HW vlan bit if vlan is present */
+	cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_HW_VLAN,
+				   TXGBE_TXD_VLE);
 
-	/* flow_rx, flow_tx report link flow control status */
-	flow_rx = (rd32(hw, TXGBE_MAC_RX_FLOW_CTRL) & 0x101) == 0x1;
-	flow_tx = !!(TXGBE_RDB_RFCC_RFCE_802_3X &
-		     rd32(hw, TXGBE_RDB_RFCC));
+	/* set segmentation enable bits for TSO/FSO */
+	cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSO,
+				   TXGBE_TXD_TSE);
 
-	txgbe_info(drv, "NIC Link is Up %s, Flow Control: %s\n",
-		   (link_speed == TXGBE_LINK_SPEED_10GB_FULL ?
-		    "10 Gbps" :
-		    (link_speed == TXGBE_LINK_SPEED_1GB_FULL ?
-		     "1 Gbps" :
-		     (link_speed == TXGBE_LINK_SPEED_100_FULL ?
-		      "100 Mbps" :
-		      (link_speed == TXGBE_LINK_SPEED_10_FULL ?
-		       "10 Mbps" :
-		       "unknown speed")))),
-		  ((flow_rx && flow_tx) ? "RX/TX" :
-		   (flow_rx ? "RX" :
-		    (flow_tx ? "TX" : "None"))));
+	cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_LINKSEC,
+				   TXGBE_TXD_LINKSEC);
 
-	netif_carrier_on(netdev);
+	return cmd_type;
 }
 
-/**
- * txgbe_watchdog_link_is_down - update netif_carrier status and
- *                               print link down message
- * @adapter - pointer to the adapter structure
- **/
-static void txgbe_watchdog_link_is_down(struct txgbe_adapter *adapter)
+static void txgbe_tx_olinfo_status(union txgbe_tx_desc *tx_desc,
+				   u32 tx_flags, unsigned int paylen)
 {
-	struct net_device *netdev = adapter->netdev;
-
-	adapter->link_up = false;
-	adapter->link_speed = 0;
+	u32 olinfo_status = paylen << TXGBE_TXD_PAYLEN_SHIFT;
+
+	/* enable L4 checksum for TSO and TX checksum offload */
+	olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+					TXGBE_TX_FLAGS_CSUM,
+					TXGBE_TXD_L4CS);
+
+	/* enable IPv4 checksum for TSO */
+	olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+					TXGBE_TX_FLAGS_IPV4,
+					TXGBE_TXD_IIPCS);
+	/* enable outer IPv4 checksum for TSO */
+	olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+					TXGBE_TX_FLAGS_OUTER_IPV4,
+					TXGBE_TXD_EIPCS);
+	/* Check Context must be set if Tx switch is enabled, which it
+	 * always is for the case where virtual functions are running
+	 */
+	olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+					TXGBE_TX_FLAGS_CC,
+					TXGBE_TXD_CC);
 
-	/* only continue if link was up previously */
-	if (!netif_carrier_ok(netdev))
-		return;
+	olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+					TXGBE_TX_FLAGS_IPSEC,
+					TXGBE_TXD_IPSEC);
 
-	txgbe_info(drv, "NIC Link is Down\n");
-	netif_carrier_off(netdev);
+	tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
 }
 
-/**
- * txgbe_watchdog_subtask - check and bring link up
- * @adapter - pointer to the device adapter structure
- **/
-static void txgbe_watchdog_subtask(struct txgbe_adapter *adapter)
+static int __txgbe_maybe_stop_tx(struct txgbe_ring *tx_ring, u16 size)
 {
-	/* if interface is down do nothing */
-	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
-	    test_bit(__TXGBE_REMOVING, &adapter->state) ||
-	    test_bit(__TXGBE_RESETTING, &adapter->state))
-		return;
+	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
 
-	txgbe_watchdog_update_link(adapter);
+	/* Herbert's original patch had:
+	 *  smp_mb__after_netif_stop_queue();
+	 * but since that doesn't exist yet, just open code it.
+	 */
+	smp_mb();
 
-	if (adapter->link_up)
-		txgbe_watchdog_link_is_up(adapter);
-	else
-		txgbe_watchdog_link_is_down(adapter);
+	/* We need to check again in a case another CPU has just
+	 * made room available.
+	 */
+	if (likely(txgbe_desc_unused(tx_ring) < size))
+		return -EBUSY;
+
+	/* A reprieve! - use start_queue because it doesn't call schedule */
+	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+	++tx_ring->tx_stats.restart_queue;
+	return 0;
 }
 
-/**
- * txgbe_sfp_detection_subtask - poll for SFP+ cable
- * @adapter - the txgbe adapter structure
- **/
-static void txgbe_sfp_detection_subtask(struct txgbe_adapter *adapter)
+static inline int txgbe_maybe_stop_tx(struct txgbe_ring *tx_ring, u16 size)
 {
-	struct txgbe_hw *hw = &adapter->hw;
-	struct txgbe_mac_info *mac = &hw->mac;
-	s32 err;
+	if (likely(txgbe_desc_unused(tx_ring) >= size))
+		return 0;
 
-	/* not searching for SFP so there is nothing to do here */
-	if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
-		return;
+	return __txgbe_maybe_stop_tx(tx_ring, size);
+}
 
-	if (adapter->sfp_poll_time &&
-	    time_after(adapter->sfp_poll_time, jiffies))
-		return; /* If not yet time to poll for SFP */
+#define TXGBE_TXD_CMD (TXGBE_TXD_EOP | \
+		       TXGBE_TXD_RS)
 
-	/* someone else is in init, wait until next service event */
-	if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
-		return;
+static int txgbe_tx_map(struct txgbe_ring *tx_ring,
+			struct txgbe_tx_buffer *first,
+			const u8 hdr_len)
+{
+	struct sk_buff *skb = first->skb;
+	struct txgbe_tx_buffer *tx_buffer;
+	union txgbe_tx_desc *tx_desc;
+	skb_frag_t *frag;
+	dma_addr_t dma;
+	unsigned int data_len, size;
+	u32 tx_flags = first->tx_flags;
+	u32 cmd_type = txgbe_tx_cmd_type(tx_flags);
+	u16 i = tx_ring->next_to_use;
 
-	adapter->sfp_poll_time = jiffies + TXGBE_SFP_POLL_JIFFIES - 1;
+	tx_desc = TXGBE_TX_DESC(tx_ring, i);
 
-	err = TCALL(hw, phy.ops.identify_sfp);
-	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED)
-		goto sfp_out;
+	txgbe_tx_olinfo_status(tx_desc, tx_flags, skb->len - hdr_len);
 
-	if (err == TXGBE_ERR_SFP_NOT_PRESENT) {
-		/* If no cable is present, then we need to reset
-		 * the next time we find a good cable.
-		 */
-		adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
-	}
+	size = skb_headlen(skb);
+	data_len = skb->data_len;
+
+	dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);
+
+	tx_buffer = first;
+
+	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+		if (dma_mapping_error(tx_ring->dev, dma))
+			goto dma_error;
+
+		/* record length, and DMA address */
+		dma_unmap_len_set(tx_buffer, len, size);
+		dma_unmap_addr_set(tx_buffer, dma, dma);
+
+		tx_desc->read.buffer_addr = cpu_to_le64(dma);
+
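+		/* a buffer larger than the per-descriptor limit is split
+		 * across multiple descriptors
+		 */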
+		while (unlikely(size > TXGBE_MAX_DATA_PER_TXD)) {
+			tx_desc->read.cmd_type_len =
+				cpu_to_le32(cmd_type ^ TXGBE_MAX_DATA_PER_TXD);
+
+			i++;
+			tx_desc++;
+			if (i == tx_ring->count) {
+				tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+				i = 0;
+			}
+			tx_desc->read.olinfo_status = 0;
+
+			dma += TXGBE_MAX_DATA_PER_TXD;
+			size -= TXGBE_MAX_DATA_PER_TXD;
+
+			tx_desc->read.buffer_addr = cpu_to_le64(dma);
+		}
+
+		if (likely(!data_len))
+			break;
 
-	/* exit on error */
-	if (err)
-		goto sfp_out;
+		tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type ^ size);
 
-	/* exit if reset not needed */
-	if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
-		goto sfp_out;
+		i++;
+		tx_desc++;
+		if (i == tx_ring->count) {
+			tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+			i = 0;
+		}
+		tx_desc->read.olinfo_status = 0;
 
-	adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
+		size = skb_frag_size(frag);
 
-	if (hw->phy.multispeed_fiber) {
-		/* Set up dual speed SFP+ support */
-		mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
-		mac->ops.setup_mac_link = txgbe_setup_mac_link;
-		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
-	} else {
-		mac->ops.setup_link = txgbe_setup_mac_link;
-		mac->ops.set_rate_select_speed = txgbe_set_hard_rate_select_speed;
-		hw->phy.autoneg_advertised = 0;
+		data_len -= size;
+
+		dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+				       DMA_TO_DEVICE);
+
+		tx_buffer = &tx_ring->tx_buffer_info[i];
 	}
 
-	adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
-	txgbe_info(probe, "detected SFP+: %d\n", hw->phy.sfp_type);
+	/* write last descriptor with RS and EOP bits */
+	cmd_type |= size | TXGBE_TXD_CMD;
+	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
 
-sfp_out:
-	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+	netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);
 
-	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED && adapter->netdev_registered)
-		txgbe_dev_err("failed to initialize because an unsupported SFP+ module type was detected.\n");
-}
+	/* Force memory writes to complete before letting h/w know there
+	 * are new descriptors to fetch.  (Only applicable for weak-ordered
+	 * memory model archs, such as IA-64).
+	 *
+	 * We also need this memory barrier to make certain all of the
+	 * status bits have been updated before next_to_watch is written.
+	 */
+	wmb();
 
-/**
- * txgbe_sfp_link_config_subtask - set up link SFP after module install
- * @adapter - the txgbe adapter structure
- **/
-static void txgbe_sfp_link_config_subtask(struct txgbe_adapter *adapter)
-{
-	struct txgbe_hw *hw = &adapter->hw;
-	u32 speed;
-	bool autoneg = false;
-	u8 device_type = hw->subsystem_id & 0xF0;
+	/* set next_to_watch value indicating a packet is present */
+	first->next_to_watch = tx_desc;
 
-	if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_CONFIG))
-		return;
+	i++;
+	if (i == tx_ring->count)
+		i = 0;
 
-	/* someone else is in init, wait until next service event */
-	if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
-		return;
+	tx_ring->next_to_use = i;
 
-	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+	txgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
 
-	if (device_type == TXGBE_ID_MAC_SGMII) {
-		speed = TXGBE_LINK_SPEED_1GB_FULL;
-	} else {
-		speed = hw->phy.autoneg_advertised;
-		if (!speed && hw->mac.ops.get_link_capabilities) {
-			TCALL(hw, mac.ops.get_link_capabilities, &speed, &autoneg);
-			/* setup the highest link when no autoneg */
-			if (!autoneg) {
-				if (speed & TXGBE_LINK_SPEED_10GB_FULL)
-					speed = TXGBE_LINK_SPEED_10GB_FULL;
-			}
-		}
+	if (netif_xmit_stopped(txring_txq(tx_ring)) || !netdev_xmit_more())
+		writel(i, tx_ring->tail);
+
+	return 0;
+dma_error:
+	dev_err(tx_ring->dev, "TX DMA map failed\n");
+
+	/* clear dma mappings for failed tx_buffer_info map */
+	for (;;) {
+		tx_buffer = &tx_ring->tx_buffer_info[i];
+		if (dma_unmap_len(tx_buffer, len))
+			dma_unmap_page(tx_ring->dev,
+				       dma_unmap_addr(tx_buffer, dma),
+				       dma_unmap_len(tx_buffer, len),
+				       DMA_TO_DEVICE);
+		dma_unmap_len_set(tx_buffer, len, 0);
+		if (tx_buffer == first)
+			break;
+		if (i == 0)
+			i += tx_ring->count;
+		i--;
 	}
 
-	TCALL(hw, mac.ops.setup_link, speed, false);
+	dev_kfree_skb_any(first->skb);
+	first->skb = NULL;
 
-	adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
-	adapter->link_check_timeout = jiffies;
-	clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+	tx_ring->next_to_use = i;
+
+	return -1;
 }
 
 /**
- * txgbe_service_timer - Timer Call-back
- * @data: pointer to adapter cast into an unsigned long
- **/
-static void txgbe_service_timer(struct timer_list *t)
+ * txgbe_skb_pad_nonzero - pad the tail of an skb with nonzero bytes
+ * @skb: buffer to pad
+ * @pad: space to pad
+ *
+ * Ensure that a buffer is followed by a padding area filled with a
+ * nonzero pattern (0x01). Used by network drivers which may DMA or
+ * transfer data beyond the buffer end onto the wire.
+ *
+ * May return error in out of memory cases. The skb is freed on error.
+ */
+
+int txgbe_skb_pad_nonzero(struct sk_buff *skb, int pad)
 {
-	struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer);
-	unsigned long next_event_offset;
-	struct txgbe_hw *hw = &adapter->hw;
+	int err;
+	int ntail;
 
-	/* poll faster when waiting for link */
-	if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
-		if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
-			next_event_offset = HZ;
-		else
-			next_event_offset = HZ / 10;
-	} else {
-		next_event_offset = HZ * 2;
+	/* If the skbuff is non-linear, tailroom is always zero. */
+	if (!skb_cloned(skb) && skb_tailroom(skb) >= pad) {
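+		/* pad with 0x01 rather than zero; see the hw errata 3
+		 * workaround in txgbe_xmit_frame_ring()
+		 */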
+		memset(skb->data + skb->len, 0x1, pad);
+		return 0;
 	}
 
-	/* Reset the timer */
-	mod_timer(&adapter->service_timer, next_event_offset + jiffies);
+	ntail = skb->data_len + pad - (skb->end - skb->tail);
+	if (likely(skb_cloned(skb) || ntail > 0)) {
+		err = pskb_expand_head(skb, 0, ntail, GFP_ATOMIC);
+		if (unlikely(err))
+			goto free_skb;
+	}
 
-	txgbe_service_event_schedule(adapter);
-}
+	/* The use of this function with non-linear skb's really needs
+	 * to be audited.
+	 */
+	err = skb_linearize(skb);
+	if (unlikely(err))
+		goto free_skb;
 
-static void txgbe_reset_subtask(struct txgbe_adapter *adapter)
-{
-	u32 reset_flag = 0;
-	u32 value = 0;
+	memset(skb->data + skb->len, 0x1, pad);
+	return 0;
 
-	if (!(adapter->flags2 & (TXGBE_FLAG2_PF_RESET_REQUESTED |
-				 TXGBE_FLAG2_GLOBAL_RESET_REQUESTED |
-				 TXGBE_FLAG2_RESET_INTR_RECEIVED)))
-		return;
+free_skb:
+	kfree_skb(skb);
+	return err;
+}
 
-	/* If we're already down, just bail */
-	if (test_bit(__TXGBE_DOWN, &adapter->state) ||
-	    test_bit(__TXGBE_REMOVING, &adapter->state))
-		return;
+netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
+				  struct txgbe_adapter __maybe_unused *adapter,
+				  struct txgbe_ring *tx_ring)
+{
+	struct txgbe_tx_buffer *first;
+	int tso;
+	u32 tx_flags = 0;
+	unsigned short f;
+	u16 count = TXD_USE_COUNT(skb_headlen(skb));
+	__be16 protocol = skb->protocol;
+	u8 hdr_len = 0;
+	struct txgbe_dptype dptype;
+	u8 vlan_addlen = 0;
+
+	/* work around hw errata 3 */
+	u16 _llclen, *llclen;
+
+	llclen = skb_header_pointer(skb, ETH_HLEN - 2, sizeof(u16), &_llclen);
+	if (llclen && (*llclen == 0x3 || *llclen == 0x4 || *llclen == 0x5)) {
+		if (txgbe_skb_pad_nonzero(skb, ETH_ZLEN - skb->len))
+			/* skb was already freed by txgbe_skb_pad_nonzero() */
+			return NETDEV_TX_OK;
+		__skb_put(skb, ETH_ZLEN - skb->len);
+	}
 
-	netdev_err(adapter->netdev, "Reset adapter\n");
+	/* need: 1 descriptor per page * PAGE_SIZE/TXGBE_MAX_DATA_PER_TXD,
+	 *       + 1 desc for skb_headlen/TXGBE_MAX_DATA_PER_TXD,
+	 *       + 2 desc gap to keep tail from touching head,
+	 *       + 1 desc for context descriptor,
+	 * otherwise try next time
+	 */
+	for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+		count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->frags[f]));
 
-	rtnl_lock();
-	if (adapter->flags2 & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
-		reset_flag |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
-		adapter->flags2 &= ~TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+	if (txgbe_maybe_stop_tx(tx_ring, count + 3)) {
+		tx_ring->tx_stats.tx_busy++;
+		return NETDEV_TX_BUSY;
 	}
-	if (adapter->flags2 & TXGBE_FLAG2_PF_RESET_REQUESTED) {
-		reset_flag |= TXGBE_FLAG2_PF_RESET_REQUESTED;
-		adapter->flags2 &= ~TXGBE_FLAG2_PF_RESET_REQUESTED;
+
+	/* record the location of the first descriptor for this packet */
+	first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+	first->skb = skb;
+	first->bytecount = skb->len;
+	first->gso_segs = 1;
+
+	/* if we have a HW VLAN tag being added default to the HW one */
+	if (skb_vlan_tag_present(skb)) {
+		tx_flags |= skb_vlan_tag_get(skb) << TXGBE_TX_FLAGS_VLAN_SHIFT;
+		tx_flags |= TXGBE_TX_FLAGS_HW_VLAN;
+	/* else if it is a SW VLAN check the next protocol and store the tag */
+	} else if (protocol == htons(ETH_P_8021Q)) {
+		struct vlan_hdr *vhdr, _vhdr;
+
+		vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr);
+		if (!vhdr)
+			goto out_drop;
+
+		protocol = vhdr->h_vlan_encapsulated_proto;
+		tx_flags |= ntohs(vhdr->h_vlan_TCI) <<
+				  TXGBE_TX_FLAGS_VLAN_SHIFT;
+		tx_flags |= TXGBE_TX_FLAGS_SW_VLAN;
 	}
 
-	if (adapter->flags2 & TXGBE_FLAG2_RESET_INTR_RECEIVED) {
-		/* If there's a recovery already waiting, it takes
-		 * precedence before starting a new reset sequence.
-		 */
-		adapter->flags2 &= ~TXGBE_FLAG2_RESET_INTR_RECEIVED;
-		value = rd32m(&adapter->hw, TXGBE_MIS_RST_ST,
-			      TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK) >>
-			TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT;
-		if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST) {
-			adapter->hw.reset_type = TXGBE_SW_RESET;
-			/* errata 7 */
-			if (txgbe_mng_present(&adapter->hw) &&
-			    adapter->hw.revision_id == TXGBE_SP_MPW)
-				adapter->flags2 |=
-					TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED;
-		} else if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST) {
-			adapter->hw.reset_type = TXGBE_GLOBAL_RESET;
-		}
-		adapter->hw.force_full_reset = true;
-		txgbe_reinit_locked(adapter);
-		adapter->hw.force_full_reset = false;
-		goto unlock;
+	if (protocol == htons(ETH_P_8021Q) || protocol == htons(ETH_P_8021AD)) {
+		struct vlan_hdr *vhdr, _vhdr;
+
+		vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr);
+		if (!vhdr)
+			goto out_drop;
+
+		protocol = vhdr->h_vlan_encapsulated_proto;
+		tx_flags |= TXGBE_TX_FLAGS_SW_VLAN;
+		vlan_addlen += VLAN_HLEN;
 	}
 
-	if (reset_flag & TXGBE_FLAG2_PF_RESET_REQUESTED) {
-		/*debug to up*/
-		txgbe_reinit_locked(adapter);
-	} else if (reset_flag & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
-		/* Request a Global Reset
-		 *
-		 * This will start the chip's countdown to the actual full
-		 * chip reset event, and a warning interrupt to be sent
-		 * to all PFs, including the requestor.  Our handler
-		 * for the warning interrupt will deal with the shutdown
-		 * and recovery of the switch setup.
-		 */
-		/*debug to up*/
-		pci_save_state(adapter->pdev);
-		if (txgbe_mng_present(&adapter->hw))
-			txgbe_reset_hostif(&adapter->hw);
-		else
-			wr32m(&adapter->hw, TXGBE_MIS_RST,
-			      TXGBE_MIS_RST_GLOBAL_RST,
-			      TXGBE_MIS_RST_GLOBAL_RST);
+	/* record initial flags and protocol */
+	first->tx_flags = tx_flags;
+	first->protocol = protocol;
+
+	dptype = encode_tx_desc_ptype(first);
+
+	tso = txgbe_tso(tx_ring, first, &hdr_len, dptype);
+	if (tso < 0)
+		goto out_drop;
+	else if (!tso)
+		txgbe_tx_csum(tx_ring, first, dptype);
+
+	txgbe_tx_map(tx_ring, first, hdr_len);
+
+	return NETDEV_TX_OK;
+
+out_drop:
+	dev_kfree_skb_any(first->skb);
+	first->skb = NULL;
+
+	return NETDEV_TX_OK;
+}
+
+static netdev_tx_t txgbe_xmit_frame(struct sk_buff *skb,
+				    struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_ring *tx_ring;
+	unsigned int r_idx = skb->queue_mapping;
+
+	if (!netif_carrier_ok(netdev)) {
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
 	}
 
-unlock:
-	rtnl_unlock();
+	/* The minimum packet size for olinfo paylen is 17 so pad the skb
+	 * in order to meet this minimum size requirement.
+	 */
+	if (skb_put_padto(skb, 17))
+		return NETDEV_TX_OK;
+
+	if (r_idx >= adapter->num_tx_queues)
+		r_idx = r_idx % adapter->num_tx_queues;
+	tx_ring = adapter->tx_ring[r_idx];
+
+	return txgbe_xmit_frame_ring(skb, adapter, tx_ring);
 }
 
 /**
- * txgbe_service_task - manages and runs subtasks
- * @work: pointer to work_struct containing our data
+ * txgbe_set_mac - Change the Ethernet Address of the NIC
+ * @netdev: network interface device structure
+ * @p: pointer to an address structure
+ *
+ * Returns 0 on success, negative on failure
  **/
-static void txgbe_service_task(struct work_struct *work)
+static int txgbe_set_mac(struct net_device *netdev, void *p)
 {
-	struct txgbe_adapter *adapter = container_of(work,
-						     struct txgbe_adapter,
-						     service_task);
-	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
-		if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
-			rtnl_lock();
-			txgbe_down(adapter);
-			rtnl_unlock();
-		}
-		txgbe_service_event_complete(adapter);
-		return;
-	}
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	struct sockaddr *addr = p;
 
-	txgbe_reset_subtask(adapter);
-	txgbe_sfp_detection_subtask(adapter);
-	txgbe_sfp_link_config_subtask(adapter);
-	txgbe_check_overtemp_subtask(adapter);
-	txgbe_watchdog_subtask(adapter);
+	if (!is_valid_ether_addr(addr->sa_data))
+		return -EADDRNOTAVAIL;
 
-	txgbe_service_event_complete(adapter);
+	txgbe_del_mac_filter(adapter, hw->mac.addr, 0);
+	eth_hw_addr_set(netdev, addr->sa_data);
+	memcpy(hw->mac.addr, addr->sa_data, netdev->addr_len);
+
+	txgbe_mac_set_default_filter(adapter, hw->mac.addr);
+
+	return 0;
 }
 
 /**
@@ -1613,9 +5286,153 @@ static int txgbe_del_sanmac_netdev(struct net_device *dev)
 	return err;
 }
 
+void txgbe_do_reset(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	if (netif_running(netdev))
+		txgbe_reinit_locked(adapter);
+	else
+		txgbe_reset(adapter);
+}
+
+static netdev_features_t txgbe_fix_features(struct net_device *netdev,
+					    netdev_features_t features)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	/* If Rx checksum is disabled, then RSC/LRO should also be disabled */
+	if (!(features & NETIF_F_RXCSUM))
+		features &= ~NETIF_F_LRO;
+
+	/* Turn off LRO if not RSC capable */
+	if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE))
+		features &= ~NETIF_F_LRO;
+
+	return features;
+}
+
+static int txgbe_set_features(struct net_device *netdev,
+			      netdev_features_t features)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	bool need_reset = false;
+
+	/* Make sure RSC matches LRO, reset if change */
+	if (!(features & NETIF_F_LRO)) {
+		if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
+			need_reset = true;
+		adapter->flags2 &= ~TXGBE_FLAG2_RSC_ENABLED;
+	} else if ((adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) &&
+		   !(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) {
+		if (adapter->rx_itr_setting == 1 ||
+		    adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR) {
+			adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED;
+			need_reset = true;
+		} else if ((netdev->features ^ features) & NETIF_F_LRO) {
+			txgbe_info(probe, "rx-usecs set too low, disabling RSC\n");
+		}
+	}
+
+	if (features & NETIF_F_HW_VLAN_CTAG_RX)
+		txgbe_vlan_strip_enable(adapter);
+	else
+		txgbe_vlan_strip_disable(adapter);
+
+	if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE &&
+	      features & NETIF_F_RXCSUM))
+		txgbe_clear_vxlan_port(adapter);
+
+	if (features & NETIF_F_RXHASH) {
+		if (!(adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)) {
+			wr32m(&adapter->hw, TXGBE_RDB_RA_CTL,
+			      TXGBE_RDB_RA_CTL_RSS_EN, TXGBE_RDB_RA_CTL_RSS_EN);
+			adapter->flags2 |= TXGBE_FLAG2_RSS_ENABLED;
+		}
+	} else {
+		if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED) {
+			wr32m(&adapter->hw, TXGBE_RDB_RA_CTL,
+			      TXGBE_RDB_RA_CTL_RSS_EN, 0);
+			adapter->flags2 &= ~TXGBE_FLAG2_RSS_ENABLED;
+		}
+	}
+
+	if (need_reset)
+		txgbe_do_reset(netdev);
+
+	return 0;
+}
+
+static int txgbe_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+			     struct net_device *dev,
+			     const unsigned char *addr,
+			     u16 vid,
+			     u16 flags,
+			     struct netlink_ext_ack *extack)
+{
+	/* guarantee we can provide a unique filter for the unicast address */
+	if (is_unicast_ether_addr(addr) || is_link_local_ether_addr(addr)) {
+		if (netdev_uc_count(dev) >= TXGBE_MAX_PF_MACVLANS)
+			return -ENOMEM;
+	}
+
+	return ndo_dflt_fdb_add(ndm, tb, dev, addr, vid, flags);
+}
+
+#define TXGBE_MAX_TUNNEL_HDR_LEN 80
+static netdev_features_t
+txgbe_features_check(struct sk_buff *skb, struct net_device *dev,
+		     netdev_features_t features)
+{
+	u32 vlan_num = 0;
+	u16 vlan_depth = skb->mac_len;
+	__be16 type = skb->protocol;
+	struct vlan_hdr *vh;
+
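+	/* count VLAN tags, both offloaded and stacked in the payload;
+	 * hardware VLAN insertion is only kept for up to two tags
+	 */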
+	if (skb_vlan_tag_present(skb))
+		vlan_num++;
+
+	if (vlan_depth)
+		vlan_depth -= VLAN_HLEN;
+	else
+		vlan_depth = ETH_HLEN;
+
+	while (type == htons(ETH_P_8021Q) || type == htons(ETH_P_8021AD)) {
+		vlan_num++;
+		vh = (struct vlan_hdr *)(skb->data + vlan_depth);
+		type = vh->h_vlan_encapsulated_proto;
+		vlan_depth += VLAN_HLEN;
+	}
+
+	if (vlan_num > 2)
+		features &= ~(NETIF_F_HW_VLAN_CTAG_TX |
+			      NETIF_F_HW_VLAN_STAG_TX);
+
+	if (skb->encapsulation) {
+		if (unlikely(skb_inner_mac_header(skb) -
+			     skb_transport_header(skb) >
+			     TXGBE_MAX_TUNNEL_HDR_LEN))
+			return features & ~NETIF_F_CSUM_MASK;
+	}
+	return features;
+}
+
 static const struct net_device_ops txgbe_netdev_ops = {
 	.ndo_open               = txgbe_open,
 	.ndo_stop               = txgbe_close,
+	.ndo_start_xmit         = txgbe_xmit_frame,
+	.ndo_set_rx_mode        = txgbe_set_rx_mode,
+	.ndo_validate_addr      = eth_validate_addr,
+	.ndo_set_mac_address    = txgbe_set_mac,
+	.ndo_change_mtu		= txgbe_change_mtu,
+	.ndo_tx_timeout         = txgbe_tx_timeout,
+	.ndo_vlan_rx_add_vid    = txgbe_vlan_rx_add_vid,
+	.ndo_vlan_rx_kill_vid   = txgbe_vlan_rx_kill_vid,
+	.ndo_get_stats64        = txgbe_get_stats64,
+	.ndo_fdb_add            = txgbe_ndo_fdb_add,
+	.ndo_features_check     = txgbe_features_check,
+	.ndo_set_features       = txgbe_set_features,
+	.ndo_fix_features       = txgbe_fix_features,
 };
 
 void txgbe_assign_netdev_ops(struct net_device *dev)
@@ -1647,6 +5464,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 	u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0;
 	u32 etrack_id = 0;
 	u16 build = 0, major = 0, patch = 0;
+	char *info_string, *i_s_var;
 	u8 part_str[TXGBE_PBANUM_LENGTH];
 	unsigned int indices = MAX_TX_QUEUES;
 	bool disable_dev = false;
@@ -1755,6 +5573,52 @@ static int txgbe_probe(struct pci_dev *pdev,
 		goto err_sw_init;
 	}
 
+	netdev->features = NETIF_F_SG |
+			   NETIF_F_LRO |
+			   NETIF_F_TSO |
+			   NETIF_F_TSO6 |
+			   NETIF_F_RXHASH |
+			   NETIF_F_RXCSUM |
+			   NETIF_F_HW_CSUM |
+			   NETIF_F_SCTP_CRC;
+
+	netdev->gso_partial_features = TXGBE_GSO_PARTIAL_FEATURES;
+	netdev->features |= NETIF_F_GSO_PARTIAL |
+			    TXGBE_GSO_PARTIAL_FEATURES;
+
+	/* copy netdev features into list of user selectable features */
+	netdev->hw_features |= netdev->features |
+			       NETIF_F_HW_VLAN_CTAG_FILTER |
+			       NETIF_F_HW_VLAN_CTAG_RX |
+			       NETIF_F_HW_VLAN_CTAG_TX |
+			       NETIF_F_RXALL;
+
+	netdev->hw_features |= NETIF_F_NTUPLE;
+
+	if (pci_using_dac)
+		netdev->features |= NETIF_F_HIGHDMA;
+
+	netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
+	netdev->hw_enc_features |= netdev->vlan_features;
+	netdev->mpls_features |= NETIF_F_HW_CSUM;
+
+	/* set this bit last since it cannot be part of vlan_features */
+	netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+			    NETIF_F_HW_VLAN_CTAG_RX |
+			    NETIF_F_HW_VLAN_CTAG_TX;
+
+	netdev->priv_flags |= IFF_UNICAST_FLT;
+	netdev->priv_flags |= IFF_SUPP_NOFCS;
+
+	/* give us the option of enabling RSC/LRO later */
+	if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) {
+		netdev->hw_features |= NETIF_F_LRO;
+		adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED;
+	}
+
+	netdev->min_mtu = ETH_MIN_MTU;
+	netdev->max_mtu = TXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
+
 	/* make sure the EEPROM is good */
 	if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) {
 		txgbe_dev_err("The EEPROM Checksum Is Not Valid\n");
@@ -1848,6 +5712,7 @@ static int txgbe_probe(struct pci_dev *pdev,
 
 	/* carrier off reporting is important to ethtool even BEFORE open */
 	netif_carrier_off(netdev);
+	netif_tx_stop_all_queues(netdev);
 
 	/* calculate the expected PCIe bandwidth required for optimal
 	 * performance. Note that some older parts will never have enough
@@ -1882,6 +5747,26 @@ static int txgbe_probe(struct pci_dev *pdev,
 		       netdev->dev_addr[2], netdev->dev_addr[3],
 		       netdev->dev_addr[4], netdev->dev_addr[5]);
 
+#define INFO_STRING_LEN 255
+	info_string = kzalloc(INFO_STRING_LEN, GFP_KERNEL);
+	if (!info_string) {
+		txgbe_err(probe, "allocation for info string failed\n");
+		goto no_info_string;
+	}
+	i_s_var = info_string;
+	i_s_var += sprintf(i_s_var, "Enabled Features: ");
+	i_s_var += sprintf(i_s_var, "RxQ: %d TxQ: %d ",
+			   adapter->num_rx_queues, adapter->num_tx_queues);
+	if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
+		i_s_var += sprintf(i_s_var, "RSC ");
+	if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE)
+		i_s_var += sprintf(i_s_var, "vxlan_rx ");
+
+	BUG_ON(i_s_var > (info_string + INFO_STRING_LEN));
+	/* end features printing */
+	txgbe_info(probe, "%s\n", info_string);
+	kfree(info_string);
+no_info_string:
 	/* firmware requires blank driver version */
 	TCALL(hw, mac.ops.set_fw_drv_ver, 0xFF, 0xFF, 0xFF, 0xFF);
 
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index ff2b7a4f028b..c673c5d86b1a 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -1683,6 +1683,282 @@ enum TXGBE_MSCA_CMD_value {
 #define TXGBE_PCIDEVCTRL2_4_8s          0xd
 #define TXGBE_PCIDEVCTRL2_17_34s        0xe
 
+/******************* Receive Descriptor bit definitions **********************/
+#define TXGBE_RXD_NEXTP_MASK            0x000FFFF0U /* Next Descriptor Index */
+#define TXGBE_RXD_NEXTP_SHIFT           0x00000004U
+#define TXGBE_RXD_STAT_MASK             0x000fffffU /* Stat/NEXTP: bit 0-19 */
+#define TXGBE_RXD_STAT_DD               0x00000001U /* Done */
+#define TXGBE_RXD_STAT_EOP              0x00000002U /* End of Packet */
+#define TXGBE_RXD_STAT_CLASS_ID_MASK    0x0000001CU
+#define TXGBE_RXD_STAT_CLASS_ID_TC_RSS  0x00000000U
+#define TXGBE_RXD_STAT_CLASS_ID_FLM     0x00000004U /* FDir Match */
+#define TXGBE_RXD_STAT_CLASS_ID_SYN     0x00000008U
+#define TXGBE_RXD_STAT_CLASS_ID_5_TUPLE 0x0000000CU
+#define TXGBE_RXD_STAT_CLASS_ID_L2_ETYPE 0x00000010U
+#define TXGBE_RXD_STAT_VP               0x00000020U /* IEEE VLAN Pkt */
+#define TXGBE_RXD_STAT_UDPCS            0x00000040U /* UDP xsum calculated */
+#define TXGBE_RXD_STAT_L4CS             0x00000080U /* L4 xsum calculated */
+#define TXGBE_RXD_STAT_IPCS             0x00000100U /* IP xsum calculated */
+#define TXGBE_RXD_STAT_PIF              0x00000200U /* passed in-exact filter */
+#define TXGBE_RXD_STAT_OUTERIPCS        0x00000400U /* Cloud IP xsum calculated */
+#define TXGBE_RXD_STAT_VEXT             0x00000800U /* 1st VLAN found */
+#define TXGBE_RXD_STAT_LLINT            0x00002000U /* Pkt caused Low Latency Int */
+#define TXGBE_RXD_STAT_TS               0x00004000U /* IEEE1588 Time Stamp */
+#define TXGBE_RXD_STAT_SECP             0x00008000U /* Security Processing */
+#define TXGBE_RXD_STAT_LB               0x00010000U /* Loopback Status */
+#define TXGBE_RXD_STAT_FCEOFS           0x00020000U /* FCoE EOF/SOF Stat */
+#define TXGBE_RXD_STAT_FCSTAT           0x000C0000U /* FCoE Pkt Stat */
+#define TXGBE_RXD_STAT_FCSTAT_NOMTCH    0x00000000U /* 00: No Ctxt Match */
+#define TXGBE_RXD_STAT_FCSTAT_NODDP     0x00040000U /* 01: Ctxt w/o DDP */
+#define TXGBE_RXD_STAT_FCSTAT_FCPRSP    0x00080000U /* 10: Recv. FCP_RSP */
+#define TXGBE_RXD_STAT_FCSTAT_DDP       0x000C0000U /* 11: Ctxt w/ DDP */
+
+#define TXGBE_RXD_ERR_MASK              0xfff00000U /* RDESC.ERRORS mask */
+#define TXGBE_RXD_ERR_SHIFT             20         /* RDESC.ERRORS shift */
+#define TXGBE_RXD_ERR_FCEOFE            0x80000000U /* FCEOFe/IPE */
+#define TXGBE_RXD_ERR_FCERR             0x00700000U /* FCERR/FDIRERR */
+#define TXGBE_RXD_ERR_FDIR_LEN          0x00100000U /* FDIR Length error */
+#define TXGBE_RXD_ERR_FDIR_DROP         0x00200000U /* FDIR Drop error */
+#define TXGBE_RXD_ERR_FDIR_COLL         0x00400000U /* FDIR Collision error */
+#define TXGBE_RXD_ERR_HBO               0x00800000U /* Header Buffer Overflow */
+#define TXGBE_RXD_ERR_OUTERIPER         0x04000000U /* CRC IP Header error */
+#define TXGBE_RXD_ERR_SECERR_MASK       0x18000000U
+#define TXGBE_RXD_ERR_RXE               0x20000000U /* Any MAC Error */
+#define TXGBE_RXD_ERR_TCPE              0x40000000U /* TCP/UDP Checksum Error */
+#define TXGBE_RXD_ERR_IPE               0x80000000U /* IP Checksum Error */
+
+#define TXGBE_RXD_RSSTYPE_MASK          0x0000000FU
+#define TXGBE_RXD_TPID_MASK             0x000001C0U
+#define TXGBE_RXD_TPID_SHIFT            6
+#define TXGBE_RXD_HDRBUFLEN_MASK        0x00007FE0U
+#define TXGBE_RXD_RSCCNT_MASK           0x001E0000U
+#define TXGBE_RXD_RSCCNT_SHIFT          17
+#define TXGBE_RXD_HDRBUFLEN_SHIFT       5
+#define TXGBE_RXD_SPLITHEADER_EN        0x00001000U
+#define TXGBE_RXD_SPH                   0x8000
+
+/* RSS Hash results */
+#define TXGBE_RXD_RSSTYPE_NONE          0x00000000U
+#define TXGBE_RXD_RSSTYPE_IPV4_TCP      0x00000001U
+#define TXGBE_RXD_RSSTYPE_IPV4          0x00000002U
+#define TXGBE_RXD_RSSTYPE_IPV6_TCP      0x00000003U
+#define TXGBE_RXD_RSSTYPE_IPV4_SCTP     0x00000004U
+#define TXGBE_RXD_RSSTYPE_IPV6          0x00000005U
+#define TXGBE_RXD_RSSTYPE_IPV6_SCTP     0x00000006U
+#define TXGBE_RXD_RSSTYPE_IPV4_UDP      0x00000007U
+#define TXGBE_RXD_RSSTYPE_IPV6_UDP      0x00000008U
+
+/**
+ * receive packet type
+ * PTYPE:8 = TUN:2 + PKT:2 + TYP:4
+ **/
+/* TUN */
+#define TXGBE_PTYPE_TUN_IPV4            (0x80)
+#define TXGBE_PTYPE_TUN_IPV6            (0xC0)
+
+/* PKT for TUN */
+#define TXGBE_PTYPE_PKT_IPIP            (0x00) /* IP+IP */
+#define TXGBE_PTYPE_PKT_IG              (0x10) /* IP+GRE */
+#define TXGBE_PTYPE_PKT_IGM             (0x20) /* IP+GRE+MAC */
+#define TXGBE_PTYPE_PKT_IGMV            (0x30) /* IP+GRE+MAC+VLAN */
+/* PKT for !TUN */
+#define TXGBE_PTYPE_PKT_MAC             (0x10)
+#define TXGBE_PTYPE_PKT_IP              (0x20)
+#define TXGBE_PTYPE_PKT_FCOE            (0x30)
+
+/* TYP for PKT=mac */
+#define TXGBE_PTYPE_TYP_MAC             (0x01)
+#define TXGBE_PTYPE_TYP_TS              (0x02) /* time sync */
+#define TXGBE_PTYPE_TYP_FIP             (0x03)
+#define TXGBE_PTYPE_TYP_LLDP            (0x04)
+#define TXGBE_PTYPE_TYP_CNM             (0x05)
+#define TXGBE_PTYPE_TYP_EAPOL           (0x06)
+#define TXGBE_PTYPE_TYP_ARP             (0x07)
+/* TYP for PKT=ip */
+#define TXGBE_PTYPE_PKT_IPV6            (0x08)
+#define TXGBE_PTYPE_TYP_IPFRAG          (0x01)
+#define TXGBE_PTYPE_TYP_IP              (0x02)
+#define TXGBE_PTYPE_TYP_UDP             (0x03)
+#define TXGBE_PTYPE_TYP_TCP             (0x04)
+#define TXGBE_PTYPE_TYP_SCTP            (0x05)
+/* TYP for PKT=fcoe */
+#define TXGBE_PTYPE_PKT_VFT             (0x08)
+#define TXGBE_PTYPE_TYP_FCOE            (0x00)
+#define TXGBE_PTYPE_TYP_FCDATA          (0x01)
+#define TXGBE_PTYPE_TYP_FCRDY           (0x02)
+#define TXGBE_PTYPE_TYP_FCRSP           (0x03)
+#define TXGBE_PTYPE_TYP_FCOTHER         (0x04)
+
+/* Packet type non-ip values */
+enum txgbe_l2_ptypes {
+	TXGBE_PTYPE_L2_ABORTED = (TXGBE_PTYPE_PKT_MAC),
+	TXGBE_PTYPE_L2_MAC = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_MAC),
+	TXGBE_PTYPE_L2_TS = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_TS),
+	TXGBE_PTYPE_L2_FIP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_FIP),
+	TXGBE_PTYPE_L2_LLDP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_LLDP),
+	TXGBE_PTYPE_L2_CNM = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_CNM),
+	TXGBE_PTYPE_L2_EAPOL = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_EAPOL),
+	TXGBE_PTYPE_L2_ARP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_ARP),
+
+	TXGBE_PTYPE_L2_IPV4_FRAG = (TXGBE_PTYPE_PKT_IP |
+				    TXGBE_PTYPE_TYP_IPFRAG),
+	TXGBE_PTYPE_L2_IPV4 = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_IP),
+	TXGBE_PTYPE_L2_IPV4_UDP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_UDP),
+	TXGBE_PTYPE_L2_IPV4_TCP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_TCP),
+	TXGBE_PTYPE_L2_IPV4_SCTP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_SCTP),
+	TXGBE_PTYPE_L2_IPV6_FRAG = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+				    TXGBE_PTYPE_TYP_IPFRAG),
+	TXGBE_PTYPE_L2_IPV6 = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+			       TXGBE_PTYPE_TYP_IP),
+	TXGBE_PTYPE_L2_IPV6_UDP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+				   TXGBE_PTYPE_TYP_UDP),
+	TXGBE_PTYPE_L2_IPV6_TCP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+				   TXGBE_PTYPE_TYP_TCP),
+	TXGBE_PTYPE_L2_IPV6_SCTP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+				    TXGBE_PTYPE_TYP_SCTP),
+
+	TXGBE_PTYPE_L2_FCOE = (TXGBE_PTYPE_PKT_FCOE | TXGBE_PTYPE_TYP_FCOE),
+	TXGBE_PTYPE_L2_FCOE_FCDATA = (TXGBE_PTYPE_PKT_FCOE |
+				      TXGBE_PTYPE_TYP_FCDATA),
+	TXGBE_PTYPE_L2_FCOE_FCRDY = (TXGBE_PTYPE_PKT_FCOE |
+				     TXGBE_PTYPE_TYP_FCRDY),
+	TXGBE_PTYPE_L2_FCOE_FCRSP = (TXGBE_PTYPE_PKT_FCOE |
+				     TXGBE_PTYPE_TYP_FCRSP),
+	TXGBE_PTYPE_L2_FCOE_FCOTHER = (TXGBE_PTYPE_PKT_FCOE |
+				       TXGBE_PTYPE_TYP_FCOTHER),
+	TXGBE_PTYPE_L2_FCOE_VFT = (TXGBE_PTYPE_PKT_FCOE | TXGBE_PTYPE_PKT_VFT),
+	TXGBE_PTYPE_L2_FCOE_VFT_FCDATA = (TXGBE_PTYPE_PKT_FCOE |
+				TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCDATA),
+	TXGBE_PTYPE_L2_FCOE_VFT_FCRDY = (TXGBE_PTYPE_PKT_FCOE |
+				TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCRDY),
+	TXGBE_PTYPE_L2_FCOE_VFT_FCRSP = (TXGBE_PTYPE_PKT_FCOE |
+				TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCRSP),
+	TXGBE_PTYPE_L2_FCOE_VFT_FCOTHER = (TXGBE_PTYPE_PKT_FCOE |
+				TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCOTHER),
+
+	TXGBE_PTYPE_L2_TUN4_MAC = (TXGBE_PTYPE_TUN_IPV4 | TXGBE_PTYPE_PKT_IGM),
+	TXGBE_PTYPE_L2_TUN6_MAC = (TXGBE_PTYPE_TUN_IPV6 | TXGBE_PTYPE_PKT_IGM),
+};
+
+#define TXGBE_RXD_PKTTYPE(_rxd) \
+	((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 9) & 0xFF)
+#define TXGBE_PTYPE_TUN(_pt) ((_pt) & 0xC0)
+#define TXGBE_PTYPE_PKT(_pt) ((_pt) & 0x30)
+#define TXGBE_PTYPE_TYP(_pt) ((_pt) & 0x0F)
+#define TXGBE_PTYPE_TYPL4(_pt) ((_pt) & 0x07)
+
+#define TXGBE_RXD_IPV6EX(_rxd) \
+	((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 6) & 0x1)
+
+/* Masks to determine if packets should be dropped due to frame errors */
+#define TXGBE_RXD_ERR_FRAME_ERR_MASK    TXGBE_RXD_ERR_RXE
+
+/*********************** Transmit Descriptor Config Masks ****************/
+#define TXGBE_TXD_DTALEN_MASK           0x0000FFFFU /* Data buf length(bytes) */
+#define TXGBE_TXD_MAC_LINKSEC           0x00040000U /* Insert LinkSec */
+#define TXGBE_TXD_MAC_TSTAMP            0x00080000U /* IEEE1588 time stamp */
+#define TXGBE_TXD_IPSEC_SA_INDEX_MASK   0x000003FFU /* IPSec SA index */
+#define TXGBE_TXD_IPSEC_ESP_LEN_MASK    0x000001FFU /* IPSec ESP length */
+#define TXGBE_TXD_DTYP_MASK             0x00F00000U /* DTYP mask */
+#define TXGBE_TXD_DTYP_CTXT             0x00100000U /* Adv Context Desc */
+#define TXGBE_TXD_DTYP_DATA             0x00000000U /* Adv Data Descriptor */
+#define TXGBE_TXD_EOP                   0x01000000U  /* End of Packet */
+#define TXGBE_TXD_IFCS                  0x02000000U /* Insert FCS */
+#define TXGBE_TXD_LINKSEC               0x04000000U /* Enable linksec */
+#define TXGBE_TXD_RS                    0x08000000U /* Report Status */
+#define TXGBE_TXD_ECU                   0x10000000U /* DDP hdr type or iSCSI */
+#define TXGBE_TXD_QCN                   0x20000000U /* cntag insertion enable */
+#define TXGBE_TXD_VLE                   0x40000000U /* VLAN pkt enable */
+#define TXGBE_TXD_TSE                   0x80000000U /* TCP Seg enable */
+#define TXGBE_TXD_STAT_DD               0x00000001U /* Descriptor Done */
+#define TXGBE_TXD_IDX_SHIFT             4 /* Desc Index shift */
+#define TXGBE_TXD_CC                    0x00000080U /* Check Context */
+#define TXGBE_TXD_IPSEC                 0x00000100U /* Enable ipsec esp */
+#define TXGBE_TXD_IIPCS                 0x00000400U
+#define TXGBE_TXD_EIPCS                 0x00000800U
+#define TXGBE_TXD_L4CS                  0x00000200U
+#define TXGBE_TXD_PAYLEN_SHIFT          13 /* Desc PAYLEN shift */
+#define TXGBE_TXD_MACLEN_SHIFT          9  /* ctxt desc mac len shift */
+#define TXGBE_TXD_VLAN_SHIFT            16 /* ctxt vlan tag shift */
+#define TXGBE_TXD_TAG_TPID_SEL_SHIFT    11
+#define TXGBE_TXD_IPSEC_TYPE_SHIFT      14
+#define TXGBE_TXD_ENC_SHIFT             15
+
+#define TXGBE_TXD_TUCMD_IPSEC_TYPE_ESP  0x00004000U /* IPSec Type ESP */
+#define TXGBE_TXD_TUCMD_IPSEC_ENCRYPT_EN 0x00008000U /* ESP Encrypt Enable */
+#define TXGBE_TXD_TUCMD_FCOE            0x00010000U /* FCoE Frame Type */
+#define TXGBE_TXD_FCOEF_EOF_MASK        (0x3 << 10) /* FC EOF index */
+#define TXGBE_TXD_FCOEF_SOF             ((1 << 2) << 10) /* FC SOF index */
+#define TXGBE_TXD_FCOEF_PARINC          ((1 << 3) << 10) /* Rel_Off in F_CTL */
+#define TXGBE_TXD_FCOEF_ORIE            ((1 << 4) << 10) /* Orientation End */
+#define TXGBE_TXD_FCOEF_ORIS            ((1 << 5) << 10) /* Orientation Start */
+#define TXGBE_TXD_FCOEF_EOF_N           (0x0 << 10) /* 00: EOFn */
+#define TXGBE_TXD_FCOEF_EOF_T           (0x1 << 10) /* 01: EOFt */
+#define TXGBE_TXD_FCOEF_EOF_NI          (0x2 << 10) /* 10: EOFni */
+#define TXGBE_TXD_FCOEF_EOF_A           (0x3 << 10) /* 11: EOFa */
+#define TXGBE_TXD_L4LEN_SHIFT           8  /* ctxt L4LEN shift */
+#define TXGBE_TXD_MSS_SHIFT             16  /* ctxt MSS shift */
+
+#define TXGBE_TXD_OUTER_IPLEN_SHIFT     12 /* ctxt OUTERIPLEN shift */
+#define TXGBE_TXD_TUNNEL_LEN_SHIFT      21 /* ctxt TUNNELLEN shift */
+#define TXGBE_TXD_TUNNEL_TYPE_SHIFT     11 /* Tx Desc Tunnel Type shift */
+#define TXGBE_TXD_TUNNEL_DECTTL_SHIFT   27 /* ctxt DECTTL shift */
+#define TXGBE_TXD_TUNNEL_UDP            (0x0ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT)
+#define TXGBE_TXD_TUNNEL_GRE            (0x1ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT)
+
+/* Transmit Descriptor */
+union txgbe_tx_desc {
+	struct {
+		__le64 buffer_addr; /* Address of descriptor's data buf */
+		__le32 cmd_type_len;
+		__le32 olinfo_status;
+	} read;
+	struct {
+		__le64 rsvd; /* Reserved */
+		__le32 nxtseq_seed;
+		__le32 status;
+	} wb;
+};
+
+/* Receive Descriptor */
+union txgbe_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			union {
+				__le32 data;
+				struct {
+					__le16 pkt_info; /* RSS, Pkt type */
+					__le16 hdr_info; /* Splithdr, hdrlen */
+				} hs_rss;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				struct {
+					__le16 ip_id; /* IP id */
+					__le16 csum; /* Packet Checksum */
+				} csum_ip;
+			} hi_dword;
+		} lower;
+		struct {
+			__le32 status_error; /* ext status/error */
+			__le16 length; /* Packet length */
+			__le16 vlan; /* VLAN tag */
+		} upper;
+	} wb;  /* writeback */
+};
+
+/* Context descriptors */
+struct txgbe_tx_context_desc {
+	__le32 vlan_macip_lens;
+	__le32 seqnum_seed;
+	__le32 type_tucmd_mlhl;
+	__le32 mss_l4len_idx;
+};
+
 /****************** Manageability Host Interface defines *********************/
 #define TXGBE_HI_MAX_BLOCK_BYTE_LENGTH  256 /* Num of bytes in range */
 #define TXGBE_HI_MAX_BLOCK_DWORD_LENGTH 64 /* Num of dwords in range */
@@ -1932,9 +2208,80 @@ struct txgbe_bus_info {
 	u16 lan_id;
 };
 
+/* Statistics counters collected by the MAC */
+struct txgbe_hw_stats {
+	u64 crcerrs;
+	u64 illerrc;
+	u64 errbc;
+	u64 mspdc;
+	u64 mpctotal;
+	u64 mpc[8];
+	u64 mlfc;
+	u64 mrfc;
+	u64 rlec;
+	u64 lxontxc;
+	u64 lxonrxc;
+	u64 lxofftxc;
+	u64 lxoffrxc;
+	u64 pxontxc[8];
+	u64 pxonrxc[8];
+	u64 pxofftxc[8];
+	u64 pxoffrxc[8];
+	u64 prc64;
+	u64 prc127;
+	u64 prc255;
+	u64 prc511;
+	u64 prc1023;
+	u64 prc1522;
+	u64 gprc;
+	u64 bprc;
+	u64 mprc;
+	u64 gptc;
+	u64 gorc;
+	u64 gotc;
+	u64 rnbc[8];
+	u64 ruc;
+	u64 rfc;
+	u64 roc;
+	u64 rjc;
+	u64 mngprc;
+	u64 mngpdc;
+	u64 mngptc;
+	u64 tor;
+	u64 tpr;
+	u64 tpt;
+	u64 ptc64;
+	u64 ptc127;
+	u64 ptc255;
+	u64 ptc511;
+	u64 ptc1023;
+	u64 ptc1522;
+	u64 mptc;
+	u64 bptc;
+	u64 xec;
+	u64 qprc[16];
+	u64 qptc[16];
+	u64 qbrc[16];
+	u64 qbtc[16];
+	u64 qprdc[16];
+	u64 pxon2offc[8];
+	u64 fccrc;
+	u64 fclast;
+	u64 ldpcec;
+	u64 pcrc8ec;
+	u64 b2ospc;
+	u64 b2ogprc;
+	u64 o2bgptc;
+	u64 o2bspc;
+};
+
 /* forward declaration */
 struct txgbe_hw;
 
+/* iterator type for walking multicast address lists */
+typedef u8 *(*txgbe_mc_addr_itr)(struct txgbe_hw *hw, u8 **mc_addr_ptr,
+				 u32 *vmdq);
+
 /* Function pointer table */
 struct txgbe_eeprom_operations {
 	s32 (*init_params)(struct txgbe_hw *hw);
@@ -1949,6 +2296,7 @@ struct txgbe_mac_operations {
 	s32 (*init_hw)(struct txgbe_hw *hw);
 	s32 (*reset_hw)(struct txgbe_hw *hw);
 	s32 (*start_hw)(struct txgbe_hw *hw);
+	s32 (*clear_hw_cntrs)(struct txgbe_hw *hw);
 	enum txgbe_media_type (*get_media_type)(struct txgbe_hw *hw);
 	s32 (*get_mac_addr)(struct txgbe_hw *hw, u8 *mac_addr);
 	s32 (*get_san_mac_addr)(struct txgbe_hw *hw, u8 *san_mac_addr);
@@ -1957,6 +2305,9 @@ struct txgbe_mac_operations {
 	s32 (*stop_adapter)(struct txgbe_hw *hw);
 	s32 (*get_bus_info)(struct txgbe_hw *hw);
 	void (*set_lan_id)(struct txgbe_hw *hw);
+	s32 (*enable_rx_dma)(struct txgbe_hw *hw, u32 regval);
+	s32 (*disable_sec_rx_path)(struct txgbe_hw *hw);
+	s32 (*enable_sec_rx_path)(struct txgbe_hw *hw);
 	s32 (*acquire_swfw_sync)(struct txgbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct txgbe_hw *hw, u32 mask);
 
@@ -1974,17 +2325,28 @@ struct txgbe_mac_operations {
 				     bool *autoneg);
 	void (*set_rate_select_speed)(struct txgbe_hw *hw, u32 speed);
 
+	/* Packet Buffer manipulation */
+	void (*setup_rxpba)(struct txgbe_hw *hw, int num_pb, u32 headroom,
+			    int strategy);
+
 	/* LED */
 	s32 (*led_on)(struct txgbe_hw *hw, u32 index);
 	s32 (*led_off)(struct txgbe_hw *hw, u32 index);
 
-	/* RAR */
+	/* RAR, Multicast, VLAN */
 	s32 (*set_rar)(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
 		       u32 enable_addr);
 	s32 (*clear_rar)(struct txgbe_hw *hw, u32 index);
 	s32 (*set_vmdq_san_mac)(struct txgbe_hw *hw, u32 vmdq);
 	s32 (*clear_vmdq)(struct txgbe_hw *hw, u32 rar, u32 vmdq);
 	s32 (*init_rx_addrs)(struct txgbe_hw *hw);
+	s32 (*update_uc_addr_list)(struct txgbe_hw *hw, u8 *addr_list,
+				   u32 addr_count, txgbe_mc_addr_itr func);
+	s32 (*update_mc_addr_list)(struct txgbe_hw *hw, u8 *mc_addr_list,
+				   u32 mc_addr_count,
+				   txgbe_mc_addr_itr func, bool clear);
+	s32 (*clear_vfta)(struct txgbe_hw *hw);
+	s32 (*set_vfta)(struct txgbe_hw *hw, u32 vlan, u32 vind, bool vlan_on);
 	s32 (*init_uta_tables)(struct txgbe_hw *hw);
 
 	/* Manageability interface */
@@ -1992,6 +2354,7 @@ struct txgbe_mac_operations {
 			      u8 build, u8 ver);
 	s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw);
 	void (*disable_rx)(struct txgbe_hw *hw);
+	void (*enable_rx)(struct txgbe_hw *hw);
 };
 
 struct txgbe_phy_operations {
@@ -2022,9 +2385,15 @@ struct txgbe_mac_info {
 	u16 wwnn_prefix;
 	/* prefix for World Wide Port Name (WWPN) */
 	u16 wwpn_prefix;
+#define TXGBE_MAX_MTA                   128
+#define TXGBE_MAX_VFTA_ENTRIES          128
+	u32 mta_shadow[TXGBE_MAX_MTA];
 	s32 mc_filter_type;
 	u32 mcft_size;
+	u32 vft_shadow[TXGBE_MAX_VFTA_ENTRIES];
+	u32 vft_size;
 	u32 num_rar_entries;
+	u32 rx_pb_size;
 	u32 max_tx_queues;
 	u32 max_rx_queues;
 	u32 orig_sr_pcs_ctl2;
@@ -2081,6 +2450,7 @@ struct txgbe_hw {
 	bool force_full_reset;
 	enum txgbe_link_status link_status;
 	u16 subsystem_id;
+	u16 tpid[8];
 };
 
 #define TCALL(hw, func, args...) (((hw)->func != NULL) \
-- 
2.27.0
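
A minimal sketch of how the new PTYPE macros above might be used on the
receive path. It assumes the union txgbe_rx_desc layout and the
TXGBE_PTYPE_* defines from this patch; the helper name is hypothetical
and is not part of the series.

/* Decode the packet type of a writeback descriptor and test whether it
 * is a non-tunneled TCP frame (IPv4 or IPv6). Illustrative helper only.
 */
static bool txgbe_rx_ptype_is_tcp(union txgbe_rx_desc *rx_desc)
{
	u8 ptype = TXGBE_RXD_PKTTYPE(rx_desc);

	/* the top two ptype bits are non-zero for tunneled frames */
	if (TXGBE_PTYPE_TUN(ptype))
		return false;

	/* PKT must be "ip"; TYPL4 masks off the IPv6 bit, so this
	 * matches both IPv4 and IPv6 TCP
	 */
	return TXGBE_PTYPE_PKT(ptype) == TXGBE_PTYPE_PKT_IP &&
	       TXGBE_PTYPE_TYPL4(ptype) == TXGBE_PTYPE_TYP_TCP;
}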




^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 07/14] net: txgbe: Support flow control
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (5 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 06/14] net: txgbe: Support to receive and transmit packets Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 08/14] net: txgbe: Support flow director Jiawen Wu
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add flow control support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
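Notes: txgbe_negotiate_fc() below resolves the negotiated mode with the
usual 802.3 Annex 28B priority table. As a rough sketch of the same
ladder in isolation (the names here are illustrative, not part of the
patch):

/* Resolve the flow control mode from the local and link partner
 * PAUSE (sym) and ASM_DIR (asm) advertisement bits; mirrors the
 * if/else ladder in txgbe_negotiate_fc().
 */
static enum txgbe_fc_mode resolve_pause(bool loc_sym, bool loc_asm,
					bool lp_sym, bool lp_asm,
					enum txgbe_fc_mode requested)
{
	if (loc_sym && lp_sym)
		/* both sides advertise symmetric pause: FULL, unless the
		 * user asked for Rx-only (we had to over-advertise since
		 * Rx-only cannot be advertised on its own)
		 */
		return requested == txgbe_fc_full ?
		       txgbe_fc_full : txgbe_fc_rx_pause;
	if (!loc_sym && loc_asm && lp_sym && lp_asm)
		return txgbe_fc_tx_pause;
	if (loc_sym && loc_asm && !lp_sym && lp_asm)
		return txgbe_fc_rx_pause;
	return txgbe_fc_none;
}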
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |   2 +
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c | 366 ++++++++++++++++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   4 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  96 ++++-
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |  70 ++++
 5 files changed, 536 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 487e38007ccc..2fd11dfa43b4 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -40,6 +40,8 @@
 #define TXGBE_MAX_RXD                   8192
 #define TXGBE_MIN_RXD                   128
 
+#define TXGBE_DEFAULT_FCPAUSE   0xFFFF
+
 /* Supported Rx Buffer Sizes */
 #define TXGBE_RXBUFFER_256       256  /* Used for skb receive header */
 #define TXGBE_RXBUFFER_2K       2048
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index d3ea9cb82690..70db88dfb412 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -158,6 +158,87 @@ s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_setup_fc - Set up flow control
+ *  @hw: pointer to hardware structure
+ *
+ *  Called at init time to set up flow control.
+ **/
+s32 txgbe_setup_fc(struct txgbe_hw *hw)
+{
+	s32 ret_val = 0;
+	u32 pcap = 0;
+	u32 value = 0;
+	u32 pcap_backplane = 0;
+
+	/* 10gig parts do not have a word in the EEPROM to determine the
+	 * default flow control setting, so we explicitly set it to full.
+	 */
+	if (hw->fc.requested_mode == txgbe_fc_default)
+		hw->fc.requested_mode = txgbe_fc_full;
+
+	/* The possible values of fc.requested_mode are:
+	 * 0: Flow control is completely disabled
+	 * 1: Rx flow control is enabled (we can receive pause frames,
+	 *    but not send pause frames).
+	 * 2: Tx flow control is enabled (we can send pause frames but
+	 *    we do not support receiving pause frames).
+	 * 3: Both Rx and Tx flow control (symmetric) are enabled.
+	 * other: Invalid.
+	 */
+	switch (hw->fc.requested_mode) {
+	case txgbe_fc_none:
+		/* Flow control completely disabled by software override. */
+		break;
+	case txgbe_fc_tx_pause:
+		/* Tx Flow control is enabled, and Rx Flow control is
+		 * disabled by software override.
+		 */
+		pcap |= TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM;
+		pcap_backplane |= TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM;
+		break;
+	case txgbe_fc_rx_pause:
+		/* Rx Flow control is enabled and Tx Flow control is
+		 * disabled by software override. Since there really
+		 * isn't a way to advertise that we are capable of RX
+		 * Pause ONLY, we will advertise that we support both
+		 * symmetric and asymmetric Rx PAUSE, as such we fall
+		 * through to the fc_full statement.  Later, we will
+		 * disable the adapter's ability to send PAUSE frames.
+		 */
+	case txgbe_fc_full:
+		/* Flow control (both Rx and Tx) is enabled by SW override. */
+		pcap |= TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM |
+			TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM;
+		pcap_backplane |= TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM |
+				  TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM;
+		break;
+	default:
+		ERROR_REPORT1(TXGBE_ERROR_ARGUMENT,
+			      "Flow control param set incorrectly\n");
+		ret_val = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	/* Enable auto-negotiation between the MAC & PHY;
+	 * the MAC will advertise clause 37 flow control.
+	 */
+	value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV);
+	value = (value & ~(TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM |
+			   TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM)) | pcap;
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV, value);
+
+	if (hw->phy.media_type == txgbe_media_type_backplane) {
+		value = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1);
+		value = (value & ~(TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM |
+				   TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM)) |
+			pcap_backplane;
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1, value);
+	}
+out:
+	return ret_val;
+}
+
 /**
  *  txgbe_read_pba_string - Reads part number string from EEPROM
  *  @hw: pointer to hardware structure
@@ -850,6 +931,285 @@ s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
 	return 0;
 }
 
+/**
+ *  txgbe_fc_enable - Enable flow control
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to the current settings.
+ **/
+s32 txgbe_fc_enable(struct txgbe_hw *hw)
+{
+	s32 ret_val = 0;
+	u32 mflcn_reg, fccfg_reg;
+	u32 reg;
+	u32 fcrtl, fcrth;
+
+	/* Validate the water mark configuration */
+	if (!hw->fc.pause_time) {
+		ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+		goto out;
+	}
+
+	/* Low water mark of zero causes XOFF floods */
+	if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+	    hw->fc.high_water) {
+		if (!hw->fc.low_water ||
+		    hw->fc.low_water >= hw->fc.high_water) {
+			DEBUGOUT("Invalid water mark configuration\n");
+			ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+			goto out;
+		}
+	}
+
+	/* Negotiate the fc mode to use */
+	txgbe_fc_autoneg(hw);
+
+	/* Disable any previous flow control settings */
+	mflcn_reg = rd32(hw, TXGBE_MAC_RX_FLOW_CTRL);
+	mflcn_reg &= ~(TXGBE_MAC_RX_FLOW_CTRL_PFCE |
+		       TXGBE_MAC_RX_FLOW_CTRL_RFE);
+
+	fccfg_reg = rd32(hw, TXGBE_RDB_RFCC);
+	fccfg_reg &= ~(TXGBE_RDB_RFCC_RFCE_802_3X |
+		       TXGBE_RDB_RFCC_RFCE_PRIORITY);
+
+	/* The possible values of fc.current_mode are:
+	 * 0: Flow control is completely disabled
+	 * 1: Rx flow control is enabled (we can receive pause frames,
+	 *    but not send pause frames).
+	 * 2: Tx flow control is enabled (we can send pause frames but
+	 *    we do not support receiving pause frames).
+	 * 3: Both Rx and Tx flow control (symmetric) are enabled.
+	 * other: Invalid.
+	 */
+	switch (hw->fc.current_mode) {
+	case txgbe_fc_none:
+		/* Flow control is disabled by software override or autoneg.
+		 * The code below will actually disable it in the HW.
+		 */
+		break;
+	case txgbe_fc_rx_pause:
+		/* Rx Flow control is enabled and Tx Flow control is
+		 * disabled by software override. Since there really
+		 * isn't a way to advertise that we are capable of RX
+		 * Pause ONLY, we will advertise that we support both
+		 * symmetric and asymmetric Rx PAUSE.  Later, we will
+		 * disable the adapter's ability to send PAUSE frames.
+		 */
+		mflcn_reg |= TXGBE_MAC_RX_FLOW_CTRL_RFE;
+		break;
+	case txgbe_fc_tx_pause:
+		/* Tx Flow control is enabled, and Rx Flow control is
+		 * disabled by software override.
+		 */
+		fccfg_reg |= TXGBE_RDB_RFCC_RFCE_802_3X;
+		break;
+	case txgbe_fc_full:
+		/* Flow control (both Rx and Tx) is enabled by SW override. */
+		mflcn_reg |= TXGBE_MAC_RX_FLOW_CTRL_RFE;
+		fccfg_reg |= TXGBE_RDB_RFCC_RFCE_802_3X;
+		break;
+	default:
+		ERROR_REPORT1(TXGBE_ERROR_ARGUMENT,
+			      "Flow control param set incorrectly\n");
+		ret_val = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	/* Set 802.3x based flow control settings. */
+	wr32(hw, TXGBE_MAC_RX_FLOW_CTRL, mflcn_reg);
+	wr32(hw, TXGBE_RDB_RFCC, fccfg_reg);
+
+	/* Set up and enable Rx high/low water mark thresholds, enable XON. */
+	if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+	    hw->fc.high_water) {
+		fcrtl = (hw->fc.low_water << 10) |
+			TXGBE_RDB_RFCL_XONE;
+		wr32(hw, TXGBE_RDB_RFCL(0), fcrtl);
+		fcrth = (hw->fc.high_water << 10) |
+			TXGBE_RDB_RFCH_XOFFE;
+	} else {
+		wr32(hw, TXGBE_RDB_RFCL(0), 0);
+		/* In order to prevent Tx hangs when the internal Tx
+		 * switch is enabled we must set the high water mark
+		 * to the Rx packet buffer size - 24KB.  This allows
+		 * the Tx switch to function even under heavy Rx
+		 * workloads.
+		 */
+		fcrth = rd32(hw, TXGBE_RDB_PB_SZ(0)) - 24576;
+	}
+
+	wr32(hw, TXGBE_RDB_RFCH(0), fcrth);
+
+	/* Configure pause time */
+	reg = hw->fc.pause_time * 0x00010001;
+	wr32(hw, TXGBE_RDB_RFCV(0), reg);
+
+	/* Configure flow control refresh threshold value */
+	wr32(hw, TXGBE_RDB_RFCRT, hw->fc.pause_time / 2);
+
+out:
+	return ret_val;
+}
+
+/**
+ *  txgbe_negotiate_fc - Negotiate flow control
+ *  @hw: pointer to hardware structure
+ *  @adv_reg: flow control advertised settings
+ *  @lp_reg: link partner's flow control settings
+ *  @adv_sym: symmetric pause bit in advertisement
+ *  @adv_asm: asymmetric pause bit in advertisement
+ *  @lp_sym: symmetric pause bit in link partner advertisement
+ *  @lp_asm: asymmetric pause bit in link partner advertisement
+ *
+ *  Find the intersection between advertised settings and link partner's
+ *  advertised settings
+ **/
+static s32 txgbe_negotiate_fc(struct txgbe_hw *hw, u32 adv_reg, u32 lp_reg,
+			      u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm)
+{
+	if (!adv_reg || !lp_reg) {
+		ERROR_REPORT3(TXGBE_ERROR_UNSUPPORTED,
+			      "Local or link partner's advertised flow control settings are NULL. Local: %x, link partner: %x\n",
+			      adv_reg, lp_reg);
+		return TXGBE_ERR_FC_NOT_NEGOTIATED;
+	}
+
+	if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) {
+		/* Now we need to check if the user selected Rx ONLY
+		 * of pause frames.  In this case, we had to advertise
+		 * FULL flow control because we could not advertise RX
+		 * ONLY. Hence, we must now check to see if we need to
+		 * turn OFF the TRANSMISSION of PAUSE frames.
+		 */
+		if (hw->fc.requested_mode == txgbe_fc_full) {
+			hw->fc.current_mode = txgbe_fc_full;
+			DEBUGOUT("Flow Control = FULL.\n");
+		} else {
+			hw->fc.current_mode = txgbe_fc_rx_pause;
+			DEBUGOUT("Flow Control=RX PAUSE frames only\n");
+		}
+	} else if (!(adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+		   (lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+		hw->fc.current_mode = txgbe_fc_tx_pause;
+		DEBUGOUT("Flow Control = TX PAUSE frames only.\n");
+	} else if ((adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+		   !(lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+		hw->fc.current_mode = txgbe_fc_rx_pause;
+		DEBUGOUT("Flow Control = RX PAUSE frames only.\n");
+	} else {
+		hw->fc.current_mode = txgbe_fc_none;
+		DEBUGOUT("Flow Control = NONE.\n");
+	}
+	return 0;
+}
+
+/**
+ *  txgbe_fc_autoneg_fiber - Enable flow control on 1 gig fiber
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to negotiation on 1 gig fiber.
+ **/
+static s32 txgbe_fc_autoneg_fiber(struct txgbe_hw *hw)
+{
+	u32 pcs_anadv_reg, pcs_lpab_reg;
+	s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED;
+
+	pcs_anadv_reg = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV);
+	pcs_lpab_reg = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_LP_BABL);
+
+	ret_val = txgbe_negotiate_fc(hw, pcs_anadv_reg,
+				     pcs_lpab_reg,
+				     TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM,
+				     TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM,
+				     TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM,
+				     TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM);
+
+	return ret_val;
+}
+
+/**
+ *  txgbe_fc_autoneg_backplane - Enable flow control per IEEE clause 37
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to IEEE clause 37.
+ **/
+static s32 txgbe_fc_autoneg_backplane(struct txgbe_hw *hw)
+{
+	u32 anlp1_reg, autoc_reg;
+	s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED;
+
+	/* Read the 10g AN autoc and LP ability registers and resolve
+	 * local flow control settings accordingly
+	 */
+	autoc_reg = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1);
+	anlp1_reg = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_LP_ABL1);
+
+	ret_val = txgbe_negotiate_fc(hw, autoc_reg,
+				     anlp1_reg,
+				     TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM,
+				     TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM,
+				     TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM,
+				     TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM);
+
+	return ret_val;
+}
+
+/**
+ *  txgbe_fc_autoneg - Configure flow control
+ *  @hw: pointer to hardware structure
+ *
+ *  Compares our advertised flow control capabilities to those advertised by
+ *  our link partner, and determines the proper flow control mode to use.
+ **/
+void txgbe_fc_autoneg(struct txgbe_hw *hw)
+{
+	s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED;
+	u32 speed;
+	bool link_up;
+
+	/* AN should have completed when the cable was plugged in.
+	 * Look for reasons to bail out.  Bail out if:
+	 * - FC autoneg is disabled, or if
+	 * - link is not up.
+	 */
+	if (hw->fc.disable_fc_autoneg) {
+		ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED,
+			      "Flow control autoneg is disabled");
+		goto out;
+	}
+
+	TCALL(hw, mac.ops.check_link, &speed, &link_up, false);
+	if (!link_up) {
+		ERROR_REPORT1(TXGBE_ERROR_SOFTWARE, "The link is down");
+		goto out;
+	}
+
+	switch (hw->phy.media_type) {
+	/* Autoneg flow control on fiber adapters */
+	case txgbe_media_type_fiber:
+		if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+			ret_val = txgbe_fc_autoneg_fiber(hw);
+		break;
+
+	/* Autoneg flow control on backplane adapters */
+	case txgbe_media_type_backplane:
+		ret_val = txgbe_fc_autoneg_backplane(hw);
+		break;
+
+	default:
+		break;
+	}
+
+out:
+	if (ret_val == 0) {
+		hw->fc.fc_was_autonegged = true;
+	} else {
+		hw->fc.fc_was_autonegged = false;
+		hw->fc.current_mode = hw->fc.requested_mode;
+	}
+}
+
 /**
  *  txgbe_disable_pcie_master - Disable PCI-express master access
  *  @hw: pointer to hardware structure
@@ -2313,6 +2673,9 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	mac->ops.clear_vfta = txgbe_clear_vfta;
 	mac->ops.init_uta_tables = txgbe_init_uta_tables;
 
+	/* Flow Control */
+	mac->ops.fc_enable = txgbe_fc_enable;
+	mac->ops.setup_fc = txgbe_setup_fc;
 
 	/* Link */
 	mac->ops.get_link_capabilities = txgbe_get_link_capabilities;
@@ -3726,6 +4089,9 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
 
 	TXGBE_WRITE_FLUSH(hw);
 
+	/* Setup flow control */
+	ret_val = TCALL(hw, mac.ops.setup_fc);
+
 	/* Clear the rate limiters */
 	for (i = 0; i < hw->mac.max_tx_queues; i++) {
 		wr32(hw, TXGBE_TDM_RP_IDX, i);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index e400b0333cc4..1b11735cc278 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -90,6 +90,10 @@ s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
 s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw);
 s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
 
+s32 txgbe_fc_enable(struct txgbe_hw *hw);
+void txgbe_fc_autoneg(struct txgbe_hw *hw);
+s32 txgbe_setup_fc(struct txgbe_hw *hw);
+
 s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask);
 void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask);
 s32 txgbe_disable_pcie_master(struct txgbe_hw *hw);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 550af406de8d..6f1836a61711 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -1948,12 +1948,13 @@ void txgbe_set_rx_drop_en(struct txgbe_adapter *adapter)
 	int i;
 
 	/* We should set the drop enable bit if:
-	 *  Number of Rx queues > 1
+	 *  Number of Rx queues > 1 and flow control is disabled
 	 *
 	 *  This allows us to avoid head of line blocking for security
 	 *  and performance reasons.
 	 */
-	if (adapter->num_rx_queues > 1) {
+	if (adapter->num_rx_queues > 1 &&
+	    !(adapter->hw.fc.current_mode & txgbe_fc_tx_pause)) {
 		for (i = 0; i < adapter->num_rx_queues; i++)
 			txgbe_enable_rx_drop(adapter, adapter->rx_ring[i]);
 	} else {
@@ -2767,11 +2768,94 @@ void txgbe_clear_vxlan_port(struct txgbe_adapter *adapter)
 				    NETIF_F_GSO_UDP_TUNNEL | \
 				    NETIF_F_GSO_UDP_TUNNEL_CSUM)
 
+/* Additional bittime to account for TXGBE framing */
+#define TXGBE_ETH_FRAMING 20
+
+/**
+ * txgbe_hpbthresh - calculate high water mark for flow control
+ *
+ * @adapter: board private structure to calculate for
+ * @pb: packet buffer to calculate
+ **/
+static int txgbe_hpbthresh(struct txgbe_adapter *adapter, int pb)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct net_device *dev = adapter->netdev;
+	int link, tc, kb, marker;
+	u32 dv_id, rx_pba;
+
+	/* Calculate max LAN frame size */
+	link = dev->mtu + ETH_HLEN + ETH_FCS_LEN + TXGBE_ETH_FRAMING;
+	tc = link;
+
+	/* Calculate delay value for device */
+	dv_id = TXGBE_DV(link, tc);
+
+	/* Delay value is calculated in bit times convert to KB */
+	kb = TXGBE_BT2KB(dv_id);
+	rx_pba = rd32(hw, TXGBE_RDB_PB_SZ(pb)) >> TXGBE_RDB_PB_SZ_SHIFT;
+
+	marker = rx_pba - kb;
+
+	/* It is possible that the packet buffer is not large enough
+	 * to provide the required headroom. In this case, warn the user
+	 * and do the best we can.
+	 */
+	if (marker < 0) {
+		txgbe_warn(drv,
+			   "Packet Buffer(%i) cannot provide enough headroom to support flow control. Decrease MTU or number of traffic classes\n",
+			   pb);
+		marker = tc + 1;
+	}
+
+	return marker;
+}
+
+/**
+ * txgbe_lpbthresh - calculate low water mark for flow control
+ *
+ * @adapter: board private structure to calculate for
+ * @pb: packet buffer to calculate
+ **/
+static int txgbe_lpbthresh(struct txgbe_adapter *adapter, int __maybe_unused pb)
+{
+	struct net_device *dev = adapter->netdev;
+	int tc;
+	u32 dv_id;
+
+	/* Calculate max LAN frame size */
+	tc = dev->mtu + ETH_HLEN + ETH_FCS_LEN;
+
+	/* Calculate delay value for device */
+	dv_id = TXGBE_LOW_DV(tc);
+
+	/* Delay value is calculated in bit times convert to KB */
+	return TXGBE_BT2KB(dv_id);
+}
+
+/**
+ * txgbe_pbthresh_setup - calculate and set up the high and low water marks
+ * @adapter: board private structure to calculate for
+ **/
+static void txgbe_pbthresh_setup(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	hw->fc.high_water = txgbe_hpbthresh(adapter, 0);
+	hw->fc.low_water = txgbe_lpbthresh(adapter, 0);
+
+	/* Low water marks must not be larger than high water marks */
+	if (hw->fc.low_water > hw->fc.high_water)
+		hw->fc.low_water = 0;
+}
+
 static void txgbe_configure_pb(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
 
 	TCALL(hw, mac.ops.setup_rxpba, 0, 0, PBA_STRATEGY_EQUAL);
+	txgbe_pbthresh_setup(adapter);
 }
 
 void txgbe_configure_isb(struct txgbe_adapter *adapter)
@@ -3295,6 +3379,13 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 
 	adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE;
 
+	/* default flow control settings */
+	hw->fc.requested_mode = txgbe_fc_full;
+	hw->fc.current_mode = txgbe_fc_full;
+
+	hw->fc.pause_time = TXGBE_DEFAULT_FCPAUSE;
+	hw->fc.disable_fc_autoneg = false;
+
 	/* set default ring sizes */
 	adapter->tx_ring_count = TXGBE_DEFAULT_TXD;
 	adapter->rx_ring_count = TXGBE_DEFAULT_RXD;
@@ -4004,6 +4095,7 @@ static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter)
 	adapter->link_speed = link_speed;
 
 	if (link_up) {
+		TCALL(hw, mac.ops.fc_enable);
 		txgbe_set_rx_drop_en(adapter);
 
 		if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) {
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index c673c5d86b1a..af9339777ea5 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -2085,6 +2085,51 @@ typedef u32 txgbe_physical_layer;
 #define TXGBE_PHYSICAL_LAYER_10GBASE_XAUI       0x1000
 #define TXGBE_PHYSICAL_LAYER_SFP_ACTIVE_DA      0x2000
 #define TXGBE_PHYSICAL_LAYER_1000BASE_SX        0x4000
+/* Flow Control Data Sheet defined values
+ * Calculation and defines taken from 802.1Qbb Annex O
+ */
+
+/* BitTimes (BT) conversion */
+#define TXGBE_BT2KB(_BT)        (((_BT) + (8 * 1024 - 1)) / (8 * 1024))
+#define TXGBE_B2BT(_B)          ((_B) * 8)
+
+/* Calculate Delay to respond to PFC */
+#define TXGBE_PFC_D     672
+
+/* Calculate Cable Delay */
+#define TXGBE_CABLE_DC  5556
+
+/* Calculate Interface Delay */
+#define TXGBE_PHY_D     12800
+#define TXGBE_MAC_D     4096
+#define TXGBE_XAUI_D    (2 * 1024)
+
+#define TXGBE_ID        (TXGBE_MAC_D + TXGBE_XAUI_D + TXGBE_PHY_D)
+
+/* Calculate Delay incurred from higher layer */
+#define TXGBE_HD        6144
+
+/* Calculate PCI Bus delay for low thresholds */
+#define TXGBE_PCI_DELAY 10000
+
+/* Calculate delay value in bit times */
+#define TXGBE_DV(_max_frame_link, _max_frame_tc) \
+			((36 * \
+			  (TXGBE_B2BT(_max_frame_link) + \
+			   TXGBE_PFC_D + \
+			   (2 * TXGBE_CABLE_DC) + \
+			   (2 * TXGBE_ID) + \
+			   TXGBE_HD) / 25 + 1) + \
+			 2 * TXGBE_B2BT(_max_frame_tc))
+
+/* Calculate low threshold delay values */
+#define TXGBE_LOW_DV_SP(_max_frame_tc) \
+			(2 * TXGBE_B2BT(_max_frame_tc) + \
+			(36 * TXGBE_PCI_DELAY / 25) + 1)
+
+#define TXGBE_LOW_DV(_max_frame_tc) \
+			(2 * TXGBE_LOW_DV_SP(_max_frame_tc))
 
 enum txgbe_eeprom_type {
 	txgbe_eeprom_uninitialized = 0,
@@ -2154,6 +2199,15 @@ enum txgbe_media_type {
 	txgbe_media_type_virtual
 };
 
+/* Flow Control Settings */
+enum txgbe_fc_mode {
+	txgbe_fc_none = 0,
+	txgbe_fc_rx_pause,
+	txgbe_fc_tx_pause,
+	txgbe_fc_full,
+	txgbe_fc_default
+};
+
 /* PCI bus types */
 enum txgbe_bus_type {
 	txgbe_bus_type_unknown = 0,
@@ -2208,6 +2262,17 @@ struct txgbe_bus_info {
 	u16 lan_id;
 };
 
+/* Flow control parameters */
+struct txgbe_fc_info {
+	u32 high_water; /* Flow Ctrl High-water */
+	u32 low_water; /* Flow Ctrl Low-water */
+	u16 pause_time; /* Flow Control Pause timer */
+	bool disable_fc_autoneg; /* Do not autonegotiate FC */
+	bool fc_was_autonegged; /* Is current_mode the result of autonegging? */
+	enum txgbe_fc_mode current_mode; /* FC mode in effect */
+	enum txgbe_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
 /* Statistics counters collected by the MAC */
 struct txgbe_hw_stats {
 	u64 crcerrs;
@@ -2349,6 +2414,10 @@ struct txgbe_mac_operations {
 	s32 (*set_vfta)(struct txgbe_hw *hw, u32 vlan, u32 vind, bool vlan_on);
 	s32 (*init_uta_tables)(struct txgbe_hw *hw);
 
+	/* Flow Control */
+	s32 (*fc_enable)(struct txgbe_hw *hw);
+	s32 (*setup_fc)(struct txgbe_hw *hw);
+
 	/* Manageability interface */
 	s32 (*set_fw_drv_ver)(struct txgbe_hw *hw, u8 maj, u8 min,
 			      u8 build, u8 ver);
@@ -2437,6 +2506,7 @@ struct txgbe_hw {
 	void *back;
 	struct txgbe_mac_info mac;
 	struct txgbe_addr_filter_info addr_ctrl;
+	struct txgbe_fc_info fc;
 	struct txgbe_phy_info phy;
 	struct txgbe_eeprom_info eeprom;
 	struct txgbe_bus_info bus;
-- 
2.27.0
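
To make the 802.1Qbb Annex O arithmetic in txgbe_hpbthresh() concrete,
here is the delay value worked through for a 1500-byte MTU. The 512 KB
packet buffer is only an assumed figure; the real size is read from the
TXGBE_RDB_PB_SZ register at runtime.

  link = 1500 + 14 (ETH_HLEN) + 4 (ETH_FCS_LEN) + 20 (TXGBE_ETH_FRAMING)
       = 1538 bytes, so TXGBE_B2BT(link) = 12304 bit times
  ID   = 4096 (MAC) + 2048 (XAUI) + 12800 (PHY) = 18944
  DV   = 36 * (12304 + 672 + 2 * 5556 + 2 * 18944 + 6144) / 25 + 1
         + 2 * 12304
       = 98093 + 24608 = 122701 bit times
  kb   = TXGBE_BT2KB(122701) = 15
  high water mark = rx_pba - kb = 512 - 15 = 497

Both water marks are kept in 1 KB units; txgbe_fc_enable() shifts them
left by 10 before writing the RFCL/RFCH registers.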




^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 08/14] net: txgbe: Support flow director
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (6 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 07/14] net: txgbe: Support flow control Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 09/14] net: txgbe: Support PTP Jiawen Wu
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support for flow director signature and perfect filters.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
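Notes: the ATR hash below XORs together shifted copies of two 32-bit
hash dwords, with each of the 32 key bits selecting which accumulator
(common, bucket or signature) a copy lands in. A rough loop form of the
unrolled TXGBE_COMPUTE_SIG_HASH_ITERATION macro (illustrative only; the
driver unrolls it so the constant key tests fold away at compile time,
and the real code also mixes the VLAN bits into lo_hash_dword only
after bit 0 has been processed, which this sketch glosses over):

/* Loop form of the unrolled signature hash computation. */
static u32 sig_hash_sketch(u32 lo_hash_dword, u32 hi_hash_dword)
{
	u32 common = 0, bucket = 0, sig = 0;
	int n;

	for (n = 0; n < 16; n++) {
		if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << n))
			common ^= lo_hash_dword >> n;
		else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n))
			bucket ^= lo_hash_dword >> n;
		else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << n))
			sig ^= lo_hash_dword << (16 - n);

		if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << (n + 16)))
			common ^= hi_hash_dword >> n;
		else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16)))
			bucket ^= hi_hash_dword >> n;
		else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << (n + 16)))
			sig ^= hi_hash_dword << (16 - n);
	}

	bucket = (bucket ^ common) & TXGBE_ATR_HASH_MASK;
	sig = (sig ^ (common << 16)) & (TXGBE_ATR_HASH_MASK << 16);

	return sig ^ bucket;
}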
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  34 +-
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c | 609 ++++++++++++++++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |  24 +
 .../net/ethernet/wangxun/txgbe/txgbe_lib.c    |  27 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   | 293 ++++++++-
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |  81 +++
 6 files changed, 1064 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 2fd11dfa43b4..7a6683b33930 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -150,6 +150,7 @@ struct txgbe_rx_queue_stats {
 
 enum txgbe_ring_state_t {
 	__TXGBE_RX_BUILD_SKB_ENABLED,
+	__TXGBE_TX_FDIR_INIT_DONE,
 	__TXGBE_TX_XPS_INIT_DONE,
 	__TXGBE_TX_DETECT_HANG,
 	__TXGBE_HANG_CHECK_ARMED,
@@ -200,7 +201,14 @@ struct txgbe_ring {
 	u16 next_to_use;
 	u16 next_to_clean;
 	u16 rx_buf_len;
-	u16 next_to_alloc;
+	union {
+		u16 next_to_alloc;
+		struct {
+			u8 atr_sample_rate;
+			u8 atr_count;
+		};
+	};
+
 	struct txgbe_queue_stats stats;
 	struct u64_stats_sync syncp;
 
@@ -213,6 +221,7 @@ struct txgbe_ring {
 enum txgbe_ring_f_enum {
 	RING_F_NONE = 0,
 	RING_F_RSS,
+	RING_F_FDIR,
 	RING_F_ARRAY_SIZE  /* must be last in enum set */
 };
 
@@ -352,6 +361,8 @@ struct txgbe_mac_addr {
 #define TXGBE_FLAG_MSIX_ENABLED                 ((u32)(1 << 3))
 #define TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE        ((u32)(1 << 4))
 #define TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE         ((u32)(1 << 5))
+#define TXGBE_FLAG_FDIR_HASH_CAPABLE            ((u32)(1 << 6))
+#define TXGBE_FLAG_FDIR_PERFECT_CAPABLE         ((u32)(1 << 7))
 
 /**
  * txgbe_adapter.flag2
@@ -367,6 +378,7 @@ struct txgbe_mac_addr {
 #define TXGBE_FLAG2_RSS_FIELD_IPV4_UDP          (1U << 8)
 #define TXGBE_FLAG2_RSS_FIELD_IPV6_UDP          (1U << 9)
 #define TXGBE_FLAG2_RSS_ENABLED                 (1U << 10)
+#define TXGBE_FLAG2_FDIR_REQUIRES_REINIT        (1U << 11)
 
 #define TXGBE_SET_FLAG(_input, _flag, _result) \
 	((_flag <= _result) ? \
@@ -397,6 +409,9 @@ struct txgbe_adapter {
 	u32 flags2;
 	u8 backplane_an;
 	u8 an37;
+
+	bool cloud_mode;
+
 	/* Tx fast path data */
 	int num_tx_queues;
 	u16 tx_itr_setting;
@@ -448,7 +463,13 @@ struct txgbe_adapter {
 
 	struct timer_list service_timer;
 	struct work_struct service_task;
+	struct hlist_head fdir_filter_list;
+	unsigned long fdir_overflow; /* number of times ATR was backed off */
+	union txgbe_atr_input fdir_mask;
+	int fdir_filter_count;
+	u32 fdir_pballoc;
 	u32 atr_sample_rate;
+	spinlock_t fdir_perfect_lock;
 
 	u8 __iomem *io_addr;    /* Mainly for iounmap use */
 
@@ -486,6 +507,13 @@ static inline u32 txgbe_misc_isb(struct txgbe_adapter *adapter,
 	return adapter->isb_mem[idx];
 }
 
+struct txgbe_fdir_filter {
+	struct  hlist_node fdir_node;
+	union txgbe_atr_input filter;
+	u16 sw_idx;
+	u16 action;
+};
+
 enum txgbe_state_t {
 	__TXGBE_TESTING,
 	__TXGBE_RESETTING,
@@ -653,6 +681,10 @@ static inline struct device *pci_dev_to_dev(struct pci_dev *pdev)
 
 extern u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg);
 
+#define TXGBE_HTONL(_i) htonl(_i)
+#define TXGBE_NTOHL(_i) ntohl(_i)
+#define TXGBE_NTOHS(_i) ntohs(_i)
+
 enum {
 	TXGBE_ERROR_SOFTWARE,
 	TXGBE_ERROR_POLLING,
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 70db88dfb412..040ddb0e46fd 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -4065,6 +4065,615 @@ s32 txgbe_reset_hw(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ * txgbe_fdir_check_cmd_complete - poll to check whether FDIRCMD is complete
+ * @hw: pointer to hardware structure
+ * @fdircmd: current value of FDIRCMD register
+ */
+static s32 txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, u32 *fdircmd)
+{
+	int i;
+
+	for (i = 0; i < TXGBE_RDB_FDIR_CMD_CMD_POLL; i++) {
+		*fdircmd = rd32(hw, TXGBE_RDB_FDIR_CMD);
+		if (!(*fdircmd & TXGBE_RDB_FDIR_CMD_CMD_MASK))
+			return 0;
+		usec_delay(10);
+	}
+
+	return TXGBE_ERR_FDIR_CMD_INCOMPLETE;
+}
+
+/**
+ *  txgbe_reinit_fdir_tables - Reinitialize Flow Director tables.
+ *  @hw: pointer to hardware structure
+ **/
+s32 txgbe_reinit_fdir_tables(struct txgbe_hw *hw)
+{
+	s32 err;
+	int i;
+	u32 fdirctrl = rd32(hw, TXGBE_RDB_FDIR_CTL);
+	u32 fdircmd;
+
+	fdirctrl &= ~TXGBE_RDB_FDIR_CTL_INIT_DONE;
+
+	/* Before starting reinitialization process,
+	 * FDIRCMD.CMD must be zero.
+	 */
+	err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+	if (err) {
+		DEBUGOUT("Flow Director previous command did not complete, aborting table re-initialization.\n");
+		return err;
+	}
+
+	wr32(hw, TXGBE_RDB_FDIR_FREE, 0);
+	TXGBE_WRITE_FLUSH(hw);
+	/* The sapphire adapter's flow director init flow cannot be restarted;
+	 * work around the sapphire silicon errata by performing the following
+	 * steps before re-writing the FDIRCTRL control register with the same
+	 * value:
+	 * - write 1 to bit 8 of the FDIRCMD register, then
+	 * - write 0 to bit 8 of the FDIRCMD register
+	 */
+	wr32m(hw, TXGBE_RDB_FDIR_CMD,
+	      TXGBE_RDB_FDIR_CMD_CLEARHT, TXGBE_RDB_FDIR_CMD_CLEARHT);
+	TXGBE_WRITE_FLUSH(hw);
+	wr32m(hw, TXGBE_RDB_FDIR_CMD,
+	      TXGBE_RDB_FDIR_CMD_CLEARHT, 0);
+	TXGBE_WRITE_FLUSH(hw);
+	/* Clear FDIR Hash register to clear any leftover hashes
+	 * waiting to be programmed.
+	 */
+	wr32(hw, TXGBE_RDB_FDIR_HASH, 0x00);
+	TXGBE_WRITE_FLUSH(hw);
+
+	wr32(hw, TXGBE_RDB_FDIR_CTL, fdirctrl);
+	TXGBE_WRITE_FLUSH(hw);
+
+	/* Poll init-done after we write FDIRCTRL register */
+	for (i = 0; i < TXGBE_FDIR_INIT_DONE_POLL; i++) {
+		if (rd32(hw, TXGBE_RDB_FDIR_CTL) &
+		    TXGBE_RDB_FDIR_CTL_INIT_DONE)
+			break;
+		msec_delay(1);
+	}
+	if (i >= TXGBE_FDIR_INIT_DONE_POLL) {
+		DEBUGOUT("Flow Director Signature poll time exceeded!\n");
+		return TXGBE_ERR_FDIR_REINIT_FAILED;
+	}
+
+	/* Clear FDIR statistics registers (read to clear) */
+	rd32(hw, TXGBE_RDB_FDIR_USE_ST);
+	rd32(hw, TXGBE_RDB_FDIR_FAIL_ST);
+	rd32(hw, TXGBE_RDB_FDIR_MATCH);
+	rd32(hw, TXGBE_RDB_FDIR_MISS);
+	rd32(hw, TXGBE_RDB_FDIR_LEN);
+
+	return 0;
+}
+
+/**
+ *  txgbe_fdir_enable - Initialize Flow Director control registers
+ *  @hw: pointer to hardware structure
+ *  @fdirctrl: value to write to flow director control register
+ **/
+static void txgbe_fdir_enable(struct txgbe_hw *hw, u32 fdirctrl)
+{
+	int i;
+
+	/* Prime the keys for hashing */
+	wr32(hw, TXGBE_RDB_FDIR_HKEY, TXGBE_ATR_BUCKET_HASH_KEY);
+	wr32(hw, TXGBE_RDB_FDIR_SKEY, TXGBE_ATR_SIGNATURE_HASH_KEY);
+
+	/* Poll init-done after we write the register.  Estimated times:
+	 *      10G: PBALLOC = 11b, timing is 60us
+	 *       1G: PBALLOC = 11b, timing is 600us
+	 *     100M: PBALLOC = 11b, timing is 6ms
+	 *
+	 *     Multiply these timings by 4 if under full Rx load
+	 *
+	 * So we'll poll for TXGBE_FDIR_INIT_DONE_POLL times, sleeping for
+	 * 1 msec per poll time.  If we're at line rate and drop to 100M, then
+	 * this might not finish in our poll time, but we can live with that
+	 * for now.
+	 */
+	wr32(hw, TXGBE_RDB_FDIR_CTL, fdirctrl);
+	TXGBE_WRITE_FLUSH(hw);
+	for (i = 0; i < TXGBE_RDB_FDIR_INIT_DONE_POLL; i++) {
+		if (rd32(hw, TXGBE_RDB_FDIR_CTL) &
+		    TXGBE_RDB_FDIR_CTL_INIT_DONE)
+			break;
+		msec_delay(1);
+	}
+
+	if (i >= TXGBE_RDB_FDIR_INIT_DONE_POLL)
+		DEBUGOUT("Flow Director poll time exceeded!\n");
+}
+
+/**
+ *  txgbe_init_fdir_signature - Initialize Flow Director sig filters
+ *  @hw: pointer to hardware structure
+ *  @fdirctrl: value to write to flow director control register, initially
+ *           contains just the value of the Rx packet buffer allocation
+ **/
+s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl)
+{
+	u32 flex = 0;
+
+	flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(0),
+		     ~(TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK |
+		       TXGBE_RDB_FDIR_FLEX_CFG_MSK |
+		       TXGBE_RDB_FDIR_FLEX_CFG_OFST));
+
+	flex |= (TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC |
+		 0x6 << TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT);
+	wr32(hw, TXGBE_RDB_FDIR_FLEX_CFG(0), flex);
+
+	/* Continue setup of fdirctrl register bits:
+	 *  Move the flexible bytes to use the ethertype - shift 6 words
+	 *  Set the maximum length per hash bucket to 0xA filters
+	 *  Send interrupt when 64 filters are left
+	 */
+	fdirctrl |= (0xF << TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT) |
+		    (0xA << TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT) |
+		    (4 << TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT);
+
+	/* write hashes and fdirctrl register, poll for completion */
+	txgbe_fdir_enable(hw, fdirctrl);
+
+	if (hw->revision_id == TXGBE_SP_MPW) {
+		/* errata 1: disable RSC of drop ring 0 */
+		wr32m(hw, TXGBE_PX_RR_CFG(0),
+		      TXGBE_PX_RR_CFG_RSC, ~TXGBE_PX_RR_CFG_RSC);
+	}
+	return 0;
+}
+
+/**
+ *  txgbe_init_fdir_perfect - Initialize Flow Director perfect filters
+ *  @hw: pointer to hardware structure
+ *  @fdirctrl: value to write to flow director control register, initially
+ *           contains just the value of the Rx packet buffer allocation
+ *  @cloud_mode: true - cloud mode, false - other mode
+ **/
+s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl,
+			    bool __maybe_unused cloud_mode)
+{
+	/* Continue setup of fdirctrl register bits:
+	 *  Turn perfect match filtering on
+	 *  Report hash in RSS field of Rx wb descriptor
+	 *  Initialize the drop queue
+	 *  Move the flexible bytes to use the ethertype - shift 6 words
+	 *  Set the maximum length per hash bucket to 0xA filters
+	 *  Send interrupt when 64 (0x4 * 16) filters are left
+	 */
+	fdirctrl |= TXGBE_RDB_FDIR_CTL_PERFECT_MATCH |
+		    (TXGBE_RDB_FDIR_DROP_QUEUE <<
+		     TXGBE_RDB_FDIR_CTL_DROP_Q_SHIFT) |
+		    (0xF << TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT) |
+		    (0xA << TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT) |
+		    (4 << TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT);
+
+	/* write hashes and fdirctrl register, poll for completion */
+	txgbe_fdir_enable(hw, fdirctrl);
+
+	if (hw->revision_id == TXGBE_SP_MPW) {
+		if (((struct txgbe_adapter *)hw->back)->num_rx_queues >
+			TXGBE_RDB_FDIR_DROP_QUEUE)
+			/* errata 1: disable RSC of drop ring */
+			wr32m(hw,
+			      TXGBE_PX_RR_CFG(TXGBE_RDB_FDIR_DROP_QUEUE),
+			      TXGBE_PX_RR_CFG_RSC, ~TXGBE_PX_RR_CFG_RSC);
+	}
+	return 0;
+}
+
+/* These defines allow us to quickly generate all of the necessary instructions
+ * in the function below by simply calling out TXGBE_COMPUTE_SIG_HASH_ITERATION
+ * for values 0 through 15
+ */
+#define TXGBE_ATR_COMMON_HASH_KEY \
+		(TXGBE_ATR_BUCKET_HASH_KEY & TXGBE_ATR_SIGNATURE_HASH_KEY)
+#define TXGBE_COMPUTE_SIG_HASH_ITERATION(_n) \
+do { \
+	u32 n = (_n); \
+	if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << n)) \
+		common_hash ^= lo_hash_dword >> n; \
+	else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+		bucket_hash ^= lo_hash_dword >> n; \
+	else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << n)) \
+		sig_hash ^= lo_hash_dword << (16 - n); \
+	if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << (n + 16))) \
+		common_hash ^= hi_hash_dword >> n; \
+	else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+		bucket_hash ^= hi_hash_dword >> n; \
+	else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << (n + 16))) \
+		sig_hash ^= hi_hash_dword << (16 - n); \
+} while (0)
+
+/**
+ *  txgbe_atr_compute_sig_hash - Compute the signature hash
+ *  @input: unique input dword
+ *  @common: compressed common input dword
+ *
+ *  This function contains several optimizations, such as unwinding all of
+ *  the loops, letting the compiler work out all of the conditional ifs
+ *  since the keys are static defines, and computing two keys at once since
+ *  the hashed dword stream will be the same for both keys.
+ **/
+u32 txgbe_atr_compute_sig_hash(union txgbe_atr_hash_dword input,
+			       union txgbe_atr_hash_dword common)
+{
+	u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+	u32 sig_hash = 0, bucket_hash = 0, common_hash = 0;
+
+	/* record the flow_vm_vlan bits as they are a key part to the hash */
+	flow_vm_vlan = TXGBE_NTOHL(input.dword);
+
+	/* generate common hash dword */
+	hi_hash_dword = TXGBE_NTOHL(common.dword);
+
+	/* low dword is word swapped version of common */
+	lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
+
+	/* apply flow ID/VM pool/VLAN ID bits to hash words */
+	hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
+
+	/* Process bits 0 and 16 */
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(0);
+
+	/* apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
+	 * delay this because bit 0 of the stream should not be processed
+	 * so we do not add the VLAN until after bit 0 was processed
+	 */
+	lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
+
+	/* Process the remaining 30 bits of the key */
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(1);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(2);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(3);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(4);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(5);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(6);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(7);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(8);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(9);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(10);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(11);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(12);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(13);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(14);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(15);
+
+	/* combine common_hash result with signature and bucket hashes */
+	bucket_hash ^= common_hash;
+	bucket_hash &= TXGBE_ATR_HASH_MASK;
+
+	sig_hash ^= common_hash << 16;
+	sig_hash &= TXGBE_ATR_HASH_MASK << 16;
+
+	/* return completed signature hash */
+	return sig_hash ^ bucket_hash;
+}
+
+/**
+ *  txgbe_fdir_add_signature_filter - Adds a signature hash filter
+ *  @hw: pointer to hardware structure
+ *  @input: unique input dword
+ *  @common: compressed common input dword
+ *  @queue: queue index to direct traffic to
+ **/
+s32 txgbe_fdir_add_signature_filter(struct txgbe_hw *hw,
+				    union txgbe_atr_hash_dword input,
+				    union txgbe_atr_hash_dword common,
+				    u8 queue)
+{
+	u32 fdirhashcmd = 0;
+	u8 flow_type;
+	u32 fdircmd;
+	s32 err;
+
+	/* Get the flow_type in order to program FDIRCMD properly:
+	 * the lowest 2 bits are FDIRCMD.L4TYPE, the third lowest bit is
+	 * FDIRCMD.IPV6, and the fifth is FDIRCMD.TUNNEL_FILTER.
+	 */
+	flow_type = input.formatted.flow_type;
+	switch (flow_type) {
+	case TXGBE_ATR_FLOW_TYPE_TCPV4:
+	case TXGBE_ATR_FLOW_TYPE_UDPV4:
+	case TXGBE_ATR_FLOW_TYPE_SCTPV4:
+	case TXGBE_ATR_FLOW_TYPE_TCPV6:
+	case TXGBE_ATR_FLOW_TYPE_UDPV6:
+	case TXGBE_ATR_FLOW_TYPE_SCTPV6:
+		break;
+	default:
+		DEBUGOUT("Error on flow type input\n");
+		return TXGBE_ERR_CONFIG;
+	}
+
+	/* configure FDIRCMD register */
+	fdircmd = TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW |
+		  TXGBE_RDB_FDIR_CMD_FILTER_UPDATE |
+		  TXGBE_RDB_FDIR_CMD_LAST | TXGBE_RDB_FDIR_CMD_QUEUE_EN;
+	fdircmd |= (u32)flow_type << TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT;
+	fdircmd |= (u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT;
+
+	fdirhashcmd |= txgbe_atr_compute_sig_hash(input, common);
+	fdirhashcmd |= 0x1 << TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT;
+	wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhashcmd);
+
+	wr32(hw, TXGBE_RDB_FDIR_CMD, fdircmd);
+
+	err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+	if (err) {
+		DEBUGOUT("Flow Director command did not complete!\n");
+		return err;
+	}
+
+	DEBUGOUT2("Tx Queue=%x hash=%x\n", queue, (u32)fdirhashcmd);
+
+	return 0;
+}
+
+#define TXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \
+do { \
+	u32 n = (_n); \
+	if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+		bucket_hash ^= lo_hash_dword >> n; \
+	if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+		bucket_hash ^= hi_hash_dword >> n; \
+} while (0)
+
+/**
+ *  txgbe_atr_compute_perfect_hash - Compute the perfect filter hash
+ *  @atr_input: input bitstream to compute the hash on
+ *  @input_mask: mask for the input bitstream
+ *
+ *  This function serves two main purposes.  First it applies the input_mask
+ *  to the atr_input resulting in a cleaned up atr_input data stream.
+ *  Secondly it computes the hash and stores it in the bkt_hash field at
+ *  the end of the input byte stream.  This way it will be available for
+ *  future use without needing to recompute the hash.
+ **/
+void txgbe_atr_compute_perfect_hash(union txgbe_atr_input *input,
+				    union txgbe_atr_input *input_mask)
+{
+	u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+	u32 bucket_hash = 0;
+	u32 hi_dword = 0;
+	u32 i = 0;
+
+	/* Apply masks to input data */
+	for (i = 0; i < 11; i++)
+		input->dword_stream[i] &= input_mask->dword_stream[i];
+
+	/* record the flow_vm_vlan bits as they are a key part to the hash */
+	flow_vm_vlan = TXGBE_NTOHL(input->dword_stream[0]);
+
+	/* generate common hash dword */
+	for (i = 1; i <= 10; i++)
+		hi_dword ^= input->dword_stream[i];
+	hi_hash_dword = TXGBE_NTOHL(hi_dword);
+
+	/* low dword is word swapped version of common */
+	lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
+
+	/* apply flow ID/VM pool/VLAN ID bits to hash words */
+	hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
+
+	/* Process bits 0 and 16 */
+	TXGBE_COMPUTE_BKT_HASH_ITERATION(0);
+
+	/* apply flow ID/VM pool/VLAN ID bits to the lo hash dword; this is
+	 * delayed because bit 0 of the stream must be processed before the
+	 * VLAN bits are mixed in
+	 */
+	lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
+
+	/* Process the remaining 30 bits of the key */
+	for (i = 1; i <= 15; i++)
+		TXGBE_COMPUTE_BKT_HASH_ITERATION(i);
+
+	/* Limit hash to 13 bits since max bucket count is 8K.
+	 * Store result at the end of the input stream.
+	 */
+	input->formatted.bkt_hash = bucket_hash & 0x1FFF;
+}
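
As a side note, the bit-serial folding that TXGBE_COMPUTE_BKT_HASH_ITERATION
implements is easy to sanity-check in isolation. Below is a minimal
user-space sketch of the same scheme (illustration only, not part of the
patch; the helper name fold_bucket_hash is made up, and the VLAN-delay
subtlety around bit 0 is ignored):

	#include <stdint.h>
	#include <stdio.h>

	#define BUCKET_KEY 0x3DAD14E2u	/* TXGBE_ATR_BUCKET_HASH_KEY */

	/* mirror the macro: key bit n selects lo>>n, key bit n+16 selects hi>>n */
	static uint32_t fold_bucket_hash(uint32_t lo, uint32_t hi)
	{
		uint32_t hash = 0;
		int n;

		for (n = 0; n <= 15; n++) {
			if (BUCKET_KEY & (1u << n))
				hash ^= lo >> n;
			if (BUCKET_KEY & (1u << (n + 16)))
				hash ^= hi >> n;
		}
		return hash & 0x1FFF;	/* 13-bit bucket index, as above */
	}

	int main(void)
	{
		printf("bucket = 0x%04x\n",
		       (unsigned int)fold_bucket_hash(0x12345678, 0x9abcdef0));
		return 0;
	}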
+
+/**
+ *  txgbe_get_fdirtcpm - generate a TCP port from atr_input_masks
+ *  @input_mask: mask to be bit swapped
+ *
+ *  The source and destination port masks for flow director are bit swapped
+ *  in that bit 15 affects bit 0, bit 14 affects bit 1, bit 13 affects bit 2,
+ *  and so on.  In order to generate a correctly swapped value we need to bit
+ *  swap the mask, and that is what this function accomplishes.
+ **/
+static u32 txgbe_get_fdirtcpm(union txgbe_atr_input *input_mask)
+{
+	u32 mask = TXGBE_NTOHS(input_mask->formatted.dst_port);
+
+	mask <<= TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT;
+	mask |= TXGBE_NTOHS(input_mask->formatted.src_port);
+
+	return mask;
+}
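
Worked example for the packing above (assuming
TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT is 16 and the mask-register semantics
implied by the inverted write to TXGBE_RDB_FDIR_TCP_MSK further down): to
match the destination port exactly while wildcarding the source port, user
space supplies a dst_port mask of 0xFFFF and a src_port mask of 0x0000, so

	fdirtcpm  = (0xFFFF << 16) | 0x0000 = 0xFFFF0000
	~fdirtcpm = 0x0000FFFF

and the register write presumably leaves only the source-port bits masked off.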
+
+/* These macros address the fact that we have registers that are either
+ * fully or partly big-endian.  As a result, on big-endian systems the
+ * value is byte swapped to little-endian before being byte swapped again
+ * and written to the hardware in its original big-endian format.
+ */
+#define TXGBE_STORE_AS_BE32(_value) \
+	(((u32)(_value) >> 24) | (((u32)(_value) & 0x00FF0000) >> 8) | \
+	 (((u32)(_value) & 0x0000FF00) << 8) | ((u32)(_value) << 24))
+
+#define TXGBE_WRITE_REG_BE32(a, reg, value) \
+	wr32((a), (reg), TXGBE_STORE_AS_BE32(TXGBE_NTOHL(value)))
+
+#define TXGBE_STORE_AS_BE16(_value) \
+	TXGBE_NTOHS(((u16)(_value) >> 8) | ((u16)(_value) << 8))
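
As a worked example of the 32-bit swap macro above (pure arithmetic,
verifiable by hand):

	TXGBE_STORE_AS_BE32(0x11223344)
	  = (0x11223344 >> 24)		/* 0x00000011 */
	  | (0x00220000 >> 8)		/* 0x00002200 */
	  | (0x00003300 << 8)		/* 0x00330000 */
	  | (0x11223344 << 24)		/* 0x44000000 */
	  = 0x44332211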
+
+s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw,
+			      union txgbe_atr_input *input_mask,
+			      bool __maybe_unused cloud_mode)
+{
+	/* mask IPv6 since it is currently not supported */
+	u32 fdirm = 0;
+	u32 fdirtcpm;
+	u32 flex = 0;
+
+	/* Program the relevant mask registers.  If src/dst_port or src/dst_addr
+	 * are zero, then assume a full mask for that field.  Also assume that
+	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
+	 * cannot be masked out in this implementation.
+	 *
+	 * This also assumes IPv4 only.  IPv6 masking isn't supported at this
+	 * point in time.
+	 */
+
+	/* verify bucket hash is cleared on hash generation */
+	if (input_mask->formatted.bkt_hash)
+		DEBUGOUT(" bucket hash should always be 0 in mask\n");
+
+	/* Program FDIRM and verify partial masks */
+	switch (input_mask->formatted.vm_pool & 0x7F) {
+	case 0x0:
+		fdirm |= TXGBE_RDB_FDIR_OTHER_MSK_POOL;
+		fallthrough;
+	case 0x7F:
+		break;
+	default:
+		DEBUGOUT(" Error on vm pool mask\n");
+		return TXGBE_ERR_CONFIG;
+	}
+
+	switch (input_mask->formatted.flow_type & TXGBE_ATR_L4TYPE_MASK) {
+	case 0x0:
+		fdirm |= TXGBE_RDB_FDIR_OTHER_MSK_L4P;
+		if (input_mask->formatted.dst_port ||
+		    input_mask->formatted.src_port) {
+			DEBUGOUT(" Error on src/dst port mask\n");
+			return TXGBE_ERR_CONFIG;
+		}
+		fallthrough;
+	case TXGBE_ATR_L4TYPE_MASK:
+		break;
+	default:
+		DEBUGOUT(" Error on flow type mask\n");
+		return TXGBE_ERR_CONFIG;
+	}
+
+	/* Now mask VM pool and destination IPv6 - bits 5 and 2 */
+	wr32(hw, TXGBE_RDB_FDIR_OTHER_MSK, fdirm);
+
+	flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(0),
+		     ~(TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK |
+		       TXGBE_RDB_FDIR_FLEX_CFG_MSK |
+		       TXGBE_RDB_FDIR_FLEX_CFG_OFST));
+	flex |= (TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC |
+		 0x6 << TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT);
+
+	switch (input_mask->formatted.flex_bytes & 0xFFFF) {
+	case 0x0000:
+		/* Mask Flex Bytes */
+		flex |= TXGBE_RDB_FDIR_FLEX_CFG_MSK;
+		fallthrough;
+	case 0xFFFF:
+		break;
+	default:
+		DEBUGOUT("Error on flexible byte mask\n");
+		return TXGBE_ERR_CONFIG;
+	}
+	wr32(hw, TXGBE_RDB_FDIR_FLEX_CFG(0), flex);
+
+	/* store the TCP/UDP port masks, bit reversed from port layout */
+	fdirtcpm = txgbe_get_fdirtcpm(input_mask);
+
+	/* write both the same so that UDP and TCP use the same mask */
+	wr32(hw, TXGBE_RDB_FDIR_TCP_MSK, ~fdirtcpm);
+	wr32(hw, TXGBE_RDB_FDIR_UDP_MSK, ~fdirtcpm);
+	wr32(hw, TXGBE_RDB_FDIR_SCTP_MSK, ~fdirtcpm);
+
+	/* store source and destination IP masks (little-endian) */
+	wr32(hw, TXGBE_RDB_FDIR_SA4_MSK,
+	     TXGBE_NTOHL(~input_mask->formatted.src_ip[0]));
+	wr32(hw, TXGBE_RDB_FDIR_DA4_MSK,
+	     TXGBE_NTOHL(~input_mask->formatted.dst_ip[0]));
+	return 0;
+}
+
+s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw,
+				    union txgbe_atr_input *input,
+				    u16 soft_id, u8 queue,
+				    bool cloud_mode)
+{
+	u32 fdirport, fdirvlan, fdirhash, fdircmd;
+	s32 err;
+
+	if (!cloud_mode) {
+		/* currently IPv6 is not supported, must be programmed with 0 */
+		wr32(hw, TXGBE_RDB_FDIR_IP6(2),
+		     TXGBE_NTOHL(input->formatted.src_ip[0]));
+		wr32(hw, TXGBE_RDB_FDIR_IP6(1),
+		     TXGBE_NTOHL(input->formatted.src_ip[1]));
+		wr32(hw, TXGBE_RDB_FDIR_IP6(0),
+		     TXGBE_NTOHL(input->formatted.src_ip[2]));
+
+		/* record the source address (little-endian) */
+		wr32(hw, TXGBE_RDB_FDIR_SA,
+		     TXGBE_NTOHL(input->formatted.src_ip[0]));
+
+		/* record the first 32 bits of the destination address
+		 * (little-endian)
+		 */
+		wr32(hw, TXGBE_RDB_FDIR_DA,
+		     TXGBE_NTOHL(input->formatted.dst_ip[0]));
+
+		/* record source and destination port (little-endian) */
+		fdirport = TXGBE_NTOHS(input->formatted.dst_port);
+		fdirport <<= TXGBE_RDB_FDIR_PORT_DESTINATION_SHIFT;
+		fdirport |= TXGBE_NTOHS(input->formatted.src_port);
+		wr32(hw, TXGBE_RDB_FDIR_PORT, fdirport);
+	}
+
+	/* record packet type and flex_bytes (little-endian) */
+	fdirvlan = TXGBE_NTOHS(input->formatted.flex_bytes);
+	fdirvlan <<= TXGBE_RDB_FDIR_FLEX_FLEX_SHIFT;
+
+	fdirvlan |= TXGBE_NTOHS(input->formatted.vlan_id);
+	wr32(hw, TXGBE_RDB_FDIR_FLEX, fdirvlan);
+
+	/* configure FDIRHASH register */
+	fdirhash = input->formatted.bkt_hash |
+		   0x1 << TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT;
+	fdirhash |= soft_id << TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT;
+	wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash);
+
+	/* flush all previous writes to make certain registers are
+	 * programmed prior to issuing the command
+	 */
+	TXGBE_WRITE_FLUSH(hw);
+
+	/* configure FDIRCMD register */
+	fdircmd = TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW |
+		  TXGBE_RDB_FDIR_CMD_FILTER_UPDATE |
+		  TXGBE_RDB_FDIR_CMD_LAST | TXGBE_RDB_FDIR_CMD_QUEUE_EN;
+	if (queue == TXGBE_RDB_FDIR_DROP_QUEUE)
+		fdircmd |= TXGBE_RDB_FDIR_CMD_DROP;
+	fdircmd |= input->formatted.flow_type <<
+		   TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT;
+	fdircmd |= (u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT;
+	fdircmd |= (u32)input->formatted.vm_pool <<
+			TXGBE_RDB_FDIR_CMD_VT_POOL_SHIFT;
+
+	wr32(hw, TXGBE_RDB_FDIR_CMD, fdircmd);
+	err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+	if (err) {
+		DEBUGOUT("Flow Director command did not complete!\n");
+		return err;
+	}
+
+	return 0;
+}
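
For context, once the ethtool n-tuple plumbing is in place, a perfect filter
like the ones written above can be installed from user space with the
standard interface (device name and values are examples only):

	# steer TCP/IPv4 traffic with destination port 80 to Rx queue 3
	ethtool -N eth0 flow-type tcp4 dst-port 80 action 3
	# action -1 requests a drop (the TXGBE_RDB_FDIR_DROP_QUEUE path above)
	ethtool -N eth0 flow-type tcp4 dst-ip 192.168.1.1 action -1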
+
 /**
  *  txgbe_start_hw - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index 1b11735cc278..e6b78292c60b 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -130,6 +130,30 @@ s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
 					  bool autoneg_wait_to_complete);
 int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit);
 
+s32 txgbe_reinit_fdir_tables(struct txgbe_hw *hw);
+s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl);
+s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl,
+			    bool cloud_mode);
+s32 txgbe_fdir_add_signature_filter(struct txgbe_hw *hw,
+				    union txgbe_atr_hash_dword input,
+				    union txgbe_atr_hash_dword common,
+				    u8 queue);
+s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw,
+			      union txgbe_atr_input *input_mask, bool cloud_mode);
+s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw,
+				    union txgbe_atr_input *input,
+				    u16 soft_id, u8 queue, bool cloud_mode);
+s32 txgbe_fdir_add_perfect_filter(struct txgbe_hw *hw,
+				  union txgbe_atr_input *input,
+				  union txgbe_atr_input *mask,
+				  u16 soft_id,
+				  u8 queue,
+				  bool cloud_mode);
+void txgbe_atr_compute_perfect_hash(union txgbe_atr_input *input,
+				    union txgbe_atr_input *mask);
+u32 txgbe_atr_compute_sig_hash(union txgbe_atr_hash_dword input,
+			       union txgbe_atr_hash_dword common);
+
 s32 txgbe_get_link_capabilities(struct txgbe_hw *hw,
 				u32 *speed, bool *autoneg);
 enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
index c7eafce35e87..b466421b41cf 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_lib.c
@@ -52,6 +52,23 @@ static bool txgbe_set_rss_queues(struct txgbe_adapter *adapter)
 	f->indices = rss_i;
 	f->mask = TXGBE_RSS_64Q_MASK;
 
+	/* disable ATR by default; it will be configured below */
+	adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+
+	/* Use Flow Director in addition to RSS to ensure the best
+	 * distribution of flows across cores, even when an FDIR flow
+	 * isn't matched.
+	 */
+	if (rss_i > 1 && adapter->atr_sample_rate) {
+		f = &adapter->ring_feature[RING_F_FDIR];
+
+		f->indices = f->limit;
+		rss_i = f->indices;
+
+		if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+			adapter->flags |= TXGBE_FLAG_FDIR_HASH_CAPABLE;
+	}
+
 	adapter->num_rx_queues = rss_i;
 	adapter->num_tx_queues = rss_i;
 
@@ -179,12 +196,22 @@ static int txgbe_alloc_q_vector(struct txgbe_adapter *adapter,
 	int node = -1;
 	int cpu = -1;
 	int ring_count, size;
+	u16 rss_i = 0;
 
 	/* note this will allocate space for the ring structure as well! */
 	ring_count = txr_count + rxr_count;
 	size = sizeof(struct txgbe_q_vector) +
 	       (sizeof(struct txgbe_ring) * ring_count);
 
+	/* customize cpu for Flow Director mapping */
+	rss_i = adapter->ring_feature[RING_F_RSS].indices;
+	if (rss_i > 1 && adapter->atr_sample_rate) {
+		if (cpu_online(v_idx)) {
+			cpu = v_idx;
+			node = cpu_to_node(cpu);
+		}
+	}
+
 	/* allocate q_vector and rings */
 	q_vector = kzalloc_node(size, GFP_KERNEL, node);
 	if (!q_vector)
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 6f1836a61711..ea17747c0df9 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -16,6 +16,7 @@
 #include <linux/aer.h>
 #include <net/checksum.h>
 #include <net/ip6_checksum.h>
+#include <linux/ethtool.h>
 #include <net/vxlan.h>
 #include <linux/etherdevice.h>
 
@@ -1486,6 +1487,10 @@ void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush)
 
 	mask |= TXGBE_PX_MISC_IEN_OVER_HEAT;
 
+	if ((adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) &&
+	    !(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT))
+		mask |= TXGBE_PX_MISC_IEN_FLOW_DIR;
+
 	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask);
 
 	/* unmask interrupt */
@@ -1529,6 +1534,28 @@ static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
 		txgbe_service_event_schedule(adapter);
 	}
 
+	/* Handle Flow Director Full threshold interrupt */
+	if (eicr & TXGBE_PX_MISC_IC_FLOW_DIR) {
+		int reinit_count = 0;
+		int i;
+
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			struct txgbe_ring *ring = adapter->tx_ring[i];
+
+			if (test_and_clear_bit(__TXGBE_TX_FDIR_INIT_DONE,
+					       &ring->state))
+				reinit_count++;
+		}
+		if (reinit_count) {
+			/* no more flow director interrupts until after init */
+			wr32m(hw, TXGBE_PX_MISC_IEN,
+			      TXGBE_PX_MISC_IEN_FLOW_DIR, 0);
+			adapter->flags2 |=
+				TXGBE_FLAG2_FDIR_REQUIRES_REINIT;
+			txgbe_service_event_schedule(adapter);
+		}
+	}
+
 	txgbe_check_sfp_event(adapter, eicr);
 	txgbe_check_overtemp_event(adapter, eicr);
 
@@ -1645,6 +1672,13 @@ static int txgbe_request_msix_irqs(struct txgbe_adapter *adapter)
 				  q_vector->name, err);
 			goto free_queue_irqs;
 		}
+
+		/* If Flow Director is enabled, set interrupt affinity */
+		if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) {
+			/* assign the mask for this irq */
+			irq_set_affinity_hint(entry->vector,
+					      &q_vector->affinity_mask);
+		}
 	}
 
 	err = request_irq(adapter->msix_entries[vector].vector,
@@ -1868,6 +1902,15 @@ void txgbe_configure_tx_ring(struct txgbe_adapter *adapter,
 
 	txdctl |= 0x20 << TXGBE_PX_TR_CFG_WTHRESH_SHIFT;
 
+	/* reinitialize flowdirector state */
+	if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) {
+		ring->atr_sample_rate = adapter->atr_sample_rate;
+		ring->atr_count = 0;
+		set_bit(__TXGBE_TX_FDIR_INIT_DONE, &ring->state);
+	} else {
+		ring->atr_sample_rate = 0;
+	}
+
 	/* initialize XPS */
 	if (!test_and_set_bit(__TXGBE_TX_XPS_INIT_DONE, &ring->state)) {
 		struct txgbe_q_vector *q_vector = ring->q_vector;
@@ -2853,11 +2896,59 @@ static void txgbe_pbthresh_setup(struct txgbe_adapter *adapter)
 static void txgbe_configure_pb(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
+	int hdrm;
+
+	if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE ||
+	    adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)
+		hdrm = 32 << adapter->fdir_pballoc;
+	else
+		hdrm = 0;
 
-	TCALL(hw, mac.ops.setup_rxpba, 0, 0, PBA_STRATEGY_EQUAL);
+	TCALL(hw, mac.ops.setup_rxpba, 0, hdrm, PBA_STRATEGY_EQUAL);
 	txgbe_pbthresh_setup(adapter);
 }
 
+static void txgbe_fdir_filter_restore(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct hlist_node *node;
+	struct txgbe_fdir_filter *filter;
+	u8 queue = 0;
+
+	spin_lock(&adapter->fdir_perfect_lock);
+
+	if (!hlist_empty(&adapter->fdir_filter_list))
+		txgbe_fdir_set_input_mask(hw, &adapter->fdir_mask,
+					  adapter->cloud_mode);
+
+	hlist_for_each_entry_safe(filter, node,
+				  &adapter->fdir_filter_list, fdir_node) {
+		if (filter->action == TXGBE_RDB_FDIR_DROP_QUEUE) {
+			queue = TXGBE_RDB_FDIR_DROP_QUEUE;
+		} else {
+			u32 ring = ethtool_get_flow_spec_ring(filter->action);
+
+			if (ring >= adapter->num_rx_queues) {
+				txgbe_err(drv,
+					  "FDIR restore failed, ring:%u\n",
+					  ring);
+				continue;
+			}
+
+			/* Map the ring onto the absolute queue index */
+			queue = adapter->rx_ring[ring]->reg_idx;
+		}
+
+		txgbe_fdir_write_perfect_filter(hw,
+						&filter->filter,
+						filter->sw_idx,
+						queue,
+						adapter->cloud_mode);
+	}
+
+	spin_unlock(&adapter->fdir_perfect_lock);
+}
+
 void txgbe_configure_isb(struct txgbe_adapter *adapter)
 {
 	/* set ISB Address */
@@ -2903,6 +2994,16 @@ static void txgbe_configure(struct txgbe_adapter *adapter)
 
 	TCALL(hw, mac.ops.disable_sec_rx_path);
 
+	if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) {
+		txgbe_init_fdir_signature(&adapter->hw,
+					  adapter->fdir_pballoc);
+	} else if (adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE) {
+		txgbe_init_fdir_perfect(&adapter->hw,
+					adapter->fdir_pballoc,
+					adapter->cloud_mode);
+		txgbe_fdir_filter_restore(adapter);
+	}
+
 	TCALL(hw, mac.ops.enable_sec_rx_path);
 
 	txgbe_configure_tx(adapter);
@@ -3217,6 +3318,23 @@ static void txgbe_clean_all_tx_rings(struct txgbe_adapter *adapter)
 		txgbe_clean_tx_ring(adapter->tx_ring[i]);
 }
 
+static void txgbe_fdir_filter_exit(struct txgbe_adapter *adapter)
+{
+	struct hlist_node *node;
+	struct txgbe_fdir_filter *filter;
+
+	spin_lock(&adapter->fdir_perfect_lock);
+
+	hlist_for_each_entry_safe(filter, node,
+				  &adapter->fdir_filter_list, fdir_node) {
+		hlist_del(&filter->fdir_node);
+		kfree(filter);
+	}
+	adapter->fdir_filter_count = 0;
+
+	spin_unlock(&adapter->fdir_perfect_lock);
+}
+
 void txgbe_disable_device(struct txgbe_adapter *adapter)
 {
 	struct net_device *netdev = adapter->netdev;
@@ -3246,7 +3364,8 @@ void txgbe_disable_device(struct txgbe_adapter *adapter)
 
 	txgbe_napi_disable_all(adapter);
 
-	adapter->flags2 &= ~(TXGBE_FLAG2_PF_RESET_REQUESTED |
+	adapter->flags2 &= ~(TXGBE_FLAG2_FDIR_REQUIRES_REINIT |
+			     TXGBE_FLAG2_PF_RESET_REQUESTED |
 			     TXGBE_FLAG2_GLOBAL_RESET_REQUESTED);
 	adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
 
@@ -3324,7 +3443,7 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 	struct txgbe_hw *hw = &adapter->hw;
 	struct pci_dev *pdev = adapter->pdev;
 	int err;
-	unsigned int rss;
+	unsigned int fdir, rss;
 
 	/* PCI config space info */
 	hw->vendor_id = pdev->vendor;
@@ -3377,8 +3496,14 @@ static int txgbe_sw_init(struct txgbe_adapter *adapter)
 	adapter->flags |= TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE;
 	adapter->flags2 |= TXGBE_FLAG2_RSC_CAPABLE;
 
+	fdir = min_t(int, TXGBE_MAX_FDIR_INDICES, num_online_cpus());
+	adapter->ring_feature[RING_F_FDIR].limit = fdir;
+	adapter->fdir_pballoc = TXGBE_FDIR_PBALLOC_64K;
 	adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE;
 
+	/* n-tuple support exists, always init our spinlock */
+	spin_lock_init(&adapter->fdir_perfect_lock);
+
 	/* default flow control settings */
 	hw->fc.requested_mode = txgbe_fc_full;
 	hw->fc.current_mode = txgbe_fc_full;
@@ -3808,6 +3933,8 @@ int txgbe_close(struct net_device *netdev)
 	txgbe_free_all_rx_resources(adapter);
 	txgbe_free_all_tx_resources(adapter);
 
+	txgbe_fdir_filter_exit(adapter);
+
 	txgbe_release_hw_control(adapter);
 
 	return 0;
@@ -4004,6 +4131,9 @@ void txgbe_update_stats(struct txgbe_adapter *adapter)
 				     rd32(hw, TXGBE_RDM_DRP_PKT);
 	hwstats->lxonrxc += rd32(hw, TXGBE_MAC_LXONRXC);
 
+	hwstats->fdirmatch += rd32(hw, TXGBE_RDB_FDIR_MATCH);
+	hwstats->fdirmiss += rd32(hw, TXGBE_RDB_FDIR_MISS);
+
 	bprc = rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW);
 	hwstats->bprc += bprc;
 	hwstats->mprc = 0;
@@ -4035,6 +4165,42 @@ void txgbe_update_stats(struct txgbe_adapter *adapter)
 	net_stats->rx_missed_errors = total_mpc;
 }
 
+/**
+ * txgbe_fdir_reinit_subtask - worker thread to reinit FDIR filter table
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_fdir_reinit_subtask(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	int i;
+
+	if (!(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT))
+		return;
+
+	adapter->flags2 &= ~TXGBE_FLAG2_FDIR_REQUIRES_REINIT;
+
+	/* if interface is down do nothing */
+	if (test_bit(__TXGBE_DOWN, &adapter->state))
+		return;
+
+	/* do nothing if we are not using signature filters */
+	if (!(adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE))
+		return;
+
+	adapter->fdir_overflow++;
+
+	if (txgbe_reinit_fdir_tables(hw) == 0) {
+		for (i = 0; i < adapter->num_tx_queues; i++)
+			set_bit(__TXGBE_TX_FDIR_INIT_DONE,
+				&adapter->tx_ring[i]->state);
+		/* re-enable flow director interrupts */
+		wr32m(hw, TXGBE_PX_MISC_IEN,
+		      TXGBE_PX_MISC_IEN_FLOW_DIR, TXGBE_PX_MISC_IEN_FLOW_DIR);
+	} else {
+		txgbe_err(probe, "failed to finish FDIR re-initialization, ignored adding FDIR ATR filters\n");
+	}
+}
+
 /**
  * txgbe_check_hang_subtask - check for hung queues and dropped interrupts
  * @adapter - pointer to the device adapter structure
@@ -4473,6 +4639,7 @@ static void txgbe_service_task(struct work_struct *work)
 	txgbe_sfp_link_config_subtask(adapter);
 	txgbe_check_overtemp_subtask(adapter);
 	txgbe_watchdog_subtask(adapter);
+	txgbe_fdir_reinit_subtask(adapter);
 	txgbe_check_hang_subtask(adapter);
 
 	txgbe_service_event_complete(adapter);
@@ -5139,6 +5306,89 @@ static int txgbe_tx_map(struct txgbe_ring *tx_ring,
 	return -1;
 }
 
+static void txgbe_atr(struct txgbe_ring *ring,
+		      struct txgbe_tx_buffer *first,
+		      struct txgbe_dptype dptype)
+{
+	struct txgbe_q_vector *q_vector = ring->q_vector;
+	union txgbe_atr_hash_dword input = { .dword = 0 };
+	union txgbe_atr_hash_dword common = { .dword = 0 };
+	union network_header hdr;
+	struct tcphdr *th;
+
+	/* if ring doesn't have an interrupt vector, cannot perform ATR */
+	if (!q_vector)
+		return;
+
+	/* do nothing if sampling is disabled */
+	if (!ring->atr_sample_rate)
+		return;
+
+	ring->atr_count++;
+
+	if (dptype.etype) {
+		if (TXGBE_PTYPE_TYPL4(dptype.ptype) != TXGBE_PTYPE_TYP_TCP)
+			return;
+		hdr.raw = (void *)skb_inner_network_header(first->skb);
+		th = inner_tcp_hdr(first->skb);
+	} else {
+		if (TXGBE_PTYPE_PKT(dptype.ptype) != TXGBE_PTYPE_PKT_IP ||
+		    TXGBE_PTYPE_TYPL4(dptype.ptype) != TXGBE_PTYPE_TYP_TCP)
+			return;
+		hdr.raw = (void *)skb_network_header(first->skb);
+		th = tcp_hdr(first->skb);
+	}
+
+	/* skip this packet since it is invalid or the socket is closing */
+	if (!th || th->fin)
+		return;
+
+	/* sample on all syn packets or once every atr sample count */
+	if (!th->syn && ring->atr_count < ring->atr_sample_rate)
+		return;
+
+	/* reset sample count */
+	ring->atr_count = 0;
+
+	/* src and dst are inverted, think how the receiver sees them
+	 *
+	 * The input is broken into two sections, a non-compressed section
+	 * containing vm_pool, vlan_id, and flow_type.  The rest of the data
+	 * is XORed together and stored in the compressed dword.
+	 */
+	input.formatted.vlan_id = htons((u16)dptype.ptype);
+
+	/* since src port and flex bytes occupy the same word XOR them together
+	 * and write the value to source port portion of compressed dword
+	 */
+	if (first->tx_flags & TXGBE_TX_FLAGS_SW_VLAN)
+		common.port.src ^= th->dest ^ first->skb->protocol;
+	else if (first->tx_flags & TXGBE_TX_FLAGS_HW_VLAN)
+		common.port.src ^= th->dest ^ first->skb->vlan_proto;
+	else
+		common.port.src ^= th->dest ^ first->protocol;
+	common.port.dst ^= th->source;
+
+	if (TXGBE_PTYPE_PKT_IPV6 & TXGBE_PTYPE_PKT(dptype.ptype)) {
+		input.formatted.flow_type = TXGBE_ATR_FLOW_TYPE_TCPV6;
+		common.ip ^= hdr.ipv6->saddr.s6_addr32[0] ^
+					 hdr.ipv6->saddr.s6_addr32[1] ^
+					 hdr.ipv6->saddr.s6_addr32[2] ^
+					 hdr.ipv6->saddr.s6_addr32[3] ^
+					 hdr.ipv6->daddr.s6_addr32[0] ^
+					 hdr.ipv6->daddr.s6_addr32[1] ^
+					 hdr.ipv6->daddr.s6_addr32[2] ^
+					 hdr.ipv6->daddr.s6_addr32[3];
+	} else {
+		input.formatted.flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4;
+		common.ip ^= hdr.ipv4->saddr ^ hdr.ipv4->daddr;
+	}
+
+	/* This assumes the Rx queue and Tx queue are bound to the same CPU */
+	txgbe_fdir_add_signature_filter(&q_vector->adapter->hw,
+					input, common, ring->queue_index);
+}
+
 /**
  * skb_pad - zero pad the tail of an skb
  * @skb: buffer to pad
@@ -5271,6 +5521,10 @@ netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
 	else if (!tso)
 		txgbe_tx_csum(tx_ring, first, dptype);
 
+	/* add the ATR filter if ATR is on */
+	if (test_bit(__TXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
+		txgbe_atr(tx_ring, first, dptype);
+
 	txgbe_tx_map(tx_ring, first, hdr_len);
 
 	return NETDEV_TX_OK;
@@ -5426,6 +5680,37 @@ static int txgbe_set_features(struct net_device *netdev,
 		}
 	}
 
+	/* Check if Flow Director n-tuple support was enabled or disabled.  If
+	 * the state changed, we need to reset.
+	 */
+	switch (features & NETIF_F_NTUPLE) {
+	case NETIF_F_NTUPLE:
+		/* turn off ATR, enable perfect filters and reset */
+		if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+			need_reset = true;
+
+		adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+		adapter->flags |= TXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+		break;
+	default:
+		/* turn off perfect filters, enable ATR and reset */
+		if (adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)
+			need_reset = true;
+
+		adapter->flags &= ~TXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+
+		/* We cannot enable ATR if RSS is disabled */
+		if (adapter->ring_feature[RING_F_RSS].limit <= 1)
+			break;
+
+		/* A sample rate of 0 indicates ATR disabled */
+		if (!adapter->atr_sample_rate)
+			break;
+
+		adapter->flags |= TXGBE_FLAG_FDIR_HASH_CAPABLE;
+		break;
+	}
+
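As a usage note, the state change handled above is what the standard feature
toggle drives (device name is an example):

	# enable n-tuple (perfect) filters; ATR hash sampling is turned off
	ethtool -K eth0 ntuple on
	# disable n-tuple filters; ATR signature mode is restored
	ethtool -K eth0 ntuple off
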
 	if (features & NETIF_F_HW_VLAN_CTAG_RX)
 		txgbe_vlan_strip_enable(adapter);
 	else
@@ -5849,6 +6134,8 @@ static int txgbe_probe(struct pci_dev *pdev,
 	i_s_var += sprintf(info_string, "Enabled Features: ");
 	i_s_var += sprintf(i_s_var, "RxQ: %d TxQ: %d ",
 			   adapter->num_rx_queues, adapter->num_tx_queues);
+	if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE)
+		i_s_var += sprintf(i_s_var, "FdirHash ");
 	if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
 		i_s_var += sprintf(i_s_var, "RSC ");
 	if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE)
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index af9339777ea5..9416f80d07f0 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -1959,6 +1959,85 @@ struct txgbe_tx_context_desc {
 	__le32 mss_l4len_idx;
 };
 
+/************************* Flow Director HASH ********************************/
+/* Software ATR hash keys */
+#define TXGBE_ATR_BUCKET_HASH_KEY       0x3DAD14E2
+#define TXGBE_ATR_SIGNATURE_HASH_KEY    0x174D3614
+
+/* Software ATR input stream values and masks */
+#define TXGBE_ATR_HASH_MASK             0x7fff
+#define TXGBE_ATR_L4TYPE_MASK           0x3
+#define TXGBE_ATR_L4TYPE_UDP            0x1
+#define TXGBE_ATR_L4TYPE_TCP            0x2
+#define TXGBE_ATR_L4TYPE_SCTP           0x3
+#define TXGBE_ATR_L4TYPE_IPV6_MASK      0x4
+#define TXGBE_ATR_L4TYPE_TUNNEL_MASK    0x10
+enum txgbe_atr_flow_type {
+	TXGBE_ATR_FLOW_TYPE_IPV4        = 0x0,
+	TXGBE_ATR_FLOW_TYPE_UDPV4       = 0x1,
+	TXGBE_ATR_FLOW_TYPE_TCPV4       = 0x2,
+	TXGBE_ATR_FLOW_TYPE_SCTPV4      = 0x3,
+	TXGBE_ATR_FLOW_TYPE_IPV6        = 0x4,
+	TXGBE_ATR_FLOW_TYPE_UDPV6       = 0x5,
+	TXGBE_ATR_FLOW_TYPE_TCPV6       = 0x6,
+	TXGBE_ATR_FLOW_TYPE_SCTPV6      = 0x7,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4       = 0x10,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4      = 0x11,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4      = 0x12,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4     = 0x13,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6       = 0x14,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6      = 0x15,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6      = 0x16,
+	TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6     = 0x17,
+};
+
+/* Flow Director ATR input struct. */
+union txgbe_atr_input {
+	/* Byte layout in order, all values with MSB first:
+	 *
+	 * vm_pool      - 1 byte
+	 * flow_type    - 1 byte
+	 * vlan_id      - 2 bytes
+	 * dst_ip       - 16 bytes
+	 * src_ip       - 16 bytes
+	 * src_port     - 2 bytes
+	 * dst_port     - 2 bytes
+	 * flex_bytes   - 2 bytes
+	 * bkt_hash     - 2 bytes
+	 */
+	struct {
+		u8 vm_pool;
+		u8 flow_type;
+		__be16 vlan_id;
+		__be32 dst_ip[4];
+		__be32 src_ip[4];
+		__be16 src_port;
+		__be16 dst_port;
+		__be16 flex_bytes;
+		__be16 bkt_hash;
+	} formatted;
+	__be32 dword_stream[11];
+};
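
A compile-time layout check could document the aliasing here: the formatted
view is 1+1+2+16+16+2+2+2+2 = 44 bytes, exactly the 11 dwords of
dword_stream. Illustrative only, not part of the patch; assumes the kernel's
static_assert macro from <linux/build_bug.h>:

	/* illustrative check, not part of this patch */
	static_assert(sizeof(union txgbe_atr_input) == 11 * sizeof(__be32),
		      "formatted view must cover the 11-dword stream");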
+
+/* Flow Director compressed ATR hash input struct */
+union txgbe_atr_hash_dword {
+	struct {
+		u8 vm_pool;
+		u8 flow_type;
+		__be16 vlan_id;
+	} formatted;
+	__be32 ip;
+	struct {
+		__be16 src;
+		__be16 dst;
+	} port;
+	__be16 flex_bytes;
+	__be32 dword;
+};
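
Every view of txgbe_atr_hash_dword aliases the same 32 bits, so a similar
check applies (again illustrative only):

	/* illustrative check, not part of this patch */
	static_assert(sizeof(union txgbe_atr_hash_dword) == sizeof(__be32),
		      "all views must alias one dword");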
+
 /****************** Manageablility Host Interface defines ********************/
 #define TXGBE_HI_MAX_BLOCK_BYTE_LENGTH  256 /* Num of bytes in range */
 #define TXGBE_HI_MAX_BLOCK_DWORD_LENGTH 64 /* Num of dwords in range */
@@ -2330,6 +2409,8 @@ struct txgbe_hw_stats {
 	u64 qbtc[16];
 	u64 qprdc[16];
 	u64 pxon2offc[8];
+	u64 fdirmatch;
+	u64 fdirmiss;
 	u64 fccrc;
 	u64 fclast;
 	u64 ldpcec;
-- 
2.27.0
^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 09/14] net: txgbe: Support PTP
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (7 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 08/14] net: txgbe: Support flow director Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 10/14] net: txgbe: Add ethtool support Jiawen Wu
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support for the PTP hardware clock: clock adjustment operations,
Tx/Rx hardware timestamping, and the SIOCGHWTSTAMP/SIOCSHWTSTAMP ioctls.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   2 +-
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  34 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  95 +-
 .../net/ethernet/wangxun/txgbe/txgbe_ptp.c    | 840 ++++++++++++++++++
 4 files changed, 967 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_ptp.c

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index aceb0d2e56ee..f233bc45575c 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -8,4 +8,4 @@ obj-$(CONFIG_TXGBE) += txgbe.o
 
 txgbe-objs := txgbe_main.o \
               txgbe_hw.o txgbe_phy.o \
-              txgbe_lib.o
+              txgbe_lib.o txgbe_ptp.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 7a6683b33930..3b429cca494e 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -12,6 +12,8 @@
 #include <linux/etherdevice.h>
 #include <linux/timecounter.h>
 #include <linux/clocksource.h>
+#include <linux/net_tstamp.h>
+#include <linux/ptp_clock_kernel.h>
 #include <linux/aer.h>
 
 #include "txgbe_type.h"
@@ -110,6 +112,7 @@ enum txgbe_tx_flags {
  */
 struct txgbe_tx_buffer {
 	union txgbe_tx_desc *next_to_watch;
+	unsigned long time_stamp;
 	struct sk_buff *skb;
 	unsigned int bytecount;
 	unsigned short gso_segs;
@@ -200,6 +203,7 @@ struct txgbe_ring {
 	u8 reg_idx;
 	u16 next_to_use;
 	u16 next_to_clean;
+	unsigned long last_rx_timestamp;
 	u16 rx_buf_len;
 	union {
 		u16 next_to_alloc;
@@ -363,6 +367,8 @@ struct txgbe_mac_addr {
 #define TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE         ((u32)(1 << 5))
 #define TXGBE_FLAG_FDIR_HASH_CAPABLE            ((u32)(1 << 6))
 #define TXGBE_FLAG_FDIR_PERFECT_CAPABLE         ((u32)(1 << 7))
+#define TXGBE_FLAG_RX_HWTSTAMP_ENABLED          ((u32)(1 << 8))
+#define TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER      ((u32)(1 << 9))
 
 /**
  * txgbe_adapter.flag2
@@ -477,6 +483,22 @@ struct txgbe_adapter {
 	bool netdev_registered;
 	u32 interrupt_event;
 
+	struct ptp_clock *ptp_clock;
+	struct ptp_clock_info ptp_caps;
+	struct work_struct ptp_tx_work;
+	struct sk_buff *ptp_tx_skb;
+	struct hwtstamp_config tstamp_config;
+	unsigned long ptp_tx_start;
+	unsigned long last_overflow_check;
+	unsigned long last_rx_ptp_check;
+	spinlock_t tmreg_lock;
+	struct cyclecounter hw_cc;
+	struct timecounter hw_tc;
+	u32 base_incval;
+	u32 tx_hwtstamp_timeouts;
+	u32 tx_hwtstamp_skipped;
+	u32 rx_hwtstamp_cleared;
+
 	struct txgbe_mac_addr *mac_table;
 
 	__le16 vxlan_port;
@@ -572,6 +594,18 @@ static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring)
 	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
 }
 
+void txgbe_ptp_init(struct txgbe_adapter *adapter);
+void txgbe_ptp_stop(struct txgbe_adapter *adapter);
+void txgbe_ptp_suspend(struct txgbe_adapter *adapter);
+void txgbe_ptp_overflow_check(struct txgbe_adapter *adapter);
+void txgbe_ptp_rx_hang(struct txgbe_adapter *adapter);
+void txgbe_ptp_rx_hwtstamp(struct txgbe_adapter *adapter, struct sk_buff *skb);
+int txgbe_ptp_set_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr);
+int txgbe_ptp_get_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr);
+void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter);
+void txgbe_ptp_reset(struct txgbe_adapter *adapter);
+void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter);
+
 /**
  * interrupt masking operations. each bit in PX_ICn correspond to a interrupt.
  * disable a interrupt by writing to PX_IMS with the corresponding bit=1
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index ea17747c0df9..b225417c27d5 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -459,12 +459,13 @@ static bool txgbe_clean_tx_irq(struct txgbe_q_vector *q_vector,
 			"  next_to_use          <%x>\n"
 			"  next_to_clean        <%x>\n"
 			"tx_buffer_info[next_to_clean]\n"
+			"  time_stamp           <%lx>\n"
 			"  jiffies              <%lx>\n",
 			tx_ring->queue_index,
 			rd32(hw, TXGBE_PX_TR_RP(tx_ring->reg_idx)),
 			rd32(hw, TXGBE_PX_TR_WP(tx_ring->reg_idx)),
 			tx_ring->next_to_use, i,
-			jiffies);
+			tx_ring->tx_buffer_info[i].time_stamp, jiffies);
 
 		pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value);
 		if (value == TXGBE_FAILED_READ_CFG_WORD)
@@ -735,17 +736,25 @@ static void txgbe_rx_vlan(struct txgbe_ring *ring,
  * @skb: pointer to current skb being populated
  *
  * This function checks the ring, descriptor, and packet information in
- * order to populate the hash, checksum, VLAN, protocol, and
+ * order to populate the hash, checksum, VLAN, timestamp, protocol, and
  * other fields within the skb.
  **/
 static void txgbe_process_skb_fields(struct txgbe_ring *rx_ring,
 				     union txgbe_rx_desc *rx_desc,
 				     struct sk_buff *skb)
 {
+	u32 flags = rx_ring->q_vector->adapter->flags;
+
 	txgbe_update_rsc_stats(rx_ring, skb);
 	txgbe_rx_hash(rx_ring, rx_desc, skb);
 	txgbe_rx_checksum(rx_ring, rx_desc, skb);
 
+	if (unlikely(flags & TXGBE_FLAG_RX_HWTSTAMP_ENABLED) &&
+	    unlikely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_TS))) {
+		txgbe_ptp_rx_hwtstamp(rx_ring->q_vector->adapter, skb);
+		rx_ring->last_rx_timestamp = jiffies;
+	}
+
 	txgbe_rx_vlan(rx_ring, rx_desc, skb);
 
 	skb_record_rx_queue(skb, rx_ring->queue_index);
@@ -1491,6 +1500,8 @@ void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush)
 	    !(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT))
 		mask |= TXGBE_PX_MISC_IEN_FLOW_DIR;
 
+	mask |= TXGBE_PX_MISC_IEN_TIMESYNC;
+
 	wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask);
 
 	/* unmask interrupt */
@@ -1559,6 +1570,9 @@ static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
 	txgbe_check_sfp_event(adapter, eicr);
 	txgbe_check_overtemp_event(adapter, eicr);
 
+	if (unlikely(eicr & TXGBE_PX_MISC_IC_TIMESYNC))
+		txgbe_ptp_check_pps_event(adapter);
+
 	/* re-enable the original interrupt state, no lsc, no queues */
 	if (!test_bit(__TXGBE_DOWN, &adapter->state))
 		txgbe_irq_enable(adapter, false, false);
@@ -1747,6 +1761,9 @@ static irqreturn_t txgbe_intr(int __always_unused irq, void *data)
 	txgbe_check_sfp_event(adapter, eicr_misc);
 	txgbe_check_overtemp_event(adapter, eicr_misc);
 
+	if (unlikely(eicr_misc & TXGBE_PX_MISC_IC_TIMESYNC))
+		txgbe_ptp_check_pps_event(adapter);
+
 	adapter->isb_mem[TXGBE_ISB_MISC] = 0;
 	/* would disable interrupts here but it is auto disabled */
 	napi_schedule_irqoff(&q_vector->napi);
@@ -3193,6 +3210,9 @@ void txgbe_reset(struct txgbe_adapter *adapter)
 
 	/* update SAN MAC vmdq pool selection */
 	TCALL(hw, mac.ops.set_vmdq_san_mac, 0);
+
+	if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+		txgbe_ptp_reset(adapter);
 }
 
 /**
@@ -3865,6 +3885,8 @@ int txgbe_open(struct net_device *netdev)
 	if (err)
 		goto err_set_queues;
 
+	txgbe_ptp_init(adapter);
+
 	txgbe_up_complete(adapter);
 
 	txgbe_clear_vxlan_port(adapter);
@@ -3898,6 +3920,8 @@ static void txgbe_close_suspend(struct txgbe_adapter *adapter)
 {
 	struct txgbe_hw *hw = &adapter->hw;
 
+	txgbe_ptp_suspend(adapter);
+
 	txgbe_disable_device(adapter);
 	if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
 		TCALL(hw, mac.ops.disable_tx_laser);
@@ -3926,6 +3950,8 @@ int txgbe_close(struct net_device *netdev)
 {
 	struct txgbe_adapter *adapter = netdev_priv(netdev);
 
+	txgbe_ptp_stop(adapter);
+
 	txgbe_down(adapter);
 	txgbe_free_irq(adapter);
 
@@ -4264,6 +4290,11 @@ static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter)
 		TCALL(hw, mac.ops.fc_enable);
 		txgbe_set_rx_drop_en(adapter);
 
+		adapter->last_rx_ptp_check = jiffies;
+
+		if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+			txgbe_ptp_start_cyclecounter(adapter);
+
 		if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) {
 			wr32(hw, TXGBE_MAC_TX_CFG,
 			     (rd32(hw, TXGBE_MAC_TX_CFG) &
@@ -4341,6 +4372,9 @@ static void txgbe_watchdog_link_is_down(struct txgbe_adapter *adapter)
 	if (!netif_carrier_ok(netdev))
 		return;
 
+	if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+		txgbe_ptp_start_cyclecounter(adapter);
+
 	txgbe_info(drv, "NIC Link is Down\n");
 	netif_carrier_off(netdev);
 	netif_tx_stop_all_queues(netdev);
@@ -4641,6 +4675,12 @@ static void txgbe_service_task(struct work_struct *work)
 	txgbe_watchdog_subtask(adapter);
 	txgbe_fdir_reinit_subtask(adapter);
 	txgbe_check_hang_subtask(adapter);
+	if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) {
+		txgbe_ptp_overflow_check(adapter);
+		if (unlikely(adapter->flags &
+			     TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER))
+			txgbe_ptp_rx_hang(adapter);
+	}
 
 	txgbe_service_event_complete(adapter);
 }
@@ -5102,6 +5142,10 @@ static u32 txgbe_tx_cmd_type(u32 tx_flags)
 	cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSO,
 				   TXGBE_TXD_TSE);
 
+	/* set timestamp bit if present */
+	cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSTAMP,
+				   TXGBE_TXD_MAC_TSTAMP);
+
 	cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_LINKSEC,
 				   TXGBE_TXD_LINKSEC);
 
@@ -5255,6 +5299,11 @@ static int txgbe_tx_map(struct txgbe_ring *tx_ring,
 
 	netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);
 
+	/* set the timestamp */
+	first->time_stamp = jiffies;
+
+	skb_tx_timestamp(skb);
+
 	/* Force memory writes to complete before letting h/w know there
 	 * are new descriptors to fetch.  (Only applicable for weak-ordered
 	 * memory model archs, such as IA-64).
@@ -5509,6 +5558,22 @@ netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
 		vlan_addlen += VLAN_HLEN;
 	}
 
+	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+	    adapter->ptp_clock) {
+		if (!test_and_set_bit_lock(__TXGBE_PTP_TX_IN_PROGRESS,
+					   &adapter->state)) {
+			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+			tx_flags |= TXGBE_TX_FLAGS_TSTAMP;
+
+			/* schedule check for Tx timestamp */
+			adapter->ptp_tx_skb = skb_get(skb);
+			adapter->ptp_tx_start = jiffies;
+			schedule_work(&adapter->ptp_tx_work);
+		} else {
+			adapter->tx_hwtstamp_skipped++;
+		}
+	}
+
 	/* record initial flags and protocol */
 	first->tx_flags = tx_flags;
 	first->protocol = protocol;
@@ -5525,7 +5590,8 @@ netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
 	if (test_bit(__TXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
 		txgbe_atr(tx_ring, first, dptype);
 
-	txgbe_tx_map(tx_ring, first, hdr_len);
+	if (txgbe_tx_map(tx_ring, first, hdr_len))
+		goto cleanup_tx_tstamp;
 
 	return NETDEV_TX_OK;
 
@@ -5533,6 +5599,14 @@ netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
 	dev_kfree_skb_any(first->skb);
 	first->skb = NULL;
 
+cleanup_tx_tstamp:
+	if (unlikely(tx_flags & TXGBE_TX_FLAGS_TSTAMP)) {
+		dev_kfree_skb_any(adapter->ptp_tx_skb);
+		adapter->ptp_tx_skb = NULL;
+		cancel_work_sync(&adapter->ptp_tx_work);
+		clear_bit_unlock(__TXGBE_PTP_TX_IN_PROGRESS, &adapter->state);
+	}
+
 	return NETDEV_TX_OK;
 }
 
@@ -5632,6 +5706,20 @@ static int txgbe_del_sanmac_netdev(struct net_device *dev)
 	return err;
 }
 
+static int txgbe_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	switch (cmd) {
+	case SIOCGHWTSTAMP:
+		return txgbe_ptp_get_ts_config(adapter, ifr);
+	case SIOCSHWTSTAMP:
+		return txgbe_ptp_set_ts_config(adapter, ifr);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 void txgbe_do_reset(struct net_device *netdev)
 {
 	struct txgbe_adapter *adapter = netdev_priv(netdev);
@@ -5805,6 +5893,7 @@ static const struct net_device_ops txgbe_netdev_ops = {
 	.ndo_tx_timeout         = txgbe_tx_timeout,
 	.ndo_vlan_rx_add_vid    = txgbe_vlan_rx_add_vid,
 	.ndo_vlan_rx_kill_vid   = txgbe_vlan_rx_kill_vid,
+	.ndo_eth_ioctl          = txgbe_ioctl,
 	.ndo_get_stats64        = txgbe_get_stats64,
 	.ndo_fdb_add            = txgbe_ndo_fdb_add,
 	.ndo_features_check     = txgbe_features_check,
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ptp.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_ptp.c
new file mode 100644
index 000000000000..11184c725c2e
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ptp.c
@@ -0,0 +1,840 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe.h"
+#include <linux/ptp_classify.h>
+
+/**
+ * SYSTIME is defined by a fixed point system which allows the user to
+ * define the scale counter increment value at every level change of
+ * the oscillator driving the SYSTIME value. The time unit is determined
+ * by the clock frequency of the oscillator and the TIMINCA register.
+ * The cyclecounter and timecounter structures are used to convert
+ * the scale counter into nanoseconds; the SYSTIME registers then need
+ * only a right shift to be converted to ns values.
+ * The following math determines the largest incvalue that will fit into
+ * the available bits in the TIMINCA register:
+ *   Period * [ 2 ^ ( MaxWidth - PeriodWidth ) ]
+ * PeriodWidth: Number of bits to store the clock period
+ * MaxWidth: The maximum width value of the TIMINCA register
+ * Period: The clock period for the oscillator, which changes based on the link
+ * speed:
+ *   At 10Gb link or no link, the period is 6.4 ns.
+ *   At 1Gb link, the period is multiplied by 10. (64ns)
+ *   At 100Mb link, the period is multiplied by 100. (640ns)
+ * round(): discard the fractional portion of the calculation
+ *
+ * The calculated value allows us to right shift the SYSTIME register
+ * value in order to quickly convert it into a nanosecond clock,
+ * while allowing for the maximum possible adjustment value.
+ *
+ * LinkSpeed    ClockFreq   ClockPeriod  TIMINCA:IV
+ * 10000Mbps    156.25MHz   6.4*10^-9    0xCCCCCC(0xFFFFF/ns)
+ * 1000 Mbps    62.5  MHz   16 *10^-9    0x800000(0x7FFFF/ns)
+ * 100  Mbps    6.25  MHz   160*10^-9    0xA00000(0xFFFF/ns)
+ * 10   Mbps    0.625 MHz   1600*10^-9   0xC7F380(0xFFF/ns)
+ * FPGA         31.25 MHz   32 *10^-9    0x800000(0x3FFFF/ns)
+ *
+ * These diagrams are only for the 10Gb link period
+ *
+ *       +--------------+  +--------------+
+ *       |      32      |  | 8 | 3 |  20  |
+ *       *--------------+  +--------------+
+ *        \________ 43 bits ______/  fract
+ *
+ * The 43 bit SYSTIME overflows every
+ *   2^43 * 10^-9 / 3600 = 2.4 hours
+ **/
+#define TXGBE_INCVAL_10GB 0xCCCCCC
+#define TXGBE_INCVAL_1GB  0x800000
+#define TXGBE_INCVAL_100  0xA00000
+#define TXGBE_INCVAL_10   0xC7F380
+#define TXGBE_INCVAL_FPGA 0x800000
+
+#define TXGBE_INCVAL_SHIFT_10GB  20
+#define TXGBE_INCVAL_SHIFT_1GB   18
+#define TXGBE_INCVAL_SHIFT_100   15
+#define TXGBE_INCVAL_SHIFT_10    12
+#define TXGBE_INCVAL_SHIFT_FPGA  17
+
+#define TXGBE_OVERFLOW_PERIOD    (HZ * 30)
+#define TXGBE_PTP_TX_TIMEOUT     (HZ)
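
As a cross-check of the overflow figure in the comment above:

	2^43 ns = 8,796,093,022,208 ns ~= 8796 s ~= 2.44 h

so polling the timecounter every TXGBE_OVERFLOW_PERIOD (30 seconds) leaves a
very wide margin before a wrap of the 43-bit SYSTIME value could be missed.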
+
+/**
+ * txgbe_ptp_read - read raw cycle counter (to be used by time counter)
+ * @hw_cc: the cyclecounter structure
+ *
+ * this function reads the cyclecounter registers and is called by the
+ * cyclecounter structure used to construct a ns counter from the
+ * arbitrary fixed point registers
+ */
+static u64 txgbe_ptp_read(const struct cyclecounter *hw_cc)
+{
+	struct txgbe_adapter *adapter =
+		container_of(hw_cc, struct txgbe_adapter, hw_cc);
+	struct txgbe_hw *hw = &adapter->hw;
+	u64 stamp = 0;
+
+	stamp |= (u64)rd32(hw, TXGBE_TSC_1588_SYSTIML);
+	stamp |= (u64)rd32(hw, TXGBE_TSC_1588_SYSTIMH) << 32;
+
+	return stamp;
+}
+
+/**
+ * txgbe_ptp_convert_to_hwtstamp - convert register value to hw timestamp
+ * @adapter: private adapter structure
+ * @hwtstamp: stack timestamp structure
+ * @systim: unsigned 64bit system time value
+ *
+ * We need to convert the adapter's RX/TXSTMP registers into a hwtstamp value
+ * which can be used by the stack's ptp functions.
+ *
+ * The lock is used to protect consistency of the cyclecounter and the SYSTIME
+ * registers. However, it does not need to protect against the Rx or Tx
+ * timestamp registers, as there can't be a new timestamp until the old one is
+ * unlatched by reading.
+ *
+ * In addition to the timestamp in hardware, some controllers need a software
+ * overflow cyclecounter, and this function takes this into account as well.
+ **/
+static void txgbe_ptp_convert_to_hwtstamp(struct txgbe_adapter *adapter,
+					  struct skb_shared_hwtstamps *hwtstamp,
+					  u64 timestamp)
+{
+	unsigned long flags;
+	u64 ns;
+
+	memset(hwtstamp, 0, sizeof(*hwtstamp));
+
+	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+	ns = timecounter_cyc2time(&adapter->hw_tc, timestamp);
+	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+	hwtstamp->hwtstamp = ns_to_ktime(ns);
+}
+
+/**
+ * txgbe_ptp_adjfreq
+ * @ptp: the ptp clock structure
+ * @ppb: parts per billion adjustment from base
+ *
+ * adjust the frequency of the ptp cycle counter by the
+ * indicated ppb from the base frequency.
+ */
+static int txgbe_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+{
+	struct txgbe_adapter *adapter =
+		container_of(ptp, struct txgbe_adapter, ptp_caps);
+	struct txgbe_hw *hw = &adapter->hw;
+	u64 freq, incval;
+	u32 diff;
+	int neg_adj = 0;
+
+	if (ppb < 0) {
+		neg_adj = 1;
+		ppb = -ppb;
+	}
+
+	smp_mb();
+	incval = READ_ONCE(adapter->base_incval);
+
+	freq = incval;
+	freq *= ppb;
+	diff = div_u64(freq, 1000000000ULL);
+
+	incval = neg_adj ? (incval - diff) : (incval + diff);
+
+	if (incval > TXGBE_TSC_1588_INC_IV(~0))
+		txgbe_dev_warn("PTP ppb adjusted SYSTIME rate overflowed!\n");
+	wr32(hw, TXGBE_TSC_1588_INC,
+	     TXGBE_TSC_1588_INC_IVP(incval, 2));
+
+	return 0;
+}
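
Worked example for the adjustment math above, using the 10G value from the
table (base incval = 0xCCCCCC = 13,421,772): a request of ppb = +100 yields

	diff = 13421772 * 100 / 1000000000 = 1   (1.342 truncated)

so incval becomes 13,421,773. One incval step is 1/13,421,772 of the base
rate, i.e. the achievable frequency granularity here is roughly 74.5 ppb.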
+
+/**
+ * txgbe_ptp_adjtime
+ * @ptp: the ptp clock structure
+ * @delta: offset to adjust the cycle counter by ns
+ *
+ * adjust the timer by resetting the timecounter structure.
+ */
+static int txgbe_ptp_adjtime(struct ptp_clock_info *ptp,
+			     s64 delta)
+{
+	struct txgbe_adapter *adapter =
+		container_of(ptp, struct txgbe_adapter, ptp_caps);
+	unsigned long flags;
+
+	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+	timecounter_adjtime(&adapter->hw_tc, delta);
+	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+	return 0;
+}
+
+/**
+ * txgbe_ptp_gettime64
+ * @ptp: the ptp clock structure
+ * @ts: timespec64 structure to hold the current time value
+ *
+ * read the timecounter and return the correct value on ns,
+ * after converting it into a struct timespec64.
+ */
+static int txgbe_ptp_gettime64(struct ptp_clock_info *ptp,
+			       struct timespec64 *ts)
+{
+	struct txgbe_adapter *adapter =
+		container_of(ptp, struct txgbe_adapter, ptp_caps);
+	unsigned long flags;
+	u64 ns;
+
+	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+	ns = timecounter_read(&adapter->hw_tc);
+	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+	*ts = ns_to_timespec64(ns);
+
+	return 0;
+}
+
+/**
+ * txgbe_ptp_settime64
+ * @ptp: the ptp clock structure
+ * @ts: the timespec64 containing the new time for the cycle counter
+ *
+ * reset the timecounter to use a new base value instead of the kernel
+ * wall timer value.
+ */
+static int txgbe_ptp_settime64(struct ptp_clock_info *ptp,
+			       const struct timespec64 *ts)
+{
+	struct txgbe_adapter *adapter =
+		container_of(ptp, struct txgbe_adapter, ptp_caps);
+	u64 ns;
+	unsigned long flags;
+
+	ns = timespec64_to_ns(ts);
+
+	/* reset the timecounter */
+	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+	timecounter_init(&adapter->hw_tc, &adapter->hw_cc, ns);
+	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+	return 0;
+}
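
With these clock operations registered, the resulting /dev/ptpN device can
be exercised from user space, e.g. with the kernel's testptp selftest
(device index is an example):

	# read the PHC time
	testptp -d /dev/ptp0 -g
	# slew the clock frequency by +100 ppb (reaches txgbe_ptp_adjfreq)
	testptp -d /dev/ptp0 -f 100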
+
+/**
+ * txgbe_ptp_check_pps_event
+ * @adapter: the private adapter structure
+ *
+ * This function is called by the interrupt routine when checking for
+ * interrupts. It will check and handle a pps event.
+ */
+void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter)
+{
+	struct ptp_clock_event event;
+
+	event.type = PTP_CLOCK_PPS;
+
+	/* this check is necessary in case the interrupt was enabled via some
+	 * alternative means (e.g. debugfs). Better to check here than
+	 * everywhere that calls this function.
+	 */
+	if (!adapter->ptp_clock)
+		return;
+
+	/* we don't config PPS on SDP yet, so just return.
+	 * ptp_clock_event(adapter->ptp_clock, &event);
+	 */
+}
+
+/**
+ * txgbe_ptp_overflow_check - watchdog task to detect SYSTIME overflow
+ * @adapter: private adapter struct
+ *
+ * this watchdog task periodically reads the timecounter
+ * so that a wrap of the system time registers is never missed. This
+ * needs to be run approximately twice a minute for the fastest
+ * overflowing hardware. We run it for all hardware since it shouldn't have a
+ * large impact.
+ */
+void txgbe_ptp_overflow_check(struct txgbe_adapter *adapter)
+{
+	bool timeout = time_is_before_jiffies(adapter->last_overflow_check +
+					      TXGBE_OVERFLOW_PERIOD);
+	struct timespec64 ts;
+
+	if (timeout) {
+		txgbe_ptp_gettime64(&adapter->ptp_caps, &ts);
+		adapter->last_overflow_check = jiffies;
+	}
+}
+
+/**
+ * txgbe_ptp_rx_hang - detect error case when Rx timestamp registers latched
+ * @adapter: private network adapter structure
+ *
+ * this watchdog task is scheduled to detect the error case where hardware
+ * has dropped an Rx packet that was timestamped when the ring was full. The
+ * particular error is rare, but it leaves the device in a state unable to
+ * timestamp any future packets.
+ */
+void txgbe_ptp_rx_hang(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_ring *rx_ring;
+	u32 tsyncrxctl = rd32(hw, TXGBE_PSR_1588_CTL);
+	unsigned long rx_event;
+	int n;
+
+	/* if we don't have a valid timestamp in the registers, just update the
+	 * timeout counter and exit
+	 */
+	if (!(tsyncrxctl & TXGBE_PSR_1588_CTL_VALID)) {
+		adapter->last_rx_ptp_check = jiffies;
+		return;
+	}
+
+	/* determine the most recent watchdog or rx_timestamp event */
+	rx_event = adapter->last_rx_ptp_check;
+	for (n = 0; n < adapter->num_rx_queues; n++) {
+		rx_ring = adapter->rx_ring[n];
+		if (time_after(rx_ring->last_rx_timestamp, rx_event))
+			rx_event = rx_ring->last_rx_timestamp;
+	}
+
+	/* only need to read the high RXSTMP register to clear the lock */
+	if (time_is_before_jiffies(rx_event + 5 * HZ)) {
+		rd32(hw, TXGBE_PSR_1588_STMPH);
+		adapter->last_rx_ptp_check = jiffies;
+
+		adapter->rx_hwtstamp_cleared++;
+		txgbe_warn(drv, "clearing RX Timestamp hang");
+	}
+}
+
+/**
+ * txgbe_ptp_clear_tx_timestamp - utility function to clear Tx timestamp state
+ * @adapter: the private adapter structure
+ *
+ * This function should be called whenever the state related to a Tx timestamp
+ * needs to be cleared. This helps ensure that all related bits are reset for
+ * the next Tx timestamp event.
+ */
+static void txgbe_ptp_clear_tx_timestamp(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	rd32(hw, TXGBE_TSC_1588_STMPH);
+	if (adapter->ptp_tx_skb) {
+		dev_kfree_skb_any(adapter->ptp_tx_skb);
+		adapter->ptp_tx_skb = NULL;
+	}
+	clear_bit_unlock(__TXGBE_PTP_TX_IN_PROGRESS, &adapter->state);
+}
+
+/**
+ * txgbe_ptp_tx_hwtstamp - utility function which checks for TX time stamp
+ * @adapter: the private adapter struct
+ *
+ * if the timestamp is valid, we convert it into the timecounter ns
+ * value, then store that result into the shhwtstamps structure which
+ * is passed up the network stack
+ */
+static void txgbe_ptp_tx_hwtstamp(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct skb_shared_hwtstamps shhwtstamps;
+	u64 regval = 0;
+
+	regval |= (u64)rd32(hw, TXGBE_TSC_1588_STMPL);
+	regval |= (u64)rd32(hw, TXGBE_TSC_1588_STMPH) << 32;
+
+	txgbe_ptp_convert_to_hwtstamp(adapter, &shhwtstamps, regval);
+	skb_tstamp_tx(adapter->ptp_tx_skb, &shhwtstamps);
+
+	txgbe_ptp_clear_tx_timestamp(adapter);
+}
+
+/**
+ * txgbe_ptp_tx_hwtstamp_work
+ * @work: pointer to the work struct
+ *
+ * This work item polls TSYNCTXCTL valid bit to determine when a Tx hardware
+ * timestamp has been taken for the current skb. It is necessary, because the
+ * descriptor's "done" bit does not correlate with the timestamp event.
+ */
+static void txgbe_ptp_tx_hwtstamp_work(struct work_struct *work)
+{
+	struct txgbe_adapter *adapter = container_of(work, struct txgbe_adapter,
+						     ptp_tx_work);
+	struct txgbe_hw *hw = &adapter->hw;
+	bool timeout = time_is_before_jiffies(adapter->ptp_tx_start +
+					      TXGBE_PTP_TX_TIMEOUT);
+	u32 tsynctxctl;
+
+	/* we have to have a valid skb to poll for a timestamp */
+	if (!adapter->ptp_tx_skb) {
+		txgbe_ptp_clear_tx_timestamp(adapter);
+		return;
+	}
+
+	/* stop polling once we have a valid timestamp */
+	tsynctxctl = rd32(hw, TXGBE_TSC_1588_CTL);
+	if (tsynctxctl & TXGBE_TSC_1588_CTL_VALID) {
+		txgbe_ptp_tx_hwtstamp(adapter);
+		return;
+	}
+
+	/* check timeout last in case timestamp event just occurred */
+	if (timeout) {
+		txgbe_ptp_clear_tx_timestamp(adapter);
+		adapter->tx_hwtstamp_timeouts++;
+		txgbe_warn(drv, "clearing Tx Timestamp hang");
+	} else {
+		/* reschedule to keep checking until we timeout */
+		schedule_work(&adapter->ptp_tx_work);
+	}
+}
+
+/**
+ * txgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp
+ * @adapter: the private adapter structure
+ * @skb: particular skb to send timestamp with
+ *
+ * if the timestamp is valid, we convert it into the timecounter ns
+ * value, then store that result into the shhwtstamps structure which
+ * is passed up the network stack
+ */
+void txgbe_ptp_rx_hwtstamp(struct txgbe_adapter *adapter, struct sk_buff *skb)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u64 regval = 0;
+	u32 tsyncrxctl;
+
+	/* Read the tsyncrxctl register only after the descriptor has indicated
+	 * a timestamp, to avoid taking an I/O hit on every packet.
+	 */
+	tsyncrxctl = rd32(hw, TXGBE_PSR_1588_CTL);
+	if (!(tsyncrxctl & TXGBE_PSR_1588_CTL_VALID))
+		return;
+
+	regval |= (u64)rd32(hw, TXGBE_PSR_1588_STMPL);
+	regval |= (u64)rd32(hw, TXGBE_PSR_1588_STMPH) << 32;
+
+	txgbe_ptp_convert_to_hwtstamp(adapter, skb_hwtstamps(skb), regval);
+}
+
+/**
+ * txgbe_ptp_get_ts_config - get current hardware timestamping configuration
+ * @adapter: pointer to adapter structure
+ * @ifreq: ioctl data
+ *
+ * This function returns the current timestamping settings. Rather than
+ * attempt to deconstruct registers to fill in the values, simply keep a copy
+ * of the old settings around, and return a copy when requested.
+ */
+int txgbe_ptp_get_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr)
+{
+	struct hwtstamp_config *config = &adapter->tstamp_config;
+
+	return copy_to_user(ifr->ifr_data, config,
+			    sizeof(*config)) ? -EFAULT : 0;
+}
+
+/**
+ * txgbe_ptp_set_timestamp_mode - setup the hardware for the requested mode
+ * @adapter: the private txgbe adapter structure
+ * @config: the hwtstamp configuration requested
+ *
+ * Outgoing time stamping can be enabled and disabled. Play nice and
+ * disable it when requested, although it shouldn't cause any overhead
+ * when no packet needs it. At most one packet in the queue may be
+ * marked for time stamping, otherwise it would be impossible to tell
+ * for sure to which packet the hardware time stamp belongs.
+ *
+ * Incoming time stamping has to be configured via the hardware
+ * filters. Not all combinations are supported, in particular event
+ * type has to be specified. Matching the kind of event packet is
+ * not supported, with the exception of "all V2 events regardless of
+ * level 2 or 4".
+ *
+ * Since hardware always timestamps Path delay packets when timestamping V2
+ * packets, regardless of the type specified in the register, only use V2
+ * Event mode. This more accurately tells the user what the hardware is going
+ * to do anyways.
+ *
+ * Note: this may modify the hwtstamp configuration towards a more general
+ * mode, if required to support the specifically requested mode.
+ */
+static int txgbe_ptp_set_timestamp_mode(struct txgbe_adapter *adapter,
+					struct hwtstamp_config *config)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 tsync_tx_ctl = TXGBE_TSC_1588_CTL_ENABLED;
+	u32 tsync_rx_ctl = TXGBE_PSR_1588_CTL_ENABLED;
+	u32 tsync_rx_mtrl = PTP_EV_PORT << 16;
+	bool is_l2 = false;
+	u32 regval;
+
+	/* reserved for future extensions */
+	if (config->flags)
+		return -EINVAL;
+
+	switch (config->tx_type) {
+	case HWTSTAMP_TX_OFF:
+		tsync_tx_ctl = 0;
+		fallthrough;
+	case HWTSTAMP_TX_ON:
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	switch (config->rx_filter) {
+	case HWTSTAMP_FILTER_NONE:
+		tsync_rx_ctl = 0;
+		tsync_rx_mtrl = 0;
+		adapter->flags &= ~(TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+				    TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+		break;
+	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+		tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_L4_V1;
+		tsync_rx_mtrl |= TXGBE_PSR_1588_MSGTYPE_V1_SYNC_MSG;
+		adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+				   TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+		break;
+	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+		tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_L4_V1;
+		tsync_rx_mtrl |= TXGBE_PSR_1588_MSGTYPE_V1_DELAY_REQ_MSG;
+		adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+				   TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+		break;
+	case HWTSTAMP_FILTER_PTP_V2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+		tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_EVENT_V2;
+		is_l2 = true;
+		config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+		adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+				   TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+		break;
+	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+	case HWTSTAMP_FILTER_ALL:
+	default:
+		/* register RXMTRL must be set in order to do V1 packets,
+		 * therefore it is not possible to time stamp both V1 Sync and
+		 * Delay_Req messages unless hardware supports timestamping all
+		 * packets => return error
+		 */
+		adapter->flags &= ~(TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+				    TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+		config->rx_filter = HWTSTAMP_FILTER_NONE;
+		return -ERANGE;
+	}
+
+	/* define ethertype filter for timestamping L2 packets */
+	if (is_l2)
+		wr32(hw,
+		     TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588),
+		     (TXGBE_PSR_ETYPE_SWC_FILTER_EN | /* enable filter */
+		      TXGBE_PSR_ETYPE_SWC_1588 | /* enable timestamping */
+		      ETH_P_1588));     /* 1588 eth protocol type */
+	else
+		wr32(hw,
+		     TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588),
+		     0);
+
+	/* enable/disable TX */
+	regval = rd32(hw, TXGBE_TSC_1588_CTL);
+	regval &= ~TXGBE_TSC_1588_CTL_ENABLED;
+	regval |= tsync_tx_ctl;
+	wr32(hw, TXGBE_TSC_1588_CTL, regval);
+
+	/* enable/disable RX */
+	regval = rd32(hw, TXGBE_PSR_1588_CTL);
+	regval &= ~(TXGBE_PSR_1588_CTL_ENABLED | TXGBE_PSR_1588_CTL_TYPE_MASK);
+	regval |= tsync_rx_ctl;
+	wr32(hw, TXGBE_PSR_1588_CTL, regval);
+
+	/* define which PTP packets are time stamped */
+	wr32(hw, TXGBE_PSR_1588_MSGTYPE, tsync_rx_mtrl);
+
+	TXGBE_WRITE_FLUSH(hw);
+
+	/* clear TX/RX timestamp state, just to be sure */
+	txgbe_ptp_clear_tx_timestamp(adapter);
+	rd32(hw, TXGBE_PSR_1588_STMPH);
+
+	return 0;
+}
+
+/**
+ * txgbe_ptp_set_ts_config - user entry point for timestamp mode
+ * @adapter: pointer to adapter struct
+ * @ifr: ioctl data
+ *
+ * Set hardware to requested mode. If unsupported, return an error with no
+ * changes. Otherwise, store the mode for future reference.
+ */
+int txgbe_ptp_set_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr)
+{
+	struct hwtstamp_config config;
+	int err;
+
+	if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+		return -EFAULT;
+
+	err = txgbe_ptp_set_timestamp_mode(adapter, &config);
+	if (err)
+		return err;
+
+	/* save these settings for future reference */
+	memcpy(&adapter->tstamp_config, &config,
+	       sizeof(adapter->tstamp_config));
+
+	return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
+		-EFAULT : 0;
+}
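
The handler above is reached through the standard SIOCSHWTSTAMP ioctl. A
minimal userspace sketch using only the generic kernel ABI (nothing here is
driver-specific; the interface name is illustrative):

	#include <linux/net_tstamp.h>
	#include <linux/sockios.h>
	#include <net/if.h>
	#include <string.h>
	#include <sys/ioctl.h>

	static int enable_hwtstamp(int sock, const char *ifname)
	{
		struct hwtstamp_config cfg = {
			.flags     = 0,		/* must be 0, see above */
			.tx_type   = HWTSTAMP_TX_ON,
			.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT,
		};
		struct ifreq ifr;

		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		ifr.ifr_data = (void *)&cfg;

		/* the driver may widen the filter; re-check cfg.rx_filter */
		return ioctl(sock, SIOCSHWTSTAMP, &ifr);
	}
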
+
+static void txgbe_ptp_link_speed_adjust(struct txgbe_adapter *adapter,
+					u32 *shift, u32 *incval)
+{
+	/*
+	 * Scale the NIC cycle counter by a large factor so that
+	 * relatively small corrections to the frequency can be added
+	 * or subtracted. The drawbacks of a large factor include
+	 * (a) the clock register overflows more quickly, (b) the cycle
+	 * counter structure must be able to convert the systime value
+	 * to nanoseconds using only a multiplier and a right-shift,
+	 * and (c) the value must fit within the timinca register space
+	 * => math based on internal DMA clock rate and available bits
+	 *
+	 * Note that when there is no link, the internal DMA clock runs at the
+	 * same rate as at 10Gb link speed. Set the registers correctly even
+	 * when the link is down to preserve the clock setting.
+	 */
+	switch (adapter->link_speed) {
+	case TXGBE_LINK_SPEED_10_FULL:
+		*shift = TXGBE_INCVAL_SHIFT_10;
+		*incval = TXGBE_INCVAL_10;
+		break;
+	case TXGBE_LINK_SPEED_100_FULL:
+		*shift = TXGBE_INCVAL_SHIFT_100;
+		*incval = TXGBE_INCVAL_100;
+		break;
+	case TXGBE_LINK_SPEED_1GB_FULL:
+		*shift = TXGBE_INCVAL_SHIFT_1GB;
+		*incval = TXGBE_INCVAL_1GB;
+		break;
+	case TXGBE_LINK_SPEED_10GB_FULL:
+	default: /* TXGBE_LINK_SPEED_10GB_FULL */
+		*shift = TXGBE_INCVAL_SHIFT_10GB;
+		*incval = TXGBE_INCVAL_10GB;
+		break;
+	}
+}
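
To make the shift/incval relationship concrete: the hardware adds incval to
SYSTIME on every internal DMA clock tick, so SYSTIME advances in units of
nanoseconds scaled up by 2^shift, and the cyclecounter (set up below with
mult == 1) converts a SYSTIME delta back with a single right shift. A minimal
sketch of that conversion:

	/* ns = (delta * cc.mult) >> cc.shift, with cc.mult == 1 here */
	static inline u64 systime_delta_to_ns(u64 delta, u32 shift)
	{
		return delta >> shift;
	}
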
+
+/**
+ * txgbe_ptp_start_cyclecounter - create the cycle counter from hw
+ * @adapter: pointer to the adapter structure
+ *
+ * This function should be called to set the proper values for the TIMINCA
+ * register and tell the cyclecounter structure what the tick rate of SYSTIME
+ * is. It does not directly modify SYSTIME registers or the timecounter
+ * structure. It should be called whenever a new TIMINCA value is necessary,
+ * such as during initialization or when the link speed changes.
+ */
+void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	unsigned long flags;
+	struct cyclecounter cc;
+	u32 incval = 0;
+
+	/* For this hardware the 64-bit mask is technically incorrect. The
+	 * timestamp overflows at approximately 61 bits. However, the
+	 * particular hardware does not overflow on an even bitmask value.
+	 * Instead, it overflows during conversion, because the upper 32 bits
+	 * hold billions of cycles. Timecounters are not really intended for
+	 * this purpose, so
+	 * they do not properly function if the overflow point isn't 2^N-1.
+	 * However, the actual SYSTIME values in question take ~138 years to
+	 * overflow. In practice this means they won't actually overflow. A
+	 * proper fix to this problem would require modification of the
+	 * timecounter delta calculations.
+	 */
+	cc.mask = CLOCKSOURCE_MASK(64);
+	cc.mult = 1;
+	cc.shift = 0;
+
+	cc.read = txgbe_ptp_read;
+	txgbe_ptp_link_speed_adjust(adapter, &cc.shift, &incval);
+	wr32(hw, TXGBE_TSC_1588_INC,
+	     TXGBE_TSC_1588_INC_IVP(incval, 2));
+
+	/* update the base incval used to calculate frequency adjustment */
+	WRITE_ONCE(adapter->base_incval, incval);
+	smp_mb();
+
+	/* need lock to prevent incorrect read while modifying cyclecounter */
+	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+	memcpy(&adapter->hw_cc, &cc, sizeof(adapter->hw_cc));
+	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+}
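
The saved base_incval is what the frequency-adjustment callback scales. The
real txgbe_ptp_adjfreq body is outside this excerpt, so the following is only
a sketch of the presumed pattern (mirroring comparable Intel-derived drivers),
with the helper name invented for illustration:

	static int example_adjfreq(struct txgbe_adapter *adapter, s32 ppb)
	{
		u64 incval = READ_ONCE(adapter->base_incval);
		u64 diff = div_u64(incval * abs(ppb), 1000000000U);

		incval = (ppb < 0) ? incval - diff : incval + diff;
		wr32(&adapter->hw, TXGBE_TSC_1588_INC,
		     TXGBE_TSC_1588_INC_IVP(incval, 2));
		return 0;
	}
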
+
+/**
+ * txgbe_ptp_reset - reset the PTP related hardware configuration
+ * @adapter: the txgbe private board structure
+ *
+ * When the MAC resets, all of the hardware configuration for timesync is
+ * reset. This function should be called to re-enable the device for PTP,
+ * using the last known settings. However, we do lose the current clock time,
+ * so we fall back to resetting it based on the kernel's realtime clock.
+ *
+ * This function will maintain the hwtstamp_config settings, and it retriggers
+ * the SDP output if it's enabled.
+ */
+void txgbe_ptp_reset(struct txgbe_adapter *adapter)
+{
+	unsigned long flags;
+
+	/* reset the hardware timestamping mode */
+	txgbe_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
+	txgbe_ptp_start_cyclecounter(adapter);
+
+	spin_lock_irqsave(&adapter->tmreg_lock, flags);
+	timecounter_init(&adapter->hw_tc, &adapter->hw_cc,
+			 ktime_to_ns(ktime_get_real()));
+	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+	adapter->last_overflow_check = jiffies;
+}
+
+/**
+ * txgbe_ptp_create_clock - create and register the PTP clock device
+ * @adapter: the txgbe private adapter structure
+ *
+ * This function performs setup of the user entry point function table and
+ * initializes the PTP clock device used by userspace to access the clock-like
+ * features of the PTP core. It will be called by txgbe_ptp_init, and may
+ * re-use a previously initialized clock (such as during a suspend/resume
+ * cycle).
+ */
+static long txgbe_ptp_create_clock(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	long err;
+
+	/* do nothing if we already have a clock device */
+	if (!IS_ERR_OR_NULL(adapter->ptp_clock))
+		return 0;
+
+	snprintf(adapter->ptp_caps.name, sizeof(adapter->ptp_caps.name),
+		 "%s", netdev->name);
+	adapter->ptp_caps.owner = THIS_MODULE;
+	adapter->ptp_caps.max_adj = 250000000; /* parts per billion */
+	adapter->ptp_caps.n_alarm = 0;
+	adapter->ptp_caps.n_ext_ts = 0;
+	adapter->ptp_caps.n_per_out = 0;
+	adapter->ptp_caps.pps = 0;
+	adapter->ptp_caps.adjfreq = txgbe_ptp_adjfreq;
+	adapter->ptp_caps.adjtime = txgbe_ptp_adjtime;
+	adapter->ptp_caps.gettime64 = txgbe_ptp_gettime64;
+	adapter->ptp_caps.settime64 = txgbe_ptp_settime64;
+
+	adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
+						pci_dev_to_dev(adapter->pdev));
+	if (IS_ERR(adapter->ptp_clock)) {
+		err = PTR_ERR(adapter->ptp_clock);
+		adapter->ptp_clock = NULL;
+		txgbe_dev_err("ptp_clock_register failed\n");
+		return err;
+	}
+
+	txgbe_dev_info("registered PHC device on %s\n", netdev->name);
+
+	/* Set the default timestamp mode to disabled here. We do this in
+	 * create_clock instead of initialization, because we don't want to
+	 * override the previous settings during a suspend/resume cycle.
+	 */
+	adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+	adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+
+	return 0;
+}
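
Once registered, the PHC shows up as a /dev/ptpN character device (the index
is reported by `ethtool -T`). Reading it from userspace uses only the generic
dynamic-clock ABI; a minimal sketch, with the device path illustrative:

	#include <fcntl.h>
	#include <stdio.h>
	#include <time.h>

	#define CLOCKFD 3
	#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

	int main(void)
	{
		int fd = open("/dev/ptp0", O_RDWR);
		struct timespec ts;

		if (fd < 0 || clock_gettime(FD_TO_CLOCKID(fd), &ts))
			return 1;
		printf("PHC time: %lld.%09ld\n",
		       (long long)ts.tv_sec, ts.tv_nsec);
		return 0;
	}
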
+
+/**
+ * txgbe_ptp_init - set up PTP support
+ * @adapter: the txgbe private adapter structure
+ *
+ * This function performs the required steps for enabling ptp
+ * support. If ptp support has already been loaded it simply calls the
+ * cyclecounter init routine and exits.
+ */
+void txgbe_ptp_init(struct txgbe_adapter *adapter)
+{
+	/* initialize the spin lock first, since the user might call the clock
+	 * functions any time after we've initialized the ptp clock device.
+	 */
+	spin_lock_init(&adapter->tmreg_lock);
+
+	/* obtain a ptp clock device, or re-use an existing device */
+	if (txgbe_ptp_create_clock(adapter))
+		return;
+
+	/* we have a clock, so we can initialize work for timestamps now */
+	INIT_WORK(&adapter->ptp_tx_work, txgbe_ptp_tx_hwtstamp_work);
+
+	/* reset the ptp related hardware bits */
+	txgbe_ptp_reset(adapter);
+
+	/* enter the TXGBE_PTP_RUNNING state */
+	set_bit(__TXGBE_PTP_RUNNING, &adapter->state);
+}
+
+/**
+ * txgbe_ptp_suspend - stop ptp work items
+ * @adapter: pointer to adapter struct
+ *
+ * This function suspends ptp activity, and prevents more work from being
+ * generated, but does not destroy the clock device.
+ */
+void txgbe_ptp_suspend(struct txgbe_adapter *adapter)
+{
+	/* leave the TXGBE_PTP_RUNNING state */
+	if (!test_and_clear_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+		return;
+
+	cancel_work_sync(&adapter->ptp_tx_work);
+	txgbe_ptp_clear_tx_timestamp(adapter);
+}
+
+/**
+ * txgbe_ptp_stop - destroy the ptp_clock device
+ * @adapter: pointer to adapter struct
+ *
+ * Completely destroy the ptp_clock device, and disable all PTP related
+ * features. Intended to be run when the device is being closed.
+ */
+void txgbe_ptp_stop(struct txgbe_adapter *adapter)
+{
+	/* first, suspend ptp activity */
+	txgbe_ptp_suspend(adapter);
+
+	/* now destroy the ptp clock device */
+	if (adapter->ptp_clock) {
+		ptp_clock_unregister(adapter->ptp_clock);
+		adapter->ptp_clock = NULL;
+		txgbe_dev_info("removed PHC on %s\n",
+			       adapter->netdev->name);
+	}
+}
-- 
2.27.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 10/14] net: txgbe: Add ethtool support
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (8 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 09/14] net: txgbe: Support PTP Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-12 12:11   ` Andrew Lunn
  2022-05-11  3:26 ` [PATCH net-next 11/14] net: txgbe: Support PCIe recovery Jiawen Wu
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support for the following ethtool commands:

ethtool -i|--driver DEVNAME
ethtool -s|--change DEVNAME
ethtool -a|--show-pause DEVNAME
ethtool -A|--pause DEVNAME
ethtool -c|--show-coalesce DEVNAME
ethtool -C|--coalesce DEVNAME
ethtool -g|--show-ring DEVNAME
ethtool -G|--set-ring DEVNAME
ethtool -k|--show-features DEVNAME
ethtool -K|--features DEVNAME
ethtool -d|--register-dump DEVNAME
ethtool -e|--eeprom-dump DEVNAME
ethtool -E|--change-eeprom DEVNAME
ethtool -r|--negotiate DEVNAME
ethtool -p|--identify DEVNAME
ethtool -t|--test DEVNAME
ethtool -n|-u|--show-nfc|--show-ntuple DEVNAME
ethtool -N|-U|--config-nfc|--config-ntuple DEVNAME
ethtool -T|--show-time-stamping DEVNAME
ethtool -x|--show-rxfh-indir|--show-rxfh DEVNAME
ethtool -X|--set-rxfh-indir|--rxfh DEVNAME
ethtool -f|--flash DEVNAME
ethtool -l|--show-channels DEVNAME
ethtool -L|--set-channels DEVNAME
ethtool -m|--dump-module-eeprom|--module-info DEVNAME

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |    2 +-
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |   37 +
 .../ethernet/wangxun/txgbe/txgbe_ethtool.c    | 3188 +++++++++++++++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c |  541 +++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   36 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  169 +
 .../net/ethernet/wangxun/txgbe/txgbe_phy.c    |   16 +
 .../net/ethernet/wangxun/txgbe/txgbe_phy.h    |   10 +
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |   53 +
 9 files changed, 4051 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index f233bc45575c..43e5d23109d7 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -6,6 +6,6 @@
 
 obj-$(CONFIG_TXGBE) += txgbe.o
 
-txgbe-objs := txgbe_main.o \
+txgbe-objs := txgbe_main.o txgbe_ethtool.o \
               txgbe_hw.o txgbe_phy.o \
               txgbe_lib.o txgbe_ptp.o
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 3b429cca494e..709805f7743c 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -453,6 +453,10 @@ struct txgbe_adapter {
 	struct txgbe_ring_feature ring_feature[RING_F_ARRAY_SIZE];
 	struct msix_entry *msix_entries;
 
+	u64 test_icr;
+	struct txgbe_ring test_tx_ring;
+	struct txgbe_ring test_rx_ring;
+
 	/* structs defined in txgbe_hw.h */
 	struct txgbe_hw hw;
 	u16 msg_enable;
@@ -478,10 +482,13 @@ struct txgbe_adapter {
 	spinlock_t fdir_perfect_lock;
 
 	u8 __iomem *io_addr;    /* Mainly for iounmap use */
+	u32 wol;
 
 	char eeprom_id[32];
+	u16 eeprom_cap;
 	bool netdev_registered;
 	u32 interrupt_event;
+	u32 led_reg;
 
 	struct ptp_clock *ptp_clock;
 	struct ptp_clock_info ptp_caps;
@@ -559,12 +566,18 @@ struct txgbe_cb {
 
 #define TXGBE_CB(skb) ((struct txgbe_cb *)(skb)->cb)
 
+/* needed by txgbe_ethtool.c */
+extern char txgbe_driver_name[];
+
 void txgbe_irq_disable(struct txgbe_adapter *adapter);
 void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush);
 int txgbe_open(struct net_device *netdev);
 int txgbe_close(struct net_device *netdev);
 void txgbe_up(struct txgbe_adapter *adapter);
 void txgbe_down(struct txgbe_adapter *adapter);
+void txgbe_reinit_locked(struct txgbe_adapter *adapter);
+void txgbe_reset(struct txgbe_adapter *adapter);
+void txgbe_set_ethtool_ops(struct net_device *netdev);
 int txgbe_setup_rx_resources(struct txgbe_ring *rx_ring);
 int txgbe_setup_tx_resources(struct txgbe_ring *tx_ring);
 void txgbe_free_rx_resources(struct txgbe_ring *rx_ring);
@@ -573,6 +586,7 @@ void txgbe_configure_rx_ring(struct txgbe_adapter *adapter,
 			     struct txgbe_ring *ring);
 void txgbe_configure_tx_ring(struct txgbe_adapter *adapter,
 			     struct txgbe_ring *ring);
+void txgbe_update_stats(struct txgbe_adapter *adapter);
 int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter);
 void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter);
 void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter);
@@ -584,16 +598,22 @@ void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *ring,
 				      struct txgbe_tx_buffer *tx_buffer);
 void txgbe_alloc_rx_buffers(struct txgbe_ring *rx_ring, u16 cleaned_count);
 void txgbe_set_rx_mode(struct net_device *netdev);
+int txgbe_setup_tc(struct net_device *dev, u8 tc);
 void txgbe_tx_ctxtdesc(struct txgbe_ring *tx_ring, u32 vlan_macip_lens,
 		       u32 fcoe_sof_eof, u32 type_tucmd, u32 mss_l4len_idx);
+void txgbe_do_reset(struct net_device *netdev);
 void txgbe_write_eitr(struct txgbe_q_vector *q_vector);
 int txgbe_poll(struct napi_struct *napi, int budget);
+void txgbe_disable_rx_queue(struct txgbe_adapter *adapter,
+			    struct txgbe_ring *ring);
 
 static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring)
 {
 	return netdev_get_tx_queue(ring->netdev, ring->queue_index);
 }
 
+int txgbe_wol_supported(struct txgbe_adapter *adapter);
+
 void txgbe_ptp_init(struct txgbe_adapter *adapter);
 void txgbe_ptp_stop(struct txgbe_adapter *adapter);
 void txgbe_ptp_suspend(struct txgbe_adapter *adapter);
@@ -606,6 +626,8 @@ void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter);
 void txgbe_ptp_reset(struct txgbe_adapter *adapter);
 void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter);
 
+void txgbe_store_reta(struct txgbe_adapter *adapter);
+
 /**
  * interrupt masking operations. Each bit in PX_ICn corresponds to an interrupt.
  * Disable an interrupt by writing to PX_IMS with the corresponding bit=1
@@ -644,6 +666,18 @@ static inline void txgbe_intr_disable(struct txgbe_hw *hw, u64 qmask)
 	/* skip the flush */
 }
 
+static inline void txgbe_intr_trigger(struct txgbe_hw *hw, u64 qmask)
+{
+	u32 mask;
+
+	mask = (qmask & 0xFFFFFFFF);
+	if (mask)
+		wr32(hw, TXGBE_PX_ICS(0), mask);
+	mask = (qmask >> 32);
+	if (mask)
+		wr32(hw, TXGBE_PX_ICS(1), mask);
+}
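
A sketch of how the ethtool interrupt self-test (added later in this patch)
can pair this helper with the test_icr field above; the pattern mirrors
ixgbe's interrupt test and the function name is invented for illustration:

	static bool example_test_one_irq(struct txgbe_adapter *adapter,
					 int vector)
	{
		u64 mask = BIT_ULL(vector);

		adapter->test_icr = 0;
		txgbe_intr_trigger(&adapter->hw, mask);	/* fire via PX_ICS */
		usleep_range(10000, 20000);		/* let the ISR run */

		return (adapter->test_icr & mask) != 0;	/* did it arrive? */
	}
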
+
 #define TXGBE_RING_SIZE(R) ((R)->count < TXGBE_MAX_TXD ? (R)->count / 128 : 0)
 
 #define TXGBE_CPU_TO_BE16(_x) cpu_to_be16(_x)
@@ -713,8 +747,11 @@ static inline struct device *pci_dev_to_dev(struct pci_dev *pdev)
 #define TXGBE_FAILED_READ_CFG_WORD  0xffffU
 #define TXGBE_FAILED_READ_CFG_BYTE  0xffU
 
+extern u32 txgbe_read_reg(struct txgbe_hw *hw, u32 reg, bool quiet);
 extern u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg);
 
+#define TXGBE_R32_Q(h, r) txgbe_read_reg(h, r, true)
+
 #define TXGBE_HTONL(_i) htonl(_i)
 #define TXGBE_NTOHL(_i) ntohl(_i)
 #define TXGBE_NTOHS(_i) ntohs(_i)
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
new file mode 100644
index 000000000000..fcb894cf223a
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
@@ -0,0 +1,3188 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
+#include <linux/firmware.h>
+#include <linux/net_tstamp.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+
+#include "txgbe.h"
+#include "txgbe_hw.h"
+#include "txgbe_phy.h"
+
+#define TXGBE_ALL_RAR_ENTRIES 16
+#define TXGBE_RSS_INDIR_TBL_MAX 64
+
+enum { NETDEV_STATS, TXGBE_STATS };
+
+struct txgbe_stats {
+	char stat_string[ETH_GSTRING_LEN];
+	int type;
+	int sizeof_stat;
+	int stat_offset;
+};
+
+#define TXGBE_STAT(m)		TXGBE_STATS, \
+				sizeof(((struct txgbe_adapter *)0)->m), \
+				offsetof(struct txgbe_adapter, m)
+#define TXGBE_NETDEV_STAT(m)	NETDEV_STATS, \
+				sizeof(((struct rtnl_link_stats64 *)0)->m), \
+				offsetof(struct rtnl_link_stats64, m)
+
+static const struct txgbe_stats txgbe_gstrings_stats[] = {
+	{"rx_packets", TXGBE_NETDEV_STAT(rx_packets)},
+	{"tx_packets", TXGBE_NETDEV_STAT(tx_packets)},
+	{"rx_bytes", TXGBE_NETDEV_STAT(rx_bytes)},
+	{"tx_bytes", TXGBE_NETDEV_STAT(tx_bytes)},
+	{"rx_pkts_nic", TXGBE_STAT(stats.gprc)},
+	{"tx_pkts_nic", TXGBE_STAT(stats.gptc)},
+	{"rx_bytes_nic", TXGBE_STAT(stats.gorc)},
+	{"tx_bytes_nic", TXGBE_STAT(stats.gotc)},
+	{"lsc_int", TXGBE_STAT(lsc_int)},
+	{"tx_busy", TXGBE_STAT(tx_busy)},
+	{"non_eop_descs", TXGBE_STAT(non_eop_descs)},
+	{"rx_errors", TXGBE_NETDEV_STAT(rx_errors)},
+	{"tx_errors", TXGBE_NETDEV_STAT(tx_errors)},
+	{"rx_dropped", TXGBE_NETDEV_STAT(rx_dropped)},
+	{"tx_dropped", TXGBE_NETDEV_STAT(tx_dropped)},
+	{"multicast", TXGBE_NETDEV_STAT(multicast)},
+	{"broadcast", TXGBE_STAT(stats.bprc)},
+	{"rx_no_buffer_count", TXGBE_STAT(stats.rnbc[0]) },
+	{"collisions", TXGBE_NETDEV_STAT(collisions)},
+	{"rx_over_errors", TXGBE_NETDEV_STAT(rx_over_errors)},
+	{"rx_crc_errors", TXGBE_NETDEV_STAT(rx_crc_errors)},
+	{"rx_frame_errors", TXGBE_NETDEV_STAT(rx_frame_errors)},
+	{"hw_rsc_aggregated", TXGBE_STAT(rsc_total_count)},
+	{"hw_rsc_flushed", TXGBE_STAT(rsc_total_flush)},
+	{"fdir_match", TXGBE_STAT(stats.fdirmatch)},
+	{"fdir_miss", TXGBE_STAT(stats.fdirmiss)},
+	{"fdir_overflow", TXGBE_STAT(fdir_overflow)},
+	{"rx_fifo_errors", TXGBE_NETDEV_STAT(rx_fifo_errors)},
+	{"rx_missed_errors", TXGBE_NETDEV_STAT(rx_missed_errors)},
+	{"tx_aborted_errors", TXGBE_NETDEV_STAT(tx_aborted_errors)},
+	{"tx_carrier_errors", TXGBE_NETDEV_STAT(tx_carrier_errors)},
+	{"tx_fifo_errors", TXGBE_NETDEV_STAT(tx_fifo_errors)},
+	{"tx_heartbeat_errors", TXGBE_NETDEV_STAT(tx_heartbeat_errors)},
+	{"tx_timeout_count", TXGBE_STAT(tx_timeout_count)},
+	{"tx_restart_queue", TXGBE_STAT(restart_queue)},
+	{"rx_long_length_count", TXGBE_STAT(stats.roc)},
+	{"rx_short_length_count", TXGBE_STAT(stats.ruc)},
+	{"tx_flow_control_xon", TXGBE_STAT(stats.lxontxc)},
+	{"rx_flow_control_xon", TXGBE_STAT(stats.lxonrxc)},
+	{"tx_flow_control_xoff", TXGBE_STAT(stats.lxofftxc)},
+	{"rx_flow_control_xoff", TXGBE_STAT(stats.lxoffrxc)},
+	{"rx_csum_offload_good_count", TXGBE_STAT(hw_csum_rx_good)},
+	{"rx_csum_offload_errors", TXGBE_STAT(hw_csum_rx_error)},
+	{"alloc_rx_page_failed", TXGBE_STAT(alloc_rx_page_failed)},
+	{"alloc_rx_buff_failed", TXGBE_STAT(alloc_rx_buff_failed)},
+	{"rx_no_dma_resources", TXGBE_STAT(hw_rx_no_dma_resources)},
+	{"os2bmc_rx_by_bmc", TXGBE_STAT(stats.o2bgptc)},
+	{"os2bmc_tx_by_bmc", TXGBE_STAT(stats.b2ospc)},
+	{"os2bmc_tx_by_host", TXGBE_STAT(stats.o2bspc)},
+	{"os2bmc_rx_by_host", TXGBE_STAT(stats.b2ogprc)},
+	{"tx_hwtstamp_timeouts", TXGBE_STAT(tx_hwtstamp_timeouts)},
+	{"rx_hwtstamp_cleared", TXGBE_STAT(rx_hwtstamp_cleared)},
+};
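
This table drives the string and value reporting: each entry records whether
the counter lives in the netdev stats or in the adapter struct, plus its size
and byte offset. A sketch of how such a table is typically walked (the
patch's real txgbe_get_ethtool_stats sits later in this file; the helper name
here is invented):

	static void example_fill_stats(struct txgbe_adapter *adapter,
				       struct rtnl_link_stats64 *net_stats,
				       u64 *data)
	{
		unsigned int i;

		for (i = 0; i < ARRAY_SIZE(txgbe_gstrings_stats); i++) {
			const struct txgbe_stats *s = &txgbe_gstrings_stats[i];
			char *base = (s->type == NETDEV_STATS) ?
				     (char *)net_stats : (char *)adapter;

			data[i] = (s->sizeof_stat == sizeof(u64)) ?
				  *(u64 *)(base + s->stat_offset) :
				  *(u32 *)(base + s->stat_offset);
		}
	}
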
+
+/* txgbe allocates num_tx_queues and num_rx_queues symmetrically so
+ * we define TXGBE_NUM_RX_QUEUES to evaluate to num_tx_queues. This is
+ * used because we do not have a good way to get the max number of
+ * rx queues with CONFIG_RPS disabled.
+ */
+#define TXGBE_NUM_RX_QUEUES netdev->num_tx_queues
+#define TXGBE_NUM_TX_QUEUES netdev->num_tx_queues
+
+#define TXGBE_QUEUE_STATS_LEN ( \
+		(TXGBE_NUM_TX_QUEUES + TXGBE_NUM_RX_QUEUES) * \
+		(sizeof(struct txgbe_queue_stats) / sizeof(u64)))
+#define TXGBE_GLOBAL_STATS_LEN  ARRAY_SIZE(txgbe_gstrings_stats)
+#define TXGBE_PB_STATS_LEN ( \
+		(sizeof(((struct txgbe_adapter *)0)->stats.pxonrxc) + \
+		 sizeof(((struct txgbe_adapter *)0)->stats.pxontxc) + \
+		 sizeof(((struct txgbe_adapter *)0)->stats.pxoffrxc) + \
+		 sizeof(((struct txgbe_adapter *)0)->stats.pxofftxc)) \
+		/ sizeof(u64))
+#define TXGBE_STATS_LEN (TXGBE_GLOBAL_STATS_LEN + \
+			 TXGBE_PB_STATS_LEN + \
+			 TXGBE_QUEUE_STATS_LEN)
+
+static const char txgbe_gstrings_test[][ETH_GSTRING_LEN] = {
+	"Register test  (offline)", "Eeprom test    (offline)",
+	"Interrupt test (offline)", "Loopback test  (offline)",
+	"Link test   (on/offline)"
+};
+
+#define TXGBE_TEST_LEN  (sizeof(txgbe_gstrings_test) / ETH_GSTRING_LEN)
+
+/* currently supported speeds for 10G */
+#define ADVERTISED_MASK_10G (SUPPORTED_10000baseT_Full | \
+			     SUPPORTED_10000baseKX4_Full | \
+			     SUPPORTED_10000baseKR_Full)
+
+#define txgbe_isbackplane(type)  ((type) == txgbe_media_type_backplane)
+
+static u32 txgbe_backplane_type(struct txgbe_hw *hw)
+{
+	u32 mode = 0;
+
+	switch (hw->phy.link_mode) {
+	case TXGBE_PHYSICAL_LAYER_10GBASE_KX4:
+		mode = SUPPORTED_10000baseKX4_Full;
+		break;
+	case TXGBE_PHYSICAL_LAYER_10GBASE_KR:
+		mode = SUPPORTED_10000baseKR_Full;
+		break;
+	case TXGBE_PHYSICAL_LAYER_1000BASE_KX:
+		mode = SUPPORTED_1000baseKX_Full;
+		break;
+	default:
+		mode = (SUPPORTED_10000baseKX4_Full |
+			SUPPORTED_10000baseKR_Full |
+			SUPPORTED_1000baseKX_Full);
+		break;
+	}
+	return mode;
+}
+
+int txgbe_get_link_ksettings(struct net_device *netdev,
+			     struct ethtool_link_ksettings *cmd)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 supported_link;
+	u32 link_speed = 0;
+	bool autoneg = false;
+	u32 supported, advertising;
+	bool link_up;
+
+	ethtool_convert_link_mode_to_legacy_u32(&supported,
+						cmd->link_modes.supported);
+
+	TCALL(hw, mac.ops.get_link_capabilities, &supported_link, &autoneg);
+
+	if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
+		autoneg = adapter->backplane_an ? 1 : 0;
+	else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII)
+		autoneg = adapter->an37 ? 1 : 0;
+
+	/* set the supported link speeds */
+	if (supported_link & TXGBE_LINK_SPEED_10GB_FULL)
+		supported |= (txgbe_isbackplane(hw->phy.media_type)) ?
+			txgbe_backplane_type(hw) : SUPPORTED_10000baseT_Full;
+	if (supported_link & TXGBE_LINK_SPEED_1GB_FULL)
+		supported |= (txgbe_isbackplane(hw->phy.media_type)) ?
+			SUPPORTED_1000baseKX_Full : SUPPORTED_1000baseT_Full;
+	if (supported_link & TXGBE_LINK_SPEED_100_FULL)
+		supported |= SUPPORTED_100baseT_Full;
+	if (supported_link & TXGBE_LINK_SPEED_10_FULL)
+		supported |= SUPPORTED_10baseT_Full;
+
+	/* default advertised speed if phy.autoneg_advertised isn't set */
+	advertising = supported;
+
+	/* set the advertised speeds */
+	if (hw->phy.autoneg_advertised) {
+		advertising = 0;
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL)
+			advertising |= ADVERTISED_100baseT_Full;
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL)
+			advertising |= (supported & ADVERTISED_MASK_10G);
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) {
+			if (supported & SUPPORTED_1000baseKX_Full)
+				advertising |= ADVERTISED_1000baseKX_Full;
+			else
+				advertising |= ADVERTISED_1000baseT_Full;
+		}
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL)
+			advertising |= ADVERTISED_10baseT_Full;
+	} else {
+		/* default modes in case phy.autoneg_advertised isn't set */
+		if (supported_link & TXGBE_LINK_SPEED_10GB_FULL)
+			advertising |= ADVERTISED_10000baseT_Full;
+		if (supported_link & TXGBE_LINK_SPEED_1GB_FULL)
+			advertising |= ADVERTISED_1000baseT_Full;
+		if (supported_link & TXGBE_LINK_SPEED_100_FULL)
+			advertising |= ADVERTISED_100baseT_Full;
+		if (hw->phy.multispeed_fiber && !autoneg) {
+			if (supported_link & TXGBE_LINK_SPEED_10GB_FULL)
+				advertising = ADVERTISED_10000baseT_Full;
+		}
+		if (supported_link & TXGBE_LINK_SPEED_10_FULL)
+			advertising |= ADVERTISED_10baseT_Full;
+	}
+
+	if (autoneg) {
+		supported |= SUPPORTED_Autoneg;
+		advertising |= ADVERTISED_Autoneg;
+		cmd->base.autoneg = AUTONEG_ENABLE;
+	} else {
+		cmd->base.autoneg = AUTONEG_DISABLE;
+	}
+
+	/* Determine the remaining settings based on the PHY type. */
+	switch (adapter->hw.phy.type) {
+	case txgbe_phy_tn:
+	case txgbe_phy_aq:
+	case txgbe_phy_cu_unknown:
+		supported |= SUPPORTED_TP;
+		advertising |= ADVERTISED_TP;
+		cmd->base.port = PORT_TP;
+		break;
+	case txgbe_phy_qt:
+		supported |= SUPPORTED_FIBRE;
+		advertising |= ADVERTISED_FIBRE;
+		cmd->base.port = PORT_FIBRE;
+		break;
+	case txgbe_phy_nl:
+	case txgbe_phy_sfp_passive_tyco:
+	case txgbe_phy_sfp_passive_unknown:
+	case txgbe_phy_sfp_ftl:
+	case txgbe_phy_sfp_avago:
+	case txgbe_phy_sfp_intel:
+	case txgbe_phy_sfp_unknown:
+		switch (adapter->hw.phy.sfp_type) {
+			/* SFP+ devices, further checking needed */
+		case txgbe_sfp_type_da_cu:
+		case txgbe_sfp_type_da_cu_core0:
+		case txgbe_sfp_type_da_cu_core1:
+			supported |= SUPPORTED_FIBRE;
+			advertising |= ADVERTISED_FIBRE;
+			cmd->base.port = PORT_DA;
+			break;
+		case txgbe_sfp_type_sr:
+		case txgbe_sfp_type_lr:
+		case txgbe_sfp_type_srlr_core0:
+		case txgbe_sfp_type_srlr_core1:
+		case txgbe_sfp_type_1g_sx_core0:
+		case txgbe_sfp_type_1g_sx_core1:
+		case txgbe_sfp_type_1g_lx_core0:
+		case txgbe_sfp_type_1g_lx_core1:
+			supported |= SUPPORTED_FIBRE;
+			advertising |= ADVERTISED_FIBRE;
+			cmd->base.port = PORT_FIBRE;
+			break;
+		case txgbe_sfp_type_not_present:
+			supported |= SUPPORTED_FIBRE;
+			advertising |= ADVERTISED_FIBRE;
+			cmd->base.port = PORT_NONE;
+			break;
+		case txgbe_sfp_type_1g_cu_core0:
+		case txgbe_sfp_type_1g_cu_core1:
+			supported |= SUPPORTED_TP;
+			advertising |= ADVERTISED_TP;
+			cmd->base.port = PORT_TP;
+			break;
+		case txgbe_sfp_type_unknown:
+		default:
+			supported |= SUPPORTED_FIBRE;
+			advertising |= ADVERTISED_FIBRE;
+			cmd->base.port = PORT_OTHER;
+			break;
+		}
+		break;
+	case txgbe_phy_xaui:
+		supported |= SUPPORTED_TP;
+		advertising |= ADVERTISED_TP;
+		cmd->base.port = PORT_TP;
+		break;
+	case txgbe_phy_unknown:
+	case txgbe_phy_generic:
+	case txgbe_phy_sfp_unsupported:
+	default:
+		supported |= SUPPORTED_FIBRE;
+		advertising |= ADVERTISED_FIBRE;
+		cmd->base.port = PORT_OTHER;
+		break;
+	}
+
+	if (!in_interrupt()) {
+		TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+	} else {
+		/* this case is a special workaround for RHEL5 bonding
+		 * that calls this routine from interrupt context
+		 */
+		link_speed = adapter->link_speed;
+		link_up = adapter->link_up;
+	}
+
+	supported |= SUPPORTED_Pause;
+
+	switch (hw->fc.requested_mode) {
+	case txgbe_fc_full:
+		advertising |= ADVERTISED_Pause;
+		break;
+	case txgbe_fc_rx_pause:
+		advertising |= ADVERTISED_Pause |
+				ADVERTISED_Asym_Pause;
+		break;
+	case txgbe_fc_tx_pause:
+		advertising |= ADVERTISED_Asym_Pause;
+		break;
+	default:
+		advertising &= ~(ADVERTISED_Pause |
+				 ADVERTISED_Asym_Pause);
+	}
+
+	if (link_up) {
+		switch (link_speed) {
+		case TXGBE_LINK_SPEED_10GB_FULL:
+			cmd->base.speed = SPEED_10000;
+			break;
+		case TXGBE_LINK_SPEED_1GB_FULL:
+			cmd->base.speed = SPEED_1000;
+			break;
+		case TXGBE_LINK_SPEED_100_FULL:
+			cmd->base.speed = SPEED_100;
+			break;
+		case TXGBE_LINK_SPEED_10_FULL:
+			cmd->base.speed = SPEED_10;
+			break;
+		default:
+			break;
+		}
+		cmd->base.duplex = DUPLEX_FULL;
+	} else {
+		cmd->base.speed = SPEED_UNKNOWN;
+		cmd->base.duplex = DUPLEX_UNKNOWN;
+	}
+
+	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+						supported);
+	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising,
+						advertising);
+	return 0;
+}
+
+static int txgbe_set_link_ksettings(struct net_device *netdev,
+				    const struct ethtool_link_ksettings *cmd)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 advertised, old;
+	s32 err = 0;
+	u32 supported, advertising;
+
+	ethtool_convert_link_mode_to_legacy_u32(&supported,
+						cmd->link_modes.supported);
+	ethtool_convert_link_mode_to_legacy_u32(&advertising,
+						cmd->link_modes.advertising);
+
+	if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
+		adapter->backplane_an = cmd->base.autoneg ? 1 : 0;
+	else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII)
+		adapter->an37 = cmd->base.autoneg ? 1 : 0;
+
+	if (hw->phy.multispeed_fiber) {
+		/* this function does not support duplex forcing, but can
+		 * limit the advertising of the adapter to the specified speed
+		 */
+		if (advertising & ~supported)
+			return -EINVAL;
+
+		/* only allow one speed at a time if no autoneg */
+		if (!cmd->base.autoneg && hw->phy.multispeed_fiber) {
+			if (advertising == (ADVERTISED_10000baseT_Full |
+			    ADVERTISED_1000baseT_Full))
+				return -EINVAL;
+		}
+		old = hw->phy.autoneg_advertised;
+		advertised = 0;
+		if (advertising & ADVERTISED_10000baseT_Full)
+			advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+		if (advertising & ADVERTISED_1000baseT_Full)
+			advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+		if (advertising & ADVERTISED_100baseT_Full)
+			advertised |= TXGBE_LINK_SPEED_100_FULL;
+
+		if (advertising & ADVERTISED_10baseT_Full)
+			advertised |= TXGBE_LINK_SPEED_10_FULL;
+
+		if (old == advertised)
+			return err;
+		/* this sets the link speed and restarts auto-neg */
+		while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+			usleep_range(1000, 2000);
+
+		hw->mac.autotry_restart = true;
+		err = TCALL(hw, mac.ops.setup_link, advertised, true);
+		if (err) {
+			txgbe_info(probe, "setup link failed with code %d\n", err);
+			TCALL(hw, mac.ops.setup_link, old, true);
+		}
+		if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)
+			TCALL(hw, mac.ops.flap_tx_laser);
+		clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+	} else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4 ||
+		   (hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) {
+		if (!cmd->base.autoneg) {
+			if (advertising == (ADVERTISED_10000baseKR_Full |
+					    ADVERTISED_1000baseKX_Full |
+					    ADVERTISED_10000baseKX4_Full))
+				return -EINVAL;
+		} else {
+			err = txgbe_set_link_to_kr(hw, 1);
+			return err;
+		}
+		advertised = 0;
+		if (advertising & ADVERTISED_10000baseKR_Full) {
+			err = txgbe_set_link_to_kr(hw, 1);
+			advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+		} else if (advertising & ADVERTISED_10000baseKX4_Full) {
+			err = txgbe_set_link_to_kx4(hw, 1);
+			advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+		} else if (advertising & ADVERTISED_1000baseKX_Full) {
+			advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+			err = txgbe_set_link_to_kx(hw, TXGBE_LINK_SPEED_1GB_FULL, 0);
+		}
+	} else {
+		/* in this case we currently only support 10Gb/FULL */
+		u32 speed = cmd->base.speed;
+
+		if (cmd->base.autoneg == AUTONEG_ENABLE ||
+		    advertising != ADVERTISED_10000baseT_Full ||
+		    (speed + cmd->base.duplex != SPEED_10000 + DUPLEX_FULL))
+			return -EINVAL;
+	}
+
+	return err;
+}
+
+static void txgbe_get_pauseparam(struct net_device *netdev,
+				 struct ethtool_pauseparam *pause)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (txgbe_device_supports_autoneg_fc(hw) &&
+	    !hw->fc.disable_fc_autoneg)
+		pause->autoneg = 1;
+	else
+		pause->autoneg = 0;
+
+	if (hw->fc.current_mode == txgbe_fc_rx_pause) {
+		pause->rx_pause = 1;
+	} else if (hw->fc.current_mode == txgbe_fc_tx_pause) {
+		pause->tx_pause = 1;
+	} else if (hw->fc.current_mode == txgbe_fc_full) {
+		pause->rx_pause = 1;
+		pause->tx_pause = 1;
+	}
+}
+
+static int txgbe_set_pauseparam(struct net_device *netdev,
+				struct ethtool_pauseparam *pause)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_fc_info fc = hw->fc;
+
+	/* some devices do not support autoneg of flow control */
+	if (pause->autoneg == AUTONEG_ENABLE &&
+	    !txgbe_device_supports_autoneg_fc(hw))
+		return -EINVAL;
+
+	fc.disable_fc_autoneg = (pause->autoneg != AUTONEG_ENABLE);
+
+	if ((pause->rx_pause && pause->tx_pause) || pause->autoneg)
+		fc.requested_mode = txgbe_fc_full;
+	else if (pause->rx_pause)
+		fc.requested_mode = txgbe_fc_rx_pause;
+	else if (pause->tx_pause)
+		fc.requested_mode = txgbe_fc_tx_pause;
+	else
+		fc.requested_mode = txgbe_fc_none;
+
+	/* if anything changed, update hw->fc and apply the new settings */
+	if (memcmp(&fc, &hw->fc, sizeof(struct txgbe_fc_info))) {
+		hw->fc = fc;
+		if (netif_running(netdev))
+			txgbe_reinit_locked(adapter);
+		else
+			txgbe_reset(adapter);
+	}
+
+	return 0;
+}
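
For completeness, the userspace side: `ethtool -A DEVNAME rx on tx on` issues
the standard ETHTOOL_SPAUSEPARAM ioctl, which lands in the handler above. A
minimal sketch (same headers as the drvinfo example in the commit message;
the interface name is illustrative and `sock` is an open AF_INET socket):

	struct ethtool_pauseparam pp = {
		.cmd = ETHTOOL_SPAUSEPARAM,
		.autoneg = 0,		/* AUTONEG_DISABLE */
		.rx_pause = 1,
		.tx_pause = 1,
	};
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&pp;
	ioctl(sock, SIOCETHTOOL, &ifr);
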
+
+static u32 txgbe_get_msglevel(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	return adapter->msg_enable;
+}
+
+static void txgbe_set_msglevel(struct net_device *netdev, u32 data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	adapter->msg_enable = data;
+}
+
+#define TXGBE_REGS_LEN  4096
+static int txgbe_get_regs_len(struct net_device __always_unused *netdev)
+{
+	return TXGBE_REGS_LEN * sizeof(u32);
+}
+
+#define TXGBE_GET_STAT(_A_, _R_)        (_A_->stats._R_)
+
+static void txgbe_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
+			   void *p)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 *regs_buff = p;
+	u32 i;
+	u32 id = 0;
+
+	memset(p, 0, TXGBE_REGS_LEN * sizeof(u32));
+	regs_buff[TXGBE_REGS_LEN - 1] = 0x55555555;
+
+	regs->version = hw->revision_id << 16 |
+			hw->device_id;
+
+	/* Global Registers */
+	/* chip control */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_PWR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_PF_SM);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_RST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_ST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_SWSM);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_RST_ST);
+	/* pvt sensor */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_EN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ALARM_THRE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_DALARM_THRE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_INT_EN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ALARM_ST);
+	/* Fmgr Register */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMD);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_DATA);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_STATUS);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_USR_CMD);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMDCFG0);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMDCFG1);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_ILDR_STATUS);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_ILDR_SWPTR);
+
+	/* Port Registers */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_PORT_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_PORT_ST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_EX_VTYPE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_VXLAN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_VXLAN_GPE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_GENEVE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TEREDO);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TCP_TIME);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_LED_CTL);
+	/* GPIO */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_DR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_DDR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTEN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTMASK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTSTATUS);
+	/* I2C */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CON);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TAR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_DATA_CMD);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SS_SCL_HCNT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SS_SCL_LCNT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SCL_HCNT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SCL_LCNT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_HS_SCL_HCNT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_INTR_STAT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_INTR_MASK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RAW_INTR_STAT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RX_TL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TX_TL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_INTR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_UNDER);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_OVER);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_TX_OVER);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RD_REQ);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_TX_ABRT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_DONE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_ACTIVITY);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_STOP_DET);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_START_DET);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_GEN_CALL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_ENABLE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_STATUS);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TXFLR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RXFLR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_HOLD);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TX_ABRT_SOURCE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_SETUP);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_ENABLE_STATUS);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SPKLEN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_HS_SPKLEN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SCL_STUCK_TIMEOUT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_STUCK_TIMEOUT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_SCL_STUCK_DET);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_DEVICE_ID);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_PARAM_1);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_VERSION);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_TYPE);
+	/* TX TPH */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_TDESC);
+	/* RX TPH */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RDESC);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RHDR);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RPL);
+
+	/* TDMA */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VF_TE(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VF_TE(1));
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PB_THRE(i));
+	for (i = 0; i < 4; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_LLQ(i));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_LB_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_LB_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_AS_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_AS_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MAC_AS_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MAC_AS_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_AS_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_AS_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_TCP_FLG_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_TCP_FLG_H);
+	for (i = 0; i < 64; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_INS(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETAG_INS(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PBWARB_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MMW);
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PBWARB_CFG(i));
+	for (i = 0; i < 128; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VM_CREDIT(i));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_FC_EOF);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_FC_SOF);
+
+	/* RDMA */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_ARB_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_VF_RE(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_VF_RE(1));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_RSC_CTL);
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_ARB_CFG(i));
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_PF_QDE(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_PF_HIDE(i));
+	}
+
+	/* RDB */
+	/* flow control */
+	for (i = 0; i < 4; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCV(i));
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCH(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCRT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCC);
+	/* receive packet buffer */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_WRAP);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_UP2TC);
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_SZ(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_MPCNT(i));
+	}
+	/* lli interrupt */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_LLI_THRE);
+	/* ring assignment */
+	for (i = 0; i < 64; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PL_CFG(i));
+	for (i = 0; i < 32; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSSTBL(i));
+	for (i = 0; i < 10; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSSRK(i));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSS_TC);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RA_CTL);
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_SA(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_DA(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_SDP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_CTL0(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_CTL1(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_SYN_CLS);
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_ETYPE_CLS(i));
+	/* fcoe redirection table */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FCRE_CTL);
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FCRE_TBL(i));
+	/* flow director */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_HKEY);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SKEY);
+	for (i = 0; i < 16; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FLEX_CFG(i));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FREE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_LEN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_USE_ST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FAIL_ST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_MATCH);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_MISS);
+	for (i = 0; i < 3; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_IP6(i));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SA);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_DA);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_PORT);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FLEX);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_HASH);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_CMD);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_DA4_MSK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SA4_MSK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_TCP_MSK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_UDP_MSK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SCTP_MSK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_IP6_MSK);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_OTHER_MSK);
+
+	/* PSR */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VM_CTL);
+	for (i = 0; i < 64; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VM_L2CTL(i));
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_ETYPE_SWC(i));
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MC_TBL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_UC_TBL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_TBL(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_AD_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_AD_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_VM_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_VM_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_IDX);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_VM_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_VM_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_IDX);
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_CTL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VLAN_L(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VLAN_H(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VM_L(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VM_H(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_STMPL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_STMPH);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_ATTRL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_ATTRH);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_MSGTYPE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IPV);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_CTL);
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IP4TBL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IP6TBL(i));
+	}
+	for (i = 0; i < 16; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_DW_L(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_DW_H(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_MSK(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_CTL);
+
+	/* TDB */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_RFCS);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PB_SZ(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_UP2TC);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PBRARB_CTL);
+	for (i = 0; i < 8; i++)
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PBRARB_CFG(i));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_MNG_TC);
+
+	/* tsec */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_ST);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_BUF_AF);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_BUF_AE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_MIN_IFG);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_STMPL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_STMPH);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_SYSTIML);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_SYSTIMH);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_INC);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_ADJL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_ADJH);
+
+	/* RSEC */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RSC_CTL);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RSC_ST);
+
+	/* BAR register */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IC);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_ICS);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IEN);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_GPIE);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IC(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IC(1));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ICS(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ICS(1));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMS(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMS(1));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMC(0));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMC(1));
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ISB_ADDR_L);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ISB_ADDR_H);
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ITRSEL);
+	for (i = 0; i < 64; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ITR(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IVAR(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IVAR);
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_BAL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_BAH(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_WP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_RP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_CFG(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_BAL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_BAH(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_WP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_RP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_CFG(i));
+	}
+}
+
+static int txgbe_get_eeprom_len(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	return adapter->hw.eeprom.word_size * 2;
+}
+
+static int txgbe_get_eeprom(struct net_device *netdev,
+			    struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 *eeprom_buff;
+	int first_word, last_word, eeprom_len;
+	int ret_val = 0;
+	u16 i;
+
+	if (eeprom->len == 0)
+		return -EINVAL;
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	first_word = eeprom->offset >> 1;
+	last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+	eeprom_len = last_word - first_word + 1;
+
+	eeprom_buff = kmalloc_array(eeprom_len, sizeof(u16), GFP_KERNEL);
+	if (!eeprom_buff)
+		return -ENOMEM;
+
+	ret_val = TCALL(hw, eeprom.ops.read_buffer, first_word, eeprom_len,
+			eeprom_buff);
+
+	/* Device's eeprom is always little-endian, word addressable */
+	for (i = 0; i < eeprom_len; i++)
+		le16_to_cpus(&eeprom_buff[i]);
+
+	memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 1), eeprom->len);
+	kfree(eeprom_buff);
+
+	return ret_val;
+}
+
+static int txgbe_set_eeprom(struct net_device *netdev,
+			    struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 *eeprom_buff;
+	void *ptr;
+	int max_len, first_word, last_word, ret_val = 0;
+	u16 i;
+
+	if (eeprom->len == 0)
+		return -EINVAL;
+
+	if (eeprom->magic != (hw->vendor_id | (hw->device_id << 16)))
+		return -EINVAL;
+
+	max_len = hw->eeprom.word_size * 2;
+
+	first_word = eeprom->offset >> 1;
+	last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+	eeprom_buff = kmalloc(max_len, GFP_KERNEL);
+	if (!eeprom_buff)
+		return -ENOMEM;
+
+	ptr = eeprom_buff;
+
+	if (eeprom->offset & 1) {
+		/* need read/modify/write of first changed EEPROM word
+		 * only the second byte of the word is being modified
+		 */
+		ret_val = TCALL(hw, eeprom.ops.read, first_word,
+				&eeprom_buff[0]);
+		if (ret_val)
+			goto err;
+
+		ptr++;
+	}
+	if (((eeprom->offset + eeprom->len) & 1) && ret_val == 0) {
+		/* need read/modify/write of last changed EEPROM word
+		 * only the first byte of the word is being modified
+		 */
+		ret_val = TCALL(hw, eeprom.ops.read, last_word,
+				&eeprom_buff[last_word - first_word]);
+		if (ret_val)
+			goto err;
+	}
+
+	/* Device's eeprom is always little-endian, word addressable */
+	for (i = 0; i < last_word - first_word + 1; i++)
+		le16_to_cpus(&eeprom_buff[i]);
+
+	memcpy(ptr, bytes, eeprom->len);
+
+	for (i = 0; i < last_word - first_word + 1; i++)
+		cpu_to_le16s(&eeprom_buff[i]);
+
+	ret_val = TCALL(hw, eeprom.ops.write_buffer, first_word,
+			last_word - first_word + 1,
+			eeprom_buff);
+
+	/* Update the checksum */
+	if (ret_val == 0)
+		TCALL(hw, eeprom.ops.update_checksum);
+
+err:
+	kfree(eeprom_buff);
+	return ret_val;
+}
+
+static void txgbe_get_drvinfo(struct net_device *netdev,
+			      struct ethtool_drvinfo *drvinfo)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	strscpy(drvinfo->driver, txgbe_driver_name, sizeof(drvinfo->driver));
+	strscpy(drvinfo->fw_version, adapter->eeprom_id,
+		sizeof(drvinfo->fw_version));
+	strscpy(drvinfo->bus_info, pci_name(adapter->pdev),
+		sizeof(drvinfo->bus_info));
+	if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
+		drvinfo->n_stats = TXGBE_STATS_LEN -
+				   (TXGBE_NUM_RX_QUEUES - adapter->num_tx_queues) *
+				   (sizeof(struct txgbe_queue_stats) / sizeof(u64)) * 2;
+	} else {
+		drvinfo->n_stats = TXGBE_STATS_LEN;
+	}
+	drvinfo->testinfo_len = TXGBE_TEST_LEN;
+	drvinfo->regdump_len = txgbe_get_regs_len(netdev);
+}
+
+static void txgbe_get_ringparam(struct net_device *netdev,
+				struct ethtool_ringparam *ring,
+				struct kernel_ethtool_ringparam *kernel_ring,
+				struct netlink_ext_ack *extack)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	ring->rx_max_pending = TXGBE_MAX_RXD;
+	ring->tx_max_pending = TXGBE_MAX_TXD;
+	ring->rx_mini_max_pending = 0;
+	ring->rx_jumbo_max_pending = 0;
+	ring->rx_pending = adapter->rx_ring_count;
+	ring->tx_pending = adapter->tx_ring_count;
+	ring->rx_mini_pending = 0;
+	ring->rx_jumbo_pending = 0;
+}
+
+static int txgbe_set_ringparam(struct net_device *netdev,
+			       struct ethtool_ringparam *ring,
+			       struct kernel_ethtool_ringparam *kernel_ring,
+			       struct netlink_ext_ack *extack)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_ring *temp_ring;
+	int i, err = 0;
+	u32 new_rx_count, new_tx_count;
+
+	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
+		return -EINVAL;
+
+	new_tx_count = clamp_t(u32, ring->tx_pending,
+			       TXGBE_MIN_TXD, TXGBE_MAX_TXD);
+	new_tx_count = ALIGN(new_tx_count, TXGBE_REQ_TX_DESCRIPTOR_MULTIPLE);
+
+	new_rx_count = clamp_t(u32, ring->rx_pending,
+			       TXGBE_MIN_RXD, TXGBE_MAX_RXD);
+	new_rx_count = ALIGN(new_rx_count, TXGBE_REQ_RX_DESCRIPTOR_MULTIPLE);
+
+	if (new_tx_count == adapter->tx_ring_count &&
+	    new_rx_count == adapter->rx_ring_count) {
+		/* nothing to do */
+		return 0;
+	}
+
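+	/* serialize the ring resize against any reset already in progress */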
+	while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
+		usleep_range(1000, 2000);
+
+	if (!netif_running(adapter->netdev)) {
+		for (i = 0; i < adapter->num_tx_queues; i++)
+			adapter->tx_ring[i]->count = new_tx_count;
+		for (i = 0; i < adapter->num_rx_queues; i++)
+			adapter->rx_ring[i]->count = new_rx_count;
+		adapter->tx_ring_count = new_tx_count;
+		adapter->rx_ring_count = new_rx_count;
+		goto clear_reset;
+	}
+
+	/* allocate temporary buffer to store rings in */
+	i = max_t(int, adapter->num_tx_queues, adapter->num_rx_queues);
+	temp_ring = vmalloc(array_size(i, sizeof(struct txgbe_ring)));
+	if (!temp_ring) {
+		err = -ENOMEM;
+		goto clear_reset;
+	}
+
+	txgbe_down(adapter);
+
+	/* Setup new Tx resources and free the old Tx resources in that order.
+	 * We can then assign the new resources to the rings via a memcpy.
+	 * The advantage to this approach is that we are guaranteed to still
+	 * have resources even in the case of an allocation failure.
+	 */
+	if (new_tx_count != adapter->tx_ring_count) {
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			memcpy(&temp_ring[i], adapter->tx_ring[i],
+			       sizeof(struct txgbe_ring));
+
+			temp_ring[i].count = new_tx_count;
+			err = txgbe_setup_tx_resources(&temp_ring[i]);
+			if (err) {
+				while (i) {
+					i--;
+					txgbe_free_tx_resources(&temp_ring[i]);
+				}
+				goto err_setup;
+			}
+		}
+
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			txgbe_free_tx_resources(adapter->tx_ring[i]);
+
+			memcpy(adapter->tx_ring[i], &temp_ring[i],
+			       sizeof(struct txgbe_ring));
+		}
+
+		adapter->tx_ring_count = new_tx_count;
+	}
+
+	/* Repeat the process for the Rx rings if needed */
+	if (new_rx_count != adapter->rx_ring_count) {
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			memcpy(&temp_ring[i], adapter->rx_ring[i],
+			       sizeof(struct txgbe_ring));
+
+			temp_ring[i].count = new_rx_count;
+			err = txgbe_setup_rx_resources(&temp_ring[i]);
+			if (err) {
+				while (i) {
+					i--;
+					txgbe_free_rx_resources(&temp_ring[i]);
+				}
+				goto err_setup;
+			}
+		}
+
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			txgbe_free_rx_resources(adapter->rx_ring[i]);
+			memcpy(adapter->rx_ring[i], &temp_ring[i],
+			       sizeof(struct txgbe_ring));
+		}
+
+		adapter->rx_ring_count = new_rx_count;
+	}
+
+err_setup:
+	txgbe_up(adapter);
+	vfree(temp_ring);
+clear_reset:
+	clear_bit(__TXGBE_RESETTING, &adapter->state);
+	return err;
+}
+
+static int txgbe_get_sset_count(struct net_device *netdev, int sset)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	switch (sset) {
+	case ETH_SS_TEST:
+		return TXGBE_TEST_LEN;
+	case ETH_SS_STATS:
+		if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
+			return TXGBE_STATS_LEN -
+			       (TXGBE_NUM_RX_QUEUES - adapter->num_tx_queues) *
+			       (sizeof(struct txgbe_queue_stats) / sizeof(u64)) * 2;
+		} else {
+			return TXGBE_STATS_LEN;
+		}
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void txgbe_get_ethtool_stats(struct net_device *netdev,
+				    struct ethtool_stats *stats, u64 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct rtnl_link_stats64 temp;
+	const struct rtnl_link_stats64 *net_stats;
+	unsigned int start;
+	struct txgbe_ring *ring;
+	int i, j;
+	char *p;
+
+	txgbe_update_stats(adapter);
+	net_stats = dev_get_stats(netdev, &temp);
+
+	for (i = 0; i < TXGBE_GLOBAL_STATS_LEN; i++) {
+		switch (txgbe_gstrings_stats[i].type) {
+		case NETDEV_STATS:
+			p = (char *)net_stats +
+				txgbe_gstrings_stats[i].stat_offset;
+			break;
+		case TXGBE_STATS:
+			p = (char *)adapter +
+				txgbe_gstrings_stats[i].stat_offset;
+			break;
+		default:
+			data[i] = 0;
+			continue;
+		}
+
+		data[i] = (txgbe_gstrings_stats[i].sizeof_stat ==
+			   sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+	}
+
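+	/* per-ring packet/byte counters, read consistently under the
+	 * u64_stats seqcount
+	 */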
+	for (j = 0; j < adapter->num_tx_queues; j++) {
+		ring = adapter->tx_ring[j];
+		if (!ring) {
+			data[i++] = 0;
+			data[i++] = 0;
+			continue;
+		}
+
+		do {
+			start = u64_stats_fetch_begin_irq(&ring->syncp);
+			data[i] = ring->stats.packets;
+			data[i + 1] = ring->stats.bytes;
+		} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+		i += 2;
+	}
+	for (j = 0; j < adapter->num_rx_queues; j++) {
+		ring = adapter->rx_ring[j];
+		if (!ring) {
+			data[i++] = 0;
+			data[i++] = 0;
+			continue;
+		}
+
+		do {
+			start = u64_stats_fetch_begin_irq(&ring->syncp);
+			data[i] = ring->stats.packets;
+			data[i + 1] = ring->stats.bytes;
+		} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+		i += 2;
+	}
+	for (j = 0; j < TXGBE_MAX_PACKET_BUFFERS; j++) {
+		data[i++] = adapter->stats.pxontxc[j];
+		data[i++] = adapter->stats.pxofftxc[j];
+	}
+	for (j = 0; j < TXGBE_MAX_PACKET_BUFFERS; j++) {
+		data[i++] = adapter->stats.pxonrxc[j];
+		data[i++] = adapter->stats.pxoffrxc[j];
+	}
+}
+
+static void txgbe_get_strings(struct net_device *netdev, u32 stringset,
+			      u8 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	char *p = (char *)data;
+	int i;
+
+	switch (stringset) {
+	case ETH_SS_TEST:
+		memcpy(data, *txgbe_gstrings_test,
+		       TXGBE_TEST_LEN * ETH_GSTRING_LEN);
+		break;
+	case ETH_SS_STATS:
+		for (i = 0; i < TXGBE_GLOBAL_STATS_LEN; i++) {
+			memcpy(p, txgbe_gstrings_stats[i].stat_string,
+			       ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			sprintf(p, "tx_queue_%u_packets", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_queue_%u_bytes", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			sprintf(p, "rx_queue_%u_packets", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_queue_%u_bytes", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < TXGBE_MAX_PACKET_BUFFERS; i++) {
+			sprintf(p, "tx_pb_%u_pxon", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_pb_%u_pxoff", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < TXGBE_MAX_PACKET_BUFFERS; i++) {
+			sprintf(p, "rx_pb_%u_pxon", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_pb_%u_pxoff", i);
+			p += ETH_GSTRING_LEN;
+		}
+		break;
+	}
+}
+
+static int txgbe_link_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	bool link_up;
+	u32 link_speed = 0;
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		*data = 1;
+		return 1;
+	}
+	*data = 0;
+	TCALL(hw, mac.ops.check_link, &link_speed, &link_up, true);
+	if (!link_up)
+		*data = 1;
+	return *data;
+}
+
+/* ethtool register test data */
+struct txgbe_reg_test {
+	u32 reg;
+	u8  array_len;
+	u8  test_type;
+	u32 mask;
+	u32 write;
+};
+
+/* In the hardware, registers are laid out either singly, in arrays
+ * spaced 0x40 bytes apart, or in contiguous tables.  We assume
+ * most tests take place on arrays or single registers (handled
+ * as a single-element array) and special-case the tables.
+ * Table tests are always pattern tests.
+ *
+ * We also make provision for some required setup steps by specifying
+ * registers to be written without any read-back testing.
+ */
+
+#define PATTERN_TEST    1
+#define SET_READ_TEST   2
+#define WRITE_NO_TEST   3
+#define TABLE32_TEST    4
+#define TABLE64_TEST_LO 5
+#define TABLE64_TEST_HI 6
+
+/* default sapphire register test */
+static struct txgbe_reg_test reg_test_sapphire[] = {
+	{ TXGBE_RDB_RFCL(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+	{ TXGBE_RDB_RFCH(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+	{ TXGBE_PSR_VLAN_CTL, 1, PATTERN_TEST, 0x00000000, 0x00000000 },
+	{ TXGBE_PX_RR_BAL(0), 4, PATTERN_TEST, 0xFFFFFF80, 0xFFFFFF80 },
+	{ TXGBE_PX_RR_BAH(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_PX_RR_CFG(0), 4, WRITE_NO_TEST, 0, TXGBE_PX_RR_CFG_RR_EN },
+	{ TXGBE_RDB_RFCH(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+	{ TXGBE_RDB_RFCV(0), 1, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_PX_TR_BAL(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_PX_TR_BAH(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_RDB_PB_CTL, 1, SET_READ_TEST, 0x00000001, 0x00000001 },
+	{ TXGBE_PSR_MC_TBL(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ .reg = 0 }
+};
+
+static bool reg_pattern_test(struct txgbe_adapter *adapter, u64 *data, int reg,
+			     u32 mask, u32 write)
+{
+	u32 pat, val, before;
+	static const u32 test_pattern[] = {
+		0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF
+	};
+
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		*data = 1;
+		return true;
+	}
+	for (pat = 0; pat < ARRAY_SIZE(test_pattern); pat++) {
+		before = rd32(&adapter->hw, reg);
+		wr32(&adapter->hw, reg, test_pattern[pat] & write);
+		val = rd32(&adapter->hw, reg);
+		if (val != (test_pattern[pat] & write & mask)) {
+			txgbe_err(drv,
+				  "pattern test reg %04X failed: got 0x%08X expected 0x%08X\n",
+				  reg, val, test_pattern[pat] & write & mask);
+			*data = reg;
+			wr32(&adapter->hw, reg, before);
+			return true;
+		}
+		wr32(&adapter->hw, reg, before);
+	}
+	return false;
+}
+
+static bool reg_set_and_check(struct txgbe_adapter *adapter, u64 *data, int reg,
+			      u32 mask, u32 write)
+{
+	u32 val, before;
+
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		*data = 1;
+		return true;
+	}
+	before = rd32(&adapter->hw, reg);
+	wr32(&adapter->hw, reg, write & mask);
+	val = rd32(&adapter->hw, reg);
+	if ((write & mask) != (val & mask)) {
+		txgbe_err(drv,
+			  "set/check reg %04X test failed: got 0x%08X expected 0x%08X\n",
+			  reg, (val & mask), (write & mask));
+		*data = reg;
+		wr32(&adapter->hw, reg, before);
+		return true;
+	}
+	wr32(&adapter->hw, reg, before);
+	return false;
+}
+
+static bool txgbe_reg_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct txgbe_reg_test *test;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i;
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		txgbe_err(drv, "Adapter removed - register test blocked\n");
+		*data = 1;
+		return true;
+	}
+
+	test = reg_test_sapphire;
+
+	/* Perform the remainder of the register test, looping through
+	 * the test table until we either fail or reach the null entry.
+	 */
+	while (test->reg) {
+		for (i = 0; i < test->array_len; i++) {
+			bool b = false;
+
+			switch (test->test_type) {
+			case PATTERN_TEST:
+				b = reg_pattern_test(adapter, data,
+						     test->reg + (i * 0x40),
+						     test->mask,
+						     test->write);
+				break;
+			case SET_READ_TEST:
+				b = reg_set_and_check(adapter, data,
+						      test->reg + (i * 0x40),
+						      test->mask,
+						      test->write);
+				break;
+			case WRITE_NO_TEST:
+				wr32(hw, test->reg + (i * 0x40),
+				     test->write);
+				break;
+			case TABLE32_TEST:
+				b = reg_pattern_test(adapter, data,
+						     test->reg + (i * 4),
+						     test->mask,
+						     test->write);
+				break;
+			case TABLE64_TEST_LO:
+				b = reg_pattern_test(adapter, data,
+						     test->reg + (i * 8),
+						     test->mask,
+						     test->write);
+				break;
+			case TABLE64_TEST_HI:
+				b = reg_pattern_test(adapter, data,
+						     (test->reg + 4) + (i * 8),
+						     test->mask,
+						     test->write);
+				break;
+			}
+			if (b)
+				return true;
+		}
+		test++;
+	}
+
+	*data = 0;
+	return false;
+}
+
+static bool txgbe_eeprom_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	bool status = true;
+
+	if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) {
+		*data = 1;
+	} else {
+		*data = 0;
+		status = false;
+	}
+	return status;
+}
+
+static irqreturn_t txgbe_test_intr(int __always_unused irq, void *data)
+{
+	struct net_device *netdev = (struct net_device *)data;
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	u64 icr;
+
+	/* read the misc interrupt status block, since the ring
+	 * interrupt status cannot be read directly
+	 */
+	icr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC1);
+	icr <<= 32;
+	icr |= txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
+
+	adapter->test_icr = icr;
+
+	return IRQ_HANDLED;
+}
+
+static int txgbe_intr_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct net_device *netdev = adapter->netdev;
+	u64 mask;
+	u32 i = 0;
+	bool shared_int = true;
+	u32 irq = adapter->pdev->irq;
+
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		*data = 1;
+		return -1;
+	}
+	*data = 0;
+
+	/* Hook up test interrupt handler just for this test */
+	if (adapter->msix_entries) {
+		/* NOTE: we don't test MSI-X interrupts here, yet */
+		return 0;
+	} else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) {
+		shared_int = false;
+		if (request_irq(irq, &txgbe_test_intr, 0, netdev->name,
+				netdev)) {
+			*data = 1;
+			return -1;
+		}
+	} else if (!request_irq(irq, &txgbe_test_intr, IRQF_PROBE_SHARED,
+				netdev->name, netdev)) {
+		shared_int = false;
+	} else if (request_irq(irq, &txgbe_test_intr, IRQF_SHARED,
+			       netdev->name, netdev)) {
+		*data = 1;
+		return -1;
+	}
+	txgbe_info(hw, "testing %s interrupt\n",
+		   (shared_int ? "shared" : "unshared"));
+
+	/* Disable all the interrupts */
+	txgbe_irq_disable(adapter);
+	TXGBE_WRITE_FLUSH(&adapter->hw);
+	usleep_range(10000, 20000);
+
+	/* Test each interrupt; only vector 0 is exercised for INTx/MSI */
+	for (; i < 1; i++) {
+		/* Interrupt to test */
+		mask = 1ULL << i;
+
+		if (!shared_int) {
+			/* Disable the interrupts to be reported in
+			 * the cause register and then force the same
+			 * interrupt and see if one gets posted.  If
+			 * an interrupt was posted to the bus, the
+			 * test failed.
+			 */
+			adapter->test_icr = 0;
+			txgbe_intr_disable(&adapter->hw, ~mask);
+			txgbe_intr_trigger(&adapter->hw, mask);
+			TXGBE_WRITE_FLUSH(&adapter->hw);
+			usleep_range(10000, 20000);
+
+			if (adapter->test_icr & mask) {
+				*data = 3;
+				break;
+			}
+		}
+
+		/* Enable the interrupt to be reported in the cause
+		 * register and then force the same interrupt and see
+		 * if one gets posted.  If an interrupt was not posted
+		 * to the bus, the test failed.
+		 */
+		adapter->test_icr = 0;
+		txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+		txgbe_intr_trigger(&adapter->hw, mask);
+		TXGBE_WRITE_FLUSH(&adapter->hw);
+		usleep_range(10000, 20000);
+
+		if (!(adapter->test_icr & mask)) {
+			*data = 4;
+			break;
+		}
+	}
+
+	/* Disable all the interrupts */
+	txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+	TXGBE_WRITE_FLUSH(&adapter->hw);
+	usleep_range(10000, 20000);
+
+	/* Unhook test interrupt handler */
+	free_irq(irq, netdev);
+
+	return *data;
+}
+
+static void txgbe_free_desc_rings(struct txgbe_adapter *adapter)
+{
+	struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+	struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	/* shut down the DMA engines now so they can be reinitialized later */
+
+	/* first Rx */
+	TCALL(hw, mac.ops.disable_rx);
+	txgbe_disable_rx_queue(adapter, rx_ring);
+
+	/* now Tx */
+	wr32(hw, TXGBE_PX_TR_CFG(tx_ring->reg_idx), 0);
+
+	wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
+
+	txgbe_reset(adapter);
+
+	txgbe_free_tx_resources(&adapter->test_tx_ring);
+	txgbe_free_rx_resources(&adapter->test_rx_ring);
+}
+
+static int txgbe_setup_desc_rings(struct txgbe_adapter *adapter)
+{
+	struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+	struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+	struct txgbe_hw *hw = &adapter->hw;
+	int ret_val;
+	int err;
+
+	TCALL(hw, mac.ops.setup_rxpba, 0, 0, PBA_STRATEGY_EQUAL);
+
+	/* Setup Tx descriptor ring and Tx buffers */
+	tx_ring->count = TXGBE_DEFAULT_TXD;
+	tx_ring->queue_index = 0;
+	tx_ring->dev = pci_dev_to_dev(adapter->pdev);
+	tx_ring->netdev = adapter->netdev;
+	tx_ring->reg_idx = adapter->tx_ring[0]->reg_idx;
+
+	err = txgbe_setup_tx_resources(tx_ring);
+	if (err)
+		return 1;
+
+	wr32m(&adapter->hw, TXGBE_TDM_CTL,
+	      TXGBE_TDM_CTL_TE, TXGBE_TDM_CTL_TE);
+
+	txgbe_configure_tx_ring(adapter, tx_ring);
+
+	/* enable mac transmitter */
+	wr32m(hw, TXGBE_MAC_TX_CFG,
+	      TXGBE_MAC_TX_CFG_TE | TXGBE_MAC_TX_CFG_SPEED_MASK,
+	      TXGBE_MAC_TX_CFG_TE | TXGBE_MAC_TX_CFG_SPEED_10G);
+
+	/* Setup Rx Descriptor ring and Rx buffers */
+	rx_ring->count = TXGBE_DEFAULT_RXD;
+	rx_ring->queue_index = 0;
+	rx_ring->dev = pci_dev_to_dev(adapter->pdev);
+	rx_ring->netdev = adapter->netdev;
+	rx_ring->reg_idx = adapter->rx_ring[0]->reg_idx;
+
+	err = txgbe_setup_rx_resources(rx_ring);
+	if (err) {
+		ret_val = 4;
+		goto err_nomem;
+	}
+
+	TCALL(hw, mac.ops.disable_rx);
+
+	txgbe_configure_rx_ring(adapter, rx_ring);
+
+	TCALL(hw, mac.ops.enable_rx);
+
+	return 0;
+
+err_nomem:
+	txgbe_free_desc_rings(adapter);
+	return ret_val;
+}
+
+static int txgbe_setup_config(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 reg_data;
+
+	/* Setup traffic loopback */
+	reg_data = rd32(hw, TXGBE_PSR_CTL);
+	reg_data |= TXGBE_PSR_CTL_BAM | TXGBE_PSR_CTL_UPE |
+		TXGBE_PSR_CTL_MPE | TXGBE_PSR_CTL_TPE;
+	wr32(hw, TXGBE_PSR_CTL, reg_data);
+
+	wr32(hw, TXGBE_RSC_CTL,
+	     (rd32(hw, TXGBE_RSC_CTL) |
+	      TXGBE_RSC_CTL_SAVE_MAC_ERR) &
+	     ~TXGBE_RSC_CTL_SECRX_DIS);
+
+	wr32(hw, TXGBE_RSC_LSEC_CTL, 0x4);
+
+	wr32(hw, TXGBE_PSR_VLAN_CTL,
+	     rd32(hw, TXGBE_PSR_VLAN_CTL) &
+	     ~TXGBE_PSR_VLAN_CTL_VFE);
+
+	wr32m(&adapter->hw, TXGBE_MAC_RX_CFG,
+	      TXGBE_MAC_RX_CFG_LM, ~TXGBE_MAC_RX_CFG_LM);
+	wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL,
+	      TXGBE_CFG_PORT_CTL_FORCE_LKUP,
+	      ~TXGBE_CFG_PORT_CTL_FORCE_LKUP);
+
+	TXGBE_WRITE_FLUSH(hw);
+	usleep_range(10000, 20000);
+
+	return 0;
+}
+
+static int txgbe_setup_phy_loopback_test(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 value;
+	/* setup phy loopback */
+	value = txgbe_rd32_epcs(hw, TXGBE_PHY_MISC_CTL0);
+	value |= TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 |
+		TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1;
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, value);
+
+	value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+	txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+			value | TXGBE_SR_PMA_MMD_CTL1_LB_EN);
+	return 0;
+}
+
+static void txgbe_phy_loopback_cleanup(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 value;
+
+	value = txgbe_rd32_epcs(hw, TXGBE_PHY_MISC_CTL0);
+	value &= ~(TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 |
+		   TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1);
+
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, value);
+	value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+	txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+			value & ~TXGBE_SR_PMA_MMD_CTL1_LB_EN);
+}
+
+static void txgbe_create_lbtest_frame(struct sk_buff *skb,
+				      unsigned int frame_size)
+{
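+	/* fill the frame with 0xFF, overwrite part of the upper half with
+	 * 0xAA and plant the sentinel bytes that are verified on receive
+	 * by txgbe_check_lbtest_frame()
+	 */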
+	memset(skb->data, 0xFF, frame_size);
+	frame_size >>= 1;
+	memset(&skb->data[frame_size], 0xAA, frame_size / 2 - 1);
+	skb->data[frame_size + 10] = 0xBE;
+	skb->data[frame_size + 12] = 0xAF;
+}
+
+static bool txgbe_check_lbtest_frame(struct txgbe_rx_buffer *rx_buffer,
+				     unsigned int frame_size)
+{
+	unsigned char *data;
+	bool match = true;
+
+	frame_size >>= 1;
+	data = kmap_local_page(rx_buffer->page) + rx_buffer->page_offset;
+
+	if (data[3] != 0xFF ||
+	    data[frame_size + 10] != 0xBE ||
+	    data[frame_size + 12] != 0xAF)
+		match = false;
+
+	kunmap_local(data);
+	return match;
+}
+
+static u16 txgbe_clean_test_rings(struct txgbe_ring *rx_ring,
+				  struct txgbe_ring *tx_ring,
+				  unsigned int size)
+{
+	union txgbe_rx_desc *rx_desc;
+	struct txgbe_rx_buffer *rx_buffer;
+	struct txgbe_tx_buffer *tx_buffer;
+	const int bufsz = txgbe_rx_bufsz(rx_ring);
+	u16 rx_ntc, tx_ntc, count = 0;
+
+	/* initialize next to clean and descriptor values */
+	rx_ntc = rx_ring->next_to_clean;
+	tx_ntc = tx_ring->next_to_clean;
+	rx_desc = TXGBE_RX_DESC(rx_ring, rx_ntc);
+
+	while (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_DD)) {
+		/* unmap buffer on Tx side */
+		tx_buffer = &tx_ring->tx_buffer_info[tx_ntc];
+		txgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+
+		/* check Rx buffer */
+		rx_buffer = &rx_ring->rx_buffer_info[rx_ntc];
+
+		/* sync Rx buffer for CPU read */
+		dma_sync_single_for_cpu(rx_ring->dev,
+					rx_buffer->page_dma,
+					bufsz,
+					DMA_FROM_DEVICE);
+
+		/* verify contents of skb */
+		if (txgbe_check_lbtest_frame(rx_buffer, size))
+			count++;
+
+		/* sync Rx buffer for device write */
+		dma_sync_single_for_device(rx_ring->dev,
+					   rx_buffer->page_dma,
+					   bufsz,
+					   DMA_FROM_DEVICE);
+
+		/* increment Rx/Tx next to clean counters */
+		rx_ntc++;
+		if (rx_ntc == rx_ring->count)
+			rx_ntc = 0;
+		tx_ntc++;
+		if (tx_ntc == tx_ring->count)
+			tx_ntc = 0;
+
+		/* fetch next descriptor */
+		rx_desc = TXGBE_RX_DESC(rx_ring, rx_ntc);
+	}
+
+	/* re-map buffers to ring, store next to clean values */
+	txgbe_alloc_rx_buffers(rx_ring, count);
+	rx_ring->next_to_clean = rx_ntc;
+	tx_ring->next_to_clean = tx_ntc;
+
+	return count;
+}
+
+static int txgbe_run_loopback_test(struct txgbe_adapter *adapter)
+{
+	struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+	struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+	int i, j, lc, good_cnt, ret_val = 0;
+	unsigned int size = 1024;
+	netdev_tx_t tx_ret_val;
+	struct sk_buff *skb;
+	u32 flags_orig = adapter->flags;
+
+	/* allocate test skb */
+	skb = alloc_skb(size, GFP_KERNEL);
+	if (!skb)
+		return 11;
+
+	/* place data into test skb */
+	txgbe_create_lbtest_frame(skb, size);
+	skb_put(skb, size);
+
+	/* Calculate the loop count based on the largest descriptor ring.
+	 * The idea is to wrap the largest ring a number of times using 64
+	 * send/receive pairs during each loop.
+	 */
+
+	if (rx_ring->count <= tx_ring->count)
+		lc = ((tx_ring->count / 64) * 2) + 1;
+	else
+		lc = ((rx_ring->count / 64) * 2) + 1;
+
+	for (j = 0; j <= lc; j++) {
+		/* reset count of good packets */
+		good_cnt = 0;
+
+		/* place 64 packets on the transmit queue */
+		for (i = 0; i < 64; i++) {
+			skb_get(skb);
+			tx_ret_val = txgbe_xmit_frame_ring(skb,
+							   adapter,
+							   tx_ring);
+			if (tx_ret_val == NETDEV_TX_OK)
+				good_cnt++;
+		}
+
+		if (good_cnt != 64) {
+			ret_val = 12;
+			break;
+		}
+
+		/* allow 200 milliseconds for packets to go from Tx to Rx */
+		msleep(200);
+
+		good_cnt = txgbe_clean_test_rings(rx_ring, tx_ring, size);
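+		/* the first pass only primes the rings; pass/fail is
+		 * judged from the second pass onwards
+		 */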
+		if (j == 0) {
+			continue;
+		} else if (good_cnt != 64) {
+			ret_val = 13;
+			break;
+		}
+	}
+
+	/* free the original skb */
+	kfree_skb(skb);
+	adapter->flags = flags_orig;
+
+	return ret_val;
+}
+
+static int txgbe_loopback_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	*data = txgbe_setup_desc_rings(adapter);
+	if (*data)
+		goto out;
+
+	*data = txgbe_setup_config(adapter);
+	if (*data)
+		goto err_loopback;
+
+	*data = txgbe_setup_phy_loopback_test(adapter);
+	if (*data)
+		goto err_loopback;
+	*data = txgbe_run_loopback_test(adapter);
+	if (*data)
+		txgbe_info(hw, "phy loopback testing failed\n");
+	txgbe_phy_loopback_cleanup(adapter);
+
+err_loopback:
+	txgbe_free_desc_rings(adapter);
+out:
+	return *data;
+}
+
+static void txgbe_diag_test(struct net_device *netdev,
+			    struct ethtool_test *eth_test, u64 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	bool if_running = netif_running(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		txgbe_err(hw, "Adapter removed - test blocked\n");
+		data[0] = 1;
+		data[1] = 1;
+		data[2] = 1;
+		data[3] = 1;
+		data[4] = 1;
+		eth_test->flags |= ETH_TEST_FL_FAILED;
+		return;
+	}
+
+	set_bit(__TXGBE_TESTING, &adapter->state);
+	if (eth_test->flags == ETH_TEST_FL_OFFLINE) {
+		/* Offline tests */
+		txgbe_info(hw, "offline testing starting\n");
+
+		/* Link test performed before hardware reset so autoneg doesn't
+		 * interfere with test result
+		 */
+		if (txgbe_link_test(adapter, &data[4]))
+			eth_test->flags |= ETH_TEST_FL_FAILED;
+
+		if (if_running)
+			/* indicate we're in test mode */
+			txgbe_close(netdev);
+		else
+			txgbe_reset(adapter);
+
+		txgbe_info(hw, "register testing starting\n");
+		if (txgbe_reg_test(adapter, &data[0]))
+			eth_test->flags |= ETH_TEST_FL_FAILED;
+
+		txgbe_reset(adapter);
+		txgbe_info(hw, "eeprom testing starting\n");
+		if (txgbe_eeprom_test(adapter, &data[1]))
+			eth_test->flags |= ETH_TEST_FL_FAILED;
+
+		txgbe_reset(adapter);
+		txgbe_info(hw, "interrupt testing starting\n");
+		if (txgbe_intr_test(adapter, &data[2]))
+			eth_test->flags |= ETH_TEST_FL_FAILED;
+
+		if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+		      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+			txgbe_reset(adapter);
+			txgbe_info(hw, "loopback testing starting\n");
+			if (txgbe_loopback_test(adapter, &data[3]))
+				eth_test->flags |= ETH_TEST_FL_FAILED;
+		} else {
+			/* loopback test is skipped on NCSI/WoL capable parts */
+			data[3] = 0;
+		}
+
+		txgbe_reset(adapter);
+
+		/* clear testing bit and return adapter to previous state */
+		clear_bit(__TXGBE_TESTING, &adapter->state);
+		if (if_running)
+			txgbe_open(netdev);
+		else
+			TCALL(hw, mac.ops.disable_tx_laser);
+	} else {
+		txgbe_info(hw, "online testing starting\n");
+
+		/* Online tests */
+		if (txgbe_link_test(adapter, &data[4]))
+			eth_test->flags |= ETH_TEST_FL_FAILED;
+
+		/* Offline tests aren't run; pass by default */
+		data[0] = 0;
+		data[1] = 0;
+		data[2] = 0;
+		data[3] = 0;
+
+		clear_bit(__TXGBE_TESTING, &adapter->state);
+	}
+
+	msleep_interruptible(4 * 1000);
+}
+
+static int txgbe_wol_exclusion(struct txgbe_adapter *adapter,
+			       struct ethtool_wolinfo *wol)
+{
+	int retval = 0;
+
+	/* WOL not supported for all devices */
+	if (!txgbe_wol_supported(adapter)) {
+		retval = 1;
+		wol->supported = 0;
+	}
+
+	return retval;
+}
+
+static void txgbe_get_wol(struct net_device *netdev,
+			  struct ethtool_wolinfo *wol)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	wol->supported = WAKE_UCAST | WAKE_MCAST |
+			 WAKE_BCAST | WAKE_MAGIC;
+	wol->wolopts = 0;
+
+	if (txgbe_wol_exclusion(adapter, wol) ||
+	    !device_can_wakeup(pci_dev_to_dev(adapter->pdev)))
+		return;
+	if ((hw->subsystem_device_id & TXGBE_WOL_MASK) != TXGBE_WOL_SUP)
+		return;
+
+	if (adapter->wol & TXGBE_PSR_WKUP_CTL_EX)
+		wol->wolopts |= WAKE_UCAST;
+	if (adapter->wol & TXGBE_PSR_WKUP_CTL_MC)
+		wol->wolopts |= WAKE_MCAST;
+	if (adapter->wol & TXGBE_PSR_WKUP_CTL_BC)
+		wol->wolopts |= WAKE_BCAST;
+	if (adapter->wol & TXGBE_PSR_WKUP_CTL_MAG)
+		wol->wolopts |= WAKE_MAGIC;
+}
+
+static int txgbe_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (wol->wolopts & (WAKE_PHY | WAKE_ARP | WAKE_MAGICSECURE))
+		return -EOPNOTSUPP;
+
+	if (txgbe_wol_exclusion(adapter, wol))
+		return wol->wolopts ? -EOPNOTSUPP : 0;
+	if ((hw->subsystem_device_id & TXGBE_WOL_MASK) != TXGBE_WOL_SUP)
+		return -EOPNOTSUPP;
+
+	adapter->wol = 0;
+
+	if (wol->wolopts & WAKE_UCAST)
+		adapter->wol |= TXGBE_PSR_WKUP_CTL_EX;
+	if (wol->wolopts & WAKE_MCAST)
+		adapter->wol |= TXGBE_PSR_WKUP_CTL_MC;
+	if (wol->wolopts & WAKE_BCAST)
+		adapter->wol |= TXGBE_PSR_WKUP_CTL_BC;
+	if (wol->wolopts & WAKE_MAGIC)
+		adapter->wol |= TXGBE_PSR_WKUP_CTL_MAG;
+
+	hw->wol_enabled = !!(adapter->wol);
+	wr32(hw, TXGBE_PSR_WKUP_CTL, adapter->wol);
+
+	device_set_wakeup_enable(pci_dev_to_dev(adapter->pdev), adapter->wol);
+
+	return 0;
+}
+
+static int txgbe_nway_reset(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	if (netif_running(netdev))
+		txgbe_reinit_locked(adapter);
+
+	return 0;
+}
+
+static int txgbe_set_phys_id(struct net_device *netdev,
+			     enum ethtool_phys_id_state state)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	switch (state) {
+	case ETHTOOL_ID_ACTIVE:
+		adapter->led_reg = rd32(hw, TXGBE_CFG_LED_CTL);
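+		/* blink synchronously at 2 Hz: the core calls back with
+		 * ETHTOOL_ID_ON/ETHTOOL_ID_OFF at that frequency
+		 */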
+		return 2;
+
+	case ETHTOOL_ID_ON:
+		TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP);
+		break;
+
+	case ETHTOOL_ID_OFF:
+		TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP);
+		break;
+
+	case ETHTOOL_ID_INACTIVE:
+		/* Restore LED settings */
+		wr32(&adapter->hw, TXGBE_CFG_LED_CTL, adapter->led_reg);
+		break;
+	}
+
+	return 0;
+}
+
+static int txgbe_get_coalesce(struct net_device *netdev,
+			      struct ethtool_coalesce *ec,
+			      struct kernel_ethtool_coalesce *kernel_coal,
+			      struct netlink_ext_ack *extack)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	ec->tx_max_coalesced_frames_irq = adapter->tx_work_limit;
+	/* only valid if in constant ITR mode */
+	if (adapter->rx_itr_setting <= 1)
+		ec->rx_coalesce_usecs = adapter->rx_itr_setting;
+	else
+		ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2;
+
+	/* if in mixed tx/rx queues per vector mode, report only rx settings */
+	if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count)
+		return 0;
+
+	/* only valid if in constant ITR mode */
+	if (adapter->tx_itr_setting <= 1)
+		ec->tx_coalesce_usecs = adapter->tx_itr_setting;
+	else
+		ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2;
+
+	return 0;
+}
+
+/* this function must be called before setting the new value of
+ * rx_itr_setting
+ */
+static bool txgbe_update_rsc(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+
+	/* nothing to do if LRO or RSC are not enabled */
+	if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) ||
+	    !(netdev->features & NETIF_F_LRO))
+		return false;
+
+	/* check the feature flag value and enable RSC if necessary */
+	if (adapter->rx_itr_setting == 1 ||
+	    adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR) {
+		if (!(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) {
+			adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED;
+			txgbe_info(probe,
+				   "rx-usecs value high enough to re-enable RSC\n");
+			return true;
+		}
+	/* if interrupt rate is too high then disable RSC */
+	} else if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) {
+		adapter->flags2 &= ~TXGBE_FLAG2_RSC_ENABLED;
+		txgbe_info(probe, "rx-usecs set too low, disabling RSC\n");
+		return true;
+	}
+	return false;
+}
+
+static int txgbe_set_coalesce(struct net_device *netdev,
+			      struct ethtool_coalesce *ec,
+			      struct kernel_ethtool_coalesce *kernel_coal,
+			      struct netlink_ext_ack *extack)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_q_vector *q_vector;
+	int i;
+	u16 tx_itr_param, rx_itr_param;
+	u16  tx_itr_prev;
+	bool need_reset = false;
+
+	if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count) {
+		/* reject Tx specific changes in case of mixed RxTx vectors */
+		if (ec->tx_coalesce_usecs)
+			return -EINVAL;
+		tx_itr_prev = adapter->rx_itr_setting;
+	} else {
+		tx_itr_prev = adapter->tx_itr_setting;
+	}
+
+	if (ec->tx_max_coalesced_frames_irq)
+		adapter->tx_work_limit = ec->tx_max_coalesced_frames_irq;
+
+	if ((ec->rx_coalesce_usecs > (TXGBE_MAX_EITR >> 2)) ||
+	    (ec->tx_coalesce_usecs > (TXGBE_MAX_EITR >> 2)))
+		return -EINVAL;
+
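+	/* settings of 0 and 1 are kept verbatim as special cases (off and
+	 * the dynamic default); larger values are stored as usecs << 2 to
+	 * match the EITR register layout
+	 */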
+	if (ec->rx_coalesce_usecs > 1)
+		adapter->rx_itr_setting = ec->rx_coalesce_usecs << 2;
+	else
+		adapter->rx_itr_setting = ec->rx_coalesce_usecs;
+
+	if (adapter->rx_itr_setting == 1)
+		rx_itr_param = TXGBE_20K_ITR;
+	else
+		rx_itr_param = adapter->rx_itr_setting;
+
+	if (ec->tx_coalesce_usecs > 1)
+		adapter->tx_itr_setting = ec->tx_coalesce_usecs << 2;
+	else
+		adapter->tx_itr_setting = ec->tx_coalesce_usecs;
+
+	if (adapter->tx_itr_setting == 1)
+		tx_itr_param = TXGBE_12K_ITR;
+	else
+		tx_itr_param = adapter->tx_itr_setting;
+
+	/* mixed Rx/Tx */
+	if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count)
+		adapter->tx_itr_setting = adapter->rx_itr_setting;
+
+	/* detect ITR changes that require update of TXDCTL.WTHRESH */
+	if (adapter->tx_itr_setting != 1 &&
+	    adapter->tx_itr_setting < TXGBE_100K_ITR) {
+		if (tx_itr_prev == 1 ||
+		    tx_itr_prev >= TXGBE_100K_ITR)
+			need_reset = true;
+	} else {
+		if (tx_itr_prev != 1 &&
+		    tx_itr_prev < TXGBE_100K_ITR)
+			need_reset = true;
+	}
+
+	/* check the old value and enable RSC if necessary */
+	need_reset |= txgbe_update_rsc(adapter);
+
+	for (i = 0; i < adapter->num_q_vectors; i++) {
+		q_vector = adapter->q_vector[i];
+		q_vector->tx.work_limit = adapter->tx_work_limit;
+		q_vector->rx.work_limit = adapter->rx_work_limit;
+		if (q_vector->tx.count && !q_vector->rx.count)
+			/* tx only */
+			q_vector->itr = tx_itr_param;
+		else
+			/* rx only or mixed */
+			q_vector->itr = rx_itr_param;
+		txgbe_write_eitr(q_vector);
+	}
+
+	/* do reset here at the end to make sure EITR==0 case is handled
+	 * correctly w.r.t stopping tx, and changing TXDCTL.WTHRESH settings
+	 * also locks in RSC enable/disable which requires reset
+	 */
+	if (need_reset)
+		txgbe_do_reset(netdev);
+
+	return 0;
+}
+
+static int txgbe_get_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+					struct ethtool_rxnfc *cmd)
+{
+	union txgbe_atr_input *mask = &adapter->fdir_mask;
+	struct ethtool_rx_flow_spec *fsp =
+		(struct ethtool_rx_flow_spec *)&cmd->fs;
+	struct hlist_node *node;
+	struct txgbe_fdir_filter *rule = NULL;
+
+	/* report total rule count */
+	cmd->data = (1024 << adapter->fdir_pballoc) - 2;
+
+	hlist_for_each_entry_safe(rule, node,
+				  &adapter->fdir_filter_list, fdir_node) {
+		if (fsp->location <= rule->sw_idx)
+			break;
+	}
+
+	if (!rule || fsp->location != rule->sw_idx)
+		return -EINVAL;
+
+	/* fill out the flow spec entry */
+
+	/* set flow type field */
+	switch (rule->filter.formatted.flow_type) {
+	case TXGBE_ATR_FLOW_TYPE_TCPV4:
+		fsp->flow_type = TCP_V4_FLOW;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_UDPV4:
+		fsp->flow_type = UDP_V4_FLOW;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_SCTPV4:
+		fsp->flow_type = SCTP_V4_FLOW;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_IPV4:
+		fsp->flow_type = IP_USER_FLOW;
+		fsp->h_u.usr_ip4_spec.ip_ver = ETH_RX_NFC_IP4;
+		fsp->h_u.usr_ip4_spec.proto = 0;
+		fsp->m_u.usr_ip4_spec.proto = 0;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	fsp->h_u.tcp_ip4_spec.psrc = rule->filter.formatted.src_port;
+	fsp->m_u.tcp_ip4_spec.psrc = mask->formatted.src_port;
+	fsp->h_u.tcp_ip4_spec.pdst = rule->filter.formatted.dst_port;
+	fsp->m_u.tcp_ip4_spec.pdst = mask->formatted.dst_port;
+	fsp->h_u.tcp_ip4_spec.ip4src = rule->filter.formatted.src_ip[0];
+	fsp->m_u.tcp_ip4_spec.ip4src = mask->formatted.src_ip[0];
+	fsp->h_u.tcp_ip4_spec.ip4dst = rule->filter.formatted.dst_ip[0];
+	fsp->m_u.tcp_ip4_spec.ip4dst = mask->formatted.dst_ip[0];
+	fsp->h_ext.vlan_etype = rule->filter.formatted.flex_bytes;
+	fsp->m_ext.vlan_etype = mask->formatted.flex_bytes;
+	fsp->h_ext.data[1] = htonl(rule->filter.formatted.vm_pool);
+	fsp->m_ext.data[1] = htonl(mask->formatted.vm_pool);
+	fsp->flow_type |= FLOW_EXT;
+
+	/* record action */
+	if (rule->action == TXGBE_RDB_FDIR_DROP_QUEUE)
+		fsp->ring_cookie = RX_CLS_FLOW_DISC;
+	else
+		fsp->ring_cookie = rule->action;
+
+	return 0;
+}
+
+static int txgbe_get_ethtool_fdir_all(struct txgbe_adapter *adapter,
+				      struct ethtool_rxnfc *cmd,
+				      u32 *rule_locs)
+{
+	struct hlist_node *node;
+	struct txgbe_fdir_filter *rule;
+	int cnt = 0;
+
+	/* report total rule count */
+	cmd->data = (1024 << adapter->fdir_pballoc) - 2;
+
+	hlist_for_each_entry_safe(rule, node,
+				  &adapter->fdir_filter_list, fdir_node) {
+		if (cnt == cmd->rule_cnt)
+			return -EMSGSIZE;
+		rule_locs[cnt] = rule->sw_idx;
+		cnt++;
+	}
+
+	cmd->rule_cnt = cnt;
+
+	return 0;
+}
+
+static int txgbe_get_rss_hash_opts(struct txgbe_adapter *adapter,
+				   struct ethtool_rxnfc *cmd)
+{
+	cmd->data = 0;
+
+	/* Report default options for RSS on txgbe */
+	switch (cmd->flow_type) {
+	case TCP_V4_FLOW:
+		cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		fallthrough;
+	case UDP_V4_FLOW:
+		if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP)
+			cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		fallthrough;
+	case SCTP_V4_FLOW:
+	case AH_ESP_V4_FLOW:
+	case AH_V4_FLOW:
+	case ESP_V4_FLOW:
+	case IPV4_FLOW:
+		cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+		break;
+	case TCP_V6_FLOW:
+		cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		fallthrough;
+	case UDP_V6_FLOW:
+		if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+			cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		fallthrough;
+	case SCTP_V6_FLOW:
+	case AH_ESP_V6_FLOW:
+	case AH_V6_FLOW:
+	case ESP_V6_FLOW:
+	case IPV6_FLOW:
+		cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int txgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+			   u32 *rule_locs)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	int ret = -EOPNOTSUPP;
+
+	switch (cmd->cmd) {
+	case ETHTOOL_GRXRINGS:
+		cmd->data = adapter->num_rx_queues;
+		ret = 0;
+		break;
+	case ETHTOOL_GRXCLSRLCNT:
+		cmd->rule_cnt = adapter->fdir_filter_count;
+		ret = 0;
+		break;
+	case ETHTOOL_GRXCLSRULE:
+		ret = txgbe_get_ethtool_fdir_entry(adapter, cmd);
+		break;
+	case ETHTOOL_GRXCLSRLALL:
+		ret = txgbe_get_ethtool_fdir_all(adapter, cmd,
+						 (u32 *)rule_locs);
+		break;
+	case ETHTOOL_GRXFH:
+		ret = txgbe_get_rss_hash_opts(adapter, cmd);
+		break;
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+static int txgbe_update_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+					   struct txgbe_fdir_filter *input,
+					   u16 sw_idx)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	struct hlist_node *node, *parent;
+	struct txgbe_fdir_filter *rule;
+	bool deleted = false;
+	s32 err;
+
+	parent = NULL;
+	rule = NULL;
+
+	hlist_for_each_entry_safe(rule, node,
+				  &adapter->fdir_filter_list, fdir_node) {
+		/* hash found, or no matching entry */
+		if (rule->sw_idx >= sw_idx)
+			break;
+		parent = node;
+	}
+
+	/* if there is an old rule occupying our place remove it */
+	if (rule && rule->sw_idx == sw_idx) {
+		/* hardware filters are only configured when interface is up,
+		 * and we should not issue filter commands while the interface
+		 * is down
+		 */
+		if (netif_running(adapter->netdev) &&
+		    (!input || rule->filter.formatted.bkt_hash !=
+		     input->filter.formatted.bkt_hash)) {
+			err = txgbe_fdir_erase_perfect_filter(hw,
+							      &rule->filter,
+							      sw_idx);
+			if (err)
+				return -EINVAL;
+		}
+
+		hlist_del(&rule->fdir_node);
+		kfree(rule);
+		adapter->fdir_filter_count--;
+		deleted = true;
+	}
+
+	/* If we weren't given an input, then this was a request to delete a
+	 * filter. We should return -EINVAL if the filter wasn't found, but
+	 * return 0 if the rule was successfully deleted.
+	 */
+	if (!input)
+		return deleted ? 0 : -EINVAL;
+
+	/* initialize node and set software index */
+	INIT_HLIST_NODE(&input->fdir_node);
+
+	/* add filter to the list */
+	if (parent)
+		hlist_add_behind(&input->fdir_node, parent);
+	else
+		hlist_add_head(&input->fdir_node,
+			       &adapter->fdir_filter_list);
+
+	/* update counts */
+	adapter->fdir_filter_count++;
+
+	return 0;
+}
+
+static int txgbe_flowspec_to_flow_type(struct ethtool_rx_flow_spec *fsp,
+				       u8 *flow_type)
+{
+	switch (fsp->flow_type & ~FLOW_EXT) {
+	case TCP_V4_FLOW:
+		*flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4;
+		break;
+	case UDP_V4_FLOW:
+		*flow_type = TXGBE_ATR_FLOW_TYPE_UDPV4;
+		break;
+	case SCTP_V4_FLOW:
+		*flow_type = TXGBE_ATR_FLOW_TYPE_SCTPV4;
+		break;
+	case IP_USER_FLOW:
+		switch (fsp->h_u.usr_ip4_spec.proto) {
+		case IPPROTO_TCP:
+			*flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4;
+			break;
+		case IPPROTO_UDP:
+			*flow_type = TXGBE_ATR_FLOW_TYPE_UDPV4;
+			break;
+		case IPPROTO_SCTP:
+			*flow_type = TXGBE_ATR_FLOW_TYPE_SCTPV4;
+			break;
+		case 0:
+			if (!fsp->m_u.usr_ip4_spec.proto) {
+				*flow_type = TXGBE_ATR_FLOW_TYPE_IPV4;
+				break;
+			}
+			fallthrough;
+		default:
+			return 0;
+		}
+		break;
+	default:
+		return 0;
+	}
+
+	return 1;
+}
+
+static bool txgbe_match_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+					   struct txgbe_fdir_filter *input)
+{
+	struct hlist_node *node2;
+	struct txgbe_fdir_filter *rule = NULL;
+
+	hlist_for_each_entry_safe(rule, node2,
+				  &adapter->fdir_filter_list, fdir_node) {
+		if (rule->filter.formatted.bkt_hash ==
+		    input->filter.formatted.bkt_hash &&
+		    rule->action == input->action) {
+			txgbe_info(drv, "FDIR entry already exists\n");
+			return true;
+		}
+	}
+	return false;
+}
+
+static int txgbe_add_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+					struct ethtool_rxnfc *cmd)
+{
+	struct ethtool_rx_flow_spec *fsp =
+		(struct ethtool_rx_flow_spec *)&cmd->fs;
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_fdir_filter *input;
+	union txgbe_atr_input mask;
+	u8 queue;
+	int err = 0;
+	u16 ptype = 0;
+
+	if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+		return -EOPNOTSUPP;
+
+	/* ring_cookie is either the special drop index or it encodes
+	 * the target queue and pool
+	 */
+	if (fsp->ring_cookie == RX_CLS_FLOW_DISC) {
+		queue = TXGBE_RDB_FDIR_DROP_QUEUE;
+	} else {
+		u32 ring = ethtool_get_flow_spec_ring(fsp->ring_cookie);
+
+		if (ring >= adapter->num_rx_queues)
+			return -EINVAL;
+
+		/* Map the ring onto the absolute queue index */
+		queue = adapter->rx_ring[ring]->reg_idx;
+	}
+
+	/* Don't allow indexes to exist outside of available space */
+	if (fsp->location >= ((1024 << adapter->fdir_pballoc) - 2)) {
+		txgbe_err(drv, "Location out of range\n");
+		return -EINVAL;
+	}
+
+	input = kzalloc(sizeof(*input), GFP_ATOMIC);
+	if (!input)
+		return -ENOMEM;
+
+	memset(&mask, 0, sizeof(union txgbe_atr_input));
+
+	/* set SW index */
+	input->sw_idx = fsp->location;
+
+	/* record flow type */
+	if (!txgbe_flowspec_to_flow_type(fsp,
+					 &input->filter.formatted.flow_type)) {
+		txgbe_err(drv, "Unrecognized flow type\n");
+		goto err_out;
+	}
+
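+	/* match on the L4 type bits unless this is a plain IPv4 rule */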
+	mask.formatted.flow_type = TXGBE_ATR_L4TYPE_IPV6_MASK |
+				   TXGBE_ATR_L4TYPE_MASK;
+
+	if (input->filter.formatted.flow_type == TXGBE_ATR_FLOW_TYPE_IPV4)
+		mask.formatted.flow_type &= TXGBE_ATR_L4TYPE_IPV6_MASK;
+
+	/* Copy input into formatted structures */
+	input->filter.formatted.src_ip[0] = fsp->h_u.tcp_ip4_spec.ip4src;
+	mask.formatted.src_ip[0] = fsp->m_u.tcp_ip4_spec.ip4src;
+	input->filter.formatted.dst_ip[0] = fsp->h_u.tcp_ip4_spec.ip4dst;
+	mask.formatted.dst_ip[0] = fsp->m_u.tcp_ip4_spec.ip4dst;
+	input->filter.formatted.src_port = fsp->h_u.tcp_ip4_spec.psrc;
+	mask.formatted.src_port = fsp->m_u.tcp_ip4_spec.psrc;
+	input->filter.formatted.dst_port = fsp->h_u.tcp_ip4_spec.pdst;
+	mask.formatted.dst_port = fsp->m_u.tcp_ip4_spec.pdst;
+
+	if (fsp->flow_type & FLOW_EXT) {
+		input->filter.formatted.vm_pool =
+				(unsigned char)ntohl(fsp->h_ext.data[1]);
+		mask.formatted.vm_pool =
+				(unsigned char)ntohl(fsp->m_ext.data[1]);
+		input->filter.formatted.flex_bytes =
+						fsp->h_ext.vlan_etype;
+		mask.formatted.flex_bytes = fsp->m_ext.vlan_etype;
+	}
+
+	switch (input->filter.formatted.flow_type) {
+	case TXGBE_ATR_FLOW_TYPE_TCPV4:
+		ptype = TXGBE_PTYPE_L2_IPV4_TCP;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_UDPV4:
+		ptype = TXGBE_PTYPE_L2_IPV4_UDP;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_SCTPV4:
+		ptype = TXGBE_PTYPE_L2_IPV4_SCTP;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_IPV4:
+		ptype = TXGBE_PTYPE_L2_IPV4;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_TCPV6:
+		ptype = TXGBE_PTYPE_L2_IPV6_TCP;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_UDPV6:
+		ptype = TXGBE_PTYPE_L2_IPV6_UDP;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_SCTPV6:
+		ptype = TXGBE_PTYPE_L2_IPV6_SCTP;
+		break;
+	case TXGBE_ATR_FLOW_TYPE_IPV6:
+		ptype = TXGBE_PTYPE_L2_IPV6;
+		break;
+	default:
+		break;
+	}
+
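+	/* the packet type is carried in the otherwise unused vlan_id
+	 * field of the ATR input
+	 */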
+	input->filter.formatted.vlan_id = htons(ptype);
+	if (mask.formatted.flow_type & TXGBE_ATR_L4TYPE_MASK)
+		mask.formatted.vlan_id = 0xFFFF;
+	else
+		mask.formatted.vlan_id = htons(0xFFF8);
+
+	/* determine if we need to drop or route the packet */
+	if (fsp->ring_cookie == RX_CLS_FLOW_DISC)
+		input->action = TXGBE_RDB_FDIR_DROP_QUEUE;
+	else
+		input->action = fsp->ring_cookie;
+
+	spin_lock(&adapter->fdir_perfect_lock);
+
+	if (hlist_empty(&adapter->fdir_filter_list)) {
+		/* save mask and program input mask into HW */
+		memcpy(&adapter->fdir_mask, &mask, sizeof(mask));
+		err = txgbe_fdir_set_input_mask(hw, &mask,
+						adapter->cloud_mode);
+		if (err) {
+			txgbe_err(drv, "Error writing mask\n");
+			goto err_out_w_lock;
+		}
+	} else if (memcmp(&adapter->fdir_mask, &mask, sizeof(mask))) {
+		txgbe_err(drv,
+			  "Hardware only supports one mask per port. To change the mask you must first delete all the rules.\n");
+		goto err_out_w_lock;
+	}
+
+	/* apply mask and compute/store hash */
+	txgbe_atr_compute_perfect_hash(&input->filter, &mask);
+
+	/* check if new entry does not exist on filter list */
+	if (txgbe_match_ethtool_fdir_entry(adapter, input))
+		goto err_out_w_lock;
+
+	/* only program filters to hardware if the net device is running, as
+	 * we store the filters in the Rx buffer which is not allocated when
+	 * the device is down
+	 */
+	if (netif_running(adapter->netdev)) {
+		err = txgbe_fdir_write_perfect_filter(hw,
+						      &input->filter, input->sw_idx, queue,
+						      adapter->cloud_mode);
+		if (err)
+			goto err_out_w_lock;
+	}
+
+	txgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
+
+	spin_unlock(&adapter->fdir_perfect_lock);
+
+	return err;
+err_out_w_lock:
+	spin_unlock(&adapter->fdir_perfect_lock);
+err_out:
+	kfree(input);
+	return -EINVAL;
+}
+
+static int txgbe_del_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+					struct ethtool_rxnfc *cmd)
+{
+	struct ethtool_rx_flow_spec *fsp =
+		(struct ethtool_rx_flow_spec *)&cmd->fs;
+	int err;
+
+	spin_lock(&adapter->fdir_perfect_lock);
+	err = txgbe_update_ethtool_fdir_entry(adapter, NULL, fsp->location);
+	spin_unlock(&adapter->fdir_perfect_lock);
+
+	return err;
+}
+
+#define UDP_RSS_FLAGS (TXGBE_FLAG2_RSS_FIELD_IPV4_UDP | \
+		       TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+static int txgbe_set_rss_hash_opt(struct txgbe_adapter *adapter,
+				  struct ethtool_rxnfc *nfc)
+{
+	u32 flags2 = adapter->flags2;
+
+	/* RSS does not support anything other than hashing
+	 * to queues on src and dst IPs and ports
+	 */
+	if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
+			  RXH_L4_B_0_1 | RXH_L4_B_2_3))
+		return -EINVAL;
+
+	switch (nfc->flow_type) {
+	case TCP_V4_FLOW:
+	case TCP_V6_FLOW:
+		if (!(nfc->data & RXH_IP_SRC) ||
+		    !(nfc->data & RXH_IP_DST) ||
+		    !(nfc->data & RXH_L4_B_0_1) ||
+		    !(nfc->data & RXH_L4_B_2_3))
+			return -EINVAL;
+		break;
+	case UDP_V4_FLOW:
+		if (!(nfc->data & RXH_IP_SRC) ||
+		    !(nfc->data & RXH_IP_DST))
+			return -EINVAL;
+		switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+		case 0:
+			flags2 &= ~TXGBE_FLAG2_RSS_FIELD_IPV4_UDP;
+			break;
+		case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+			flags2 |= TXGBE_FLAG2_RSS_FIELD_IPV4_UDP;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	case UDP_V6_FLOW:
+		if (!(nfc->data & RXH_IP_SRC) ||
+		    !(nfc->data & RXH_IP_DST))
+			return -EINVAL;
+		switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+		case 0:
+			flags2 &= ~TXGBE_FLAG2_RSS_FIELD_IPV6_UDP;
+			break;
+		case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+			flags2 |= TXGBE_FLAG2_RSS_FIELD_IPV6_UDP;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	case AH_ESP_V4_FLOW:
+	case AH_V4_FLOW:
+	case ESP_V4_FLOW:
+	case SCTP_V4_FLOW:
+	case AH_ESP_V6_FLOW:
+	case AH_V6_FLOW:
+	case ESP_V6_FLOW:
+	case SCTP_V6_FLOW:
+		if (!(nfc->data & RXH_IP_SRC) ||
+		    !(nfc->data & RXH_IP_DST) ||
+		    (nfc->data & RXH_L4_B_0_1) ||
+		    (nfc->data & RXH_L4_B_2_3))
+			return -EINVAL;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* if we changed something we need to update flags */
+	if (flags2 != adapter->flags2) {
+		struct txgbe_hw *hw = &adapter->hw;
+		u32 mrqc;
+
+		mrqc = rd32(hw, TXGBE_RDB_RA_CTL);
+
+		if ((flags2 & UDP_RSS_FLAGS) &&
+		    !(adapter->flags2 & UDP_RSS_FLAGS))
+			txgbe_warn(drv,
+				   "enabling UDP RSS: fragmented packets may arrive out of order to the stack above\n");
+
+		adapter->flags2 = flags2;
+
+		/* Perform hash on these packet types */
+		mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV4
+		      | TXGBE_RDB_RA_CTL_RSS_IPV4_TCP
+		      | TXGBE_RDB_RA_CTL_RSS_IPV6
+		      | TXGBE_RDB_RA_CTL_RSS_IPV6_TCP;
+
+		mrqc &= ~(TXGBE_RDB_RA_CTL_RSS_IPV4_UDP |
+			  TXGBE_RDB_RA_CTL_RSS_IPV6_UDP);
+
+		if (flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP)
+			mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV4_UDP;
+
+		if (flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+			mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV6_UDP;
+
+		wr32(hw, TXGBE_RDB_RA_CTL, mrqc);
+	}
+
+	return 0;
+}
+
+static int txgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	int ret = -EOPNOTSUPP;
+
+	switch (cmd->cmd) {
+	case ETHTOOL_SRXCLSRLINS:
+		ret = txgbe_add_ethtool_fdir_entry(adapter, cmd);
+		break;
+	case ETHTOOL_SRXCLSRLDEL:
+		ret = txgbe_del_ethtool_fdir_entry(adapter, cmd);
+		break;
+	case ETHTOOL_SRXFH:
+		ret = txgbe_set_rss_hash_opt(adapter, cmd);
+		break;
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+static u32 txgbe_get_rxfh_key_size(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	return sizeof(adapter->rss_key);
+}
+
+static u32 txgbe_rss_indir_size(struct net_device *netdev)
+{
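+	/* the device redirection table has a fixed 128 entries */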
+	return 128;
+}
+
+static void txgbe_get_reta(struct txgbe_adapter *adapter, u32 *indir)
+{
+	int i, reta_size = 128;
+
+	for (i = 0; i < reta_size; i++)
+		indir[i] = adapter->rss_indir_tbl[i];
+}
+
+static int txgbe_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
+			  u8 *hfunc)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	if (hfunc)
+		*hfunc = ETH_RSS_HASH_TOP;
+
+	if (indir)
+		txgbe_get_reta(adapter, indir);
+
+	if (key)
+		memcpy(key, adapter->rss_key, txgbe_get_rxfh_key_size(netdev));
+
+	return 0;
+}
+
+static int txgbe_set_rxfh(struct net_device *netdev, const u32 *indir,
+			  const u8 *key, const u8 hfunc)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	int i;
+	u32 reta_entries = 128;
+
+	if (hfunc)
+		return -EINVAL;
+
+	/* Fill out the redirection table */
+	if (indir) {
+		int max_queues = min_t(int, adapter->num_rx_queues,
+				       TXGBE_RSS_INDIR_TBL_MAX);
+
+		/* Verify user input. */
+		for (i = 0; i < reta_entries; i++)
+			if (indir[i] >= max_queues)
+				return -EINVAL;
+
+		for (i = 0; i < reta_entries; i++)
+			adapter->rss_indir_tbl[i] = indir[i];
+	}
+
+	/* Fill out the rss hash key */
+	if (key)
+		memcpy(adapter->rss_key, key, txgbe_get_rxfh_key_size(netdev));
+
+	txgbe_store_reta(adapter);
+
+	return 0;
+}
+
+static int txgbe_get_ts_info(struct net_device *dev,
+			     struct ethtool_ts_info *info)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+
+	/* we always support timestamping disabled */
+	info->rx_filters = 1 << HWTSTAMP_FILTER_NONE;
+
+	info->so_timestamping =
+		SOF_TIMESTAMPING_TX_SOFTWARE |
+		SOF_TIMESTAMPING_RX_SOFTWARE |
+		SOF_TIMESTAMPING_SOFTWARE |
+		SOF_TIMESTAMPING_TX_HARDWARE |
+		SOF_TIMESTAMPING_RX_HARDWARE |
+		SOF_TIMESTAMPING_RAW_HARDWARE;
+
+	if (adapter->ptp_clock)
+		info->phc_index = ptp_clock_index(adapter->ptp_clock);
+	else
+		info->phc_index = -1;
+
+	info->tx_types =
+		(1 << HWTSTAMP_TX_OFF) |
+		(1 << HWTSTAMP_TX_ON);
+
+	info->rx_filters |=
+		(1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+		(1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
+		(1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
+
+	return 0;
+}
+
+static unsigned int txgbe_max_channels(struct txgbe_adapter *adapter)
+{
+	unsigned int max_combined;
+
+	if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) {
+		/* We only support one q_vector without MSI-X */
+		max_combined = 1;
+	} else if (adapter->atr_sample_rate) {
+		/* support up to 64 queues with ATR */
+		max_combined = TXGBE_MAX_FDIR_INDICES;
+	} else {
+		/* support up to max allowed queues with RSS */
+		max_combined = TXGBE_MAX_RSS_INDICES;
+	}
+
+	return max_combined;
+}
+
+static void txgbe_get_channels(struct net_device *dev,
+			       struct ethtool_channels *ch)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+
+	/* report maximum channels */
+	ch->max_combined = txgbe_max_channels(adapter);
+
+	/* report info for other vector */
+	if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+		ch->max_other = NON_Q_VECTORS;
+		ch->other_count = NON_Q_VECTORS;
+	}
+
+	/* record RSS queues */
+	ch->combined_count = adapter->ring_feature[RING_F_RSS].indices;
+
+	/* nothing else to report if RSS is disabled */
+	if (ch->combined_count == 1)
+		return;
+
+	/* if ATR is disabled we can exit */
+	if (!adapter->atr_sample_rate)
+		return;
+
+	/* report flow director queues as maximum channels */
+	ch->combined_count = adapter->ring_feature[RING_F_FDIR].indices;
+}
+
+static int txgbe_set_channels(struct net_device *dev,
+			      struct ethtool_channels *ch)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	unsigned int count = ch->combined_count;
+	u8 max_rss_indices = TXGBE_MAX_RSS_INDICES;
+
+	/* verify they are not requesting separate vectors */
+	if (!count || ch->rx_count || ch->tx_count)
+		return -EINVAL;
+
+	/* verify other_count has not changed */
+	if (ch->other_count != NON_Q_VECTORS)
+		return -EINVAL;
+
+	/* verify the number of channels does not exceed hardware limits */
+	if (count > txgbe_max_channels(adapter))
+		return -EINVAL;
+
+	/* update feature limits from largest to smallest supported values */
+	adapter->ring_feature[RING_F_FDIR].limit = count;
+
+	/* cap RSS limit */
+	if (count > max_rss_indices)
+		count = max_rss_indices;
+	adapter->ring_feature[RING_F_RSS].limit = count;
+
+	/* use setup TC to update any traffic class queue mapping */
+	return txgbe_setup_tc(dev, netdev_get_num_tc(dev));
+}
+
+static int txgbe_get_module_info(struct net_device *dev,
+				 struct ethtool_modinfo *modinfo)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 status;
+	u8 sff8472_rev, addr_mode;
+	bool page_swap = false;
+
+	/* Check whether we support SFF-8472 or not */
+	status = TCALL(hw, phy.ops.read_i2c_eeprom,
+		       TXGBE_SFF_SFF_8472_COMP,
+		       &sff8472_rev);
+	if (status != 0)
+		return -EIO;
+
+	/* check whether the module uses an unsupported addressing mode */
+	status = TCALL(hw, phy.ops.read_i2c_eeprom,
+		       TXGBE_SFF_SFF_8472_SWAP,
+		       &addr_mode);
+	if (status != 0)
+		return -EIO;
+
+	if (addr_mode & TXGBE_SFF_ADDRESSING_MODE) {
+		txgbe_err(drv,
+			  "Address change required to access page 0xA2, but not supported. Please report the module type to the driver maintainers.\n");
+		page_swap = true;
+	}
+
+	if (sff8472_rev == TXGBE_SFF_SFF_8472_UNSUP || page_swap) {
+		/* We have an SFP, but it does not support SFF-8472 */
+		modinfo->type = ETH_MODULE_SFF_8079;
+		modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
+	} else {
+		/* We have an SFP which supports a revision of SFF-8472. */
+		modinfo->type = ETH_MODULE_SFF_8472;
+		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+	}
+
+	return 0;
+}
+
+static int txgbe_get_module_eeprom(struct net_device *dev,
+				   struct ethtool_eeprom *ee,
+				   u8 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 status = TXGBE_ERR_PHY_ADDR_INVALID;
+	u8 databyte = 0xFF;
+	int i = 0;
+
+	if (ee->len == 0)
+		return -EINVAL;
+
+	for (i = ee->offset; i < ee->offset + ee->len; i++) {
+		/* I2C reads can take a long time */
+		if (test_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+			return -EBUSY;
+
+		if (i < ETH_MODULE_SFF_8079_LEN)
+			status = TCALL(hw, phy.ops.read_i2c_eeprom, i,
+				       &databyte);
+		else
+			status = TCALL(hw, phy.ops.read_i2c_sff8472, i,
+				       &databyte);
+
+		if (status != 0)
+			return -EIO;
+
+		data[i - ee->offset] = databyte;
+	}
+
+	return 0;
+}
+
+static int txgbe_set_flash(struct net_device *netdev, struct ethtool_flash *ef)
+{
+	int ret;
+	const struct firmware *fw;
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	ret = request_firmware(&fw, ef->data, &netdev->dev);
+	if (ret < 0)
+		return ret;
+
+	if (ef->region == 0) {
+		ret = txgbe_upgrade_flash(&adapter->hw, ef->region,
+					  fw->data, fw->size);
+	} else {
+		if (txgbe_mng_present(&adapter->hw))
+			ret = txgbe_upgrade_flash_hostif(&adapter->hw,
+							 ef->region,
+							 fw->data, fw->size);
+		else
+			ret = -EOPNOTSUPP;
+	}
+
+	release_firmware(fw);
+	if (!ret)
+		dev_info(&netdev->dev,
+			 "loaded firmware %s, reboot to make firmware work\n",
+			 ef->data);
+	return ret;
+}
+
+static const struct ethtool_ops txgbe_ethtool_ops = {
+	.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS,
+	.get_link_ksettings     = txgbe_get_link_ksettings,
+	.set_link_ksettings     = txgbe_set_link_ksettings,
+	.get_drvinfo            = txgbe_get_drvinfo,
+	.get_regs_len           = txgbe_get_regs_len,
+	.get_regs               = txgbe_get_regs,
+	.get_wol                = txgbe_get_wol,
+	.set_wol                = txgbe_set_wol,
+	.nway_reset             = txgbe_nway_reset,
+	.get_link               = ethtool_op_get_link,
+	.get_eeprom_len         = txgbe_get_eeprom_len,
+	.get_eeprom             = txgbe_get_eeprom,
+	.set_eeprom             = txgbe_set_eeprom,
+	.get_ringparam          = txgbe_get_ringparam,
+	.set_ringparam          = txgbe_set_ringparam,
+	.get_pauseparam         = txgbe_get_pauseparam,
+	.set_pauseparam         = txgbe_set_pauseparam,
+	.get_msglevel           = txgbe_get_msglevel,
+	.set_msglevel           = txgbe_set_msglevel,
+	.self_test              = txgbe_diag_test,
+	.get_strings            = txgbe_get_strings,
+	.set_phys_id            = txgbe_set_phys_id,
+	.get_sset_count         = txgbe_get_sset_count,
+	.get_ethtool_stats      = txgbe_get_ethtool_stats,
+	.get_coalesce           = txgbe_get_coalesce,
+	.set_coalesce           = txgbe_set_coalesce,
+	.get_rxnfc              = txgbe_get_rxnfc,
+	.set_rxnfc              = txgbe_set_rxnfc,
+	.get_channels           = txgbe_get_channels,
+	.set_channels           = txgbe_set_channels,
+	.get_module_info        = txgbe_get_module_info,
+	.get_module_eeprom      = txgbe_get_module_eeprom,
+	.get_ts_info            = txgbe_get_ts_info,
+	.get_rxfh_indir_size    = txgbe_rss_indir_size,
+	.get_rxfh_key_size      = txgbe_get_rxfh_key_size,
+	.get_rxfh               = txgbe_get_rxfh,
+	.set_rxfh               = txgbe_set_rxfh,
+	.flash_device           = txgbe_set_flash,
+};
+
+void txgbe_set_ethtool_ops(struct net_device *netdev)
+{
+	netdev->ethtool_ops = &txgbe_ethtool_ops;
+}
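
(For context: .flash_device above is the hook behind `ethtool -f`. A
minimal userspace sketch of the same request via the ETHTOOL_FLASHDEV
ioctl follows; the interface and firmware file names are placeholders,
and the file must be visible to request_firmware(), typically under
/lib/firmware. Region 0 selects the direct SPI path,
txgbe_upgrade_flash(), per txgbe_set_flash() above.)

	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <linux/ethtool.h>
	#include <linux/sockios.h>

	static int flash_txgbe(const char *ifname, const char *fw_file,
			       __u32 region)
	{
		struct ethtool_flash ef = {
			.cmd = ETHTOOL_FLASHDEV,
			.region = region,
		};
		struct ifreq ifr = {};
		int fd, ret;

		fd = socket(AF_INET, SOCK_DGRAM, 0);
		if (fd < 0)
			return -1;
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		strncpy(ef.data, fw_file, ETHTOOL_FLASH_MAX_FILENAME - 1);
		ifr.ifr_data = (void *)&ef;
		ret = ioctl(fd, SIOCETHTOOL, &ifr);
		close(fd);
		return ret;
	}
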
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 040ddb0e46fd..35279209c51d 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -158,6 +158,46 @@ s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ * txgbe_device_supports_autoneg_fc - Check if device supports autonegotiation
+ * of flow control
+ * @hw: pointer to hardware structure
+ *
+ * This function returns true if the device supports flow control
+ * autonegotiation, and false if it does not.
+ *
+ **/
+bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw)
+{
+	bool supported = false;
+	u32 speed;
+	bool link_up;
+	u8 device_type = hw->subsystem_id & 0xF0;
+
+	switch (hw->phy.media_type) {
+	case txgbe_media_type_fiber:
+		TCALL(hw, mac.ops.check_link, &speed, &link_up, false);
+		/* if link is down, assume supported */
+		if (link_up)
+			supported = (speed == TXGBE_LINK_SPEED_1GB_FULL);
+		else
+			supported = true;
+		break;
+	case txgbe_media_type_backplane:
+		supported = (device_type != TXGBE_ID_MAC_XAUI &&
+			     device_type != TXGBE_ID_MAC_SGMII);
+		break;
+	default:
+		break;
+	}
+
+	if (!supported)
+		ERROR_REPORT2(TXGBE_ERROR_UNSUPPORTED,
+			      "Device %x does not support flow control autoneg",
+			      hw->device_id);
+	return supported;
+}
+
 /**
  *  txgbe_setup_fc - Set up flow control
  *  @hw: pointer to hardware structure
@@ -1903,6 +1943,337 @@ s32 txgbe_reset_hostif(struct txgbe_hw *hw)
 	return status;
 }
 
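+/* MSB-first CRC-16, polynomial 0x1021, zero seed (the XMODEM variant;
+ * the check string "123456789" yields 0x31c3)
+ */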
+u16 txgbe_crc16_ccitt(const u8 *buf, int size)
+{
+	u16 crc = 0;
+	int i;
+
+	while (--size >= 0) {
+		crc ^= (u16)*buf++ << 8;
+		for (i = 0; i < 8; i++) {
+			if (crc & 0x8000)
+				crc = crc << 1 ^ 0x1021;
+			else
+				crc <<= 1;
+		}
+	}
+	return crc;
+}
+
+s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region,
+			       const u8 *data, u32 size)
+{
+	struct txgbe_hic_upg_start start_cmd;
+	struct txgbe_hic_upg_write write_cmd;
+	struct txgbe_hic_upg_verify verify_cmd;
+	u32 offset;
+	s32 status = 0;
+
+	start_cmd.hdr.cmd = FW_FLASH_UPGRADE_START_CMD;
+	start_cmd.hdr.buf_len = FW_FLASH_UPGRADE_START_LEN;
+	start_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+	start_cmd.module_id = (u8)region;
+	start_cmd.pad2 = 0;
+	start_cmd.pad3 = 0;
+	start_cmd.hdr.checksum = 0;
+	start_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&start_cmd,
+				(FW_CEM_HDR_LEN + start_cmd.hdr.buf_len));
+
+	status = txgbe_host_interface_command(hw, (u32 *)&start_cmd,
+					      sizeof(start_cmd),
+					      TXGBE_HI_FLASH_ERASE_TIMEOUT,
+					      true);
+
+	if (start_cmd.hdr.cmd_or_resp.ret_status == FW_CEM_RESP_STATUS_SUCCESS) {
+		status = 0;
+	} else {
+		status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+		return status;
+	}
+
+	for (offset = 0; offset < size;) {
+		write_cmd.hdr.cmd = FW_FLASH_UPGRADE_WRITE_CMD;
+		if (size - offset > 248) {
+			write_cmd.data_len = 248 / 4;
+			write_cmd.eof_flag = 0;
+		} else {
+			write_cmd.data_len = (u8)((size - offset) / 4);
+			write_cmd.eof_flag = 1;
+		}
+		memcpy((u8 *)write_cmd.data, &data[offset], write_cmd.data_len * 4);
+		write_cmd.hdr.buf_len = (write_cmd.data_len + 1) * 4;
+		write_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+		write_cmd.check_sum = txgbe_crc16_ccitt((u8 *)write_cmd.data,
+							write_cmd.data_len * 4);
+
+		status = txgbe_host_interface_command(hw, (u32 *)&write_cmd,
+						      sizeof(write_cmd),
+						      TXGBE_HI_FLASH_UPDATE_TIMEOUT,
+						      true);
+		if (write_cmd.hdr.cmd_or_resp.ret_status ==
+						FW_CEM_RESP_STATUS_SUCCESS) {
+			status = 0;
+		} else {
+			status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+			return status;
+		}
+		offset += write_cmd.data_len * 4;
+	}
+
+	verify_cmd.hdr.cmd = FW_FLASH_UPGRADE_VERIFY_CMD;
+	verify_cmd.hdr.buf_len = FW_FLASH_UPGRADE_VERIFY_LEN;
+	verify_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+	switch (region) {
+	case TXGBE_MODULE_EEPROM:
+		verify_cmd.action_flag = TXGBE_RELOAD_EEPROM;
+		break;
+	case TXGBE_MODULE_FIRMWARE:
+		verify_cmd.action_flag = TXGBE_RESET_FIRMWARE;
+		break;
+	case TXGBE_MODULE_HARDWARE:
+		verify_cmd.action_flag = TXGBE_RESET_LAN;
+		break;
+	default:
+		return status;
+	}
+
+	verify_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&verify_cmd,
+				(FW_CEM_HDR_LEN + verify_cmd.hdr.buf_len));
+
+	status = txgbe_host_interface_command(hw, (u32 *)&verify_cmd,
+					      sizeof(verify_cmd),
+					      TXGBE_HI_FLASH_VERIFY_TIMEOUT,
+					      true);
+
+	if (verify_cmd.hdr.cmd_or_resp.ret_status == FW_CEM_RESP_STATUS_SUCCESS)
+		status = 0;
+	else
+		status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+	return status;
+}
+
+/**
+ * cmd_addr carries the address operand for the commands that take one:
+ *	 1. the sector address for the erase-sector command
+ *	 2. the flash address for the read-dword/write-dword commands
+ * e.g. erasing the sector at 0x10000 writes (SPI_CMD_ERASE_SECTOR <<
+ * SPI_CLK_CMD_OFFSET) | (SPI_CLK_DIV << SPI_CLK_DIV_OFFSET) | 0x10000
+ * = 0x34010000 to the SPI command register.
+ **/
+u8 fmgr_cmd_op(struct txgbe_hw *hw, u32 cmd, u32 cmd_addr)
+{
+	u32 cmd_val = 0;
+	u32 time_out = 0;
+
+	cmd_val = (cmd << SPI_CLK_CMD_OFFSET) |
+		  (SPI_CLK_DIV << SPI_CLK_DIV_OFFSET) | cmd_addr;
+	wr32(hw, SPI_H_CMD_REG_ADDR, cmd_val);
+	while (1) {
+		if (rd32(hw, SPI_H_STA_REG_ADDR) & 0x1)
+			break;
+
+		if (time_out == SPI_TIME_OUT_VALUE)
+			return 1;
+
+		time_out = time_out + 1;
+		usleep_range(10, 20);
+	}
+
+	return 0;
+}
+
+u8 fmgr_usr_cmd_op(struct txgbe_hw *hw, u32 usr_cmd)
+{
+	u8 status = 0;
+
+	wr32(hw, SPI_H_USR_CMD_REG_ADDR, usr_cmd);
+	status = fmgr_cmd_op(hw, SPI_CMD_USER_CMD, 0);
+
+	return status;
+}
+
+u8 flash_erase_chip(struct txgbe_hw *hw)
+{
+	return fmgr_cmd_op(hw, SPI_CMD_ERASE_CHIP, 0);
+}
+
+u8 flash_erase_sector(struct txgbe_hw *hw, u32 sec_addr)
+{
+	return fmgr_cmd_op(hw, SPI_CMD_ERASE_SECTOR, sec_addr);
+}
+
+u32 flash_read_dword(struct txgbe_hw *hw, u32 addr)
+{
+	u8 status = fmgr_cmd_op(hw, SPI_CMD_READ_DWORD, addr);
+
+	if (status)
+		return (u32)status;
+
+	return rd32(hw, SPI_H_DAT_REG_ADDR);
+}
+
+u8 flash_write_dword(struct txgbe_hw *hw, u32 addr, u32 dword)
+{
+	u8 status = 0;
+
+	wr32(hw, SPI_H_DAT_REG_ADDR, dword);
+	status = fmgr_cmd_op(hw, SPI_CMD_WRITE_DWORD, addr);
+	if (status)
+		return status;
+
+	if (dword != flash_read_dword(hw, addr))
+		return 1;
+
+	return 0;
+}
+
+int txgbe_upgrade_flash(struct txgbe_hw *hw, u32 region,
+			const u8 *data, u32 size)
+{
+	u32 sector_num = 0;
+	u32 read_data = 0;
+	u8 status = 0;
+	u8 skip = 0;
+	u32 i = 0;
+	u8 flash_vendor = 0;
+	u32 mac_addr0_dword0_t;
+	u32 mac_addr0_dword1_t;
+	u32 mac_addr1_dword0_t;
+	u32 mac_addr1_dword1_t;
+	u32 serial_num_dword0_t;
+	u32 serial_num_dword1_t;
+	u32 serial_num_dword2_t;
+
+	/* check sub_id */
+	DEBUGOUT("Checking sub_id .......\n");
+	DEBUGOUT1("The card's sub_id : %04x\n", hw->subsystem_id);
+	DEBUGOUT1("The image's sub_id : %04x\n",
+		  data[0xfffdc] << 8 | data[0xfffdd]);
+	if ((hw->subsystem_id & 0xfff) == ((data[0xfffdc] << 8 |
+					    data[0xfffdd]) & 0xfff)) {
+		DEBUGOUT("It is a right image\n");
+	} else if (hw->subsystem_id == 0xffff) {
+		DEBUGOUT("update anyway\n");
+	} else {
+		DEBUGOUT("====The Gigabit image is not match the Gigabit card====\n");
+		DEBUGOUT("====Please check your image====\n");
+		return -EOPNOTSUPP;
+	}
+
+	/* check dev_id */
+	DEBUGOUT("Checking dev_id .......\n");
+	DEBUGOUT1("The image's dev_id : %04x\n",
+		  data[0xfffde] << 8 | data[0xfffdf]);
+	DEBUGOUT1("The card's dev_id : %04x\n", hw->device_id);
+	if (!((hw->device_id & 0xfff0) == ((data[0xfffde] << 8 | data[0xfffdf]) & 0xfff0)) &&
+	    !(hw->device_id == 0xffff)) {
+		DEBUGOUT("====The Gigabit image is not match the Gigabit card====\n");
+		DEBUGOUT("====Please check your image====\n");
+		return -EOPNOTSUPP;
+	}
+
+	/* unlock flash write protect */
+	wr32(hw, TXGBE_SPI_CMDCFG0, 0x9f050206);
+	wr32(hw, 0x10194, 0x9f050206);
+
+	msleep(1000);
+
+	mac_addr0_dword0_t = flash_read_dword(hw, MAC_ADDR0_WORD0_OFFSET_1G);
+	mac_addr0_dword1_t = flash_read_dword(hw, MAC_ADDR0_WORD1_OFFSET_1G) &
+				0xffff;
+	mac_addr1_dword0_t = flash_read_dword(hw, MAC_ADDR1_WORD0_OFFSET_1G);
+	mac_addr1_dword1_t = flash_read_dword(hw, MAC_ADDR1_WORD1_OFFSET_1G) &
+				0xffff;
+
+	serial_num_dword0_t = flash_read_dword(hw,
+					       PRODUCT_SERIAL_NUM_OFFSET_1G);
+	serial_num_dword1_t = flash_read_dword(hw,
+					       PRODUCT_SERIAL_NUM_OFFSET_1G + 4);
+	serial_num_dword2_t = flash_read_dword(hw,
+					       PRODUCT_SERIAL_NUM_OFFSET_1G + 8);
+	DEBUGOUT2("Old: MAC Address0 is: 0x%04x%08x\n",
+		  mac_addr0_dword1_t, mac_addr0_dword0_t);
+	DEBUGOUT2("     MAC Address1 is: 0x%04x%08x\n",
+		  mac_addr1_dword1_t, mac_addr1_dword0_t);
+
+	status = fmgr_usr_cmd_op(hw, 0x6);  /* write enable */
+	status = fmgr_usr_cmd_op(hw, 0x98); /* global protection un-lock */
+	msleep(1000);
+
+	/* Note: for Spansion flash, the first 8 sub-sectors (4KB each) of
+	 * sector 0 (64KB) must be erased with the dedicated 4K-sector
+	 * erase command.
+	 */
+	if (flash_vendor == 1) {
+		for (i = 0; i < 8; i++) {
+			flash_erase_sector(hw, i * 128);
+			msleep(20); /* 20 ms */
+		}
+		wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c7d8);
+	}
+
+	sector_num = size / SPI_SECTOR_SIZE;
+
+	/* Winbond flash: the erase-chip command works,
+	 * but erase-sector does not.
+	 */
+	if (flash_vendor == 2) {
+		status = flash_erase_chip(hw);
+		DEBUGOUT1("Erase chip command, return status = %0d\n", status);
+		msleep(1000);
+	} else {
+		wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c720);
+		for (i = 0; i < sector_num; i++) {
+			status = flash_erase_sector(hw, i * SPI_SECTOR_SIZE);
+			DEBUGOUT2("Erase sector[%2d] command, return status = %0d\n",
+				  i, status);
+			msleep(50);
+		}
+		wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c7d8);
+	}
+
+	/* Program Image file in dword */
+	for (i = 0; i < size / 4; i++) {
+		/* image bytes are little-endian; assembling them this way
+		 * already yields the CPU-order value
+		 */
+		read_data = data[4 * i + 3] << 24 | data[4 * i + 2] << 16 |
+				data[4 * i + 1] << 8 | data[4 * i];
+		skip = ((i * 4 == MAC_ADDR0_WORD0_OFFSET_1G) ||
+			(i * 4 == MAC_ADDR0_WORD1_OFFSET_1G) ||
+			(i * 4 == MAC_ADDR1_WORD0_OFFSET_1G) ||
+			(i * 4 == MAC_ADDR1_WORD1_OFFSET_1G) ||
+			(i * 4 >= PRODUCT_SERIAL_NUM_OFFSET_1G &&
+			 i * 4 <= PRODUCT_SERIAL_NUM_OFFSET_1G + 8));
+		if (read_data != 0xffffffff && !skip) {
+			status = flash_write_dword(hw, i * 4, read_data);
+			if (status) {
+				DEBUGOUT2("ERROR: Program 0x%08x @addr: 0x%08x is failed !!\n",
+					  read_data, i * 4);
+				read_data = flash_read_dword(hw, i * 4);
+				DEBUGOUT1("Read data from Flash is: 0x%08x\n",
+					  read_data);
+				return 1;
+			}
+		}
+		if (i % 1024 == 0)
+			DEBUGOUT1("\b\b\b\b%3d%%", (int)(i * 4 * 100 / size));
+	}
+	flash_write_dword(hw, MAC_ADDR0_WORD0_OFFSET_1G,
+			  mac_addr0_dword0_t);
+	flash_write_dword(hw, MAC_ADDR0_WORD1_OFFSET_1G,
+			  mac_addr0_dword1_t | 0x80000000); /* lan0 */
+	flash_write_dword(hw, MAC_ADDR1_WORD0_OFFSET_1G,
+			  mac_addr1_dword0_t);
+	flash_write_dword(hw, MAC_ADDR1_WORD1_OFFSET_1G,
+			  mac_addr1_dword1_t | 0x80000000); /* lan1 */
+	flash_write_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G,
+			  serial_num_dword0_t);
+	flash_write_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G + 4,
+			  serial_num_dword1_t);
+	flash_write_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G + 8,
+			  serial_num_dword2_t);
+
+	return 0;
+}
+
 /**
  * txgbe_set_rxpba - Initialize Rx packet buffer
  * @hw: pointer to hardware structure
@@ -2633,6 +3004,7 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 
 	/* PHY */
 	phy->ops.read_i2c_byte = txgbe_read_i2c_byte;
+	phy->ops.read_i2c_sff8472 = txgbe_read_i2c_sff8472;
 	phy->ops.read_i2c_eeprom = txgbe_read_i2c_eeprom;
 	phy->ops.identify_sfp = txgbe_identify_module;
 	phy->ops.check_overtemp = txgbe_check_overtemp;
@@ -2694,6 +3066,9 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	eeprom->ops.calc_checksum = txgbe_calc_eeprom_checksum;
 	eeprom->ops.read = txgbe_read_ee_hostif;
 	eeprom->ops.read_buffer = txgbe_read_ee_hostif_buffer;
+	eeprom->ops.write = txgbe_write_ee_hostif;
+	eeprom->ops.write_buffer = txgbe_write_ee_hostif_buffer;
+	eeprom->ops.update_checksum = txgbe_update_eeprom_checksum;
 	eeprom->ops.validate_checksum = txgbe_validate_eeprom_checksum;
 
 	/* Manageability interface */
@@ -4674,6 +5049,43 @@ s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw,
 	return 0;
 }
 
+s32 txgbe_fdir_erase_perfect_filter(struct txgbe_hw *hw,
+				    union txgbe_atr_input *input,
+				    u16 soft_id)
+{
+	u32 fdirhash;
+	u32 fdircmd;
+	s32 err;
+
+	/* configure FDIRHASH register */
+	fdirhash = input->formatted.bkt_hash;
+	fdirhash |= soft_id << TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT;
+	wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash);
+
+	/* flush hash to HW */
+	TXGBE_WRITE_FLUSH(hw);
+
+	/* Query if filter is present */
+	wr32(hw, TXGBE_RDB_FDIR_CMD,
+	     TXGBE_RDB_FDIR_CMD_CMD_QUERY_REM_FILT);
+
+	err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+	if (err) {
+		DEBUGOUT("Flow Director command did not complete!\n");
+		return err;
+	}
+
+	/* if filter exists in hardware then remove it */
+	if (fdircmd & TXGBE_RDB_FDIR_CMD_FILTER_VALID) {
+		wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash);
+		TXGBE_WRITE_FLUSH(hw);
+		wr32(hw, TXGBE_RDB_FDIR_CMD,
+		     TXGBE_RDB_FDIR_CMD_CMD_REMOVE_FLOW);
+	}
+
+	return 0;
+}
+
 /**
  *  txgbe_start_hw - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
@@ -4959,6 +5371,102 @@ s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
 	return status;
 }
 
+/**
+ *  txgbe_write_ee_hostif_data - Write EEPROM word using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of the word in the EEPROM to write
+ *  @data: word to write to the EEPROM
+ *
+ *  Write a 16 bit word to the EEPROM using the hostif.
+ **/
+s32 txgbe_write_ee_hostif_data(struct txgbe_hw *hw, u16 offset,
+			       u16 data)
+{
+	s32 status;
+	struct txgbe_hic_write_shadow_ram buffer;
+
+	buffer.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD;
+	buffer.hdr.req.buf_lenh = 0;
+	buffer.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN;
+	buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+	/* one word */
+	buffer.length = TXGBE_CPU_TO_BE16(sizeof(u16));
+	buffer.data = data;
+	buffer.address = TXGBE_CPU_TO_BE32(offset * 2);
+
+	status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+					      sizeof(buffer),
+					      TXGBE_HI_COMMAND_TIMEOUT, false);
+
+	return status;
+}
+
+/**
+ *  txgbe_write_ee_hostif - Write EEPROM word using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of the word in the EEPROM to write
+ *  @data: word to write to the EEPROM
+ *
+ *  Write a 16 bit word to the EEPROM using the hostif.
+ **/
+s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset,
+			  u16 data)
+{
+	s32 status = 0;
+
+	if (TCALL(hw, mac.ops.acquire_swfw_sync,
+		  TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) {
+		status = txgbe_write_ee_hostif_data(hw, offset, data);
+		TCALL(hw, mac.ops.release_swfw_sync,
+		      TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+	} else {
+		DEBUGOUT("write ee hostif failed to get semaphore");
+		status = TXGBE_ERR_SWFW_SYNC;
+	}
+
+	return status;
+}
+
+/**
+ *  txgbe_write_ee_hostif_buffer - Write EEPROM word(s) using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of the first word in the EEPROM to write
+ *  @words: number of words
+ *  @data: word(s) to write to the EEPROM
+ *
+ *  Write 16 bit word(s) to the EEPROM using the hostif.
+ **/
+s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw,
+				 u16 offset, u16 words, u16 *data)
+{
+	s32 status = 0;
+	u16 i = 0;
+
+	/* Take semaphore for the entire operation. */
+	status = TCALL(hw, mac.ops.acquire_swfw_sync,
+		       TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+	if (status != 0) {
+		DEBUGOUT("EEPROM write buffer - semaphore failed\n");
+		goto out;
+	}
+
+	for (i = 0; i < words; i++) {
+		status = txgbe_write_ee_hostif_data(hw, offset + i,
+						    data[i]);
+
+		if (status != 0) {
+			DEBUGOUT("EEPROM buffered write failed\n");
+			break;
+		}
+	}
+
+	TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+out:
+
+	return status;
+}
+
 s32 txgbe_close_notify(struct txgbe_hw *hw)
 {
 	int tmp;
@@ -5078,6 +5586,39 @@ s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw)
 	return (s32)checksum;
 }
 
+/**
+ * txgbe_update_eeprom_checksum - Updates the EEPROM checksum and flash
+ * @hw: pointer to hardware structure
+ **/
+s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw)
+{
+	s32 status;
+	u16 checksum = 0;
+
+	/* Read the first word from the EEPROM. If this times out or fails, do
+	 * not continue or we could be in for a very long wait while every
+	 * EEPROM read fails
+	 */
+	status = txgbe_read_ee_hostif(hw, 0, &checksum);
+	if (status) {
+		DEBUGOUT("EEPROM read failed\n");
+		return status;
+	}
+
+	status = txgbe_calc_eeprom_checksum(hw);
+	if (status < 0)
+		return status;
+
+	checksum = (u16)(status & 0xffff);
+
+	status = txgbe_write_ee_hostif(hw, TXGBE_EEPROM_CHECKSUM,
+				       checksum);
+
+	return status;
+}
+
 /**
  *  txgbe_validate_eeprom_checksum - Validate EEPROM checksum
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index e6b78292c60b..eb1e189fbaa5 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -4,6 +4,31 @@
 #ifndef _TXGBE_HW_H_
 #define _TXGBE_HW_H_
 
+#define SPI_CLK_DIV                        2
+
+#define SPI_CMD_ERASE_CHIP                 4  // SPI erase chip command
+#define SPI_CMD_ERASE_SECTOR               3  // SPI erase sector command
+#define SPI_CMD_WRITE_DWORD                0  // SPI write a dword command
+#define SPI_CMD_READ_DWORD                 1  // SPI read a dword command
+#define SPI_CMD_USER_CMD                   5  // SPI user command
+
+#define SPI_CLK_CMD_OFFSET                28  // SPI command field offset in Command register
+#define SPI_CLK_DIV_OFFSET                25  // SPI clock divide field offset in Command register
+
+#define SPI_TIME_OUT_VALUE           10000
+#define SPI_SECTOR_SIZE           (4 * 1024)  // FLASH sector size is 4KB
+#define SPI_H_CMD_REG_ADDR           0x10104  // SPI Command register address
+#define SPI_H_DAT_REG_ADDR           0x10108  // SPI Data register address
+#define SPI_H_STA_REG_ADDR           0x1010c  // SPI Status register address
+#define SPI_H_USR_CMD_REG_ADDR       0x10110  // SPI User Command register address
+#define SPI_CMD_CFG1_ADDR            0x10118  // Flash command configuration register 1
+
+#define MAC_ADDR0_WORD0_OFFSET_1G    0x006000c  // MAC Address for LAN0, stored in external FLASH
+#define MAC_ADDR0_WORD1_OFFSET_1G    0x0060014
+#define MAC_ADDR1_WORD0_OFFSET_1G    0x007000c  // MAC Address for LAN1, stored in external FLASH
+#define MAC_ADDR1_WORD1_OFFSET_1G    0x0070014
+#define PRODUCT_SERIAL_NUM_OFFSET_1G    0x00f0000
+
 /**
  * Packet Type decoding
  **/
@@ -91,6 +116,7 @@ s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw);
 s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
 
 s32 txgbe_fc_enable(struct txgbe_hw *hw);
+bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw);
 void txgbe_fc_autoneg(struct txgbe_hw *hw);
 s32 txgbe_setup_fc(struct txgbe_hw *hw);
 
@@ -143,6 +169,9 @@ s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw,
 s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw,
 				    union txgbe_atr_input *input,
 				    u16 soft_id, u8 queue, bool cloud_mode);
+s32 txgbe_fdir_erase_perfect_filter(struct txgbe_hw *hw,
+				    union txgbe_atr_input *input,
+				    u16 soft_id);
 s32 txgbe_fdir_add_perfect_filter(struct txgbe_hw *hw,
 				  union txgbe_atr_input *input,
 				  union txgbe_atr_input *mask,
@@ -174,9 +203,16 @@ s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval);
 s32 txgbe_init_ops(struct txgbe_hw *hw);
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
+s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw);
 s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
 s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
 				   u16 *checksum_val);
+int txgbe_upgrade_flash(struct txgbe_hw *hw, u32 region,
+			const u8 *data, u32 size);
+s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw,
+				 u16 offset, u16 words, u16 *data);
+s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset,
+			  u16 data);
 s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
 				u16 offset, u16 words, u16 *data);
 s32 txgbe_read_ee_hostif_data(struct txgbe_hw *hw, u16 offset, u16 *data);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index b225417c27d5..0e5400dbdd9a 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -155,6 +155,79 @@ static void txgbe_remove_adapter(struct txgbe_hw *hw)
 		txgbe_service_event_schedule(adapter);
 }
 
+static void txgbe_check_remove(struct txgbe_hw *hw, u32 reg)
+{
+	u32 value;
+
+	/* If the failed read was the status register itself, skip the
+	 * confirmation read: it saves a redundant access and, more
+	 * importantly, blocks any potential recursion.
+	 */
+	if (reg == TXGBE_CFG_PORT_ST) {
+		txgbe_remove_adapter(hw);
+		return;
+	}
+	value = rd32(hw, TXGBE_CFG_PORT_ST);
+	if (value == TXGBE_FAILED_READ_REG)
+		txgbe_remove_adapter(hw);
+}
+
+static u32 txgbe_validate_register_read(struct txgbe_hw *hw, u32 reg, bool quiet)
+{
+	int i;
+	u32 value;
+	u8 __iomem *reg_addr;
+	struct txgbe_adapter *adapter = hw->back;
+
+	reg_addr = READ_ONCE(hw->hw_addr);
+	if (TXGBE_REMOVED(reg_addr))
+		return TXGBE_FAILED_READ_REG;
+	for (i = 0; i < TXGBE_DEAD_READ_RETRIES; ++i) {
+		value = txgbe_rd32(reg_addr + reg);
+		if (value != TXGBE_DEAD_READ_REG)
+			break;
+	}
+	if (quiet)
+		return value;
+	if (value == TXGBE_DEAD_READ_REG)
+		txgbe_err(drv, "%s: register %x read unchanged\n", __func__, reg);
+	else
+		txgbe_warn(hw, "%s: register %x read recovered after %d retries\n",
+			   __func__, reg, i + 1);
+	return value;
+}
+
+/**
+ * txgbe_read_reg - Read from device register
+ * @hw: hw specific details
+ * @reg: offset of register to read
+ *
+ * Returns: value read or TXGBE_FAILED_READ_REG if removed
+ *
+ * This function is used to read device registers. When a read returns
+ * all ones, it confirms device removal by re-checking the status
+ * register. If a removal was previously detected, it avoids touching
+ * the hardware and returns TXGBE_FAILED_READ_REG (all ones) directly.
+ */
+u32 txgbe_read_reg(struct txgbe_hw *hw, u32 reg, bool quiet)
+{
+	u32 value;
+	u8 __iomem *reg_addr;
+
+	reg_addr = READ_ONCE(hw->hw_addr);
+	if (TXGBE_REMOVED(reg_addr))
+		return TXGBE_FAILED_READ_REG;
+	value = txgbe_rd32(reg_addr + reg);
+	if (unlikely(value == TXGBE_FAILED_READ_REG))
+		txgbe_check_remove(hw, reg);
+	if (unlikely(value == TXGBE_DEAD_READ_REG))
+		value = txgbe_validate_register_read(hw, reg, quiet);
+	return value;
+}
+
 static void txgbe_release_hw_control(struct txgbe_adapter *adapter)
 {
 	/* Let firmware take over control of hw */
@@ -3854,6 +3927,10 @@ int txgbe_open(struct net_device *netdev)
 	struct txgbe_adapter *adapter = netdev_priv(netdev);
 	int err;
 
+	/* disallow open during test */
+	if (test_bit(__TXGBE_TESTING, &adapter->state))
+		return -EBUSY;
+
 	netif_carrier_off(netdev);
 
 	/* allocate transmit descriptors */
@@ -3970,6 +4047,8 @@ static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 {
 	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
 	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 wufc = adapter->wol;
 
 	netif_device_detach(netdev);
 
@@ -3980,6 +4059,27 @@ static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 	txgbe_clear_interrupt_scheme(adapter);
 
+	if (wufc) {
+		txgbe_set_rx_mode(netdev);
+		txgbe_configure_rx(adapter);
+		/* enable the optics for SFP+ fiber as we can WoL */
+		TCALL(hw, mac.ops.enable_tx_laser);
+
+		/* turn on all-multi mode if wake on multicast is enabled */
+		if (wufc & TXGBE_PSR_WKUP_CTL_MC) {
+			wr32m(hw, TXGBE_PSR_CTL,
+			      TXGBE_PSR_CTL_MPE, TXGBE_PSR_CTL_MPE);
+		}
+
+		pci_clear_master(adapter->pdev);
+		wr32(hw, TXGBE_PSR_WKUP_CTL, wufc);
+	} else {
+		wr32(hw, TXGBE_PSR_WKUP_CTL, 0);
+	}
+
+	pci_wake_from_d3(pdev, !!wufc);
+
+	*enable_wake = !!wufc;
 	txgbe_release_hw_control(adapter);
 
 	if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
@@ -5720,6 +5820,37 @@ static int txgbe_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
 	}
 }
 
+/**
+ * txgbe_setup_tc - configure a net_device for multiple traffic classes
+ *
+ * @dev: net device to configure
+ * @tc: number of traffic classes to enable
+ */
+int txgbe_setup_tc(struct net_device *dev, u8 tc)
+{
+	struct txgbe_adapter *adapter = netdev_priv(dev);
+
+	/* Hardware has to reinitialize queues and interrupts to
+	 * match packet buffer alignment. Unfortunately, the
+	 * hardware is not flexible enough to do this dynamically.
+	 */
+	if (netif_running(dev))
+		txgbe_close(dev);
+	else
+		txgbe_reset(adapter);
+
+	txgbe_clear_interrupt_scheme(adapter);
+
+	netdev_reset_tc(dev);
+
+	txgbe_init_interrupt_scheme(adapter);
+	if (netif_running(dev))
+		txgbe_open(dev);
+
+	return 0;
+}
+
 void txgbe_do_reset(struct net_device *netdev)
 {
 	struct txgbe_adapter *adapter = netdev_priv(netdev);
@@ -5904,6 +6035,29 @@ static const struct net_device_ops txgbe_netdev_ops = {
 void txgbe_assign_netdev_ops(struct net_device *dev)
 {
 	dev->netdev_ops = &txgbe_netdev_ops;
+	txgbe_set_ethtool_ops(dev);
+}
+
+/**
+ * txgbe_wol_supported - Check whether device supports WoL
+ * @adapter: the adapter private structure
+ *
+ * This function is used by probe and ethtool to determine
+ * which devices have WoL support
+ *
+ **/
+int txgbe_wol_supported(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 wol_cap = adapter->eeprom_cap & TXGBE_DEVICE_CAPS_WOL_MASK;
+
+	/* check eeprom to see if WOL is enabled */
+	return (wol_cap == TXGBE_DEVICE_CAPS_WOL_PORT0_1 ||
+		(wol_cap == TXGBE_DEVICE_CAPS_WOL_PORT0 &&
+		 hw->bus.func == 0));
 }
 
 /**
@@ -6117,6 +6271,21 @@ static int txgbe_probe(struct pci_dev *pdev,
 	if (err)
 		goto err_sw_init;
 
+	/* WOL not supported for all devices */
+	adapter->wol = 0;
+	TCALL(hw, eeprom.ops.read,
+	      hw->eeprom.sw_region_offset + TXGBE_DEVICE_CAPS,
+	      &adapter->eeprom_cap);
+
+	if ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP &&
+	    hw->bus.lan_id == 0) {
+		adapter->wol = TXGBE_PSR_WKUP_CTL_MAG;
+		wr32(hw, TXGBE_PSR_WKUP_CTL, adapter->wol);
+	}
+	hw->wol_enabled = !!(adapter->wol);
+
+	device_set_wakeup_enable(pci_dev_to_dev(adapter->pdev), adapter->wol);
+
 	/* Save off EEPROM version number and Option Rom version which
 	 * together make a unique identify for the eeprom
 	 */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
index 1e3137506812..c2fd45c8cfa5 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
@@ -319,6 +319,22 @@ s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 		     eeprom_data);
 }
 
+/**
+ *  txgbe_read_i2c_sff8472 - Reads 8 bit word over I2C interface
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset at address 0xA2
+ *  @eeprom_data: value read
+ *
+ *  Performs byte read operation to SFP module's SFF-8472 data over I2C
+ **/
+s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset,
+			   u8 *sff8472_data)
+{
+	return TCALL(hw, phy.ops.read_i2c_byte, byte_offset,
+		     TXGBE_I2C_EEPROM_DEV_ADDR2,
+		     sff8472_data);
+}
+
 /**
  *  txgbe_read_i2c_byte_int - Reads 8 bit word over I2C
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
index 3efb4abef03a..a8a4adfaf9fb 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
@@ -7,6 +7,7 @@
 #include "txgbe.h"
 
 #define TXGBE_I2C_EEPROM_DEV_ADDR       0xA0
+#define TXGBE_I2C_EEPROM_DEV_ADDR2      0xA2
 
 /* EEPROM byte offsets */
 #define TXGBE_SFF_IDENTIFIER            0x0
@@ -18,6 +19,8 @@
 #define TXGBE_SFF_10GBE_COMP_CODES      0x3
 #define TXGBE_SFF_CABLE_TECHNOLOGY      0x8
 #define TXGBE_SFF_CABLE_SPEC_COMP       0x3C
+#define TXGBE_SFF_SFF_8472_SWAP         0x5C
+#define TXGBE_SFF_SFF_8472_COMP         0x5E
 
 /* Bitmasks */
 #define TXGBE_SFF_DA_PASSIVE_CABLE      0x4
@@ -28,6 +31,8 @@
 #define TXGBE_SFF_1GBASET_CAPABLE       0x8
 #define TXGBE_SFF_10GBASESR_CAPABLE     0x10
 #define TXGBE_SFF_10GBASELR_CAPABLE     0x20
+#define TXGBE_SFF_ADDRESSING_MODE       0x4
+
 /* Bit-shift macros */
 #define TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT        24
 #define TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT        16
@@ -39,6 +44,9 @@
 #define TXGBE_SFF_VENDOR_OUI_AVAGO      0x00176A00
 #define TXGBE_SFF_VENDOR_OUI_INTEL      0x001B2100
 
+/* SFP+ SFF-8472 Compliance */
+#define TXGBE_SFF_SFF_8472_UNSUP        0x00
+
 s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
 
 s32 txgbe_identify_module(struct txgbe_hw *hw);
@@ -51,5 +59,7 @@ s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
 
 s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 			  u8 *eeprom_data);
+s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset,
+			   u8 *sff8472_data);
 
 #endif /* _TXGBE_PHY_H_ */
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index 9416f80d07f0..97abd60eabb1 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -1906,6 +1906,10 @@ enum txgbe_l2_ptypes {
 #define TXGBE_TXD_TUNNEL_UDP            (0x0ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT)
 #define TXGBE_TXD_TUNNEL_GRE            (0x1ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT)
 
+/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
+#define TXGBE_REQ_TX_DESCRIPTOR_MULTIPLE        8
+#define TXGBE_REQ_RX_DESCRIPTOR_MULTIPLE        8
+
 /* Transmit Descriptor */
 union txgbe_tx_desc {
 	struct {
@@ -2042,6 +2046,9 @@ union txgbe_atr_hash_dword {
 #define TXGBE_HI_MAX_BLOCK_BYTE_LENGTH  256 /* Num of bytes in range */
 #define TXGBE_HI_MAX_BLOCK_DWORD_LENGTH 64 /* Num of dwords in range */
 #define TXGBE_HI_COMMAND_TIMEOUT        5000 /* Process HI command limit */
+#define TXGBE_HI_FLASH_ERASE_TIMEOUT    5000 /* Process Erase command limit */
+#define TXGBE_HI_FLASH_UPDATE_TIMEOUT   5000 /* Process Update command limit */
+#define TXGBE_HI_FLASH_VERIFY_TIMEOUT   60000 /* Process Apply command limit */
 
 /* CEM Support */
 #define FW_CEM_HDR_LEN                  0x4
@@ -2059,6 +2066,11 @@ union txgbe_atr_hash_dword {
 #define FW_MAX_READ_BUFFER_SIZE         244
 #define FW_RESET_CMD                    0xDF
 #define FW_RESET_LEN                    0x2
+#define FW_FLASH_UPGRADE_START_CMD      0xE3
+#define FW_FLASH_UPGRADE_START_LEN      0x1
+#define FW_FLASH_UPGRADE_WRITE_CMD      0xE4
+#define FW_FLASH_UPGRADE_VERIFY_CMD     0xE5
+#define FW_FLASH_UPGRADE_VERIFY_LEN     0x4
 #define FW_DW_OPEN_NOTIFY               0xE9
 #define FW_DW_CLOSE_NOTIFY              0xEA
 
@@ -2131,6 +2143,40 @@ struct txgbe_hic_reset {
 	u16 reset_type;
 };
 
+enum txgbe_module_id {
+	TXGBE_MODULE_EEPROM = 0,
+	TXGBE_MODULE_FIRMWARE,
+	TXGBE_MODULE_HARDWARE,
+	TXGBE_MODULE_PCIE
+};
+
+struct txgbe_hic_upg_start {
+	struct txgbe_hic_hdr hdr;
+	u8 module_id;
+	u8  pad2;
+	u16 pad3;
+};
+
+struct txgbe_hic_upg_write {
+	struct txgbe_hic_hdr hdr;
+	u8 data_len;
+	u8 eof_flag;
+	u16 check_sum;
+	u32 data[62];
+};
+
+enum txgbe_upg_flag {
+	TXGBE_RESET_NONE = 0,
+	TXGBE_RESET_FIRMWARE,
+	TXGBE_RELOAD_EEPROM,
+	TXGBE_RESET_LAN
+};
+
+struct txgbe_hic_upg_verify {
+	struct txgbe_hic_hdr hdr;
+	u32 action_flag;
+};
+
 /* Number of 100 microseconds we wait for PCI Express master disable */
 #define TXGBE_PCI_MASTER_DISABLE_TIMEOUT        800
 
@@ -2434,7 +2480,11 @@ struct txgbe_eeprom_operations {
 	s32 (*read)(struct txgbe_hw *hw, u16 offset, u16 *data);
 	s32 (*read_buffer)(struct txgbe_hw *hw,
 			   u16 offset, u16 words, u16 *data);
+	s32 (*write)(struct txgbe_hw *hw, u16 offset, u16 data);
+	s32 (*write_buffer)(struct txgbe_hw *hw,
+			    u16 offset, u16 words, u16 *data);
 	s32 (*validate_checksum)(struct txgbe_hw *hw, u16 *checksum_val);
+	s32 (*update_checksum)(struct txgbe_hw *hw);
 	s32 (*calc_checksum)(struct txgbe_hw *hw);
 };
 
@@ -2513,6 +2563,8 @@ struct txgbe_phy_operations {
 	s32 (*init)(struct txgbe_hw *hw);
 	s32 (*read_i2c_byte)(struct txgbe_hw *hw, u8 byte_offset,
 			     u8 dev_addr, u8 *data);
+	s32 (*read_i2c_sff8472)(struct txgbe_hw *hw, u8 byte_offset,
+				u8 *sff8472_data);
 	s32 (*read_i2c_eeprom)(struct txgbe_hw *hw, u8 byte_offset,
 			       u8 *eeprom_data);
 	s32 (*check_overtemp)(struct txgbe_hw *hw);
@@ -2599,6 +2651,7 @@ struct txgbe_hw {
 	bool adapter_stopped;
 	enum txgbe_reset_type reset_type;
 	bool force_full_reset;
+	bool wol_enabled;
 	enum txgbe_link_status link_status;
 	u16 subsystem_id;
 	u16 tpid[8];
-- 
2.27.0





* [PATCH net-next 11/14] net: txgbe: Support PCIe recovery
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (9 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 10/14] net: txgbe: Add ethtool support Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 12/14] net: txgbe: Support power management Jiawen Wu
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

When a Tx hang is detected on a queue, or a PCIe request error is
found, the driver performs PCIe recovery. This path is only built
when CONFIG_TXGBE_PCI_RECOVER is enabled.
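
For reference, the CONFIG_TXGBE_PCI_RECOVER symbol used by the
Makefile and the #ifdefs below is assumed to be declared elsewhere in
the series; a minimal sketch of such a Kconfig entry (wording
illustrative):

	config TXGBE_PCI_RECOVER
		bool "Wangxun 10GbE PCIe recovery support"
		depends on TXGBE
		help
		  Attempt a secondary bus reset and AER-style recovery
		  when a Tx hang or a PCIe request error is detected.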

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   2 +
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |   1 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  79 ++++++
 .../net/ethernet/wangxun/txgbe/txgbe_pcierr.c | 236 ++++++++++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_pcierr.h |   8 +
 5 files changed, 326 insertions(+)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.c
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.h

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index 43e5d23109d7..7fe32feeded8 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -9,3 +9,5 @@ obj-$(CONFIG_TXGBE) += txgbe.o
 txgbe-objs := txgbe_main.o txgbe_ethtool.o \
               txgbe_hw.o txgbe_phy.o \
               txgbe_lib.o txgbe_ptp.o
+
+txgbe-$(CONFIG_TXGBE_PCI_RECOVER) += txgbe_pcierr.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 709805f7743c..fe51bdf96f3e 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -385,6 +385,7 @@ struct txgbe_mac_addr {
 #define TXGBE_FLAG2_RSS_FIELD_IPV6_UDP          (1U << 9)
 #define TXGBE_FLAG2_RSS_ENABLED                 (1U << 10)
 #define TXGBE_FLAG2_FDIR_REQUIRES_REINIT        (1U << 11)
+#define TXGBE_FLAG2_PCIE_NEED_RECOVER           (1U << 12)
 
 #define TXGBE_SET_FLAG(_input, _flag, _result) \
 	((_flag <= _result) ? \
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 0e5400dbdd9a..ab222bf9e828 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -23,6 +23,7 @@
 #include "txgbe.h"
 #include "txgbe_hw.h"
 #include "txgbe_phy.h"
+#include "txgbe_pcierr.h"
 
 char txgbe_driver_name[32] = TXGBE_NAME;
 static const char txgbe_driver_string[] =
@@ -416,6 +417,16 @@ static void txgbe_tx_timeout(struct net_device *netdev, unsigned int txqueue)
 		wr32(&adapter->hw, TXGBE_PX_ICS(1), value3);
 		wr32(&adapter->hw, TXGBE_PX_IMC(1), value3);
 	}
+
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+	ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. do pcie recovery.\n");
+	if (adapter->hw.bus.lan_id == 0) {
+		adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+		txgbe_service_event_schedule(adapter);
+	} else {
+		wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1);
+	}
+#endif
 }
 
 /**
@@ -550,6 +561,16 @@ static bool txgbe_clean_tx_irq(struct txgbe_q_vector *q_vector,
 			   "tx hang %d detected on queue %d, resetting adapter\n",
 			   adapter->tx_timeout_count + 1, tx_ring->queue_index);
 
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+		/* schedule immediate reset if we believe we hung */
+		txgbe_info(hw, "real tx hang. do pcie recovery.\n");
+		if (adapter->hw.bus.lan_id == 0) {
+			adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+			txgbe_service_event_schedule(adapter);
+		} else {
+			wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1);
+		}
+#endif
 		/* the adapter is about to reset, no point in enabling stuff */
 		return true;
 	}
@@ -1593,12 +1614,34 @@ static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
 	struct txgbe_hw *hw = &adapter->hw;
 	u32 eicr;
 	u32 ecc;
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+	u16 pci_val = 0;
+#endif
 
 	eicr = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
 
 	if (eicr & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
 		txgbe_check_lsc(adapter);
 
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+	if (eicr & TXGBE_PX_MISC_IC_PCIE_REQ_ERR) {
+		ERROR_REPORT1(TXGBE_ERROR_POLLING,
+			      "lan id %d, PCIe request error founded.\n", hw->bus.lan_id);
+
+		pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &pci_val);
+		ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci vendor id is 0x%x\n", pci_val);
+
+		pci_read_config_word(adapter->pdev, PCI_COMMAND, &pci_val);
+		ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci command reg is 0x%x.\n", pci_val);
+
+		if (hw->bus.lan_id == 0) {
+			adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+			txgbe_service_event_schedule(adapter);
+		} else {
+			wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1);
+		}
+	}
+#endif
 	if (eicr & TXGBE_PX_MISC_IC_INT_ERR) {
 		txgbe_info(link, "Received unrecoverable ECC Err, initiating reset.\n");
 		ecc = rd32(hw, TXGBE_MIS_ST);
@@ -4654,6 +4697,9 @@ static void txgbe_service_timer(struct timer_list *t)
 	struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer);
 	unsigned long next_event_offset;
 	struct txgbe_hw *hw = &adapter->hw;
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+	u32 val = 0;
+#endif
 
 	/* poll faster when waiting for link */
 	if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
@@ -4665,6 +4711,23 @@ static void txgbe_service_timer(struct timer_list *t)
 		next_event_offset = HZ * 2;
 	}
 
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+	if (rd32(&adapter->hw, TXGBE_MIS_PF_SM) == 1) {
+		val = rd32m(&adapter->hw, TXGBE_MIS_PRB_CTL,
+			    TXGBE_MIS_PRB_CTL_LAN0_UP | TXGBE_MIS_PRB_CTL_LAN1_UP);
+		if (val & TXGBE_MIS_PRB_CTL_LAN0_UP) {
+			if (hw->bus.lan_id == 0) {
+				adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+				txgbe_info(probe, "%s: set recover on Lan0\n", __func__);
+			}
+		} else if (val & TXGBE_MIS_PRB_CTL_LAN1_UP) {
+			if (hw->bus.lan_id == 1) {
+				adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+				txgbe_info(probe, "%s: set recover on Lan1\n", __func__);
+			}
+		}
+	}
+#endif
 	/* Reset the timer */
 	mod_timer(&adapter->service_timer, next_event_offset + jiffies);
 
@@ -4749,6 +4812,19 @@ static void txgbe_reset_subtask(struct txgbe_adapter *adapter)
 	rtnl_unlock();
 }
 
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+static void txgbe_check_pcie_subtask(struct txgbe_adapter *adapter)
+{
+	if (!(adapter->flags2 & TXGBE_FLAG2_PCIE_NEED_RECOVER))
+		return;
+
+	txgbe_info(probe, "do recovery\n");
+	wr32m(&adapter->hw, TXGBE_MIS_PF_SM, TXGBE_MIS_PF_SM_SM, 0);
+	txgbe_pcie_do_recovery(adapter->pdev);
+	adapter->flags2 &= ~TXGBE_FLAG2_PCIE_NEED_RECOVER;
+}
+#endif
+
 /**
  * txgbe_service_task - manages and runs subtasks
  * @work: pointer to work_struct containing our data
@@ -4768,6 +4844,9 @@ static void txgbe_service_task(struct work_struct *work)
 		return;
 	}
 
+#ifdef CONFIG_TXGBE_PCI_RECOVER
+	txgbe_check_pcie_subtask(adapter);
+#endif
 	txgbe_reset_subtask(adapter);
 	txgbe_sfp_detection_subtask(adapter);
 	txgbe_sfp_link_config_subtask(adapter);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.c
new file mode 100644
index 000000000000..833e5be4540f
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include "txgbe_pcierr.h"
+#include "txgbe.h"
+
+#define TXGBE_ROOT_PORT_INTR_ON_MESG_MASK (PCI_ERR_ROOT_CMD_COR_EN |      \
+					   PCI_ERR_ROOT_CMD_NONFATAL_EN | \
+					   PCI_ERR_ROOT_CMD_FATAL_EN)
+
+#ifndef PCI_ERS_RESULT_NO_AER_DRIVER
+/* No AER capabilities registered for the driver */
+#define PCI_ERS_RESULT_NO_AER_DRIVER ((__force pci_ers_result_t)6)
+#endif
+
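+/* Combine two recovery votes; PCI_ERS_RESULT_NO_AER_DRIVER is sticky and
+ * wins outright, e.g. merging CAN_RECOVER with NEED_RESET yields NEED_RESET.
+ */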
+static pci_ers_result_t merge_result(enum pci_ers_result orig,
+				     enum pci_ers_result new)
+{
+	if (new == PCI_ERS_RESULT_NO_AER_DRIVER)
+		return PCI_ERS_RESULT_NO_AER_DRIVER;
+	if (new == PCI_ERS_RESULT_NONE)
+		return orig;
+	switch (orig) {
+	case PCI_ERS_RESULT_CAN_RECOVER:
+	case PCI_ERS_RESULT_RECOVERED:
+		orig = new;
+		break;
+	case PCI_ERS_RESULT_DISCONNECT:
+		if (new == PCI_ERS_RESULT_NEED_RESET)
+			orig = PCI_ERS_RESULT_NEED_RESET;
+		break;
+	default:
+		break;
+	}
+	return orig;
+}
+
+static int txgbe_report_error_detected(struct pci_dev *dev,
+				       pci_channel_state_t state,
+				       enum pci_ers_result *result)
+{
+	pci_ers_result_t vote;
+	const struct pci_error_handlers *err_handler;
+
+	device_lock(&dev->dev);
+	if (!dev->driver ||
+	    !dev->driver->err_handler ||
+	    !dev->driver->err_handler->error_detected) {
+		/* If any device in the subtree does not have an error_detected
+		 * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent
+		 * error callbacks of "any" device in the subtree, and will
+		 * exit in the disconnected error state.
+		 */
+		if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
+			vote = PCI_ERS_RESULT_NO_AER_DRIVER;
+		else
+			vote = PCI_ERS_RESULT_NONE;
+	} else {
+		err_handler = dev->driver->err_handler;
+		vote = err_handler->error_detected(dev, state);
+	}
+
+	*result = merge_result(*result, vote);
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+static int txgbe_report_frozen_detected(struct pci_dev *dev, void *data)
+{
+	return txgbe_report_error_detected(dev, pci_channel_io_frozen, data);
+}
+
+static int txgbe_report_mmio_enabled(struct pci_dev *dev, void *data)
+{
+	pci_ers_result_t vote, *result = data;
+	const struct pci_error_handlers *err_handler;
+
+	device_lock(&dev->dev);
+	if (!dev->driver ||
+	    !dev->driver->err_handler ||
+	    !dev->driver->err_handler->mmio_enabled)
+		goto out;
+
+	err_handler = dev->driver->err_handler;
+	vote = err_handler->mmio_enabled(dev);
+	*result = merge_result(*result, vote);
+out:
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+static int txgbe_report_slot_reset(struct pci_dev *dev, void *data)
+{
+	pci_ers_result_t vote, *result = data;
+	const struct pci_error_handlers *err_handler;
+
+	device_lock(&dev->dev);
+	if (!dev->driver ||
+	    !dev->driver->err_handler ||
+	    !dev->driver->err_handler->slot_reset)
+		goto out;
+
+	err_handler = dev->driver->err_handler;
+	vote = err_handler->slot_reset(dev);
+	*result = merge_result(*result, vote);
+out:
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+static int txgbe_report_resume(struct pci_dev *dev, void *data)
+{
+	const struct pci_error_handlers *err_handler;
+
+	device_lock(&dev->dev);
+	dev->error_state = pci_channel_io_normal;
+	if (!dev->driver ||
+	    !dev->driver->err_handler ||
+	    !dev->driver->err_handler->resume)
+		goto out;
+
+	err_handler = dev->driver->err_handler;
+	err_handler->resume(dev);
+out:
+	device_unlock(&dev->dev);
+	return 0;
+}
+
+void txgbe_pcie_do_recovery(struct pci_dev *dev)
+{
+	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
+	struct pci_bus *bus;
+	u32 reg32;
+	int pos;
+	int delay = 1;
+	u32 id;
+	u16 ctrl;
+
+	/* Error recovery runs on all subordinates of the first downstream port.
+	 * If the downstream port detected the error, it is cleared at the end.
+	 */
+	if (!(pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+	      pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM))
+		dev = dev->bus->self;
+	bus = dev->subordinate;
+
+	pci_walk_bus(bus, txgbe_report_frozen_detected, &status);
+	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
+	if (pos) {
+		/* Disable Root's interrupt in response to error messages */
+		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, &reg32);
+		reg32 &= ~TXGBE_ROOT_PORT_INTR_ON_MESG_MASK;
+		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
+	}
+
+	pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &ctrl);
+	ctrl |= PCI_BRIDGE_CTL_BUS_RESET;
+	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
+
+	/* PCI spec v3.0 7.6.4.2 requires minimum Trst of 1ms. Double
+	 * this to 2ms to ensure that we meet the minimum requirement.
+	 */
+
+	usleep_range(2000, 3000);
+	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
+
+	/* Trhfa for conventional PCI is 2^25 clock cycles.
+	 * Assuming a minimum 33MHz clock this results in a 1s
+	 * delay before we can consider subordinate devices to
+	 * be re-initialized.  PCIe has some ways to shorten this,
+	 * but we don't make use of them yet.
+	 */
+	ssleep(1);
+
+	pci_read_config_dword(dev, PCI_COMMAND, &id);
+	while (id == ~0) {
+		if (delay > 60000) {
+			pci_warn(dev, "not ready %dms after %s; giving up\n",
+				 delay - 1, "bus_reset");
+			return;
+		}
+
+		if (delay > 1000)
+			pci_info(dev, "not ready %dms after %s; waiting\n",
+				 delay - 1, "bus_reset");
+
+		msleep(delay);
+		delay *= 2;
+		pci_read_config_dword(dev, PCI_COMMAND, &id);
+	}
+
+	if (delay > 1000)
+		pci_info(dev, "ready %dms after %s\n", delay - 1,
+			 "bus_reset");
+
+	pci_info(dev, "Root Port link has been reset\n");
+
+	if (pos) {
+		/* Clear Root Error Status */
+		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
+		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, reg32);
+
+		/* Enable Root Port's interrupt in response to error messages */
+		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, &reg32);
+		reg32 |= TXGBE_ROOT_PORT_INTR_ON_MESG_MASK;
+		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
+	}
+
+	if (status == PCI_ERS_RESULT_CAN_RECOVER) {
+		status = PCI_ERS_RESULT_RECOVERED;
+		pci_dbg(dev, "broadcast mmio_enabled message\n");
+		pci_walk_bus(bus, txgbe_report_mmio_enabled, &status);
+	}
+
+	if (status == PCI_ERS_RESULT_NEED_RESET) {
+		/* TODO: Should call platform-specific
+		 * functions to reset slot before calling
+		 * drivers' slot_reset callbacks?
+		 */
+		status = PCI_ERS_RESULT_RECOVERED;
+		pci_dbg(dev, "broadcast slot_reset message\n");
+		pci_walk_bus(bus, txgbe_report_slot_reset, &status);
+	}
+
+	if (status != PCI_ERS_RESULT_RECOVERED)
+		return;
+
+	pci_dbg(dev, "broadcast resume message\n");
+	pci_walk_bus(bus, txgbe_report_resume, &status);
+}
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.h
new file mode 100644
index 000000000000..fda21988550d
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_pcierr.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#ifndef _TXGBE_PCIERR_H_
+#define _TXGBE_PCIERR_H_
+
+void txgbe_pcie_do_recovery(struct pci_dev *dev);
+#endif
-- 
2.27.0





* [PATCH net-next 12/14] net: txgbe: Support power management
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (10 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 11/14] net: txgbe: Support PCIe recovery Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-11  3:26 ` [PATCH net-next 13/14] net: txgbe: Support debug filesystem Jiawen Wu
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support for device power management via PCI suspend/resume
handlers.
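
The hooks are wired through the legacy struct pci_driver
.suspend/.resume interface. A rough sketch of the dev_pm_ops
equivalent, should the driver migrate later (the _pm-suffixed names
are hypothetical; dev_pm_ops callbacks receive a struct device *):

	static int __maybe_unused txgbe_suspend_pm(struct device *dev)
	{
		bool wake;
		int err;

		err = __txgbe_shutdown(to_pci_dev(dev), &wake);
		if (err)
			return err;
		device_set_wakeup_enable(dev, wake);
		return 0;
	}

	static int __maybe_unused txgbe_resume_pm(struct device *dev)
	{
		return txgbe_resume(to_pci_dev(dev));
	}

	static SIMPLE_DEV_PM_OPS(txgbe_pm_ops, txgbe_suspend_pm,
				 txgbe_resume_pm);
	/* and in struct pci_driver: .driver.pm = &txgbe_pm_ops */

With dev_pm_ops the PCI core performs the D-state transitions itself,
so the pci_set_power_state()/pci_prepare_to_sleep() calls in the
txgbe_suspend() below would be dropped from the suspend path.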

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   | 82 +++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index ab222bf9e828..474786bdec3d 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -4086,12 +4086,62 @@ int txgbe_close(struct net_device *netdev)
 	return 0;
 }
 
+#ifdef CONFIG_PM
+static int txgbe_resume(struct pci_dev *pdev)
+{
+	struct txgbe_adapter *adapter;
+	struct net_device *netdev;
+	int err;
+
+	adapter = pci_get_drvdata(pdev);
+	netdev = adapter->netdev;
+	adapter->hw.hw_addr = adapter->io_addr;
+	pci_set_power_state(pdev, PCI_D0);
+	pci_restore_state(pdev);
+	/* pci_restore_state clears dev->state_saved so call
+	 * pci_save_state to restore it.
+	 */
+	pci_save_state(pdev);
+
+	err = pci_enable_device_mem(pdev);
+	if (err) {
+		txgbe_dev_err("Cannot enable PCI device from suspend\n");
+		return err;
+	}
+	smp_mb__before_atomic();
+	clear_bit(__TXGBE_DISABLED, &adapter->state);
+	pci_set_master(pdev);
+
+	pci_wake_from_d3(pdev, false);
+
+	txgbe_reset(adapter);
+
+	rtnl_lock();
+
+	err = txgbe_init_interrupt_scheme(adapter);
+	if (!err && netif_running(netdev))
+		err = txgbe_open(netdev);
+
+	rtnl_unlock();
+
+	if (err)
+		return err;
+
+	netif_device_attach(netdev);
+
+	return 0;
+}
+#endif /* CONFIG_PM */
+
 static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 {
 	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
 	struct net_device *netdev = adapter->netdev;
 	struct txgbe_hw *hw = &adapter->hw;
 	u32 wufc = adapter->wol;
+#ifdef CONFIG_PM
+	int retval = 0;
+#endif
 
 	netif_device_detach(netdev);
 
@@ -4102,6 +4152,12 @@ static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 	txgbe_clear_interrupt_scheme(adapter);
 
+#ifdef CONFIG_PM
+	retval = pci_save_state(pdev);
+	if (retval)
+		return retval;
+#endif
+
 	if (wufc) {
 		txgbe_set_rx_mode(netdev);
 		txgbe_configure_rx(adapter);
@@ -4131,6 +4187,28 @@ static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
 	return 0;
 }
 
+#ifdef CONFIG_PM
+static int txgbe_suspend(struct pci_dev *pdev,
+			 pm_message_t __always_unused state)
+{
+	int retval;
+	bool wake;
+
+	retval = __txgbe_shutdown(pdev, &wake);
+	if (retval)
+		return retval;
+
+	if (wake) {
+		pci_prepare_to_sleep(pdev);
+	} else {
+		pci_wake_from_d3(pdev, false);
+		pci_set_power_state(pdev, PCI_D3hot);
+	}
+
+	return 0;
+}
+#endif /* CONFIG_PM */
+
 static void txgbe_shutdown(struct pci_dev *pdev)
 {
 	bool wake;
@@ -6599,6 +6677,10 @@ static struct pci_driver txgbe_driver = {
 	.id_table = txgbe_pci_tbl,
 	.probe    = txgbe_probe,
 	.remove   = txgbe_remove,
+#ifdef CONFIG_PM
+	.suspend  = txgbe_suspend,
+	.resume   = txgbe_resume,
+#endif
 	.shutdown = txgbe_shutdown,
 };
 
-- 
2.27.0





* [PATCH net-next 13/14] net: txgbe: Support debug filesystem
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (11 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 12/14] net: txgbe: Support power management Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-12 11:56   ` Andrew Lunn
  2022-05-11  3:26 ` [PATCH net-next 14/14] net: txgbe: Support sysfs file system Jiawen Wu
  2022-05-12  0:54 ` [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jakub Kicinski
  14 siblings, 1 reply; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add debugfs support, exposing register operations and dumps of the
BAR space, descriptor rings and flash.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   1 +
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  10 +
 .../ethernet/wangxun/txgbe/txgbe_debugfs.c    | 582 ++++++++++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |  14 +
 4 files changed, 607 insertions(+)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_debugfs.c

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index 7fe32feeded8..c338a84abca8 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -10,4 +10,5 @@ txgbe-objs := txgbe_main.o txgbe_ethtool.o \
               txgbe_hw.o txgbe_phy.o \
               txgbe_lib.o txgbe_ptp.o
 
+txgbe-$(CONFIG_DEBUG_FS) += txgbe_debugfs.o
 txgbe-$(CONFIG_TXGBE_PCI_RECOVER) += txgbe_pcierr.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index fe51bdf96f3e..78f4a56d6cff 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -510,6 +510,9 @@ struct txgbe_adapter {
 	struct txgbe_mac_addr *mac_table;
 
 	__le16 vxlan_port;
+
+	struct dentry *txgbe_dbg_adapter;
+
 	unsigned long fwd_bitmask; /* bitmask indicating in use pools */
 
 #define TXGBE_MAX_RETA_ENTRIES 128
@@ -607,6 +610,13 @@ void txgbe_write_eitr(struct txgbe_q_vector *q_vector);
 int txgbe_poll(struct napi_struct *napi, int budget);
 void txgbe_disable_rx_queue(struct txgbe_adapter *adapter,
 			    struct txgbe_ring *ring);
+#ifdef CONFIG_DEBUG_FS
+void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter);
+void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter);
+void txgbe_dbg_init(void);
+void txgbe_dbg_exit(void);
+void txgbe_dump(struct txgbe_adapter *adapter);
+#endif
 
 static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring)
 {
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_debugfs.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_debugfs.c
new file mode 100644
index 000000000000..93affdcea720
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_debugfs.c
@@ -0,0 +1,582 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe.h"
+
+#include <linux/debugfs.h>
+#include <linux/module.h>
+
+static struct dentry *txgbe_dbg_root;
+static int txgbe_data_mode;
+
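+/* txgbe_data_mode packs a function selector into the upper 16 bits
+ * and optional arguments into the lower 16 bits.
+ */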
+#define TXGBE_DATA_FUNC(dm)  ((dm) & ~0xFFFF)
+#define TXGBE_DATA_ARGS(dm)  ((dm) & 0xFFFF)
+enum txgbe_data_func {
+	TXGBE_FUNC_NONE        = (0 << 16),
+	TXGBE_FUNC_DUMP_BAR    = (1 << 16),
+	TXGBE_FUNC_DUMP_RDESC  = (2 << 16),
+	TXGBE_FUNC_DUMP_TDESC  = (3 << 16),
+	TXGBE_FUNC_FLASH_READ  = (4 << 16),
+	TXGBE_FUNC_FLASH_WRITE = (5 << 16),
+};
+
+/*
+ * reg_ops file operations
+ */
+static char txgbe_dbg_reg_ops_buf[256] = "";
+static ssize_t
+txgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer,
+		       size_t count, loff_t *ppos)
+{
+	struct txgbe_adapter *adapter = filp->private_data;
+	char *buf;
+	int len;
+
+	/* don't allow partial reads */
+	if (*ppos != 0)
+		return 0;
+
+	buf = kasprintf(GFP_KERNEL, "%s: mode=0x%08x\n%s\n",
+			adapter->netdev->name, txgbe_data_mode,
+			txgbe_dbg_reg_ops_buf);
+	if (!buf)
+		return -ENOMEM;
+
+	if (count < strlen(buf)) {
+		kfree(buf);
+		return -ENOSPC;
+	}
+
+	len = simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
+
+	kfree(buf);
+	return len;
+}
+
+static ssize_t
+txgbe_dbg_reg_ops_write(struct file *filp,
+			const char __user *buffer,
+			size_t count, loff_t *ppos)
+{
+	struct txgbe_adapter *adapter = filp->private_data;
+	char *pc = txgbe_dbg_reg_ops_buf;
+	int len;
+
+	/* don't allow partial writes */
+	if (*ppos != 0)
+		return 0;
+	if (count >= sizeof(txgbe_dbg_reg_ops_buf))
+		return -ENOSPC;
+
+	len = simple_write_to_buffer(txgbe_dbg_reg_ops_buf,
+				     sizeof(txgbe_dbg_reg_ops_buf) - 1,
+				     ppos,
+				     buffer,
+				     count);
+	if (len < 0)
+		return len;
+
+	pc[len] = '\0';
+
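+	/* Commands recognized here:
+	 *   dump [bar|rdesc|tdesc [<args>]]  - select a dump function
+	 *   flash <read|write> [<args>]      - select a flash function
+	 *   write <reg> <value>              - write a register
+	 *   read <reg>                       - read a register
+	 */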
+	if (strncmp(pc, "dump", 4) == 0) {
+		u32 mode = 0;
+		u16 args;
+
+		pc += 4;
+		pc += strspn(pc, " \t");
+
+		if (!strncmp(pc, "bar", 3)) {
+			pc += 3;
+			mode = TXGBE_FUNC_DUMP_BAR;
+		} else if (!strncmp(pc, "rdesc", 5)) {
+			pc += 5;
+			mode = TXGBE_FUNC_DUMP_RDESC;
+		} else if (!strncmp(pc, "tdesc", 5)) {
+			pc += 5;
+			mode = TXGBE_FUNC_DUMP_TDESC;
+		} else {
+			txgbe_dump(adapter);
+		}
+
+		if (mode && !kstrtou16(skip_spaces(pc), 0, &args))
+			mode |= args;
+
+		txgbe_data_mode = mode;
+	} else if (strncmp(pc, "flash", 5) == 0) {
+		u32 mode = 0;
+		u16 args;
+
+		pc += 5;
+		pc += strspn(pc, " \t");
+		if (!strncmp(pc, "read", 3)) {
+			pc += 4;
+			mode = TXGBE_FUNC_FLASH_READ;
+		} else if (!strncmp(pc, "write", 5)) {
+			pc += 5;
+			mode = TXGBE_FUNC_FLASH_WRITE;
+		}
+
+		if (mode && !kstrtou16(skip_spaces(pc), 0, &args))
+			mode |= args;
+
+		txgbe_data_mode = mode;
+	} else if (strncmp(txgbe_dbg_reg_ops_buf, "write", 5) == 0) {
+		u32 reg, value;
+		int cnt;
+
+		cnt = sscanf(&txgbe_dbg_reg_ops_buf[5], "%x %x", &reg, &value);
+		if (cnt == 2) {
+			wr32(&adapter->hw, reg, value);
+			txgbe_dev_info("write: 0x%08x = 0x%08x\n", reg, value);
+		} else {
+			txgbe_dev_info("write <reg> <value>\n");
+		}
+	} else if (strncmp(txgbe_dbg_reg_ops_buf, "read", 4) == 0) {
+		u32 reg, value;
+		int cnt;
+
+		cnt = !kstrtou32(skip_spaces(&txgbe_dbg_reg_ops_buf[4]), 0, &reg);
+		if (cnt == 1) {
+			value = rd32(&adapter->hw, reg);
+			txgbe_dev_info("read 0x%08x = 0x%08x\n", reg, value);
+		} else {
+			txgbe_dev_info("read <reg>\n");
+		}
+	} else {
+		txgbe_dev_info("Unknown command %s\n", txgbe_dbg_reg_ops_buf);
+		txgbe_dev_info("Available commands:\n");
+		txgbe_dev_info("   read <reg>\n");
+		txgbe_dev_info("   write <reg> <value>\n");
+	}
+	return count;
+}
+
+static const struct file_operations txgbe_dbg_reg_ops_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read =  txgbe_dbg_reg_ops_read,
+	.write = txgbe_dbg_reg_ops_write,
+};
+
+/*
+ * netdev_ops file operations
+ */
+static char txgbe_dbg_netdev_ops_buf[256] = "";
+static ssize_t
+txgbe_dbg_netdev_ops_read(struct file *filp,
+			  char __user *buffer,
+			  size_t count, loff_t *ppos)
+{
+	struct txgbe_adapter *adapter = filp->private_data;
+	char *buf;
+	int len;
+
+	/* don't allow partial reads */
+	if (*ppos != 0)
+		return 0;
+
+	buf = kasprintf(GFP_KERNEL, "%s: mode=0x%08x\n%s\n",
+			adapter->netdev->name, txgbe_data_mode,
+			txgbe_dbg_netdev_ops_buf);
+	if (!buf)
+		return -ENOMEM;
+
+	if (count < strlen(buf)) {
+		kfree(buf);
+		return -ENOSPC;
+	}
+
+	len = simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
+
+	kfree(buf);
+	return len;
+}
+
+static ssize_t
+txgbe_dbg_netdev_ops_write(struct file *filp,
+			   const char __user *buffer,
+			   size_t count, loff_t *ppos)
+{
+	struct txgbe_adapter *adapter = filp->private_data;
+	int len;
+
+	/* don't allow partial writes */
+	if (*ppos != 0)
+		return 0;
+	if (count >= sizeof(txgbe_dbg_netdev_ops_buf))
+		return -ENOSPC;
+
+	len = simple_write_to_buffer(txgbe_dbg_netdev_ops_buf,
+				     sizeof(txgbe_dbg_netdev_ops_buf) - 1,
+				     ppos,
+				     buffer,
+				     count);
+	if (len < 0)
+		return len;
+
+	txgbe_dbg_netdev_ops_buf[len] = '\0';
+
+	if (strncmp(txgbe_dbg_netdev_ops_buf, "tx_timeout", 10) == 0) {
+		adapter->netdev->netdev_ops->ndo_tx_timeout(adapter->netdev, 0);
+		txgbe_dev_info("tx_timeout called\n");
+	} else {
+		txgbe_dev_info("Unknown command: %s\n", txgbe_dbg_netdev_ops_buf);
+		txgbe_dev_info("Available commands:\n");
+		txgbe_dev_info("    tx_timeout\n");
+	}
+	return count;
+}
+
+static const struct file_operations txgbe_dbg_netdev_ops_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = txgbe_dbg_netdev_ops_read,
+	.write = txgbe_dbg_netdev_ops_write,
+};
+
+/**
+ * txgbe_dbg_adapter_init - setup the debugfs directory for the adapter
+ * @adapter: the adapter that is starting up
+ **/
+void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter)
+{
+	const char *name = pci_name(adapter->pdev);
+	struct dentry *pfile;
+
+	adapter->txgbe_dbg_adapter = debugfs_create_dir(name, txgbe_dbg_root);
+	if (!adapter->txgbe_dbg_adapter) {
+		txgbe_dev_err("debugfs entry for %s failed\n", name);
+		return;
+	}
+
+	pfile = debugfs_create_file("reg_ops", 0600,
+				    adapter->txgbe_dbg_adapter, adapter,
+				    &txgbe_dbg_reg_ops_fops);
+	if (!pfile)
+		txgbe_dev_err("debugfs reg_ops for %s failed\n", name);
+
+	pfile = debugfs_create_file("netdev_ops", 0600,
+				    adapter->txgbe_dbg_adapter, adapter,
+				    &txgbe_dbg_netdev_ops_fops);
+	if (!pfile)
+		txgbe_dev_err("debugfs netdev_ops for %s failed\n", name);
+}
+
+/**
+ * txgbe_dbg_adapter_exit - clear out the adapter's debugfs entries
+ * @adapter: the adapter that is stopping
+ **/
+void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter)
+{
+	debugfs_remove_recursive(adapter->txgbe_dbg_adapter);
+	adapter->txgbe_dbg_adapter = NULL;
+}
+
+/**
+ * txgbe_dbg_init - start up debugfs for the driver
+ **/
+void txgbe_dbg_init(void)
+{
+	txgbe_dbg_root = debugfs_create_dir(txgbe_driver_name, NULL);
+	if (!txgbe_dbg_root)
+		pr_err("init of debugfs failed\n");
+}
+
+/**
+ * txgbe_dbg_exit - clean out the driver's debugfs entries
+ **/
+void txgbe_dbg_exit(void)
+{
+	debugfs_remove_recursive(txgbe_dbg_root);
+}
+
+struct txgbe_reg_info {
+	u32 offset;
+	u32 length;
+	char *name;
+};
+
+static struct txgbe_reg_info txgbe_reg_info_tbl[] = {
+	/* General Registers */
+	{TXGBE_CFG_PORT_CTL, 1, "CTRL"},
+	{TXGBE_CFG_PORT_ST, 1, "STATUS"},
+
+	/* RX Registers */
+	{TXGBE_PX_RR_CFG(0), 1, "SRRCTL"},
+	{TXGBE_PX_RR_RP(0), 1, "RDH"},
+	{TXGBE_PX_RR_WP(0), 1, "RDT"},
+	{TXGBE_PX_RR_CFG(0), 1, "RXDCTL"},
+	{TXGBE_PX_RR_BAL(0), 1, "RDBAL"},
+	{TXGBE_PX_RR_BAH(0), 1, "RDBAH"},
+
+	/* TX Registers */
+	{TXGBE_PX_TR_BAL(0), 1, "TDBAL"},
+	{TXGBE_PX_TR_BAH(0), 1, "TDBAH"},
+	{TXGBE_PX_TR_RP(0), 1, "TDH"},
+	{TXGBE_PX_TR_WP(0), 1, "TDT"},
+	{TXGBE_PX_TR_CFG(0), 1, "TXDCTL"},
+
+	/* MACVLAN */
+	{TXGBE_PSR_MAC_SWC_VM_H, 128, "PSR_MAC_SWC_VM"},
+	{TXGBE_PSR_MAC_SWC_AD_L, 128, "PSR_MAC_SWC_AD"},
+	{TXGBE_PSR_VLAN_TBL(0),  128, "PSR_VLAN_TBL"},
+
+	/* QoS */
+	{TXGBE_TDM_RP_RATE, 128, "TDM_RP_RATE"},
+
+	/* List Terminator */
+	{ .name = NULL }
+};
+
+/**
+ * txgbe_regdump - register printout routine
+ * @hw: pointer to hardware structure
+ * @reg_info: entry from txgbe_reg_info_tbl describing what to dump
+ **/
+static void
+txgbe_regdump(struct txgbe_hw *hw, struct txgbe_reg_info *reg_info)
+{
+	int i, n = 0;
+	u32 buffer[32 * 8];
+
+	switch (reg_info->offset) {
+	case TXGBE_PSR_MAC_SWC_VM_H:
+		for (i = 0; i < reg_info->length; i++) {
+			wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i);
+			buffer[n++] =
+				rd32(hw, TXGBE_PSR_MAC_SWC_VM_H);
+			buffer[n++] =
+				rd32(hw, TXGBE_PSR_MAC_SWC_VM_L);
+		}
+		break;
+	case TXGBE_PSR_MAC_SWC_AD_L:
+		for (i = 0; i < reg_info->length; i++) {
+			wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i);
+			buffer[n++] =
+				rd32(hw, TXGBE_PSR_MAC_SWC_AD_H);
+			buffer[n++] =
+				rd32(hw, TXGBE_PSR_MAC_SWC_AD_L);
+		}
+		break;
+	case TXGBE_TDM_RP_RATE:
+		for (i = 0; i < reg_info->length; i++) {
+			wr32(hw, TXGBE_TDM_RP_IDX, i);
+			buffer[n++] = rd32(hw, TXGBE_TDM_RP_RATE);
+		}
+		break;
+	default:
+		for (i = 0; i < reg_info->length; i++)
+			buffer[n++] = rd32(hw, reg_info->offset + 4 * i);
+		break;
+	}
+
+	/* print the values collected above */
+	for (i = 0; i < n; i++)
+		pr_info("%-16s[%02d] %08x\n", reg_info->name, i, buffer[i]);
+}
+
+/**
+ * txgbe_dump - Print registers, tx-rings and rx-rings
+ * @adapter: board private structure
+ **/
+void txgbe_dump(struct txgbe_adapter *adapter)
+{
+	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_reg_info *reg_info;
+	int n = 0;
+	struct txgbe_ring *tx_ring;
+	struct txgbe_tx_buffer *tx_buffer;
+	union txgbe_tx_desc *tx_desc;
+	struct my_u0 { u64 a; u64 b; } *u0;
+	struct txgbe_ring *rx_ring;
+	union txgbe_rx_desc *rx_desc;
+	struct txgbe_rx_buffer *rx_buffer_info;
+	u32 staterr;
+	int i = 0;
+
+	if (!netif_msg_hw(adapter))
+		return;
+
+	/* Print Registers */
+	dev_info(&adapter->pdev->dev, "Register Dump\n");
+	pr_info(" Register Name   Value\n");
+	for (reg_info = txgbe_reg_info_tbl; reg_info->name; reg_info++)
+		txgbe_regdump(hw, reg_info);
+
+	/* Print TX Ring Summary */
+	if (!netdev || !netif_running(netdev))
+		return;
+
+	dev_info(&adapter->pdev->dev, "TX Rings Summary\n");
+	pr_info(" %s     %s              %s        %s\n",
+		"Queue [NTU] [NTC] [bi(ntc)->dma  ]",
+		"leng", "ntw", "timestamp");
+	for (n = 0; n < adapter->num_tx_queues; n++) {
+		tx_ring = adapter->tx_ring[n];
+		tx_buffer = &tx_ring->tx_buffer_info[tx_ring->next_to_clean];
+		pr_info(" %5d %5X %5X %016llX %08X %p %016llX\n",
+			n, tx_ring->next_to_use, tx_ring->next_to_clean,
+			(u64)dma_unmap_addr(tx_buffer, dma),
+			dma_unmap_len(tx_buffer, len),
+			tx_buffer->next_to_watch,
+			(u64)tx_buffer->time_stamp);
+	}
+
+	/* Print TX Rings */
+	if (!netif_msg_tx_done(adapter))
+		goto rx_ring_summary;
+
+	dev_info(&adapter->pdev->dev, "TX Rings Dump\n");
+
+	/* Transmit Descriptor Formats
+	 *
+	 * Transmit Descriptor (Read)
+	 *   +--------------------------------------------------------------+
+	 * 0 |         Buffer Address [63:0]                                |
+	 *   +--------------------------------------------------------------+
+	 * 8 |PAYLEN  |POPTS|CC|IDX  |STA  |DCMD  |DTYP |MAC  |RSV  |DTALEN |
+	 *   +--------------------------------------------------------------+
+	 *   63     46 45 40 39 38 36 35 32 31  24 23 20 19 18 17 16 15     0
+	 *
+	 * Transmit Descriptor (Write-Back)
+	 *   +--------------------------------------------------------------+
+	 * 0 |                          RSV [63:0]                          |
+	 *   +--------------------------------------------------------------+
+	 * 8 |            RSV           |  STA  |           RSV             |
+	 *   +--------------------------------------------------------------+
+	 *   63                       36 35   32 31                         0
+	 */
+
+	for (n = 0; n < adapter->num_tx_queues; n++) {
+		tx_ring = adapter->tx_ring[n];
+		pr_info("------------------------------------\n");
+		pr_info("TX QUEUE INDEX = %d\n", tx_ring->queue_index);
+		pr_info("------------------------------------\n");
+		pr_info("%s%s    %s              %s        %s          %s\n",
+			"T [desc]     [address 63:0  ] ",
+			"[PlPOIdStDDt Ln] [bi->dma       ] ",
+			"leng", "ntw", "timestamp", "bi->skb");
+
+		for (i = 0; tx_ring->desc && (i < tx_ring->count); i++) {
+			tx_desc = TXGBE_TX_DESC(tx_ring, i);
+			tx_buffer = &tx_ring->tx_buffer_info[i];
+			u0 = (struct my_u0 *)tx_desc;
+			if (dma_unmap_len(tx_buffer, len) > 0) {
+				pr_info("T [0x%03X]    %016llX %016llX %016llX %08X %p %016llX %p",
+					i,
+					le64_to_cpu(u0->a),
+					le64_to_cpu(u0->b),
+					(u64)dma_unmap_addr(tx_buffer, dma),
+					dma_unmap_len(tx_buffer, len),
+					tx_buffer->next_to_watch,
+					(u64)tx_buffer->time_stamp,
+					tx_buffer->skb);
+				if (i == tx_ring->next_to_use &&
+				    i == tx_ring->next_to_clean)
+					pr_info(" NTC/U\n");
+				else if (i == tx_ring->next_to_use)
+					pr_info(" NTU\n");
+				else if (i == tx_ring->next_to_clean)
+					pr_info(" NTC\n");
+				else
+					pr_info("\n");
+
+				if (netif_msg_pktdata(adapter) &&
+				    tx_buffer->skb)
+					print_hex_dump(KERN_INFO, "",
+						       DUMP_PREFIX_ADDRESS, 16, 1,
+						       tx_buffer->skb->data,
+						       dma_unmap_len(tx_buffer, len),
+						       true);
+			}
+		}
+	}
+
+	/* Print RX Rings Summary */
+rx_ring_summary:
+	dev_info(&adapter->pdev->dev, "RX Rings Summary\n");
+	pr_info("Queue [NTU] [NTC]\n");
+	for (n = 0; n < adapter->num_rx_queues; n++) {
+		rx_ring = adapter->rx_ring[n];
+		pr_info("%5d %5X %5X\n",
+			n, rx_ring->next_to_use, rx_ring->next_to_clean);
+	}
+
+	/* Print RX Rings */
+	if (!netif_msg_rx_status(adapter))
+		return;
+
+	dev_info(&adapter->pdev->dev, "RX Rings Dump\n");
+
+	/* Receive Descriptor Formats
+	 *
+	 * Receive Descriptor (Read)
+	 *    63                                           1        0
+	 *    +-----------------------------------------------------+
+	 *  0 |       Packet Buffer Address [63:1]           |A0/NSE|
+	 *    +----------------------------------------------+------+
+	 *  8 |       Header Buffer Address [63:1]           |  DD  |
+	 *    +-----------------------------------------------------+
+	 *
+	 *
+	 * Receive Descriptor (Write-Back)
+	 *
+	 *   63       48 47    32 31  30      21 20 17 16   4 3     0
+	 *   +------------------------------------------------------+
+	 * 0 |RSS / Frag Checksum|SPH| HDR_LEN  |RSC- |Packet|  RSS |
+	 *   |/ RTT / PCoE_PARAM |   |          | CNT | Type | Type |
+	 *   |/ Flow Dir Flt ID  |   |          |     |      |      |
+	 *   +------------------------------------------------------+
+	 * 8 | VLAN Tag | Length |Extended Error| Xtnd Status/NEXTP |
+	 *   +------------------------------------------------------+
+	 *   63       48 47    32 31          20 19                 0
+	 */
+
+	for (n = 0; n < adapter->num_rx_queues; n++) {
+		rx_ring = adapter->rx_ring[n];
+		pr_info("------------------------------------\n");
+		pr_info("RX QUEUE INDEX = %d\n", rx_ring->queue_index);
+		pr_info("------------------------------------\n");
+		pr_info("%s%s%s",
+			"R  [desc]      [ PktBuf     A0] ",
+			"[  HeadBuf   DD] [bi->dma       ] [bi->skb       ] ",
+			"<-- Adv Rx Read format\n");
+		pr_info("%s%s%s",
+			"RWB[desc]      [PcsmIpSHl PtRs] ",
+			"[vl er S cks ln] ---------------- [bi->skb       ] ",
+			"<-- Adv Rx Write-Back format\n");
+
+		for (i = 0; i < rx_ring->count; i++) {
+			rx_buffer_info = &rx_ring->rx_buffer_info[i];
+			rx_desc = TXGBE_RX_DESC(rx_ring, i);
+			u0 = (struct my_u0 *)rx_desc;
+			staterr = le32_to_cpu(rx_desc->wb.upper.status_error);
+			if (staterr & TXGBE_RXD_STAT_DD) {
+				/* Descriptor Done */
+				pr_info("RWB[0x%03X]     %016llX %016llX ---------------- %p",
+					i, le64_to_cpu(u0->a),
+					le64_to_cpu(u0->b),
+					rx_buffer_info->skb);
+			} else {
+				pr_info("R  [0x%03X]     %016llX %016llX %016llX %p",
+					i, le64_to_cpu(u0->a),
+					le64_to_cpu(u0->b),
+					(u64)rx_buffer_info->page_dma,
+					rx_buffer_info->skb);
+
+				if (netif_msg_pktdata(adapter) &&
+				    rx_buffer_info->page_dma) {
+					print_hex_dump(KERN_INFO, "",
+						       DUMP_PREFIX_ADDRESS, 16, 1,
+						       page_address(rx_buffer_info->page) +
+						       rx_buffer_info->page_offset,
+						       txgbe_rx_bufsz(rx_ring),
+						       true);
+				}
+			}
+
+			if (i == rx_ring->next_to_use)
+				pr_info(" NTU\n");
+			else if (i == rx_ring->next_to_clean)
+				pr_info(" NTC\n");
+			else
+				pr_info("\n");
+		}
+	}
+}
+
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 474786bdec3d..1dee2877d346 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -6570,6 +6570,9 @@ static int txgbe_probe(struct pci_dev *pdev,
 	txgbe_info(probe, "WangXun(R) 10 Gigabit Network Connection\n");
 	cards_found++;
 
+#ifdef CONFIG_DEBUG_FS
+	txgbe_dbg_adapter_init(adapter);
+#endif
 	/* setup link for SFP devices with MNG FW, else wait for TXGBE_UP */
 	if (txgbe_mng_present(hw) && txgbe_is_sfp(hw) &&
 	    ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
@@ -6618,6 +6621,10 @@ static void txgbe_remove(struct pci_dev *pdev)
 		return;
 
 	netdev = adapter->netdev;
+#ifdef CONFIG_DEBUG_FS
+	txgbe_dbg_adapter_exit(adapter);
+#endif
+
 	set_bit(__TXGBE_REMOVING, &adapter->state);
 	cancel_work_sync(&adapter->service_task);
 
@@ -6703,6 +6710,10 @@ static int __init txgbe_init_module(void)
 		return -ENOMEM;
 	}
 
+#ifdef CONFIG_DEBUG_FS
+	txgbe_dbg_init();
+#endif
+
 	ret = pci_register_driver(&txgbe_driver);
 	return ret;
 }
@@ -6720,6 +6731,9 @@ static void __exit txgbe_exit_module(void)
 	pci_unregister_driver(&txgbe_driver);
 	if (txgbe_wq)
 		destroy_workqueue(txgbe_wq);
+#ifdef CONFIG_DEBUG_FS
+	txgbe_dbg_exit();
+#endif
 }
 
 module_exit(txgbe_exit_module);
-- 
2.27.0




^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH net-next 14/14] net: txgbe: Support sysfs file system
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (12 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 13/14] net: txgbe: Support debug filesystem Jiawen Wu
@ 2022-05-11  3:26 ` Jiawen Wu
  2022-05-12 11:43   ` Andrew Lunn
  2022-05-12  0:54 ` [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jakub Kicinski
  14 siblings, 1 reply; 26+ messages in thread
From: Jiawen Wu @ 2022-05-11  3:26 UTC (permalink / raw)
  To: netdev; +Cc: Jiawen Wu

Add support for the sysfs file system.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ethernet/wangxun/txgbe/Makefile   |   1 +
 drivers/net/ethernet/wangxun/txgbe/txgbe.h    |  27 +++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c |  59 +++++
 drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h |   1 +
 .../net/ethernet/wangxun/txgbe/txgbe_main.c   |   9 +
 .../net/ethernet/wangxun/txgbe/txgbe_sysfs.c  | 203 ++++++++++++++++++
 .../net/ethernet/wangxun/txgbe/txgbe_type.h   |   1 +
 7 files changed, 301 insertions(+)
 create mode 100644 drivers/net/ethernet/wangxun/txgbe/txgbe_sysfs.c

diff --git a/drivers/net/ethernet/wangxun/txgbe/Makefile b/drivers/net/ethernet/wangxun/txgbe/Makefile
index c338a84abca8..e29c82cf3c00 100644
--- a/drivers/net/ethernet/wangxun/txgbe/Makefile
+++ b/drivers/net/ethernet/wangxun/txgbe/Makefile
@@ -11,4 +11,5 @@ txgbe-objs := txgbe_main.o txgbe_ethtool.o \
               txgbe_lib.o txgbe_ptp.o
 
 txgbe-$(CONFIG_DEBUG_FS) += txgbe_debugfs.o
+txgbe-${CONFIG_SYSFS} += txgbe_sysfs.o
 txgbe-$(CONFIG_TXGBE_PCI_RECOVER) += txgbe_pcierr.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe.h b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
index 78f4a56d6cff..b7aebff5a5ba 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe.h
@@ -296,6 +296,25 @@ struct txgbe_q_vector {
 	struct txgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
 };
 
+#ifdef CONFIG_HWMON
+#define TXGBE_HWMON_TYPE_TEMP           0
+#define TXGBE_HWMON_TYPE_ALARMTHRESH    1
+#define TXGBE_HWMON_TYPE_DALARMTHRESH   2
+
+struct hwmon_attr {
+	struct device_attribute dev_attr;
+	struct txgbe_hw *hw;
+	struct txgbe_thermal_diode_data *sensor;
+	char name[19];
+};
+
+struct hwmon_buff {
+	struct device *device;
+	struct hwmon_attr *hwmon_list;
+	unsigned int n_hwmon;
+};
+#endif /* CONFIG_HWMON */
+
 /* microsecond values for various ITR rates shifted by 2 to fit itr register
  * with the first 3 bits reserved 0
  */
@@ -511,6 +530,9 @@ struct txgbe_adapter {
 
 	__le16 vxlan_port;
 
+#ifdef CONFIG_HWMON
+	struct hwmon_buff txgbe_hwmon_buff;
+#endif
 	struct dentry *txgbe_dbg_adapter;
 
 	unsigned long fwd_bitmask; /* bitmask indicating in use pools */
@@ -610,6 +632,11 @@ void txgbe_write_eitr(struct txgbe_q_vector *q_vector);
 int txgbe_poll(struct napi_struct *napi, int budget);
 void txgbe_disable_rx_queue(struct txgbe_adapter *adapter,
 			    struct txgbe_ring *ring);
+
+#ifdef CONFIG_SYSFS
+void txgbe_sysfs_exit(struct txgbe_adapter *adapter);
+int txgbe_sysfs_init(struct txgbe_adapter *adapter);
+#endif
 #ifdef CONFIG_DEBUG_FS
 void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter);
 void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 35279209c51d..1e5343c4ef91 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -2334,6 +2334,63 @@ void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom,
 	}
 }
 
+/**
+ *  txgbe_get_thermal_sensor_data - Gathers thermal sensor data
+ *  @hw: pointer to hardware structure
+ *
+ *  Algorithm:
+ *  T = (-4.8380E+01)N^0 + (3.1020E-01)N^1 + (-1.8201E-04)N^2 +
+ *      (8.1542E-08)N^3 + (-1.6743E-11)N^4
+ *  A variant with about 5% more deviation, easier to implement:
+ *  T = (-50)N^0 + (0.31)N^1 + (-0.0002)N^2 + (0.0000001)N^3
+ *
+ *  Stores the result in hw->mac.thermal_sensor_data and returns 0,
+ *  or TXGBE_NOT_IMPLEMENTED when this port has no sensor attached.
+ **/
+s32 txgbe_get_thermal_sensor_data(struct txgbe_hw *hw)
+{
+	s64 tsv;
+	int i = 0;
+	struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+	/* Only support thermal sensors attached to physical port 0 */
+	if (hw->bus.lan_id)
+		return TXGBE_NOT_IMPLEMENTED;
+
+	tsv = (s64)(rd32(hw, TXGBE_TS_ST) &
+		    TXGBE_TS_ST_DATA_OUT_MASK);
+
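+	/* Evaluate the polynomial from the comment above in 8.8 fixed
+	 * point: each coefficient is pre-scaled by << 8 and the sum is
+	 * shifted back at the end; raw counts are clamped to 1200 first.
+	 */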
+	tsv = tsv < 1200 ? tsv : 1200;
+	tsv = -(48380 << 8) / 1000
+		+ tsv * (31020 << 8) / 100000
+		- tsv * tsv * (18201 << 8) / 100000000
+		+ tsv * tsv * tsv * (81542 << 8) / 1000000000000
+		- tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000;
+	tsv >>= 8;
+
+	data->sensor.temp = (s16)tsv;
+
+	for (i = 0; i < 100; i++) {
+		tsv = (s64)rd32(hw, TXGBE_TS_ST);
+		if (tsv >> 16 == 0x1) {
+			tsv = tsv & TXGBE_TS_ST_DATA_OUT_MASK;
+			tsv = tsv < 1200 ? tsv : 1200;
+			tsv = -(48380 << 8) / 1000
+					+ tsv * (31020 << 8) / 100000
+					- tsv * tsv * (18201 << 8) / 100000000
+					+ tsv * tsv * tsv * (81542 << 8) / 1000000000000
+					- tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000;
+			tsv >>= 8;
+
+			data->sensor.temp = (s16)tsv;
+			break;
+		}
+		usleep_range(1000, 2000);
+	}
+
+	return 0;
+}
+
 /**
  *  txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
  *  @hw: pointer to hardware structure
@@ -3074,6 +3131,8 @@ s32 txgbe_init_ops(struct txgbe_hw *hw)
 	/* Manageability interface */
 	mac->ops.set_fw_drv_ver = txgbe_set_fw_drv_ver;
 
+	mac->ops.get_thermal_sensor_data =
+					 txgbe_get_thermal_sensor_data;
 	mac->ops.init_thermal_sensor_thresh =
 				      txgbe_init_thermal_sensor_thresh;
 
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index eb1e189fbaa5..fd7c2ffb1e4b 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -148,6 +148,7 @@ s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer,
 bool txgbe_mng_present(struct txgbe_hw *hw);
 bool txgbe_check_mng_access(struct txgbe_hw *hw);
 
+s32 txgbe_get_thermal_sensor_data(struct txgbe_hw *hw);
 s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw);
 void txgbe_enable_rx(struct txgbe_hw *hw);
 void txgbe_disable_rx(struct txgbe_hw *hw);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 1dee2877d346..19e4186a47ed 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -6570,6 +6570,11 @@ static int txgbe_probe(struct pci_dev *pdev,
 	txgbe_info(probe, "WangXun(R) 10 Gigabit Network Connection\n");
 	cards_found++;
 
+#ifdef CONFIG_SYSFS
+	if (txgbe_sysfs_init(adapter))
+		txgbe_err(probe, "failed to allocate sysfs resources\n");
+#endif
+
 #ifdef CONFIG_DEBUG_FS
 	txgbe_dbg_adapter_init(adapter);
 #endif
@@ -6628,6 +6633,10 @@ static void txgbe_remove(struct pci_dev *pdev)
 	set_bit(__TXGBE_REMOVING, &adapter->state);
 	cancel_work_sync(&adapter->service_task);
 
+#ifdef CONFIG_SYSFS
+	txgbe_sysfs_exit(adapter);
+#endif
+
 	/* remove the added san mac */
 	txgbe_del_sanmac_netdev(netdev);
 
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_sysfs.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_sysfs.c
new file mode 100644
index 000000000000..93a885b5fcc0
--- /dev/null
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_sysfs.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe.h"
+#include "txgbe_hw.h"
+#include "txgbe_type.h"
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/sysfs.h>
+#include <linux/kobject.h>
+#include <linux/device.h>
+#include <linux/netdevice.h>
+#include <linux/time.h>
+#ifdef CONFIG_HWMON
+#include <linux/hwmon.h>
+#endif
+
+#ifdef CONFIG_HWMON
+/* hwmon callback functions */
+static ssize_t txgbe_hwmon_show_temp(struct device __always_unused *dev,
+				     struct device_attribute *attr,
+				     char *buf)
+{
+	struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr,
+						     dev_attr);
+	unsigned int value;
+
+	/* reset the temp field */
+	TCALL(txgbe_attr->hw, mac.ops.get_thermal_sensor_data);
+
+	value = txgbe_attr->sensor->temp;
+
+	/* display millidegree */
+	value *= 1000;
+
+	return sprintf(buf, "%u\n", value);
+}
+
+static ssize_t txgbe_hwmon_show_alarmthresh(struct device __always_unused *dev,
+					    struct device_attribute *attr,
+					    char *buf)
+{
+	struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr,
+						     dev_attr);
+	unsigned int value = txgbe_attr->sensor->alarm_thresh;
+
+	/* display millidegree */
+	value *= 1000;
+
+	return sprintf(buf, "%u\n", value);
+}
+
+static ssize_t txgbe_hwmon_show_dalarmthresh(struct device __always_unused *dev,
+					     struct device_attribute *attr,
+					     char *buf)
+{
+	struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr,
+						     dev_attr);
+	unsigned int value = txgbe_attr->sensor->dalarm_thresh;
+
+	/* display millidegree */
+	value *= 1000;
+
+	return sprintf(buf, "%u\n", value);
+}
+
+/**
+ * txgbe_add_hwmon_attr - Create hwmon attr table for a hwmon sysfs file.
+ * @adapter: pointer to the adapter structure
+ * @type: type of sensor data to display
+ *
+ * For each file we want in hwmon's sysfs interface we need a device_attribute
+ * This is included in our hwmon_attr struct that contains the references to
+ * the data structures we need to get the data to display.
+ */
+static int txgbe_add_hwmon_attr(struct txgbe_adapter *adapter, int type)
+{
+	int rc;
+	unsigned int n_attr;
+	struct hwmon_attr *txgbe_attr;
+
+	n_attr = adapter->txgbe_hwmon_buff.n_hwmon;
+	txgbe_attr = &adapter->txgbe_hwmon_buff.hwmon_list[n_attr];
+
+	switch (type) {
+	case TXGBE_HWMON_TYPE_TEMP:
+		txgbe_attr->dev_attr.show = txgbe_hwmon_show_temp;
+		snprintf(txgbe_attr->name, sizeof(txgbe_attr->name),
+			 "temp%u_input", 0);
+		break;
+	case TXGBE_HWMON_TYPE_ALARMTHRESH:
+		txgbe_attr->dev_attr.show = txgbe_hwmon_show_alarmthresh;
+		snprintf(txgbe_attr->name, sizeof(txgbe_attr->name),
+			 "temp%u_alarmthresh", 0);
+		break;
+	case TXGBE_HWMON_TYPE_DALARMTHRESH:
+		txgbe_attr->dev_attr.show = txgbe_hwmon_show_dalarmthresh;
+		snprintf(txgbe_attr->name, sizeof(txgbe_attr->name),
+			 "temp%u_dalarmthresh", 0);
+		break;
+	default:
+		rc = -EPERM;
+		return rc;
+	}
+
+	/* These are always the same regardless of type */
+	txgbe_attr->sensor =
+		&adapter->hw.mac.thermal_sensor_data.sensor;
+	txgbe_attr->hw = &adapter->hw;
+	txgbe_attr->dev_attr.store = NULL;
+	txgbe_attr->dev_attr.attr.mode = 0444;
+	txgbe_attr->dev_attr.attr.name = txgbe_attr->name;
+
+	rc = device_create_file(pci_dev_to_dev(adapter->pdev),
+				&txgbe_attr->dev_attr);
+
+	if (rc == 0)
+		++adapter->txgbe_hwmon_buff.n_hwmon;
+
+	return rc;
+}
+#endif /* CONFIG_HWMON */
+
+static void txgbe_sysfs_del_adapter(struct txgbe_adapter __maybe_unused *adapter)
+{
+#ifdef CONFIG_HWMON
+	int i;
+
+	if (!adapter)
+		return;
+
+	for (i = 0; i < adapter->txgbe_hwmon_buff.n_hwmon; i++) {
+		device_remove_file(pci_dev_to_dev(adapter->pdev),
+				   &adapter->txgbe_hwmon_buff.hwmon_list[i].dev_attr);
+	}
+
+	kfree(adapter->txgbe_hwmon_buff.hwmon_list);
+
+	if (adapter->txgbe_hwmon_buff.device)
+		hwmon_device_unregister(adapter->txgbe_hwmon_buff.device);
+#endif /* CONFIG_HWMON */
+}
+
+/* called from txgbe_main.c */
+void txgbe_sysfs_exit(struct txgbe_adapter *adapter)
+{
+	txgbe_sysfs_del_adapter(adapter);
+}
+
+/* called from txgbe_main.c */
+int txgbe_sysfs_init(struct txgbe_adapter *adapter)
+{
+	int rc = 0;
+#ifdef CONFIG_HWMON
+	struct hwmon_buff *txgbe_hwmon = &adapter->txgbe_hwmon_buff;
+	int n_attrs;
+
+#endif /* CONFIG_HWMON */
+	if (!adapter)
+		goto err;
+
+#ifdef CONFIG_HWMON
+
+	/* Don't create thermal hwmon interface if no sensors present */
+	if (TCALL(&adapter->hw, mac.ops.init_thermal_sensor_thresh))
+		goto no_thermal;
+
+	/* Allocate space for the maximum number of attributes:
+	 * number of sensors * values (temp, alarmthresh, dalarmthresh)
+	 */
+	n_attrs = 3;
+	txgbe_hwmon->hwmon_list = kcalloc(n_attrs, sizeof(struct hwmon_attr),
+					  GFP_KERNEL);
+	if (!txgbe_hwmon->hwmon_list) {
+		rc = -ENOMEM;
+		goto err;
+	}
+
+	txgbe_hwmon->device =
+			hwmon_device_register(pci_dev_to_dev(adapter->pdev));
+	if (IS_ERR(txgbe_hwmon->device)) {
+		rc = PTR_ERR(txgbe_hwmon->device);
+		goto err;
+	}
+
+	/* Bail if any hwmon attr struct fails to initialize */
+	rc = txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_TEMP);
+	rc |= txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_ALARMTHRESH);
+	rc |= txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_DALARMTHRESH);
+	if (rc)
+		goto err;
+
+no_thermal:
+#endif /* CONFIG_HWMON */
+	goto exit;
+
+err:
+	txgbe_sysfs_del_adapter(adapter);
+exit:
+	return rc;
+}
+
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index 97abd60eabb1..11e9e2b68fdd 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -2552,6 +2552,7 @@ struct txgbe_mac_operations {
 	/* Manageability interface */
 	s32 (*set_fw_drv_ver)(struct txgbe_hw *hw, u8 maj, u8 min,
 			      u8 build, u8 ver);
+	s32 (*get_thermal_sensor_data)(struct txgbe_hw *hw);
 	s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw);
 	void (*disable_rx)(struct txgbe_hw *hw);
 	void (*enable_rx)(struct txgbe_hw *hw);
-- 
2.27.0




^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver
  2022-05-11  3:26 ` [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver Jiawen Wu
@ 2022-05-12  0:38   ` kernel test robot
  2022-05-12 15:52   ` Andrew Lunn
  1 sibling, 0 replies; 26+ messages in thread
From: kernel test robot @ 2022-05-12  0:38 UTC (permalink / raw)
  To: Jiawen Wu, netdev; +Cc: kbuild-all, Jiawen Wu

Hi Jiawen,

I love your patch! Perhaps something to improve:

[auto build test WARNING on horms-ipvs/master]
[cannot apply to net-next/master net/master linus/master v5.18-rc6 next-20220511]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Jiawen-Wu/Wangxun-10-Gigabit-Ethernet-Driver/20220511-113032
base:   https://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git master
config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20220512/202205120837.ym9aTQJ0-lkp@intel.com/config)
compiler: arceb-elf-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/63bf477b5f111995b0ea1fcf839b7e6b0c06ca55
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Jiawen-Wu/Wangxun-10-Gigabit-Ethernet-Driver/20220511-113032
        git checkout 63bf477b5f111995b0ea1fcf839b7e6b0c06ca55
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=arc SHELL=/bin/bash drivers/net/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:47:6: warning: no previous prototype for 'txgbe_service_event_schedule' [-Wmissing-prototypes]
      47 | void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/wangxun/txgbe/txgbe_main.c: In function 'txgbe_probe':
   drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:148:18: warning: variable 'pci_using_dac' set but not used [-Wunused-but-set-variable]
     148 |         int err, pci_using_dac;
         |                  ^~~~~~~~~~~~~


vim +/txgbe_service_event_schedule +47 drivers/net/ethernet/wangxun/txgbe/txgbe_main.c

    46	
  > 47	void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
    48	{
    49		if (!test_bit(__TXGBE_DOWN, &adapter->state) &&
    50		    !test_bit(__TXGBE_REMOVING, &adapter->state) &&
    51		    !test_and_set_bit(__TXGBE_SERVICE_SCHED, &adapter->state))
    52			queue_work(txgbe_wq, &adapter->service_task);
    53	}
    54	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver
  2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
                   ` (13 preceding siblings ...)
  2022-05-11  3:26 ` [PATCH net-next 14/14] net: txgbe: Support sysfs file system Jiawen Wu
@ 2022-05-12  0:54 ` Jakub Kicinski
       [not found]   ` <004401d865e4$c3073d10$4915b730$@trustnetic.com>
  14 siblings, 1 reply; 26+ messages in thread
From: Jakub Kicinski @ 2022-05-12  0:54 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

On Wed, 11 May 2022 11:26:45 +0800 Jiawen Wu wrote:
>  22 files changed, 22839 insertions(+)

Cut it up more, please. Expecting folks to review 23kLoC in one sitting
is unrealistic. Upstream a minimal driver first then start adding
features.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 02/14] net: txgbe: Add hardware initialization
  2022-05-11  3:26 ` [PATCH net-next 02/14] net: txgbe: Add hardware initialization Jiawen Wu
@ 2022-05-12  5:15   ` kernel test robot
  0 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2022-05-12  5:15 UTC (permalink / raw)
  To: Jiawen Wu, netdev; +Cc: kbuild-all, Jiawen Wu

Hi Jiawen,

I love your patch! Perhaps something to improve:

[auto build test WARNING on horms-ipvs/master]
[cannot apply to net-next/master net/master linus/master v5.18-rc6 next-20220511]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Jiawen-Wu/Wangxun-10-Gigabit-Ethernet-Driver/20220511-113032
base:   https://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git master
config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20220512/202205121354.BKT9ZuVB-lkp@intel.com/config)
compiler: arceb-elf-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/f33cce2ea458796311d5925beaf78c01546f36ce
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Jiawen-Wu/Wangxun-10-Gigabit-Ethernet-Driver/20220511-113032
        git checkout f33cce2ea458796311d5925beaf78c01546f36ce
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=arc SHELL=/bin/bash drivers/net/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:98:6: warning: no previous prototype for 'txgbe_service_event_schedule' [-Wmissing-prototypes]
      98 | void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:178:6: warning: no previous prototype for 'txgbe_reset' [-Wmissing-prototypes]
     178 | void txgbe_reset(struct txgbe_adapter *adapter)
         |      ^~~~~~~~~~~
>> drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:208:6: warning: no previous prototype for 'txgbe_disable_device' [-Wmissing-prototypes]
     208 | void txgbe_disable_device(struct txgbe_adapter *adapter)
         |      ^~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:264:5: warning: no previous prototype for 'txgbe_init_shared_code' [-Wmissing-prototypes]
     264 | s32 txgbe_init_shared_code(struct txgbe_hw *hw)
         |     ^~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/wangxun/txgbe/txgbe_main.c: In function 'txgbe_probe':
   drivers/net/ethernet/wangxun/txgbe/txgbe_main.c:460:18: warning: variable 'pci_using_dac' set but not used [-Wunused-but-set-variable]
     460 |         int err, pci_using_dac, expected_gts;
         |                  ^~~~~~~~~~~~~
--
   drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c:205: warning: Function parameter or member 'pools' not described in 'txgbe_set_rar'
   drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c:205: warning: Excess function parameter 'vmdq' description in 'txgbe_set_rar'
>> drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c:444: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
    *  This function should only be involved in the IOV mode.


vim +/txgbe_disable_device +208 drivers/net/ethernet/wangxun/txgbe/txgbe_main.c

    97	
  > 98	void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
    99	{
   100		if (!test_bit(__TXGBE_DOWN, &adapter->state) &&
   101		    !test_bit(__TXGBE_REMOVING, &adapter->state) &&
   102		    !test_and_set_bit(__TXGBE_SERVICE_SCHED, &adapter->state))
   103			queue_work(txgbe_wq, &adapter->service_task);
   104	}
   105	
   106	static void txgbe_service_event_complete(struct txgbe_adapter *adapter)
   107	{
   108		BUG_ON(!test_bit(__TXGBE_SERVICE_SCHED, &adapter->state));
   109	
   110		/* flush memory to make sure state is correct before next watchdog */
   111		smp_mb__before_atomic();
   112		clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
   113	}
   114	
   115	static void txgbe_remove_adapter(struct txgbe_hw *hw)
   116	{
   117		struct txgbe_adapter *adapter = hw->back;
   118	
   119		if (!hw->hw_addr)
   120			return;
   121		hw->hw_addr = NULL;
   122		txgbe_dev_err("Adapter removed\n");
   123		if (test_bit(__TXGBE_SERVICE_INITED, &adapter->state))
   124			txgbe_service_event_schedule(adapter);
   125	}
   126	
   127	static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
   128	{
   129		struct txgbe_hw *hw = &adapter->hw;
   130		int i;
   131	
   132		for (i = 0; i < hw->mac.num_rar_entries; i++) {
   133			if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) {
   134				if (adapter->mac_table[i].state &
   135						TXGBE_MAC_STATE_IN_USE) {
   136					TCALL(hw, mac.ops.set_rar, i,
   137					      adapter->mac_table[i].addr,
   138					      adapter->mac_table[i].pools,
   139					      TXGBE_PSR_MAC_SWC_AD_H_AV);
   140				} else {
   141					TCALL(hw, mac.ops.clear_rar, i);
   142				}
   143				adapter->mac_table[i].state &=
   144					~(TXGBE_MAC_STATE_MODIFIED);
   145			}
   146		}
   147	}
   148	
   149	/* this function destroys the first RAR entry */
   150	static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter,
   151						 u8 *addr)
   152	{
   153		struct txgbe_hw *hw = &adapter->hw;
   154	
   155		memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
   156		adapter->mac_table[0].pools = 1ULL;
   157		adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT |
   158					       TXGBE_MAC_STATE_IN_USE);
   159		TCALL(hw, mac.ops.set_rar, 0, adapter->mac_table[0].addr,
   160		      adapter->mac_table[0].pools,
   161		      TXGBE_PSR_MAC_SWC_AD_H_AV);
   162	}
   163	
   164	static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
   165	{
   166		u32 i;
   167		struct txgbe_hw *hw = &adapter->hw;
   168	
   169		for (i = 0; i < hw->mac.num_rar_entries; i++) {
   170			adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
   171			adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
   172			memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
   173			adapter->mac_table[i].pools = 0;
   174		}
   175		txgbe_sync_mac_table(adapter);
   176	}
   177	
   178	void txgbe_reset(struct txgbe_adapter *adapter)
   179	{
   180		struct txgbe_hw *hw = &adapter->hw;
   181		struct net_device *netdev = adapter->netdev;
   182		int err;
   183		u8 old_addr[ETH_ALEN];
   184	
   185		if (TXGBE_REMOVED(hw->hw_addr))
   186			return;
   187	
   188		err = TCALL(hw, mac.ops.init_hw);
   189		switch (err) {
   190		case 0:
   191			break;
   192		case TXGBE_ERR_MASTER_REQUESTS_PENDING:
   193			txgbe_dev_err("master disable timed out\n");
   194			break;
   195		default:
   196			txgbe_dev_err("Hardware Error: %d\n", err);
   197		}
   198	
   199		/* do not flush user set addresses */
   200		memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
   201		txgbe_flush_sw_mac_table(adapter);
   202		txgbe_mac_set_default_filter(adapter, old_addr);
   203	
   204		/* update SAN MAC vmdq pool selection */
   205		TCALL(hw, mac.ops.set_vmdq_san_mac, 0);
   206	}
   207	
 > 208	void txgbe_disable_device(struct txgbe_adapter *adapter)
   209	{
   210		struct net_device *netdev = adapter->netdev;
   211		struct txgbe_hw *hw = &adapter->hw;
   212		u32 i;
   213	
   214		/* signal that we are down to the interrupt handler */
   215		if (test_and_set_bit(__TXGBE_DOWN, &adapter->state))
   216			return; /* do nothing if already down */
   217	
   218		txgbe_disable_pcie_master(hw);
   219		/* disable receives */
   220		TCALL(hw, mac.ops.disable_rx);
   221	
   222		/* call carrier off first to avoid false dev_watchdog timeouts */
   223		netif_carrier_off(netdev);
   224		netif_tx_disable(netdev);
   225	
   226		del_timer_sync(&adapter->service_timer);
   227	
   228		if (hw->bus.lan_id == 0)
   229			wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN0_UP, 0);
   230		else if (hw->bus.lan_id == 1)
   231			wr32m(hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN1_UP, 0);
   232		else
   233			txgbe_dev_err("%s: invalid bus lan id %d\n", __func__,
   234				      hw->bus.lan_id);
   235	
   236		if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
   237		      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
   238			/* disable mac transmiter */
   239			wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, 0);
   240		}
   241		/* disable transmits in the hardware now that interrupts are off */
   242		for (i = 0; i < adapter->num_tx_queues; i++) {
   243			u8 reg_idx = adapter->tx_ring[i]->reg_idx;
   244	
   245			wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH);
   246		}
   247	
   248		/* Disable the Tx DMA engine */
   249		wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
   250	}
   251	
   252	void txgbe_down(struct txgbe_adapter *adapter)
   253	{
   254		txgbe_disable_device(adapter);
   255		txgbe_reset(adapter);
   256	}
   257	
   258	/**
   259	 *  txgbe_init_shared_code - Initialize the shared code
   260	 *  @hw: pointer to hardware structure
   261	 *
   262	 *  This will assign function pointers and assign the MAC type and PHY code.
   263	 **/
 > 264	s32 txgbe_init_shared_code(struct txgbe_hw *hw)
   265	{
   266		s32 status;
   267	
   268		status = txgbe_init_ops(hw);
   269		return status;
   270	}
   271	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 14/14] net: txgbe: Support sysfs file system
  2022-05-11  3:26 ` [PATCH net-next 14/14] net: txgbe: Support sysfs file system Jiawen Wu
@ 2022-05-12 11:43   ` Andrew Lunn
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Lunn @ 2022-05-12 11:43 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

On Wed, May 11, 2022 at 11:26:59AM +0800, Jiawen Wu wrote:
> Add support for sysfs file system.

Please always Cc: the hwmon maintainers for hwmon patches.

> +	txgbe_hwmon->device =
> +			hwmon_device_register(pci_dev_to_dev(adapter->pdev));

I _think_ the preferred interface is
hwmon_device_register_with_groups() or
hwmon_device_register_with_info(). The hwmon maintainer will tell you.
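
Untested, and txgbe_hwmon_read()/txgbe_hwmon_is_visible() are made-up
names here, but with the _with_info API it looks roughly like:

	static const struct hwmon_channel_info *txgbe_hwmon_info[] = {
		HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT | HWMON_T_MAX |
					 HWMON_T_CRIT),
		NULL
	};

	static const struct hwmon_ops txgbe_hwmon_ops = {
		.is_visible = txgbe_hwmon_is_visible,
		.read = txgbe_hwmon_read,
	};

	static const struct hwmon_chip_info txgbe_chip_info = {
		.ops = &txgbe_hwmon_ops,
		.info = txgbe_hwmon_info,
	};

	/* in init: no device_create_file(), no manual cleanup needed */
	hwmon = devm_hwmon_device_register_with_info(&adapter->pdev->dev,
						     "txgbe", adapter,
						     &txgbe_chip_info, NULL);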

	Andrew

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 13/14] net: txgbe: Support debug filesystem
  2022-05-11  3:26 ` [PATCH net-next 13/14] net: txgbe: Support debug filesystem Jiawen Wu
@ 2022-05-12 11:56   ` Andrew Lunn
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Lunn @ 2022-05-12 11:56 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

> +static ssize_t
> +txgbe_dbg_reg_ops_write(struct file *filp,
> +			const char __user *buffer,
> +			size_t count, loff_t *ppos)
> +{
> +	struct txgbe_adapter *adapter = filp->private_data;
> +	char *pc = txgbe_dbg_reg_ops_buf;
> +	int len;

No writing from debugfs please. It allows closed source binary blobs
in user space, rather than proper open solutions.

> +/**
> + * txgbe_dbg_adapter_init - setup the debugfs directory for the adapter
> + * @adapter: the adapter that is starting up
> + **/
> +void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter)
> +{
> +	const char *name = pci_name(adapter->pdev);
> +	struct dentry *pfile;
> +
> +	adapter->txgbe_dbg_adapter = debugfs_create_dir(name, txgbe_dbg_root);
> +	if (!adapter->txgbe_dbg_adapter) {
> +		txgbe_dev_err("debugfs entry for %s failed\n", name);
> +		return;
> +	}

Don't check the return code for debugfs_XXX API calls. The API is
designed to work even when there are errors.
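
Untested, but with the checks dropped the whole function shrinks to
something like (all names as in your patch):

	void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter)
	{
		adapter->txgbe_dbg_adapter =
			debugfs_create_dir(pci_name(adapter->pdev),
					   txgbe_dbg_root);
		debugfs_create_file("reg_ops", 0600,
				    adapter->txgbe_dbg_adapter, adapter,
				    &txgbe_dbg_reg_ops_fops);
		debugfs_create_file("netdev_ops", 0600,
				    adapter->txgbe_dbg_adapter, adapter,
				    &txgbe_dbg_netdev_ops_fops);
	}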

> +/**
> + * txgbe_dump - Print registers, tx-rings and rx-rings
> + **/
> +void txgbe_dump(struct txgbe_adapter *adapter)
> +{
> +	struct net_device *netdev = adapter->netdev;
> +	struct txgbe_hw *hw = &adapter->hw;
> +	struct txgbe_reg_info *reg_info;
> +	int n = 0;
> +	struct txgbe_ring *tx_ring;
> +	struct txgbe_tx_buffer *tx_buffer;
> +	union txgbe_tx_desc *tx_desc;
> +	struct my_u0 { u64 a; u64 b; } *u0;
> +	struct txgbe_ring *rx_ring;
> +	union txgbe_rx_desc *rx_desc;
> +	struct txgbe_rx_buffer *rx_buffer_info;
> +	u32 staterr;
> +	int i = 0;
> +
> +	if (!netif_msg_hw(adapter))
> +		return;
> +
> +	/* Print Registers */
> +	dev_info(&adapter->pdev->dev, "Register Dump\n");
> +	pr_info(" Register Name   Value\n");
> +	for (reg_info = txgbe_reg_info_tbl; reg_info->name; reg_info++)
> +		txgbe_regdump(hw, reg_info);
> +
> +	/* Print TX Ring Summary */
> +	if (!netdev || !netif_running(netdev))
> +		return;
> +
> +	dev_info(&adapter->pdev->dev, "TX Rings Summary\n");
> +	pr_info(" %s     %s              %s        %s\n",
> +		"Queue [NTU] [NTC] [bi(ntc)->dma  ]",
> +		"leng", "ntw", "timestamp");

Don't dump this sort of thing to the kernel log. The only case where
this is currently done in the kernel is for helping to find real bugs
in drivers which have broken DMA handling. There is one driver i know
of where the DMA ring will stop working, and after a timeout the ring
is dumped to see what sort of shape it is in, where the head/tail
pointers are, which descriptors are busy etc.

Does your driver have issues? Or is it production ready?

> +#ifdef CONFIG_DEBUG_FS
> +	txgbe_dbg_adapter_init(adapter);
> +#endif
>  	/* setup link for SFP devices with MNG FW, else wait for TXGBE_UP */
>  	if (txgbe_mng_present(hw) && txgbe_is_sfp(hw) &&
>  	    ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
> @@ -6618,6 +6621,10 @@ static void txgbe_remove(struct pci_dev *pdev)
>  		return;
>  
>  	netdev = adapter->netdev;
> +#ifdef CONFIG_DEBUG_FS
> +	txgbe_dbg_adapter_exit(adapter);
> +#endif
> +
>  	set_bit(__TXGBE_REMOVING, &adapter->state);
>  	cancel_work_sync(&adapter->service_task);
>  
> @@ -6703,6 +6710,10 @@ static int __init txgbe_init_module(void)
>  		return -ENOMEM;
>  	}
>  
> +#ifdef CONFIG_DEBUG_FS
> +	txgbe_dbg_init();
> +#endif
> +
>  	ret = pci_register_driver(&txgbe_driver);
>  	return ret;
>  }
> @@ -6720,6 +6731,9 @@ static void __exit txgbe_exit_module(void)
>  	pci_unregister_driver(&txgbe_driver);
>  	if (txgbe_wq)
>  		destroy_workqueue(txgbe_wq);
> +#ifdef CONFIG_DEBUG_FS
> +	txgbe_dbg_exit();
> +#endif

debugfs is designed such that you don't need all these #ifdef
CONFIG_DEBUG_FS calls. There are stubs for when it is not part of the
kernel. So just make the calls.

In general #ifdef is not liked in C code. Try to avoid it.
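
For example, keep the #ifdef in the header only, with empty inline
stubs, and the .c files can call the helpers unconditionally:

	/* txgbe.h */
	#ifdef CONFIG_DEBUG_FS
	void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter);
	void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter);
	#else
	static inline void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter) {}
	static inline void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter) {}
	#endif

	/* txgbe_main.c probe path, no #ifdef needed */
	txgbe_dbg_adapter_init(adapter);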

	Andrew

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 10/14] net: txgbe: Add ethtool support
  2022-05-11  3:26 ` [PATCH net-next 10/14] net: txgbe: Add ethtool support Jiawen Wu
@ 2022-05-12 12:11   ` Andrew Lunn
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Lunn @ 2022-05-12 12:11 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

> +int txgbe_get_link_ksettings(struct net_device *netdev,
> +			     struct ethtool_link_ksettings *cmd)
> +{
> +	struct txgbe_adapter *adapter = netdev_priv(netdev);
> +	struct txgbe_hw *hw = &adapter->hw;
> +	u32 supported_link;
> +	u32 link_speed = 0;
> +	bool autoneg = false;
> +	u32 supported, advertising;
> +	bool link_up;



> +
> +	if (!in_interrupt()) {
> +		TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
> +	} else {
> +		/* this case is a special workaround for RHEL5 bonding
> +		 * that calls this routine from interrupt context
> +		 */
> +		link_speed = adapter->link_speed;
> +		link_up = adapter->link_up;
> +	}

Does mainline do this? You are contributing this driver to mainline,
not a vendor kernel. Don't work around vendor issues.

    Andrew

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 04/14] net: txgbe: Add PHY interface support
  2022-05-11  3:26 ` [PATCH net-next 04/14] net: txgbe: Add PHY interface support Jiawen Wu
@ 2022-05-12 12:21   ` Andrew Lunn
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Lunn @ 2022-05-12 12:21 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

> -s32 txgbe_reset_hw(struct txgbe_hw *hw)
> +s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg)
>  {


> +	if (1) {
> +		/* 2. Disable xpcs AN-73 */

You don't use if (1) in kernel quality code.

    Andrew

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver
  2022-05-11  3:26 ` [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver Jiawen Wu
  2022-05-12  0:38   ` kernel test robot
@ 2022-05-12 15:52   ` Andrew Lunn
  1 sibling, 0 replies; 26+ messages in thread
From: Andrew Lunn @ 2022-05-12 15:52 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

> +Disable LRO if enabling ip forwarding or bridging
> +-------------------------------------------------
> +WARNING: The txgbe driver supports the Large Receive Offload (LRO) feature.
> +This option offers the lowest CPU utilization for receives but is completely
> +incompatible with *routing/ip forwarding* and *bridging*. If enabling ip
> +forwarding or bridging is a requirement, it is necessary to disable LRO using
> +compile time options as noted in the LRO section later in this document. The
> +result of not disabling LRO when combined with ip forwarding or bridging can be
> +low throughput or even a kernel panic.

Please could you tell us more. Is this a hardware/firmware issue? A
know bug in your driver you cannot find?

And why is it a compile time option, not ethtool -k?

> +ethtool
> +-------
> +The driver utilizes the ethtool interface for driver configuration and
> +diagnostics, as well as displaying statistical information. The latest
> +ethtool version is required for this functionality.

And will this still be true in 5 years time?

> +Disable GRO when routing/bridging
> +---------------------------------
> +Due to a known kernel issue, GRO must be turned off when routing/bridging. GRO
> +can be turned off via ethtool::

What kernel issue?

     Andrew

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver
       [not found]   ` <004401d865e4$c3073d10$4915b730$@trustnetic.com>
@ 2022-05-12 15:57     ` Jakub Kicinski
  2022-05-12 22:06       ` Andrew Lunn
  0 siblings, 1 reply; 26+ messages in thread
From: Jakub Kicinski @ 2022-05-12 15:57 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: netdev

On Thu, 12 May 2022 17:43:39 +0800 Jiawen Wu wrote:
> On Thursday, May 12, 2022 8:54 AM, Jakub Kicinski wrote:
> > On Wed, 11 May 2022 11:26:45 +0800 Jiawen Wu wrote:  
> > >  22 files changed, 22839 insertions(+)  
> > 
> > Cut it up more, please. Expecting folks to review 23kLoC in one sitting is
> > unrealistic. Upstream a minimal driver first then start adding features.  
> 
> I learned from the guidance document that the number of patches
> should not exceed 15 at a time.
> May I ask your advice on the limit for a single patch and for the
> total number of lines?

There is no strict limit, but the reality is that we have maybe
5 people reviewing code upstream and hundreds of developers typing
and sending changes. So the process needs to be skewed towards making
reviewers' lives easier; reviewers are the bottleneck.

So there is no easy way here. Remove as much code as possible while
still having a functional driver, and cut it up. It looks like you can
definitely drop all patches starting from patch 7 to begin with. But
patches 1-6 are still pretty huge.


* Re: [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver
  2022-05-12 15:57     ` Jakub Kicinski
@ 2022-05-12 22:06       ` Andrew Lunn
       [not found]         ` <001201d86671$61c86410$25592c30$@trustnetic.com>
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Lunn @ 2022-05-12 22:06 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: Jiawen Wu, netdev

On Thu, May 12, 2022 at 08:57:48AM -0700, Jakub Kicinski wrote:
> On Thu, 12 May 2022 17:43:39 +0800 Jiawen Wu wrote:
> > On Thursday, May 12, 2022 8:54 AM, Jakub Kicinski wrote:
> > > On Wed, 11 May 2022 11:26:45 +0800 Jiawen Wu wrote:  
> > > >  22 files changed, 22839 insertions(+)  
> > > 
> > > Cut it up more, please. Expecting folks to review 23kLoC in one sitting is
> > > unrealistic. Upstream a minimal driver first then start adding features.  
> > 
> > I learned from the guidance document that the number of patches
> > should not exceed 15 at a time.
> > May I ask your advice on the limit for a single patch and for the
> > total number of lines?
> 
> There is no strict limit, but the reality is that we have maybe
> 5 people reviewing code upstream and hundreds of developers typing
> and sending changes. So the process needs to be skewed towards making
> reviewers' lives easier; reviewers are the bottleneck.
> 
> So there is no easy way here. Remove as much code as possible while
> still having a functional driver, and cut it up. It looks like you can
> definitely drop all patches starting from patch 7 to begin with. But
> patches 1-6 are still pretty huge.

Hi Jiawen Wu

After a quick look at the patches, I'm guessing you are new to
submitting to mainline. So it might even make sense to just post patch
1. Please make it standalone, so most of the txgbe.rst can be removed,
since the driver with just patch 1 has none of the features discussed
in it. We can give you feedback which should help you learn the
process, and what we expect from mainline code. You can then rework
that patch and the rest of your driver from what you have learned,
making further patches easier and faster to review.
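
For instance, a single standalone posting can be produced with the
usual tools (the addresses and commit reference here are
illustrative):

    git format-patch --subject-prefix="PATCH net-next" -1 <commit>
    git send-email --to=netdev@vger.kernel.org 0001-*.patch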

       Andrew



* Re: [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver
       [not found]         ` <001201d86671$61c86410$25592c30$@trustnetic.com>
@ 2022-05-13 12:27           ` Andrew Lunn
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Lunn @ 2022-05-13 12:27 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: 'Jakub Kicinski', netdev

> By the way, I have some email-related questions.
> I can't find my reply in Patchwork, so I don't know whether it was
> sent successfully.

Try looking at the other archive, like lore.
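
(Messages on lore can be looked up by Message-ID, e.g.
https://lore.kernel.org/netdev/<your-message-id>/, which is the
standard lore URL convention.)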

It could be that your mailer is wrapping your email in MIME, so it
got dropped.

> Also, I always receive two identical copies of each reply from other
> people at the same time. Is there a setting I have forgotten?

That is probably one direct and one via the list. Take a look at the
email headers to confirm this; check the path the email took.

I actually find this useful, since I have all the list emails going
into a separate mailbox. Replies from the list can be missed, since
they are just one among 700 a day, but they do form a good local
archive. The direct emails land in my main mailbox, where they get
seen quickly.

     Andrew


Thread overview: 26+ messages

2022-05-11  3:26 [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 01/14] net: txgbe: Add build support for txgbe ethernet driver Jiawen Wu
2022-05-12  0:38   ` kernel test robot
2022-05-12 15:52   ` Andrew Lunn
2022-05-11  3:26 ` [PATCH net-next 02/14] net: txgbe: Add hardware initialization Jiawen Wu
2022-05-12  5:15   ` kernel test robot
2022-05-11  3:26 ` [PATCH net-next 03/14] net: txgbe: Add operations to interact with firmware Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 04/14] net: txgbe: Add PHY interface support Jiawen Wu
2022-05-12 12:21   ` Andrew Lunn
2022-05-11  3:26 ` [PATCH net-next 05/14] net: txgbe: Add interrupt support Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 06/14] net: txgbe: Support to receive and tranmit packets Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 07/14] net: txgbe: Support flow control Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 08/14] net: txgbe: Support flow director Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 09/14] net: txgbe: Support PTP Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 10/14] net: txgbe: Add ethtool support Jiawen Wu
2022-05-12 12:11   ` Andrew Lunn
2022-05-11  3:26 ` [PATCH net-next 11/14] net: txgbe: Support PCIe recovery Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 12/14] net: txgbe: Support power management Jiawen Wu
2022-05-11  3:26 ` [PATCH net-next 13/14] net: txgbe: Support debug filesystem Jiawen Wu
2022-05-12 11:56   ` Andrew Lunn
2022-05-11  3:26 ` [PATCH net-next 14/14] net: txgbe: Support sysfs file system Jiawen Wu
2022-05-12 11:43   ` Andrew Lunn
2022-05-12  0:54 ` [PATCH net-next 00/14] Wangxun 10 Gigabit Ethernet Driver Jakub Kicinski
     [not found]   ` <004401d865e4$c3073d10$4915b730$@trustnetic.com>
2022-05-12 15:57     ` Jakub Kicinski
2022-05-12 22:06       ` Andrew Lunn
     [not found]         ` <001201d86671$61c86410$25592c30$@trustnetic.com>
2022-05-13 12:27           ` Andrew Lunn
