* [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24
@ 2020-08-24 17:32 Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 01/15] virtchnl: Extend AVF ops Tony Nguyen
                   ` (14 more replies)
  0 siblings, 15 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem; +Cc: Tony Nguyen, netdev, nhorman, sassmann, jeffrey.t.kirsher

This series introduces both the Intel Ethernet Common Module and the Intel
Data Plane Function.  The patches also add extended features and
functionality to the virtchnl.h file.
 
The series is structured to first add the data definitions, then introduce
function stubs, and finally fill in the pieces as large, cohesive subjects
or blocks of functionality.  This is to allow a more in-depth understanding
and review of the bigger picture as the series is reviewed.

Currently, the common layer (iecm) is only being used by the idpf driver
(the PF driver for the SmartNIC).  However, the plan is to eventually
switch our iavf driver, along with future drivers, to use this common
module.  The hope is to better enable code sharing going forward, as well
as to support other developers writing drivers for our hardware.
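
To illustrate the intended layering, here is a rough, hypothetical sketch
(not taken from the patches) of how a product driver such as idpf is
expected to hand its PCI probe off to the common module; only the
iecm_probe()/iecm_remove() entry points declared in
include/linux/net/intel/iecm.h are assumed:

	/* Hypothetical product-driver probe; the real idpf code is in the
	 * idpf patch of this series.
	 */
	static int example_pf_probe(struct pci_dev *pdev,
				    const struct pci_device_id *ent)
	{
		struct iecm_adapter *adapter;

		adapter = devm_kzalloc(&pdev->dev, sizeof(*adapter), GFP_KERNEL);
		if (!adapter)
			return -ENOMEM;

		/* product-specific reg_ops/vc_ops would be hooked up here
		 * before handing control to the common module
		 */
		return iecm_probe(pdev, ent, adapter);
	}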

v2: Addresses comments from the original series.  This includes removing
    the iecm_ctlq_err in iecm_ctlq_api.h, the private flags and duplicated
    checks, and cleaning up the clamps in iecm_ethtool.c.  We also added
    the supported_coalesce_params flags in iecm_ethtool.c.  Finally, we
    cleaned up the headers and fixed mismatched types in cpu_to_le calls
    (this fixes the C=2 W=1 errors that were reported).
v3: Fixed a missed compile warning/error with C=2 W=1.
v4: Fixed missed static in idpf_main.c on idpf_probe. Added missing local
    variable in iecm_rx_can_reuse_page. Updated location of documentation,
    refactored soft reset path to take memory allocation into account,
    refactored ethtool stats to not use VA_ARGS, *greatly* reduced use of
    iecm_status enum, aligned use of periods in debug statements, and
    refactored to reduce line indents.
v5: Cleaned up some checks the core is already doing, corrected the
    calculation for txq and rxq, removed the FW version that had been
    missed in the previous version, replaced a define with a direct
    dma_wmb() call, cleaned up the memory and header files that were not
    used, and cleaned up the self-defined error codes.

The following are changes since commit 7611cbb900b4b9a6fe5eca12fb12645c0576a015:
  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
and are available in the git repository at:
  git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue 100GbE
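
For example, the series can be fetched with:

  git pull git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue 100GbE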

Alan Brady (1):
  idpf: Introduce idpf driver

Alice Michael (14):
  virtchnl: Extend AVF ops
  iecm: Add framework set of header files
  iecm: Add TX/RX header files
  iecm: Common module introduction and function stubs
  iecm: Add basic netdevice functionality
  iecm: Implement mailbox functionality
  iecm: Implement virtchnl commands
  iecm: Implement vector allocation
  iecm: Init and allocate vport
  iecm: Deinit vport
  iecm: Add splitq TX/RX
  iecm: Add singleq TX/RX
  iecm: Add ethtool
  iecm: Add iecm to the kernel build system

 .../device_drivers/ethernet/intel/idpf.rst    |   47 +
 .../device_drivers/ethernet/intel/iecm.rst    |   93 +
 MAINTAINERS                                   |    1 +
 drivers/net/ethernet/intel/Kconfig            |   21 +
 drivers/net/ethernet/intel/Makefile           |    2 +
 drivers/net/ethernet/intel/idpf/Makefile      |   12 +
 drivers/net/ethernet/intel/idpf/idpf_dev.h    |   17 +
 drivers/net/ethernet/intel/idpf/idpf_devids.h |   10 +
 drivers/net/ethernet/intel/idpf/idpf_main.c   |  137 +
 drivers/net/ethernet/intel/idpf/idpf_reg.c    |  152 +
 drivers/net/ethernet/intel/iecm/Makefile      |   18 +
 .../net/ethernet/intel/iecm/iecm_controlq.c   |  668 +++
 .../ethernet/intel/iecm/iecm_controlq_setup.c |  176 +
 .../net/ethernet/intel/iecm/iecm_ethtool.c    | 1048 +++++
 drivers/net/ethernet/intel/iecm/iecm_lib.c    | 1235 ++++++
 drivers/net/ethernet/intel/iecm/iecm_main.c   |   41 +
 .../ethernet/intel/iecm/iecm_singleq_txrx.c   |  892 ++++
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   | 3911 +++++++++++++++++
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   | 2203 ++++++++++
 include/linux/avf/virtchnl.h                  |  592 +++
 include/linux/net/intel/iecm.h                |  399 ++
 include/linux/net/intel/iecm_alloc.h          |    7 +
 include/linux/net/intel/iecm_controlq.h       |  118 +
 include/linux/net/intel/iecm_controlq_api.h   |  192 +
 include/linux/net/intel/iecm_lan_pf_regs.h    |  120 +
 include/linux/net/intel/iecm_lan_txrx.h       |  635 +++
 include/linux/net/intel/iecm_mem.h            |   20 +
 include/linux/net/intel/iecm_txrx.h           |  582 +++
 28 files changed, 13349 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/intel/idpf.rst
 create mode 100644 Documentation/networking/device_drivers/ethernet/intel/iecm.rst
 create mode 100644 drivers/net/ethernet/intel/idpf/Makefile
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_dev.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_devids.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_main.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_reg.c
 create mode 100644 drivers/net/ethernet/intel/iecm/Makefile
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_ethtool.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_lib.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_main.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_txrx.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
 create mode 100644 include/linux/net/intel/iecm.h
 create mode 100644 include/linux/net/intel/iecm_alloc.h
 create mode 100644 include/linux/net/intel/iecm_controlq.h
 create mode 100644 include/linux/net/intel/iecm_controlq_api.h
 create mode 100644 include/linux/net/intel/iecm_lan_pf_regs.h
 create mode 100644 include/linux/net/intel/iecm_lan_txrx.h
 create mode 100644 include/linux/net/intel/iecm_mem.h
 create mode 100644 include/linux/net/intel/iecm_txrx.h

-- 
2.26.2



* [net-next v5 01/15] virtchnl: Extend AVF ops
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 19:42   ` Jakub Kicinski
  2020-08-24 17:32 ` [net-next v5 02/15] iecm: Add framework set of header files Tony Nguyen
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

This implements the next generation of virtchnl ops which
enable greater functionality and capabilities.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <Alan.Brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <Pavan.Kumar.Linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 include/linux/avf/virtchnl.h | 592 +++++++++++++++++++++++++++++++++++
 1 file changed, 592 insertions(+)

diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index 40bad71865ea..b8b5d545b621 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -136,6 +136,34 @@ enum virtchnl_ops {
 	VIRTCHNL_OP_DISABLE_CHANNELS = 31,
 	VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
 	VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+	/* New major set of opcodes introduced and so leaving room for
+	 * old misc opcodes to be added in future. Also these opcodes may only
+	 * be used if both the PF and VF have successfully negotiated the
+	 * VIRTCHNL_VF_CAP_EXT_FEATURES capability during initial capabilities
+	 * exchange.
+	 */
+	VIRTCHNL_OP_GET_CAPS = 100,
+	VIRTCHNL_OP_CREATE_VPORT = 101,
+	VIRTCHNL_OP_DESTROY_VPORT = 102,
+	VIRTCHNL_OP_ENABLE_VPORT = 103,
+	VIRTCHNL_OP_DISABLE_VPORT = 104,
+	VIRTCHNL_OP_CONFIG_TX_QUEUES = 105,
+	VIRTCHNL_OP_CONFIG_RX_QUEUES = 106,
+	VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
+	VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
+	VIRTCHNL_OP_ADD_QUEUES = 109,
+	VIRTCHNL_OP_DEL_QUEUES = 110,
+	VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+	VIRTCHNL_OP_UNMAP_QUEUE_VECTOR = 112,
+	VIRTCHNL_OP_GET_RSS_KEY = 113,
+	VIRTCHNL_OP_GET_RSS_LUT = 114,
+	VIRTCHNL_OP_SET_RSS_LUT = 115,
+	VIRTCHNL_OP_GET_RSS_HASH = 116,
+	VIRTCHNL_OP_SET_RSS_HASH = 117,
+	VIRTCHNL_OP_CREATE_VFS = 118,
+	VIRTCHNL_OP_DESTROY_VFS = 119,
+	VIRTCHNL_OP_ALLOC_VECTORS = 120,
+	VIRTCHNL_OP_DEALLOC_VECTORS = 121,
 };
 
 /* These macros are used to generate compilation errors if a structure/union
@@ -463,6 +491,21 @@ VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
  * PF replies with struct eth_stats in an external buffer.
  */
 
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes */
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
 /* VIRTCHNL_OP_CONFIG_RSS_KEY
  * VIRTCHNL_OP_CONFIG_RSS_LUT
  * VF sends these messages to configure RSS. Only supported if both PF
@@ -503,6 +546,14 @@ struct virtchnl_rss_hena {
 
 VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
 
+/* Type of RSS algorithm */
+enum virtchnl_rss_algorithm {
+	VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC	= 0,
+	VIRTCHNL_RSS_ALG_R_ASYMMETRIC		= 1,
+	VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC	= 2,
+	VIRTCHNL_RSS_ALG_XOR_SYMMETRIC		= 3,
+};
+
 /* VIRTCHNL_OP_ENABLE_CHANNELS
  * VIRTCHNL_OP_DISABLE_CHANNELS
  * VF sends these messages to enable or disable channels based on
@@ -668,6 +719,389 @@ enum virtchnl_vfr_states {
 	VIRTCHNL_VFR_VFACTIVE,
 };
 
+/* PF capability flags
+ * VIRTCHNL_CAP_STATELESS_OFFLOADS flag indicates stateless offloads
+ * such as TX/RX Checksum offloading and TSO for non-tunneled packets. Please
+ * note that old and new capabilities are exclusive and not supposed to be
+ * mixed
+ */
+#define VIRTCHNL_CAP_STATELESS_OFFLOADS	BIT(1)
+#define VIRTCHNL_CAP_UDP_SEG_OFFLOAD	BIT(2)
+#define VIRTCHNL_CAP_RSS		BIT(3)
+#define VIRTCHNL_CAP_TCP_RSC		BIT(4)
+#define VIRTCHNL_CAP_HEADER_SPLIT	BIT(5)
+#define VIRTCHNL_CAP_RDMA		BIT(6)
+#define VIRTCHNL_CAP_SRIOV		BIT(7)
+/* Earliest Departure Time capability used for Timing Wheel */
+#define VIRTCHNL_CAP_EDT		BIT(8)
+
+/* Type of virtual port */
+enum virtchnl_vport_type {
+	VIRTCHNL_VPORT_TYPE_DEFAULT	= 0,
+};
+
+/* Type of queue model */
+enum virtchnl_queue_model {
+	VIRTCHNL_QUEUE_MODEL_SINGLE	= 0,
+	VIRTCHNL_QUEUE_MODEL_SPLIT	= 1,
+};
+
+/* TX and RX queue types are valid in legacy as well as split queue models.
+ * With Split Queue model, 2 additional types are introduced - TX_COMPLETION
+ * and RX_BUFFER. In split queue model, RX corresponds to the queue where HW
+ * posts completions.
+ */
+enum virtchnl_queue_type {
+	VIRTCHNL_QUEUE_TYPE_TX			= 0,
+	VIRTCHNL_QUEUE_TYPE_RX			= 1,
+	VIRTCHNL_QUEUE_TYPE_TX_COMPLETION	= 2,
+	VIRTCHNL_QUEUE_TYPE_RX_BUFFER		= 3,
+};
+
+/* RX Queue Feature bits */
+#define VIRTCHNL_RXQ_RSC			BIT(1)
+#define VIRTCHNL_RXQ_HDR_SPLIT			BIT(2)
+#define VIRTCHNL_RXQ_IMMEDIATE_WRITE_BACK	BIT(4)
+
+/* RX Queue Descriptor Types */
+enum virtchnl_rxq_desc_size {
+	VIRTCHNL_RXQ_DESC_SIZE_16BYTE		= 0,
+	VIRTCHNL_RXQ_DESC_SIZE_32BYTE		= 1,
+};
+
+/* TX Queue Scheduling Modes.  Queue mode is the legacy type, i.e. in-order,
+ * and Flow mode is out-of-order packet processing.
+ */
+enum virtchnl_txq_sched_mode {
+	VIRTCHNL_TXQ_SCHED_MODE_QUEUE		= 0,
+	VIRTCHNL_TXQ_SCHED_MODE_FLOW		= 1,
+};
+
+/* Queue Descriptor Profiles.  Base mode is the legacy descriptors and
+ * Native is the flex descriptors.
+ */
+enum virtchnl_desc_profile {
+	VIRTCHNL_TXQ_DESC_PROFILE_BASE		= 0,
+	VIRTCHNL_TXQ_DESC_PROFILE_NATIVE	= 1,
+};
+
+/* VIRTCHNL_OP_GET_CAPS
+ * PF sends this message to CP to negotiate capabilities by filling
+ * in the u64 bitmap of its desired capabilities, max_num_vfs and
+ * num_allocated_vectors.
+ * CP responds with an updated virtchnl_get_capabilities structure
+ * with allowed capabilities and the other fields as below.
+ * If PF sets max_num_vfs as 0, CP will respond with max number of VFs
+ * that can be created by this PF. For any other value 'n', CP responds
+ * with max_num_vfs set to max(n, x) where x is the max number of VFs
+ * allowed by CP's policy.
+ * If PF sets num_allocated_vectors as 0, CP will respond with 1 which
+ * is default vector associated with the default mailbox. For any other
+ * value 'n', CP responds with a value <= n based on the CP's policy of
+ * max number of vectors for a PF.
+ */
+struct virtchnl_get_capabilities {
+	u64 cap_flags;
+	u16 max_num_vfs;
+	u16 num_allocated_vectors;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_get_capabilities);
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl_queue_chunk {
+	enum virtchnl_queue_type type;
+	u16 start_queue_id;
+	u16 num_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl_queue_chunks {
+	u16 num_chunks;
+	u16 rsvd;
+	struct virtchnl_queue_chunk chunks[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_chunks);
+
+/* VIRTCHNL_OP_CREATE_VPORT
+ * PF sends this message to CP to create a vport by filling in the first 8
+ * fields of virtchnl_create_vport structure (vport type, tx, rx queue models
+ * and desired number of queues and vectors). CP responds with the updated
+ * virtchnl_create_vport structure containing the number of assigned queues,
+ * vectors, vport id, max mtu, default mac addr followed by chunks which in turn
+ * will have an array of num_chunks entries of virtchnl_queue_chunk structures.
+ */
+struct virtchnl_create_vport {
+	enum virtchnl_vport_type vport_type;
+	/* single or split */
+	enum virtchnl_queue_model txq_model;
+	/* single or split */
+	enum virtchnl_queue_model rxq_model;
+	u16 num_tx_q;
+	/* valid only if txq_model is split Q */
+	u16 num_tx_complq;
+	u16 num_rx_q;
+	/* valid only if rxq_model is split Q */
+	u16 num_rx_bufq;
+	u16 vport_id;
+	u16 max_mtu;
+	u8 default_mac_addr[ETH_ALEN];
+	enum virtchnl_rss_algorithm rss_algorithm;
+	u16 rss_key_size;
+	u16 rss_lut_size;
+	u16 qset_handle;
+	struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(56, virtchnl_create_vport);
+
+/* VIRTCHNL_OP_DESTROY_VPORT
+ * VIRTCHNL_OP_ENABLE_VPORT
+ * VIRTCHNL_OP_DISABLE_VPORT
+ * PF sends this message to CP to destroy, enable or disable a vport by filling
+ * in the vport_id in virtchnl_vport structure.
+ * CP responds with the status of the requested operation.
+ */
+struct virtchnl_vport {
+	u16 vport_id;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2, virtchnl_vport);
+
+/* Tx queue config info */
+struct virtchnl_txq_info_v2 {
+	u16 queue_id;
+	/* single or split */
+	enum virtchnl_queue_model model;
+	/* tx or tx_completion */
+	enum virtchnl_queue_type type;
+	/* queue or flow based */
+	enum virtchnl_txq_sched_mode sched_mode;
+	/* base or native */
+	enum virtchnl_desc_profile desc_profile;
+	u16 ring_len;
+	u64 dma_ring_addr;
+	/* valid only if queue model is split and type is tx */
+	u16 tx_compl_queue_id;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_txq_info_v2);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUES
+ * PF sends this message to set up parameters for one or more TX queues.
+ * This message contains an array of num_qinfo instances of virtchnl_txq_info_v2
+ * structures. CP configures requested queues and returns a status code. If
+ * num_qinfo specified is greater than the number of queues associated with the
+ * vport, an error is returned and no queues are configured.
+ */
+struct virtchnl_config_tx_queues {
+	u16 vport_id;
+	u16 num_qinfo;
+	u32 rsvd;
+	struct virtchnl_txq_info_v2 qinfo[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_config_tx_queues);
+
+/* Rx queue config info */
+struct virtchnl_rxq_info_v2 {
+	u16 queue_id;
+	/* single or split */
+	enum virtchnl_queue_model model;
+	/* rx or rx buffer */
+	enum virtchnl_queue_type type;
+	/* base or native */
+	enum virtchnl_desc_profile desc_profile;
+	/* rsc, header-split, immediate write back */
+	u16 queue_flags;
+	/* 16 or 32 byte */
+	enum virtchnl_rxq_desc_size desc_size;
+	u16 ring_len;
+	u16 hdr_buffer_size;
+	u32 data_buffer_size;
+	u32 max_pkt_size;
+	u64 dma_ring_addr;
+	u64 dma_head_wb_addr;
+	u16 rsc_low_watermark;
+	u8 buffer_notif_stride;
+	enum virtchnl_rx_hsplit rx_split_pos;
+	/* valid only if queue model is split and type is rx buffer */
+	u16 rx_bufq1_id;
+	/* valid only if queue model is split and type is rx buffer */
+	u16 rx_bufq2_id;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_rxq_info_v2);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUES
+ * PF sends this message to set up parameters for one or more RX queues.
+ * This message contains an array of num_qinfo instances of virtchnl_rxq_info_v2
+ * structures. CP configures requested queues and returns a status code.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ */
+struct virtchnl_config_rx_queues {
+	u16 vport_id;
+	u16 num_qinfo;
+	struct virtchnl_rxq_info_v2 qinfo[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_config_rx_queues);
+
+/* VIRTCHNL_OP_ADD_QUEUES
+ * PF sends this message to request additional TX/RX queues beyond the ones
+ * that were assigned via CREATE_VPORT request. virtchnl_add_queues structure is
+ * used to specify the number of each type of queues.
+ * CP responds with the same structure with the actual number of queues assigned
+ * followed by num_chunks of virtchnl_queue_chunk structures.
+ */
+struct virtchnl_add_queues {
+	u16 vport_id;
+	u16 num_tx_q;
+	u16 num_tx_complq;
+	u16 num_rx_q;
+	u16 num_rx_bufq;
+	struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_add_queues);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VIRTCHNL_OP_DEL_QUEUES
+ * PF sends these messages to enable, disable or delete queues specified in
+ * chunks. PF sends virtchnl_del_ena_dis_queues struct to specify the queues
+ * to be enabled/disabled/deleted. Also applicable to single queue RX or
+ * TX. CP performs requested action and returns status.
+ */
+struct virtchnl_del_ena_dis_queues {
+	u16 vport_id;
+	struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_del_ena_dis_queues);
+
+/* Virtchannel interrupt throttling rate index */
+enum virtchnl_itr_idx {
+	VIRTCHNL_ITR_IDX_0	= 0,
+	VIRTCHNL_ITR_IDX_1	= 1,
+	VIRTCHNL_ITR_IDX_NO_ITR	= 3,
+};
+
+/* Queue to vector mapping */
+struct virtchnl_queue_vector {
+	u16 queue_id;
+	u16 vector_id;
+	enum virtchnl_itr_idx itr_idx;
+	enum virtchnl_queue_type queue_type;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_vector);
+
+/* VIRTCHNL_OP_MAP_QUEUE_VECTOR
+ * VIRTCHNL_OP_UNMAP_QUEUE_VECTOR
+ * PF sends this message to map or unmap queues to vectors and ITR index
+ * registers. External data buffer contains virtchnl_queue_vector_maps structure
+ * that contains num_maps of virtchnl_queue_vector structures.
+ * CP maps the requested queue vector maps after validating the queue and vector
+ * ids and returns a status code.
+ */
+struct virtchnl_queue_vector_maps {
+	u16 vport_id;
+	u16 num_maps;
+	struct virtchnl_queue_vector qv_maps[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_queue_vector_maps);
+
+/* Structure to specify a chunk of contiguous interrupt vectors */
+struct virtchnl_vector_chunk {
+	u16 start_vector_id;
+	u16 num_vectors;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_vector_chunk);
+
+/* Structure to specify several chunks of contiguous interrupt vectors */
+struct virtchnl_vector_chunks {
+	u16 num_vector_chunks;
+	struct virtchnl_vector_chunk num_vchunk[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vector_chunks);
+
+/* VIRTCHNL_OP_ALLOC_VECTORS
+ * PF sends this message to request additional interrupt vectors beyond the
+ * ones that were assigned via GET_CAPS request. virtchnl_alloc_vectors
+ * structure is used to specify the number of vectors requested. CP responds
+ * with the same structure with the actual number of vectors assigned followed
+ * by virtchnl_vector_chunks structure identifying the vector ids.
+ */
+struct virtchnl_alloc_vectors {
+	u16 num_vectors;
+	struct virtchnl_vector_chunks vchunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_alloc_vectors);
+
+/* VIRTCHNL_OP_DEALLOC_VECTORS
+ * PF sends this message to release the vectors.
+ * PF sends virtchnl_vector_chunks struct to specify the vectors it is giving
+ * away. CP performs requested action and returns status.
+ */
+
+/* VIRTCHNL_OP_GET_RSS_LUT
+ * VIRTCHNL_OP_SET_RSS_LUT
+ * PF sends this message to get or set RSS lookup table. Only supported if
+ * both PF and CP drivers set the VIRTCHNL_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl_rss_lut_v2 structure
+ */
+struct virtchnl_rss_lut_v2 {
+	u16 vport_id;
+	u16 lut_entries;
+	u16 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut_v2);
+
+/* VIRTCHNL_OP_GET_RSS_KEY
+ * PF sends this message to get RSS key. Only supported if
+ * both PF and CP drivers set the VIRTCHNL_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl_rss_key structure
+ */
+
+/* VIRTCHNL_OP_GET_RSS_HASH
+ * VIRTCHNL_OP_SET_RSS_HASH
+ * PF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the CP sets these to all possible traffic types that the
+ * hardware supports. The PF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Only supported if both PF and CP drivers set the VIRTCHNL_CAP_RSS bit
+ * during configuration negotiation.
+ */
+struct virtchnl_rss_hash {
+	u64 hash;
+	u16 vport_id;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);
+
+/* VIRTCHNL_OP_CREATE_VFS
+ * VIRTCHNL_OP_DESTROY_VFS
+ * This message is used to let the CP know how many SRIOV VFs need to be
+ * created. The actual allocation of resources for the VFs in terms of VSI,
+ * Queues and Interrupts is done by CP. When this call completes, the APF driver
+ * calls pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
+ */
+struct virtchnl_sriov_vfs_info {
+	u16 num_vfs;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2, virtchnl_sriov_vfs_info);
+
 /**
  * virtchnl_vc_validate_vf_msg
  * @ver: Virtchnl version info
@@ -828,6 +1262,164 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
 	case VIRTCHNL_OP_DEL_CLOUD_FILTER:
 		valid_len = sizeof(struct virtchnl_filter);
 		break;
+	case VIRTCHNL_OP_GET_CAPS:
+		valid_len = sizeof(struct virtchnl_get_capabilities);
+		break;
+	case VIRTCHNL_OP_CREATE_VPORT:
+		valid_len = sizeof(struct virtchnl_create_vport);
+		if (msglen >= valid_len) {
+			struct virtchnl_create_vport *cvport =
+				(struct virtchnl_create_vport *)msg;
+
+			if (cvport->chunks.num_chunks == 0) {
+				/* zero chunks is allowed as input */
+				break;
+			}
+
+			valid_len += (cvport->chunks.num_chunks - 1) *
+				sizeof(struct virtchnl_queue_chunk);
+		}
+		break;
+	case VIRTCHNL_OP_DESTROY_VPORT:
+	case VIRTCHNL_OP_ENABLE_VPORT:
+	case VIRTCHNL_OP_DISABLE_VPORT:
+		valid_len = sizeof(struct virtchnl_vport);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUES:
+		valid_len = sizeof(struct virtchnl_config_tx_queues);
+		if (msglen >= valid_len) {
+			struct virtchnl_config_tx_queues *ctq =
+				(struct virtchnl_config_tx_queues *)msg;
+			if (ctq->num_qinfo == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (ctq->num_qinfo - 1) *
+				sizeof(struct virtchnl_txq_info_v2);
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUES:
+		valid_len = sizeof(struct virtchnl_config_rx_queues);
+		if (msglen >= valid_len) {
+			struct virtchnl_config_rx_queues *crq =
+				(struct virtchnl_config_rx_queues *)msg;
+			if (crq->num_qinfo == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (crq->num_qinfo - 1) *
+				sizeof(struct virtchnl_rxq_info_v2);
+		}
+		break;
+	case VIRTCHNL_OP_ADD_QUEUES:
+		valid_len = sizeof(struct virtchnl_add_queues);
+		if (msglen >= valid_len) {
+			struct virtchnl_add_queues *add_q =
+				(struct virtchnl_add_queues *)msg;
+
+			if (add_q->chunks.num_chunks == 0) {
+				/* zero chunks is allowed as input */
+				break;
+			}
+
+			valid_len += (add_q->chunks.num_chunks - 1) *
+				sizeof(struct virtchnl_queue_chunk);
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+	case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+	case VIRTCHNL_OP_DEL_QUEUES:
+		valid_len = sizeof(struct virtchnl_del_ena_dis_queues);
+		if (msglen >= valid_len) {
+			struct virtchnl_del_ena_dis_queues *qs =
+				(struct virtchnl_del_ena_dis_queues *)msg;
+			if (qs->chunks.num_chunks == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (qs->chunks.num_chunks - 1) *
+				sizeof(struct virtchnl_queue_chunk);
+		}
+		break;
+	case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+	case VIRTCHNL_OP_UNMAP_QUEUE_VECTOR:
+		valid_len = sizeof(struct virtchnl_queue_vector_maps);
+		if (msglen >= valid_len) {
+			struct virtchnl_queue_vector_maps *v_qp =
+				(struct virtchnl_queue_vector_maps *)msg;
+			if (v_qp->num_maps == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (v_qp->num_maps - 1) *
+				sizeof(struct virtchnl_queue_vector);
+		}
+		break;
+	case VIRTCHNL_OP_ALLOC_VECTORS:
+		valid_len = sizeof(struct virtchnl_alloc_vectors);
+		if (msglen >= valid_len) {
+			struct virtchnl_alloc_vectors *v_av =
+				(struct virtchnl_alloc_vectors *)msg;
+
+			if (v_av->vchunks.num_vector_chunks == 0) {
+				/* zero chunks is allowed as input */
+				break;
+			}
+
+			valid_len += (v_av->vchunks.num_vector_chunks - 1) *
+				sizeof(struct virtchnl_vector_chunk);
+		}
+		break;
+	case VIRTCHNL_OP_DEALLOC_VECTORS:
+		valid_len = sizeof(struct virtchnl_vector_chunks);
+		if (msglen >= valid_len) {
+			struct virtchnl_vector_chunks *v_chunks =
+				(struct virtchnl_vector_chunks *)msg;
+			if (v_chunks->num_vector_chunks == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (v_chunks->num_vector_chunks - 1) *
+				sizeof(struct virtchnl_vector_chunk);
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+
+			if (vrk->key_len == 0) {
+				/* zero length is allowed as input */
+				break;
+			}
+
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_LUT:
+	case VIRTCHNL_OP_SET_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut_v2);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut_v2 *vrl =
+				(struct virtchnl_rss_lut_v2 *)msg;
+
+			if (vrl->lut_entries == 0) {
+				/* zero entries is allowed as input */
+				break;
+			}
+
+			valid_len += (vrl->lut_entries - 1) * sizeof(u16);
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HASH:
+	case VIRTCHNL_OP_SET_RSS_HASH:
+		valid_len = sizeof(struct virtchnl_rss_hash);
+		break;
+	case VIRTCHNL_OP_CREATE_VFS:
+	case VIRTCHNL_OP_DESTROY_VFS:
+		valid_len = sizeof(struct virtchnl_sriov_vfs_info);
+		break;
 	/* These are always errors coming from the VF. */
 	case VIRTCHNL_OP_EVENT:
 	case VIRTCHNL_OP_UNKNOWN:
-- 
2.26.2
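
As a reviewer's aid, the variable-length message convention used above (a
fixed structure that already embeds one trailing array element, extended
by "count - 1" further elements) can be summarized with a small,
hypothetical helper that is not part of the patch:

	#include <linux/avf/virtchnl.h>

	/* Hypothetical helper: buffer length for a VIRTCHNL_OP_CONFIG_TX_QUEUES
	 * message carrying num_qinfo queue descriptions, matching the valid_len
	 * arithmetic in virtchnl_vc_validate_vf_msg().
	 */
	static inline size_t example_config_txq_len(u16 num_qinfo)
	{
		/* the struct already contains one virtchnl_txq_info_v2 entry */
		return sizeof(struct virtchnl_config_tx_queues) +
		       (num_qinfo ? num_qinfo - 1 : 0) *
		       sizeof(struct virtchnl_txq_info_v2);
	}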



* [net-next v5 02/15] iecm: Add framework set of header files
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 01/15] virtchnl: Extend AVF ops Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 03/15] iecm: Add TX/RX " Tony Nguyen
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Introduces the framework of data structures for the driver common
module.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 include/linux/net/intel/iecm.h              | 399 ++++++++++++++++++++
 include/linux/net/intel/iecm_alloc.h        |   7 +
 include/linux/net/intel/iecm_controlq.h     | 118 ++++++
 include/linux/net/intel/iecm_controlq_api.h | 192 ++++++++++
 include/linux/net/intel/iecm_mem.h          |  20 +
 5 files changed, 736 insertions(+)
 create mode 100644 include/linux/net/intel/iecm.h
 create mode 100644 include/linux/net/intel/iecm_alloc.h
 create mode 100644 include/linux/net/intel/iecm_controlq.h
 create mode 100644 include/linux/net/intel/iecm_controlq_api.h
 create mode 100644 include/linux/net/intel/iecm_mem.h

diff --git a/include/linux/net/intel/iecm.h b/include/linux/net/intel/iecm.h
new file mode 100644
index 000000000000..1f2dcdbc9c61
--- /dev/null
+++ b/include/linux/net/intel/iecm.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Intel Corporation */
+
+#ifndef _IECM_H_
+#define _IECM_H_
+
+#include <linux/aer.h>
+#include <linux/pci.h>
+#include <linux/sctp.h>
+#include <net/tcp.h>
+#include <net/ip6_checksum.h>
+#include <linux/avf/virtchnl.h>
+
+#include <linux/net/intel/iecm_lan_txrx.h>
+#include <linux/net/intel/iecm_txrx.h>
+#include <linux/net/intel/iecm_controlq.h>
+
+extern const char iecm_drv_ver[];
+extern char iecm_drv_name[];
+
+#define MAKEMASK(m, s)	((m) << (s))
+
+#define IECM_BAR0	0
+#define IECM_NO_FREE_SLOT	0xffff
+
+/* Default Mailbox settings */
+#define IECM_DFLT_MBX_BUF_SIZE	(1 * 1024)
+#define IECM_NUM_QCTX_PER_MSG	3
+#define IECM_DFLT_MBX_Q_LEN	64
+#define IECM_DFLT_MBX_ID	-1
+/* maximum number of times to try before resetting mailbox */
+#define IECM_MB_MAX_ERR	20
+
+#define IECM_MAX_NUM_VPORTS	1
+
+/* available message levels */
+#define IECM_AVAIL_NETIF_M (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)
+
+/* Forward declaration */
+struct iecm_adapter;
+
+enum iecm_state {
+	__IECM_STARTUP,
+	__IECM_VER_CHECK,
+	__IECM_GET_RES,
+	__IECM_GET_CAPS,
+	__IECM_GET_DFLT_VPORT_PARAMS,
+	__IECM_INIT_SW,
+	__IECM_DOWN,
+	__IECM_UP,
+	__IECM_REMOVE,
+	__IECM_STATE_LAST /* this member MUST be last */
+};
+
+enum iecm_flags {
+	/* Soft reset causes */
+	__IECM_SR_Q_CHANGE, /* Soft reset to do queue change */
+	__IECM_SR_Q_DESC_CHANGE,
+	__IECM_SR_Q_SCH_CHANGE, /* Scheduling mode change in queue context */
+	__IECM_SR_MTU_CHANGE,
+	__IECM_SR_TC_CHANGE,
+	/* Hard reset causes */
+	__IECM_HR_FUNC_RESET, /* Hard reset when txrx timeout */
+	__IECM_HR_CORE_RESET, /* when reset event is received on virtchannel */
+	__IECM_HR_DRV_LOAD, /* Set on driver load for a clean HW */
+	/* Generic bits to share a message */
+	__IECM_DEL_QUEUES,
+	__IECM_UP_REQUESTED, /* Set if open to be called explicitly by driver */
+	/* Mailbox interrupt event */
+	__IECM_MB_INTR_MODE,
+	__IECM_MB_INTR_TRIGGER,
+	/* Backwards compatibility */
+	__IECM_NO_EXTENDED_CAPS,
+	/* must be last */
+	__IECM_FLAGS_NBITS,
+};
+
+struct iecm_netdev_priv {
+	struct iecm_vport *vport;
+};
+
+struct iecm_reset_reg {
+	u32 rstat;
+	u32 rstat_m;
+};
+
+/* product specific register API */
+struct iecm_reg_ops {
+	void (*ctlq_reg_init)(struct iecm_ctlq_create_info *cq);
+	void (*vportq_reg_init)(struct iecm_vport *vport);
+	void (*intr_reg_init)(struct iecm_vport *vport);
+	void (*mb_intr_reg_init)(struct iecm_adapter *adapter);
+	void (*reset_reg_init)(struct iecm_reset_reg *reset_reg);
+	void (*trigger_reset)(struct iecm_adapter *adapter,
+			      enum iecm_flags trig_cause);
+};
+
+struct iecm_virtchnl_ops {
+	int (*core_init)(struct iecm_adapter *adapter, int *vport_id);
+	void (*vport_init)(struct iecm_vport *vport, int vport_id);
+	int (*vport_queue_ids_init)(struct iecm_vport *vport);
+	int (*get_caps)(struct iecm_adapter *adapter);
+	int (*config_queues)(struct iecm_vport *vport);
+	int (*enable_queues)(struct iecm_vport *vport);
+	int (*disable_queues)(struct iecm_vport *vport);
+	int (*irq_map_unmap)(struct iecm_vport *vport, bool map);
+	int (*enable_vport)(struct iecm_vport *vport);
+	int (*disable_vport)(struct iecm_vport *vport);
+	int (*destroy_vport)(struct iecm_vport *vport);
+	int (*get_ptype)(struct iecm_vport *vport);
+	int (*get_set_rss_lut)(struct iecm_vport *vport, bool get);
+	int (*get_set_rss_hash)(struct iecm_vport *vport, bool get);
+	void (*adjust_qs)(struct iecm_vport *vport);
+	int (*recv_mbx_msg)(struct iecm_adapter *adapter,
+			    void *msg, int msg_size,
+			    struct iecm_ctlq_msg *ctlq_msg, bool *work_done);
+	bool (*is_cap_ena)(struct iecm_adapter *adapter, u64 flag);
+};
+
+struct iecm_dev_ops {
+	void (*reg_ops_init)(struct iecm_adapter *adapter);
+	void (*vc_ops_init)(struct iecm_adapter *adapter);
+	void (*crc_enable)(u64 *td_cmd);
+	struct iecm_reg_ops reg_ops;
+	struct iecm_virtchnl_ops vc_ops;
+};
+
+/* vport specific data structure */
+
+enum iecm_vport_vc_state {
+	IECM_VC_ENA_VPORT,
+	IECM_VC_ENA_VPORT_ERR,
+	IECM_VC_DIS_VPORT,
+	IECM_VC_DIS_VPORT_ERR,
+	IECM_VC_DESTROY_VPORT,
+	IECM_VC_DESTROY_VPORT_ERR,
+	IECM_VC_CONFIG_TXQ,
+	IECM_VC_CONFIG_TXQ_ERR,
+	IECM_VC_CONFIG_RXQ,
+	IECM_VC_CONFIG_RXQ_ERR,
+	IECM_VC_CONFIG_Q,
+	IECM_VC_CONFIG_Q_ERR,
+	IECM_VC_ENA_QUEUES,
+	IECM_VC_ENA_QUEUES_ERR,
+	IECM_VC_DIS_QUEUES,
+	IECM_VC_DIS_QUEUES_ERR,
+	IECM_VC_MAP_IRQ,
+	IECM_VC_MAP_IRQ_ERR,
+	IECM_VC_UNMAP_IRQ,
+	IECM_VC_UNMAP_IRQ_ERR,
+	IECM_VC_ADD_QUEUES,
+	IECM_VC_ADD_QUEUES_ERR,
+	IECM_VC_DEL_QUEUES,
+	IECM_VC_DEL_QUEUES_ERR,
+	IECM_VC_ALLOC_VECTORS,
+	IECM_VC_ALLOC_VECTORS_ERR,
+	IECM_VC_DEALLOC_VECTORS,
+	IECM_VC_DEALLOC_VECTORS_ERR,
+	IECM_VC_CREATE_VFS,
+	IECM_VC_CREATE_VFS_ERR,
+	IECM_VC_DESTROY_VFS,
+	IECM_VC_DESTROY_VFS_ERR,
+	IECM_VC_GET_RSS_HASH,
+	IECM_VC_GET_RSS_HASH_ERR,
+	IECM_VC_SET_RSS_HASH,
+	IECM_VC_SET_RSS_HASH_ERR,
+	IECM_VC_GET_RSS_LUT,
+	IECM_VC_GET_RSS_LUT_ERR,
+	IECM_VC_SET_RSS_LUT,
+	IECM_VC_SET_RSS_LUT_ERR,
+	IECM_VC_GET_RSS_KEY,
+	IECM_VC_GET_RSS_KEY_ERR,
+	IECM_VC_CONFIG_RSS_KEY,
+	IECM_VC_CONFIG_RSS_KEY_ERR,
+	IECM_VC_GET_STATS,
+	IECM_VC_GET_STATS_ERR,
+	IECM_VC_NBITS
+};
+
+enum iecm_vport_flags {
+	__IECM_VPORT_SW_MARKER,
+	__IECM_VPORT_FLAGS_NBITS,
+};
+
+struct iecm_vport {
+	/* TX */
+	unsigned int num_txq;
+	unsigned int num_complq;
+	/* It makes more sense for descriptor count to be part of only iecm
+	 * queue structure. But when user changes the count via ethtool, driver
+	 * has to store that value somewhere other than queue structure as the
+	 * queues will be freed and allocated again.
+	 */
+	unsigned int txq_desc_count;
+	unsigned int complq_desc_count;
+	unsigned int compln_clean_budget;
+	unsigned int num_txq_grp;
+	struct iecm_txq_group *txq_grps;
+	enum virtchnl_queue_model txq_model;
+	/* Used only in hotpath to get to the right queue very fast */
+	struct iecm_queue **txqs;
+	wait_queue_head_t sw_marker_wq;
+	DECLARE_BITMAP(flags, __IECM_VPORT_FLAGS_NBITS);
+
+	/* RX */
+	unsigned int num_rxq;
+	unsigned int num_bufq;
+	unsigned int rxq_desc_count;
+	unsigned int bufq_desc_count;
+	unsigned int num_rxq_grp;
+	struct iecm_rxq_group *rxq_grps;
+	enum virtchnl_queue_model rxq_model;
+	struct iecm_rx_ptype_decoded rx_ptype_lkup[IECM_RX_MAX_PTYPE];
+
+	struct iecm_adapter *adapter;
+	struct net_device *netdev;
+	u16 vport_type;
+	u16 vport_id;
+	u16 idx;		/* software index in adapter vports struct */
+	struct rtnl_link_stats64 netstats;
+
+	/* handler for hard interrupt */
+	irqreturn_t (*irq_q_handler)(int irq, void *data);
+	struct iecm_q_vector *q_vectors;	/* q vector array */
+	u16 num_q_vectors;
+	u16 q_vector_base;
+	u16 max_mtu;
+	u8 default_mac_addr[ETH_ALEN];
+	u16 qset_handle;
+	/* Duplicated in queue structure for performance reasons */
+	enum iecm_rx_hsplit rx_hsplit_en;
+};
+
+/* User defined configuration values */
+struct iecm_user_config_data {
+	u32 num_req_qs; /* user requested queues through ethtool */
+	u32 num_req_txq_desc;
+	u32 num_req_rxq_desc;
+	void *req_qs_chunks;
+};
+
+struct iecm_rss_data {
+	u64 rss_hash;
+	u16 rss_key_size;
+	u8 *rss_key;
+	u16 rss_lut_size;
+	u8 *rss_lut;
+};
+
+struct iecm_adapter {
+	struct pci_dev *pdev;
+
+	u32 tx_timeout_count;
+	u32 msg_enable;
+	enum iecm_state state;
+	DECLARE_BITMAP(flags, __IECM_FLAGS_NBITS);
+	struct mutex reset_lock; /* lock to protect reset flows */
+	struct iecm_reset_reg reset_reg;
+	struct iecm_hw hw;
+
+	u16 num_req_msix;
+	u16 num_msix_entries;
+	struct msix_entry *msix_entries;
+	struct virtchnl_alloc_vectors *req_vec_chunks;
+	struct iecm_q_vector mb_vector;
+	/* handler for hard interrupt for mailbox */
+	irqreturn_t (*irq_mb_handler)(int irq, void *data);
+
+	/* vport structs */
+	struct iecm_vport **vports;	/* vports created by the driver */
+	u16 num_alloc_vport;
+	u16 next_vport;		/* Next free slot in pf->vport[] - 0-based! */
+	struct mutex sw_mutex;	/* lock to protect vport alloc flow */
+
+	struct delayed_work init_task; /* delayed init task */
+	struct workqueue_struct *init_wq;
+	u32 mb_wait_count;
+	struct delayed_work serv_task; /* delayed service task */
+	struct workqueue_struct *serv_wq;
+	struct delayed_work stats_task; /* delayed statistics task */
+	struct workqueue_struct *stats_wq;
+	struct delayed_work vc_event_task; /* delayed virtchannel event task */
+	struct workqueue_struct *vc_event_wq;
+	/* Store the resources data received from control plane */
+	void **vport_params_reqd;
+	void **vport_params_recvd;
+	/* User set parameters */
+	struct iecm_user_config_data config_data;
+	void *caps;
+	wait_queue_head_t vchnl_wq;
+	DECLARE_BITMAP(vc_state, IECM_VC_NBITS);
+	struct mutex vc_msg_lock;	/* lock to protect vc_msg flow */
+	char vc_msg[IECM_DFLT_MBX_BUF_SIZE];
+	struct iecm_rss_data rss_data;
+	struct iecm_dev_ops dev_ops;
+	enum virtchnl_link_speed link_speed;
+	bool link_up;
+	int debug_msk; /* netif messaging level */
+};
+
+/**
+ * iecm_is_queue_model_split - check if queue model is split
+ * @q_model: queue model single or split
+ *
+ * Returns true if queue model is split else false
+ */
+static inline int iecm_is_queue_model_split(enum virtchnl_queue_model q_model)
+{
+	return (q_model == VIRTCHNL_QUEUE_MODEL_SPLIT);
+}
+
+/**
+ * iecm_is_cap_ena - Determine if HW capability is supported
+ * @adapter: private data struct
+ * @flag: Feature flag to check
+ */
+static inline bool iecm_is_cap_ena(struct iecm_adapter *adapter, u64 flag)
+{
+	return adapter->dev_ops.vc_ops.is_cap_ena(adapter, flag);
+}
+
+/**
+ * iecm_is_feature_ena - Determine if a particular feature is enabled
+ * @vport: vport to check
+ * @feature: netdev flag to check
+ *
+ * Returns true or false if a particular feature is enabled.
+ */
+static inline bool iecm_is_feature_ena(struct iecm_vport *vport,
+				       netdev_features_t feature)
+{
+	return vport->netdev->features & feature;
+}
+
+/**
+ * iecm_rx_offset - Return expected offset into page to access data
+ * @rx_q: queue we are requesting offset of
+ *
+ * Returns the offset value for queue into the data buffer.
+ */
+static inline unsigned int
+iecm_rx_offset(struct iecm_queue __maybe_unused *rx_q)
+{
+	return 0;
+}
+
+int iecm_probe(struct pci_dev *pdev,
+	       const struct pci_device_id __always_unused *ent,
+	       struct iecm_adapter *adapter);
+void iecm_remove(struct pci_dev *pdev);
+void iecm_shutdown(struct pci_dev *pdev);
+void iecm_vport_adjust_qs(struct iecm_vport *vport);
+int iecm_ctlq_reg_init(struct iecm_ctlq_create_info *cq, int num_q);
+int iecm_init_dflt_mbx(struct iecm_adapter *adapter);
+void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter);
+void iecm_vc_ops_init(struct iecm_adapter *adapter);
+int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id);
+int iecm_wait_for_event(struct iecm_adapter *adapter,
+			enum iecm_vport_vc_state state,
+			enum iecm_vport_vc_state err_check);
+int iecm_send_get_caps_msg(struct iecm_adapter *adapter);
+int iecm_send_delete_queues_msg(struct iecm_vport *vport);
+int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q,
+			     u16 num_complq, u16 num_rx_q, u16 num_rx_bufq);
+int iecm_initiate_soft_reset(struct iecm_vport *vport,
+			     enum iecm_flags reset_cause);
+int iecm_send_config_tx_queues_msg(struct iecm_vport *vport);
+int iecm_send_config_rx_queues_msg(struct iecm_vport *vport);
+int iecm_send_enable_vport_msg(struct iecm_vport *vport);
+int iecm_send_disable_vport_msg(struct iecm_vport *vport);
+int iecm_send_destroy_vport_msg(struct iecm_vport *vport);
+int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport);
+int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get);
+int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get);
+int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get);
+int iecm_send_dealloc_vectors_msg(struct iecm_adapter *adapter);
+int iecm_send_alloc_vectors_msg(struct iecm_adapter *adapter, u16 num_vectors);
+int iecm_send_enable_strip_vlan_msg(struct iecm_vport *vport);
+int iecm_send_disable_strip_vlan_msg(struct iecm_vport *vport);
+int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter);
+void iecm_vport_params_buf_rel(struct iecm_adapter *adapter);
+struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev);
+struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev);
+int iecm_send_get_stats_msg(struct iecm_vport *vport);
+int iecm_vport_get_vec_ids(u16 *vecids, int num_vecids,
+			   struct virtchnl_vector_chunks *chunks);
+int iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op,
+		     void *msg, int msg_size);
+int iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op,
+		     u16 msg_size, u8 *msg);
+void iecm_set_ethtool_ops(struct net_device *netdev);
+void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog *prog);
+void iecm_vport_intr_rel(struct iecm_vport *vport);
+int iecm_vport_intr_alloc(struct iecm_vport *vport);
+#endif /* !_IECM_H_ */
diff --git a/include/linux/net/intel/iecm_alloc.h b/include/linux/net/intel/iecm_alloc.h
new file mode 100644
index 000000000000..7b3b18739a05
--- /dev/null
+++ b/include/linux/net/intel/iecm_alloc.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2020, Intel Corporation. */
+
+#ifndef _IECM_ALLOC_H_
+#define _IECM_ALLOC_H_
+
+#endif /* _IECM_ALLOC_H_ */
diff --git a/include/linux/net/intel/iecm_controlq.h b/include/linux/net/intel/iecm_controlq.h
new file mode 100644
index 000000000000..19d9b85e23a6
--- /dev/null
+++ b/include/linux/net/intel/iecm_controlq.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2020, Intel Corporation. */
+
+#ifndef _IECM_CONTROLQ_H_
+#define _IECM_CONTROLQ_H_
+
+#include <linux/slab.h>
+
+#include <linux/net/intel/iecm_controlq_api.h>
+
+/* Maximum buffer lengths for all control queue types */
+#define IECM_CTLQ_MAX_RING_SIZE	1024
+#define IECM_CTLQ_MAX_BUF_LEN	4096
+
+#define IECM_CTLQ_DESC(R, i) \
+	(&(((struct iecm_ctlq_desc *)((R)->desc_ring.va))[i]))
+
+#define IECM_CTLQ_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
+
+/* Control Queue default settings */
+#define IECM_CTRL_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct iecm_ctlq_desc {
+	__le16	flags;
+	__le16	opcode;
+	__le16	datalen;	/* 0 for direct commands */
+	union {
+		__le16 ret_val;
+		__le16 pfid_vfid;
+#define IECM_CTLQ_DESC_VF_ID_S	0
+#define IECM_CTLQ_DESC_VF_ID_M	(0x3FF << IECM_CTLQ_DESC_VF_ID_S)
+#define IECM_CTLQ_DESC_PF_ID_S	10
+#define IECM_CTLQ_DESC_PF_ID_M	(0x3F << IECM_CTLQ_DESC_PF_ID_S)
+	};
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} direct;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} indirect;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|  * RSV *  |FTYPE  | *RSV* |RD |VFC|BUF|  * RSV *  |
+ */
+/* command flags and offsets */
+#define IECM_CTLQ_FLAG_DD_S	0
+#define IECM_CTLQ_FLAG_CMP_S	1
+#define IECM_CTLQ_FLAG_ERR_S	2
+#define IECM_CTLQ_FLAG_FTYPE_S	6
+#define IECM_CTLQ_FLAG_RD_S	10
+#define IECM_CTLQ_FLAG_VFC_S	11
+#define IECM_CTLQ_FLAG_BUF_S	12
+
+#define IECM_CTLQ_FLAG_DD	BIT(IECM_CTLQ_FLAG_DD_S)	/* 0x1 */
+#define IECM_CTLQ_FLAG_CMP	BIT(IECM_CTLQ_FLAG_CMP_S)	/* 0x2 */
+#define IECM_CTLQ_FLAG_ERR	BIT(IECM_CTLQ_FLAG_ERR_S)	/* 0x4 */
+#define IECM_CTLQ_FLAG_FTYPE_VM	BIT(IECM_CTLQ_FLAG_FTYPE_S)	/* 0x40 */
+#define IECM_CTLQ_FLAG_FTYPE_PF	BIT(IECM_CTLQ_FLAG_FTYPE_S + 1)	/* 0x80 */
+#define IECM_CTLQ_FLAG_RD	BIT(IECM_CTLQ_FLAG_RD_S)	/* 0x400 */
+#define IECM_CTLQ_FLAG_VFC	BIT(IECM_CTLQ_FLAG_VFC_S)	/* 0x800 */
+#define IECM_CTLQ_FLAG_BUF	BIT(IECM_CTLQ_FLAG_BUF_S)	/* 0x1000 */
+
+struct iecm_mbxq_desc {
+	u8 pad[8];		/* CTLQ flags/opcode/len/retval fields */
+	u32 chnl_opcode;	/* avoid confusion with desc->opcode */
+	u32 chnl_retval;	/* ditto for desc->retval */
+	u32 pf_vf_id;		/* used by CP when sending to PF */
+};
+
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct iecm_hw {
+	u8 __iomem *hw_addr;
+	u64 hw_addr_len;
+	void *back;
+
+	/* control queue - send and receive */
+	struct iecm_ctlq_info *asq;
+	struct iecm_ctlq_info *arq;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	bool adapter_stopped;
+
+	struct list_head cq_list_head;
+};
+
+enum iecm_status
+iecm_ctlq_alloc_ring_res(struct iecm_hw *hw,
+			 struct iecm_ctlq_info *cq);
+
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq);
+
+/* prototype for functions used for dynamic memory allocation */
+void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem,
+			 u64 size);
+void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem);
+#endif /* _IECM_CONTROLQ_H_ */
diff --git a/include/linux/net/intel/iecm_controlq_api.h b/include/linux/net/intel/iecm_controlq_api.h
new file mode 100644
index 000000000000..3f4bf34c15e6
--- /dev/null
+++ b/include/linux/net/intel/iecm_controlq_api.h
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2020, Intel Corporation. */
+
+#ifndef _IECM_CONTROLQ_API_H_
+#define _IECM_CONTROLQ_API_H_
+
+#include <linux/net/intel/iecm_mem.h>
+
+/* Error Codes */
+enum iecm_status {
+	IECM_SUCCESS				= 0,
+
+	/* Reserve range -1..-49 for generic codes */
+	IECM_ERR_PARAM				= -1,
+	IECM_ERR_NOT_IMPL			= -2,
+	IECM_ERR_NOT_READY			= -3,
+	IECM_ERR_BAD_PTR			= -5,
+	IECM_ERR_INVAL_SIZE			= -6,
+	IECM_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	IECM_ERR_FW_API_VER			= -9,
+	IECM_ERR_NO_MEMORY			= -10,
+	IECM_ERR_CFG				= -11,
+	IECM_ERR_OUT_OF_RANGE			= -12,
+	IECM_ERR_ALREADY_EXISTS			= -13,
+	IECM_ERR_DOES_NOT_EXIST			= -14,
+	IECM_ERR_IN_USE				= -15,
+	IECM_ERR_MAX_LIMIT			= -16,
+	IECM_ERR_RESET_ONGOING			= -17,
+
+	/* Reserve range -100..-109 for CRQ/CSQ specific error codes */
+	IECM_ERR_CTLQ_ERROR			= -100,
+	IECM_ERR_CTLQ_TIMEOUT			= -101,
+	IECM_ERR_CTLQ_FULL			= -102,
+	IECM_ERR_CTLQ_NO_WORK			= -103,
+	IECM_ERR_CTLQ_EMPTY			= -104,
+};
+
+struct iecm_hw;
+
+/* Used for queue init, response and events */
+enum iecm_ctlq_type {
+	IECM_CTLQ_TYPE_MAILBOX_TX	= 0,
+	IECM_CTLQ_TYPE_MAILBOX_RX	= 1,
+	IECM_CTLQ_TYPE_CONFIG_TX	= 2,
+	IECM_CTLQ_TYPE_CONFIG_RX	= 3,
+	IECM_CTLQ_TYPE_EVENT_RX		= 4,
+	IECM_CTLQ_TYPE_RDMA_TX		= 5,
+	IECM_CTLQ_TYPE_RDMA_RX		= 6,
+	IECM_CTLQ_TYPE_RDMA_COMPL	= 7
+};
+
+/* Generic Control Queue Structures */
+
+struct iecm_ctlq_reg {
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	/* Below applies only to default mb (if present) */
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* Generic queue msg structure */
+struct iecm_ctlq_msg {
+	u16 vmvf_type; /* represents the source of the message on recv */
+#define IECM_VMVF_TYPE_VF 0
+#define IECM_VMVF_TYPE_VM 1
+#define IECM_VMVF_TYPE_PF 2
+	u16 opcode;
+	u16 data_len;	/* data_len = 0 when no payload is attached */
+	union {
+		u16 func_id;	/* when sending a message */
+		u16 status;	/* when receiving a message */
+	};
+	union {
+		struct {
+			u32 chnl_retval;
+			u32 chnl_opcode;
+		} mbx;
+	} cookie;
+	union {
+#define IECM_DIRECT_CTX_SIZE	16
+#define IECM_INDIRECT_CTX_SIZE	8
+		/* 16 bytes of context can be provided or 8 bytes of context
+		 * plus the address of a DMA buffer
+		 */
+		u8 direct[IECM_DIRECT_CTX_SIZE];
+		struct {
+			u8 context[IECM_INDIRECT_CTX_SIZE];
+			struct iecm_dma_mem *payload;
+		} indirect;
+	} ctx;
+};
+
+/* Generic queue info structures */
+/* MB, CONFIG and EVENT q do not have extended info */
+struct iecm_ctlq_create_info {
+	enum iecm_ctlq_type type;
+	/* absolute queue offset passed as input
+	 * -1 for default mailbox if present
+	 */
+	int id;
+	u16 len; /* Queue length passed as input */
+	u16 buf_size; /* buffer size passed as input */
+	u64 base_address; /* output, HPA of the Queue start  */
+	struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+
+	int ext_info_size;
+	void *ext_info; /* Specific to q type */
+};
+
+/* Control Queue information */
+struct iecm_ctlq_info {
+	struct list_head cq_list;
+
+	enum iecm_ctlq_type cq_type;
+	int q_id;
+	struct mutex cq_lock;		/* queue lock */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* starting descriptor to post buffers to after recv */
+	u16 next_to_post;
+	struct iecm_dma_mem desc_ring;	/* descriptor ring memory */
+
+	union {
+		struct iecm_dma_mem **rx_buff;
+		struct iecm_ctlq_msg **tx_msg;
+	} bi;
+
+	u16 buf_size;			/* queue buffer size */
+	u16 ring_size;			/* Number of descriptors */
+	struct iecm_ctlq_reg reg;	/* registers accessed by ctlqs */
+};
+
+/* PF/VF mailbox commands */
+enum iecm_mbx_opc {
+	iecm_mbq_opc_send_msg_to_cp		= 0x0801,
+	iecm_mbq_opc_send_msg_to_vf		= 0x0802,
+	iecm_mbq_opc_send_msg_to_pf		= 0x0803,
+};
+
+/* API supported for control queue management */
+
+/* Will init all required q including default mb.  "q_info" is an array of
+ * create_info structs equal to the number of control queues to be created.
+ */
+enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+				struct iecm_ctlq_create_info *q_info);
+
+/* Allocate and initialize a single control queue, which will be added to the
+ * control queue list; returns a handle to the created control queue
+ */
+enum iecm_status iecm_ctlq_add(struct iecm_hw *hw,
+			       struct iecm_ctlq_create_info *qinfo,
+			       struct iecm_ctlq_info **cq);
+/* Deinitialize and deallocate a single control queue */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+		      struct iecm_ctlq_info *cq);
+
+/* Sends messages to HW and will also free the buffer */
+enum iecm_status iecm_ctlq_send(struct iecm_hw *hw,
+				struct iecm_ctlq_info *cq,
+				u16 num_q_msg,
+				struct iecm_ctlq_msg q_msg[]);
+
+/* Receives messages; called by the interrupt handler or by polling
+ * initiated by the app/process.  The caller is expected to free the buffers.
+ */
+enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq,
+				u16 *num_q_msg, struct iecm_ctlq_msg *q_msg);
+/* Reclaims send descriptors on HW write back */
+enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq,
+				    u16 *clean_count,
+				    struct iecm_ctlq_msg *msg_status[]);
+
+/* Indicate RX buffers are done being processed */
+enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
+					 struct iecm_ctlq_info *cq,
+					 u16 *buff_count,
+					 struct iecm_dma_mem **buffs);
+
+/* Will destroy all q including the default mb */
+enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw);
+
+#endif /* _IECM_CONTROLQ_API_H_ */
diff --git a/include/linux/net/intel/iecm_mem.h b/include/linux/net/intel/iecm_mem.h
new file mode 100644
index 000000000000..5964ab817c3e
--- /dev/null
+++ b/include/linux/net/intel/iecm_mem.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Intel Corporation */
+
+#ifndef _IECM_MEM_H_
+#define _IECM_MEM_H_
+
+#include <linux/io.h>
+
+struct iecm_dma_mem {
+	void *va;
+	dma_addr_t pa;
+	size_t size;
+};
+
+#define wr32(a, reg, value)	writel((value), ((a)->hw_addr + (reg)))
+#define rd32(a, reg)		readl((a)->hw_addr + (reg))
+#define wr64(a, reg, value)	writeq((value), ((a)->hw_addr + (reg)))
+#define rd64(a, reg)		readq((a)->hw_addr + (reg))
+
+#endif
-- 
2.26.2
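
For reference, a minimal, illustrative sketch of posting a single direct
(no payload) mailbox command through the control queue API declared above;
it assumes hw->asq was set up earlier via iecm_ctlq_init()/iecm_ctlq_add()
and is not part of the patch:

	#include <linux/net/intel/iecm_controlq.h>
	#include <linux/avf/virtchnl.h>

	/* Hypothetical caller of the send path */
	static enum iecm_status example_send_direct_cmd(struct iecm_hw *hw)
	{
		struct iecm_ctlq_msg msg = {};

		msg.opcode = iecm_mbq_opc_send_msg_to_cp;
		msg.data_len = 0;	/* direct command, no DMA buffer */
		msg.cookie.mbx.chnl_opcode = VIRTCHNL_OP_VERSION;

		return iecm_ctlq_send(hw, hw->asq, 1, &msg);
	}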



* [net-next v5 03/15] iecm: Add TX/RX header files
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 01/15] virtchnl: Extend AVF ops Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 02/15] iecm: Add framework set of header files Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 04/15] iecm: Common module introduction and function stubs Tony Nguyen
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Introduces the data for the TX/RX paths for use
by the common module.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 include/linux/net/intel/iecm_lan_pf_regs.h | 120 ++++
 include/linux/net/intel/iecm_lan_txrx.h    | 635 +++++++++++++++++++++
 include/linux/net/intel/iecm_txrx.h        | 582 +++++++++++++++++++
 3 files changed, 1337 insertions(+)
 create mode 100644 include/linux/net/intel/iecm_lan_pf_regs.h
 create mode 100644 include/linux/net/intel/iecm_lan_txrx.h
 create mode 100644 include/linux/net/intel/iecm_txrx.h

diff --git a/include/linux/net/intel/iecm_lan_pf_regs.h b/include/linux/net/intel/iecm_lan_pf_regs.h
new file mode 100644
index 000000000000..6690a2645608
--- /dev/null
+++ b/include/linux/net/intel/iecm_lan_pf_regs.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2020, Intel Corporation. */
+
+#ifndef _IECM_LAN_PF_REGS_H_
+#define _IECM_LAN_PF_REGS_H_
+
+/* Receive queues */
+#define PF_QRX_BASE			0x00000000
+#define PF_QRX_TAIL(_QRX)		(PF_QRX_BASE + (((_QRX) * 0x1000)))
+#define PF_QRX_BUFFQ_BASE		0x03000000
+#define PF_QRX_BUFFQ_TAIL(_QRX)		\
+	(PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
+
+/* Transmit queues */
+#define PF_QTX_BASE			0x05000000
+#define PF_QTX_COMM_DBELL(_DBQM)	(PF_QTX_BASE + ((_DBQM) * 0x1000))
+
+/* Control(PF Mailbox) Queue */
+#define PF_FW_BASE			0x08400000
+
+#define PF_FW_ARQBAL			(PF_FW_BASE)
+#define PF_FW_ARQBAH			(PF_FW_BASE + 0x4)
+#define PF_FW_ARQLEN			(PF_FW_BASE + 0x8)
+#define PF_FW_ARQLEN_ARQLEN_S		0
+#define PF_FW_ARQLEN_ARQLEN_M		MAKEMASK(0x1FFF, PF_FW_ARQLEN_ARQLEN_S)
+#define PF_FW_ARQLEN_ARQVFE_S		28
+#define PF_FW_ARQLEN_ARQVFE_M		BIT(PF_FW_ARQLEN_ARQVFE_S)
+#define PF_FW_ARQLEN_ARQOVFL_S		29
+#define PF_FW_ARQLEN_ARQOVFL_M		BIT(PF_FW_ARQLEN_ARQOVFL_S)
+#define PF_FW_ARQLEN_ARQCRIT_S		30
+#define PF_FW_ARQLEN_ARQCRIT_M		BIT(PF_FW_ARQLEN_ARQCRIT_S)
+#define PF_FW_ARQLEN_ARQENABLE_S	31
+#define PF_FW_ARQLEN_ARQENABLE_M	BIT(PF_FW_ARQLEN_ARQENABLE_S)
+#define PF_FW_ARQH			(PF_FW_BASE + 0xC)
+#define PF_FW_ARQH_ARQH_S		0
+#define PF_FW_ARQH_ARQH_M		MAKEMASK(0x1FFF, PF_FW_ARQH_ARQH_S)
+#define PF_FW_ARQT			(PF_FW_BASE + 0x10)
+
+#define PF_FW_ATQBAL			(PF_FW_BASE + 0x14)
+#define PF_FW_ATQBAH			(PF_FW_BASE + 0x18)
+#define PF_FW_ATQLEN			(PF_FW_BASE + 0x1C)
+#define PF_FW_ATQLEN_ATQLEN_S		0
+#define PF_FW_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, PF_FW_ATQLEN_ATQLEN_S)
+#define PF_FW_ATQLEN_ATQVFE_S		28
+#define PF_FW_ATQLEN_ATQVFE_M		BIT(PF_FW_ATQLEN_ATQVFE_S)
+#define PF_FW_ATQLEN_ATQOVFL_S		29
+#define PF_FW_ATQLEN_ATQOVFL_M		BIT(PF_FW_ATQLEN_ATQOVFL_S)
+#define PF_FW_ATQLEN_ATQCRIT_S		30
+#define PF_FW_ATQLEN_ATQCRIT_M		BIT(PF_FW_ATQLEN_ATQCRIT_S)
+#define PF_FW_ATQLEN_ATQENABLE_S	31
+#define PF_FW_ATQLEN_ATQENABLE_M	BIT(PF_FW_ATQLEN_ATQENABLE_S)
+#define PF_FW_ATQH			(PF_FW_BASE + 0x20)
+#define PF_FW_ATQH_ATQH_S		0
+#define PF_FW_ATQH_ATQH_M		MAKEMASK(0x3FF, PF_FW_ATQH_ATQH_S)
+#define PF_FW_ATQT			(PF_FW_BASE + 0x24)
+
+/* Interrupts */
+#define PF_GLINT_BASE			0x08900000
+#define PF_GLINT_DYN_CTL(_INT)		(PF_GLINT_BASE + ((_INT) * 0x1000))
+#define PF_GLINT_DYN_CTL_INTENA_S	0
+#define PF_GLINT_DYN_CTL_INTENA_M	BIT(PF_GLINT_DYN_CTL_INTENA_S)
+#define PF_GLINT_DYN_CTL_CLEARPBA_S	1
+#define PF_GLINT_DYN_CTL_CLEARPBA_M	BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_S	2
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_M	BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
+#define PF_GLINT_DYN_CTL_ITR_INDX_S	3
+#define PF_GLINT_DYN_CTL_INTERVAL_S	5
+#define PF_GLINT_DYN_CTL_INTERVAL_M	BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M \
+	BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S	25
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M	BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_INTENA_MSK_S	31
+#define PF_GLINT_DYN_CTL_INTENA_MSK_M	BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
+#define PF_GLINT_ITR(_i, _INT) (PF_GLINT_BASE + (((_INT) + \
+			       (((_i) + 1) * 4)) * 0x1000))
+#define PF_GLINT_ITR_MAX_INDEX		2
+#define PF_GLINT_ITR_INTERVAL_S		0
+#define PF_GLINT_ITR_INTERVAL_M		MAKEMASK(0xFFF, PF_GLINT_ITR_INTERVAL_S)
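+
+/* Editor's note (illustrative): PF_GLINT_ITR(_i, _INT) yields the address of
+ * ITR register _i (0..PF_GLINT_ITR_MAX_INDEX) for vector _INT.  For example,
+ * PF_GLINT_ITR(1, 2) expands to
+ *	0x08900000 + ((2 + ((1 + 1) * 4)) * 0x1000) = 0x0890A000
+ */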
+
+/* Generic registers */
+#define PF_INT_DIR_OICR_ENA		0x08406000
+#define PF_INT_DIR_OICR_ENA_S		0
+#define PF_INT_DIR_OICR_ENA_M	MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_ENA_S)
+#define PF_INT_DIR_OICR			0x08406004
+#define PF_INT_DIR_OICR_TSYN_EVNT	0
+#define PF_INT_DIR_OICR_PHY_TS_0	BIT(1)
+#define PF_INT_DIR_OICR_PHY_TS_1	BIT(2)
+#define PF_INT_DIR_OICR_CAUSE		0x08406008
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_S	0
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_M	\
+	MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_CAUSE_CAUSE_S)
+#define PF_INT_PBA_CLEAR		0x0840600C
+
+#define PF_FUNC_RID			0x08406010
+#define PF_FUNC_RID_FUNCTION_NUMBER_S	0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M	\
+	MAKEMASK(0x7, PF_FUNC_RID_FUNCTION_NUMBER_S)
+#define PF_FUNC_RID_DEVICE_NUMBER_S	3
+#define PF_FUNC_RID_DEVICE_NUMBER_M	\
+	MAKEMASK(0x1F, PF_FUNC_RID_DEVICE_NUMBER_S)
+#define PF_FUNC_RID_BUS_NUMBER_S	8
+#define PF_FUNC_RID_BUS_NUMBER_M	MAKEMASK(0xFF, PF_FUNC_RID_BUS_NUMBER_S)
+
+/* Reset registers */
+#define PFGEN_RTRIG			0x08407000
+#define PFGEN_RTRIG_CORER_S		0
+#define PFGEN_RTRIG_CORER_M		BIT(0)
+#define PFGEN_RTRIG_LINKR_S		1
+#define PFGEN_RTRIG_LINKR_M		BIT(1)
+#define PFGEN_RTRIG_IMCR_S		2
+#define PFGEN_RTRIG_IMCR_M		BIT(2)
+#define PFGEN_RSTAT			0x08407008 /* PFR Status */
+#define PFGEN_RSTAT_PFR_STATE_S		0
+#define PFGEN_RSTAT_PFR_STATE_M		MAKEMASK(0x3, PFGEN_RSTAT_PFR_STATE_S)
+#define PFGEN_CTRL			0x0840700C
+#define PFGEN_CTRL_PFSWR		BIT(0)
+
+#endif /* _IECM_LAN_PF_REGS_H_ */
diff --git a/include/linux/net/intel/iecm_lan_txrx.h b/include/linux/net/intel/iecm_lan_txrx.h
new file mode 100644
index 000000000000..68d02239f319
--- /dev/null
+++ b/include/linux/net/intel/iecm_lan_txrx.h
@@ -0,0 +1,635 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2020, Intel Corporation. */
+
+#ifndef _IECM_LAN_TXRX_H_
+#define _IECM_LAN_TXRX_H_
+
+enum iecm_rss_hash {
+	/* Values 0 - 28 are reserved for future use */
+	IECM_HASH_INVALID		= 0,
+	IECM_HASH_NONF_UNICAST_IPV4_UDP	= 29,
+	IECM_HASH_NONF_MULTICAST_IPV4_UDP,
+	IECM_HASH_NONF_IPV4_UDP,
+	IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
+	IECM_HASH_NONF_IPV4_TCP,
+	IECM_HASH_NONF_IPV4_SCTP,
+	IECM_HASH_NONF_IPV4_OTHER,
+	IECM_HASH_FRAG_IPV4,
+	/* Values 37-38 are reserved */
+	IECM_HASH_NONF_UNICAST_IPV6_UDP	= 39,
+	IECM_HASH_NONF_MULTICAST_IPV6_UDP,
+	IECM_HASH_NONF_IPV6_UDP,
+	IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
+	IECM_HASH_NONF_IPV6_TCP,
+	IECM_HASH_NONF_IPV6_SCTP,
+	IECM_HASH_NONF_IPV6_OTHER,
+	IECM_HASH_FRAG_IPV6,
+	IECM_HASH_NONF_RSVD47,
+	IECM_HASH_NONF_FCOE_OX,
+	IECM_HASH_NONF_FCOE_RX,
+	IECM_HASH_NONF_FCOE_OTHER,
+	/* Values 51-62 are reserved */
+	IECM_HASH_L2_PAYLOAD		= 63,
+	IECM_HASH_MAX
+};
+
+/* Supported RSS offloads */
+#define IECM_DEFAULT_RSS_HASH ( \
+	BIT_ULL(IECM_HASH_NONF_IPV4_UDP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV4_SCTP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV4_TCP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV4_OTHER) | \
+	BIT_ULL(IECM_HASH_FRAG_IPV4) | \
+	BIT_ULL(IECM_HASH_NONF_IPV6_UDP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV6_TCP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV6_SCTP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV6_OTHER) | \
+	BIT_ULL(IECM_HASH_FRAG_IPV6) | \
+	BIT_ULL(IECM_HASH_L2_PAYLOAD))
+
+#define IECM_DEFAULT_RSS_HASH_EXPANDED (IECM_DEFAULT_RSS_HASH | \
+	BIT_ULL(IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
+	BIT_ULL(IECM_HASH_NONF_UNICAST_IPV4_UDP) | \
+	BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV4_UDP) | \
+	BIT_ULL(IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
+	BIT_ULL(IECM_HASH_NONF_UNICAST_IPV6_UDP) | \
+	BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV6_UDP))
+
+/* For iecm_splitq_base_tx_compl_desc */
+#define IECM_TXD_COMPLQ_GEN_S	15
+#define IECM_TXD_COMPLQ_GEN_M		BIT_ULL(IECM_TXD_COMPLQ_GEN_S)
+#define IECM_TXD_COMPLQ_COMPL_TYPE_S	11
+#define IECM_TXD_COMPLQ_COMPL_TYPE_M	\
+	MAKEMASK(0x7UL, IECM_TXD_COMPLQ_COMPL_TYPE_S)
+#define IECM_TXD_COMPLQ_QID_S	0
+#define IECM_TXD_COMPLQ_QID_M		MAKEMASK(0x3FFUL, IECM_TXD_COMPLQ_QID_S)
+
+/* For base mode TX descriptors */
+#define IECM_TXD_CTX_QW1_MSS_S		50
+#define IECM_TXD_CTX_QW1_MSS_M		\
+	MAKEMASK(0x3FFFULL, IECM_TXD_CTX_QW1_MSS_S)
+#define IECM_TXD_CTX_QW1_TSO_LEN_S	30
+#define IECM_TXD_CTX_QW1_TSO_LEN_M	\
+	MAKEMASK(0x3FFFFULL, IECM_TXD_CTX_QW1_TSO_LEN_S)
+#define IECM_TXD_CTX_QW1_CMD_S		4
+#define IECM_TXD_CTX_QW1_CMD_M		\
+	MAKEMASK(0xFFFUL, IECM_TXD_CTX_QW1_CMD_S)
+#define IECM_TXD_CTX_QW1_DTYPE_S	0
+#define IECM_TXD_CTX_QW1_DTYPE_M	\
+	MAKEMASK(0xFUL, IECM_TXD_CTX_QW1_DTYPE_S)
+#define IECM_TXD_QW1_L2TAG1_S		48
+#define IECM_TXD_QW1_L2TAG1_M		\
+	MAKEMASK(0xFFFFULL, IECM_TXD_QW1_L2TAG1_S)
+#define IECM_TXD_QW1_TX_BUF_SZ_S	34
+#define IECM_TXD_QW1_TX_BUF_SZ_M	\
+	MAKEMASK(0x3FFFULL, IECM_TXD_QW1_TX_BUF_SZ_S)
+#define IECM_TXD_QW1_OFFSET_S		16
+#define IECM_TXD_QW1_OFFSET_M		\
+	MAKEMASK(0x3FFFFULL, IECM_TXD_QW1_OFFSET_S)
+#define IECM_TXD_QW1_CMD_S		4
+#define IECM_TXD_QW1_CMD_M		MAKEMASK(0xFFFUL, IECM_TXD_QW1_CMD_S)
+#define IECM_TXD_QW1_DTYPE_S		0
+#define IECM_TXD_QW1_DTYPE_M		MAKEMASK(0xFUL, IECM_TXD_QW1_DTYPE_S)
+
+/* TX Completion Descriptor Completion Types */
+#define IECM_TXD_COMPLT_ITR_FLUSH	0
+#define IECM_TXD_COMPLT_RULE_MISS	1
+#define IECM_TXD_COMPLT_RS		2
+#define IECM_TXD_COMPLT_REINJECTED	3
+#define IECM_TXD_COMPLT_RE		4
+#define IECM_TXD_COMPLT_SW_MARKER	5
+
+enum iecm_tx_desc_dtype_value {
+	IECM_TX_DESC_DTYPE_DATA				= 0,
+	IECM_TX_DESC_DTYPE_CTX				= 1,
+	IECM_TX_DESC_DTYPE_REINJECT_CTX			= 2,
+	IECM_TX_DESC_DTYPE_FLEX_DATA			= 3,
+	IECM_TX_DESC_DTYPE_FLEX_CTX			= 4,
+	IECM_TX_DESC_DTYPE_FLEX_TSO_CTX			= 5,
+	IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1		= 6,
+	IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2		= 7,
+	IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX	= 8,
+	IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_TSO_CTX	= 9,
+	IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_CTX	= 10,
+	IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX		= 11,
+	IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE		= 12,
+	IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_TSO_CTX	= 13,
+	IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_CTX		= 14,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	IECM_TX_DESC_DTYPE_DESC_DONE			= 15,
+};
+
+enum iecm_tx_ctx_desc_cmd_bits {
+	IECM_TX_CTX_DESC_TSO		= 0x01,
+	IECM_TX_CTX_DESC_TSYN		= 0x02,
+	IECM_TX_CTX_DESC_IL2TAG2	= 0x04,
+	IECM_TX_CTX_DESC_RSVD		= 0x08,
+	IECM_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	IECM_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	IECM_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	IECM_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	IECM_TX_CTX_DESC_FILT_AU_EN	= 0x40,
+	IECM_TX_CTX_DESC_FILT_AU_EVICT	= 0x80,
+	IECM_TX_CTX_DESC_RSVD1		= 0xF00
+};
+
+enum iecm_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	IECM_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	IECM_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	IECM_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+enum iecm_tx_base_desc_cmd_bits {
+	IECM_TX_DESC_CMD_EOP			= 0x0001,
+	IECM_TX_DESC_CMD_RS			= 0x0002,
+	 /* only on VFs else RSVD */
+	IECM_TX_DESC_CMD_ICRC			= 0x0004,
+	IECM_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	IECM_TX_DESC_CMD_RSVD1			= 0x0010,
+	IECM_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	IECM_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	IECM_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	IECM_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	IECM_TX_DESC_CMD_RSVD2			= 0x0080,
+	IECM_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	IECM_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	IECM_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	IECM_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	IECM_TX_DESC_CMD_RSVD3			= 0x0400,
+	IECM_TX_DESC_CMD_RSVD4			= 0x0800,
+};
+
+/* Transmit descriptors  */
+/* splitq Tx buf, singleq Tx buf and singleq compl desc */
+struct iecm_base_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
+}; /* read used with buffer queues */
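+
+/* Editor's sketch (illustrative, not driver code): qw1 of a base mode data
+ * descriptor is assembled by shifting each field into place with the
+ * IECM_TXD_QW1_* defines above, e.g.
+ *
+ *	u64 qw1 = IECM_TX_DESC_DTYPE_DATA |
+ *		  ((u64)td_cmd << IECM_TXD_QW1_CMD_S) |
+ *		  ((u64)td_offset << IECM_TXD_QW1_OFFSET_S) |
+ *		  ((u64)size << IECM_TXD_QW1_TX_BUF_SZ_S) |
+ *		  ((u64)td_tag << IECM_TXD_QW1_L2TAG1_S);
+ *	tx_desc->qw1 = cpu_to_le64(qw1);
+ *
+ * td_cmd, td_offset, size and td_tag are hypothetical locals.
+ */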
+
+struct iecm_splitq_tx_compl_desc {
+	/* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
+	__le16 qid_comptype_gen;
+	union {
+		__le16 q_head; /* Queue head */
+		__le16 compl_tag; /* Completion tag */
+	} q_head_compl_tag;
+	u8 ts[3];
+	u8 rsvd; /* Reserved */
+}; /* writeback used with completion queues */
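+
+/* Editor's sketch (illustrative): a clean routine is expected to decode the
+ * first word of this descriptor with the IECM_TXD_COMPLQ_* masks, e.g.
+ *
+ *	u16 qcg = le16_to_cpu(desc->qid_comptype_gen);
+ *	u16 gen = (qcg & IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
+ *	u16 ctype = (qcg & IECM_TXD_COMPLQ_COMPL_TYPE_M) >>
+ *		    IECM_TXD_COMPLQ_COMPL_TYPE_S;
+ *	u16 qid = (qcg & IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
+ *
+ * ctype then matches one of the IECM_TXD_COMPLT_* values above.
+ */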
+
+/* Context descriptors */
+struct iecm_base_tx_ctx_desc {
+	struct {
+		__le32 rsvd0;
+		__le16 l2tag2;
+		__le16 rsvd1;
+	} qw0;
+	__le64 qw1; /* type_cmd_tlen_mss/rt_hint */
+};
+
+/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
+enum iecm_tx_flex_desc_cmd_bits {
+	IECM_TX_FLEX_DESC_CMD_EOP			= 0x01,
+	IECM_TX_FLEX_DESC_CMD_RS			= 0x02,
+	IECM_TX_FLEX_DESC_CMD_RE			= 0x04,
+	IECM_TX_FLEX_DESC_CMD_IL2TAG1			= 0x08,
+	IECM_TX_FLEX_DESC_CMD_DUMMY			= 0x10,
+	IECM_TX_FLEX_DESC_CMD_CS_EN			= 0x20,
+	IECM_TX_FLEX_DESC_CMD_FILT_AU_EN		= 0x40,
+	IECM_TX_FLEX_DESC_CMD_FILT_AU_EVICT		= 0x80,
+};
+
+struct iecm_flex_tx_desc {
+	__le64 buf_addr;	/* Packet buffer address */
+	struct {
+		__le16 cmd_dtype;
+#define IECM_FLEX_TXD_QW1_DTYPE_S		0
+#define IECM_FLEX_TXD_QW1_DTYPE_M		\
+		(0x1FUL << IECM_FLEX_TXD_QW1_DTYPE_S)
+#define IECM_FLEX_TXD_QW1_CMD_S		5
+#define IECM_FLEX_TXD_QW1_CMD_M		\
+		MAKEMASK(0x7FFUL, IECM_FLEX_TXD_QW1_CMD_S)
+		union {
+			/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_DATA (0x03) */
+			u8 raw[4];
+
+			/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 (0x06) */
+			struct {
+				__le16 l2tag1;
+				u8 flex;
+				u8 tsync;
+			} tsync;
+
+			/* DTYPE=IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
+			struct {
+				__le16 l2tag1;
+				__le16 l2tag2;
+			} l2tags;
+		} flex;
+		__le16 buf_size;
+	} qw1;
+};
+
+struct iecm_flex_tx_sched_desc {
+	__le64 buf_addr;	/* Packet buffer address */
+
+	/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE (0x0C) */
+	struct {
+		u8 cmd_dtype;
+#define IECM_TXD_FLEX_FLOW_DTYPE_M	0x1F
+#define IECM_TXD_FLEX_FLOW_CMD_EOP	0x20
+#define IECM_TXD_FLEX_FLOW_CMD_CS_EN	0x40
+#define IECM_TXD_FLEX_FLOW_CMD_RE	0x80
+
+		/* [23:23] Horizon Overflow bit, [22:0] timestamp */
+		u8 ts[3];
+#define IECM_TXD_FLOW_SCH_HORIZON_OVERFLOW_M	0x80
+
+		__le16 compl_tag;
+		__le16 rxr_bufsize;
+#define IECM_TXD_FLEX_FLOW_RXR		0x4000
+#define IECM_TXD_FLEX_FLOW_BUFSIZE_M	0x3FFF
+	} qw1;
+};
+
+/* Common cmd fields for all flex context descriptors
+ * Note: these defines already account for the 5 bit dtype in the cmd_dtype
+ * field
+ */
+enum iecm_tx_flex_ctx_desc_cmd_bits {
+	IECM_TX_FLEX_CTX_DESC_CMD_TSO			= 0x0020,
+	IECM_TX_FLEX_CTX_DESC_CMD_TSYN_EN		= 0x0040,
+	IECM_TX_FLEX_CTX_DESC_CMD_L2TAG2		= 0x0080,
+	IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK		= 0x0200, /* 2 bits */
+	IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL		= 0x0400, /* 2 bits */
+	IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI	= 0x0600, /* 2 bits */
+};
+
+/* Standard flex descriptor TSO context quad word */
+struct iecm_flex_tx_tso_ctx_qw {
+	__le32 flex_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_M	0x1FFFF
+#define IECM_TXD_FLEX_TSO_CTX_FLEX_S	24
+	__le16 mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_M	0x3FFF
+	u8 hdr_len;
+	u8 flex;
+};
+
+union iecm_flex_tx_ctx_desc {
+	/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_CTX (0x04) */
+	struct {
+		u8 qw0_flex[8];
+		struct {
+			__le16 cmd_dtype;
+			__le16 l2tag1;
+			u8 qw1_flex[4];
+		} qw1;
+	} gen;
+
+	/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
+	struct {
+		struct iecm_flex_tx_tso_ctx_qw qw0;
+		struct {
+			__le16 cmd_dtype;
+			u8 flex[6];
+		} qw1;
+	} tso;
+
+	/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX (0x08) */
+	struct {
+		struct iecm_flex_tx_tso_ctx_qw qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 l2tag2;
+			u8 flex0;
+			u8 ptag;
+			u8 flex1[2];
+		} qw1;
+	} tso_l2tag2_ptag;
+
+	/* DTYPE = IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX (0x0B) */
+	struct {
+		u8 qw0_flex[8];
+		struct {
+			__le16 cmd_dtype;
+			__le16 l2tag2;
+			u8 flex[4];
+		} qw1;
+	} l2tag2;
+
+	/* DTYPE = IECM_TX_DESC_DTYPE_REINJECT_CTX (0x02) */
+	struct {
+		struct {
+			__le32 sa_domain;
+#define IECM_TXD_FLEX_CTX_SA_DOM_M	0xFFFF
+#define IECM_TXD_FLEX_CTX_SA_DOM_VAL	0x10000
+			__le32 sa_idx;
+#define IECM_TXD_FLEX_CTX_SAIDX_M	0x1FFFFF
+		} qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 txr2comp;
+#define IECM_TXD_FLEX_CTX_TXR2COMP	0x1
+			__le16 miss_txq_comp_tag;
+			__le16 miss_txq_id;
+		} qw1;
+	} reinjection_pkt;
+};
+
+/* Host Split Context Descriptors */
+struct iecm_flex_tx_hs_ctx_desc {
+	union {
+		struct {
+			__le32 host_fnum_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_S	0
+#define IECM_TXD_FLEX_CTX_TLEN_M	0x1FFFF
+#define IECM_TXD_FLEX_CTX_FNUM_S	18
+#define IECM_TXD_FLEX_CTX_FNUM_M	0x7FF
+#define IECM_TXD_FLEX_CTX_HOST_S	29
+#define IECM_TXD_FLEX_CTX_HOST_M	0x7
+			__le16 ftype_mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_0	0
+#define IECM_TXD_FLEX_CTX_MSS_RT_M	0x3FFF
+#define IECM_TXD_FLEX_CTX_FTYPE_S	14
+#define IECM_TXD_FLEX_CTX_FTYPE_VF	MAKEMASK(0x0, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_VDEV	MAKEMASK(0x1, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_PF	MAKEMASK(0x2, IECM_TXD_FLEX_CTX_FTYPE_S)
+			u8 hdr_len;
+			u8 ptag;
+		} tso;
+		struct {
+			u8 flex0[2];
+			__le16 host_fnum_ftype;
+			u8 flex1[3];
+			u8 ptag;
+		} no_tso;
+	} qw0;
+
+	__le64 qw1_cmd_dtype;
+#define IECM_TXD_FLEX_CTX_QW1_PASID_S		16
+#define IECM_TXD_FLEX_CTX_QW1_PASID_M		0xFFFFF
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S	36
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID	\
+	MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S)
+#define IECM_TXD_FLEX_CTX_QW1_TPH_S		37
+#define IECM_TXD_FLEX_CTX_QW1_TPH		\
+	MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_TPH_S)
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_S		38
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_M		0xF
+/* The following are only valid for DTYPE = 0x09 and DTYPE = 0x0A */
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_S		42
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_M		0x1FFFFF
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S	63
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VALID	\
+	MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S)
+/* The following are only valid for DTYPE = 0x0D and DTYPE = 0x0E */
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_S		48
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_M		0xFF
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_S		56
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_M		0xFF
+};
+
+/* Rx */
+/* For iecm_splitq_base_rx_flex desc members */
+#define IECM_RXD_FLEX_PTYPE_S		0
+#define IECM_RXD_FLEX_PTYPE_M		MAKEMASK(0x3FFUL, IECM_RXD_FLEX_PTYPE_S)
+#define IECM_RXD_FLEX_UMBCAST_S		10
+#define IECM_RXD_FLEX_UMBCAST_M		MAKEMASK(0x3UL, IECM_RXD_FLEX_UMBCAST_S)
+#define IECM_RXD_FLEX_FF0_S		12
+#define IECM_RXD_FLEX_FF0_M		MAKEMASK(0xFUL, IECM_RXD_FLEX_FF0_S)
+#define IECM_RXD_FLEX_LEN_PBUF_S	0
+#define IECM_RXD_FLEX_LEN_PBUF_M	\
+	MAKEMASK(0x3FFFUL, IECM_RXD_FLEX_LEN_PBUF_S)
+#define IECM_RXD_FLEX_GEN_S		14
+#define IECM_RXD_FLEX_GEN_M		BIT_ULL(IECM_RXD_FLEX_GEN_S)
+#define IECM_RXD_FLEX_BUFQ_ID_S		15
+#define IECM_RXD_FLEX_BUFQ_ID_M		BIT_ULL(IECM_RXD_FLEX_BUFQ_ID_S)
+#define IECM_RXD_FLEX_LEN_HDR_S		0
+#define IECM_RXD_FLEX_LEN_HDR_M		\
+	MAKEMASK(0x3FFUL, IECM_RXD_FLEX_LEN_HDR_S)
+#define IECM_RXD_FLEX_RSC_S		10
+#define IECM_RXD_FLEX_RSC_M		BIT_ULL(IECM_RXD_FLEX_RSC_S)
+#define IECM_RXD_FLEX_SPH_S		11
+#define IECM_RXD_FLEX_SPH_M		BIT_ULL(IECM_RXD_FLEX_SPH_S)
+#define IECM_RXD_FLEX_MISS_S		12
+#define IECM_RXD_FLEX_MISS_M		BIT_ULL(IECM_RXD_FLEX_MISS_S)
+#define IECM_RXD_FLEX_FF1_S		13
+#define IECM_RXD_FLEX_FF1_M		MAKEMASK(0x7UL, IECM_RXD_FLEX_FF1_S)
+
+/* For iecm_singleq_base_rx_legacy desc members */
+#define IECM_RXD_QW1_LEN_SPH_S	63
+#define IECM_RXD_QW1_LEN_SPH_M	BIT_ULL(IECM_RXD_QW1_LEN_SPH_S)
+#define IECM_RXD_QW1_LEN_HBUF_S	52
+#define IECM_RXD_QW1_LEN_HBUF_M	MAKEMASK(0x7FFULL, IECM_RXD_QW1_LEN_HBUF_S)
+#define IECM_RXD_QW1_LEN_PBUF_S	38
+#define IECM_RXD_QW1_LEN_PBUF_M	MAKEMASK(0x3FFFULL, IECM_RXD_QW1_LEN_PBUF_S)
+#define IECM_RXD_QW1_PTYPE_S	30
+#define IECM_RXD_QW1_PTYPE_M	MAKEMASK(0xFFULL, IECM_RXD_QW1_PTYPE_S)
+#define IECM_RXD_QW1_ERROR_S	19
+#define IECM_RXD_QW1_ERROR_M	MAKEMASK(0xFFUL, IECM_RXD_QW1_ERROR_S)
+#define IECM_RXD_QW1_STATUS_S	0
+#define IECM_RXD_QW1_STATUS_M	MAKEMASK(0x7FFFFUL, IECM_RXD_QW1_STATUS_S)
+
+enum iecm_rx_flex_desc_status_error_0_qw1_bits {
+	/* Note: These are predefined bit offsets */
+	IECM_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	IECM_RX_FLEX_DESC_STATUS0_EOF_S,
+	IECM_RX_FLEX_DESC_STATUS0_HBO_S,
+	IECM_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	IECM_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+};
+
+enum iecm_rx_flex_desc_status_error_0_qw0_bits {
+	IECM_RX_FLEX_DESC_STATUS0_LPBK_S = 0,
+	IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	IECM_RX_FLEX_DESC_STATUS0_RXE_S,
+	IECM_RX_FLEX_DESC_STATUS0_CRCP_S,
+	IECM_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	IECM_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	IECM_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	IECM_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	IECM_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
+
+enum iecm_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	IECM_RX_FLEX_DESC_STATUS1_RSVD_S = 0, /* 2 bits */
+	IECM_RX_FLEX_DESC_STATUS1_ATRAEFAIL_S = 2,
+	IECM_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 3,
+	IECM_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 4,
+	IECM_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 5,
+	IECM_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 6,
+	IECM_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 7,
+	IECM_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum iecm_rx_base_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	IECM_RX_BASE_DESC_STATUS_DD_S		= 0,
+	IECM_RX_BASE_DESC_STATUS_EOF_S		= 1,
+	IECM_RX_BASE_DESC_STATUS_L2TAG1P_S	= 2,
+	IECM_RX_BASE_DESC_STATUS_L3L4P_S	= 3,
+	IECM_RX_BASE_DESC_STATUS_CRCP_S		= 4,
+	IECM_RX_BASE_DESC_STATUS_RSVD_S		= 5, /* 3 BITS */
+	IECM_RX_BASE_DESC_STATUS_EXT_UDP_0_S	= 8,
+	IECM_RX_BASE_DESC_STATUS_UMBCAST_S	= 9, /* 2 BITS */
+	IECM_RX_BASE_DESC_STATUS_FLM_S		= 11,
+	IECM_RX_BASE_DESC_STATUS_FLTSTAT_S	= 12, /* 2 BITS */
+	IECM_RX_BASE_DESC_STATUS_LPBK_S		= 14,
+	IECM_RX_BASE_DESC_STATUS_IPV6EXADD_S	= 15,
+	IECM_RX_BASE_DESC_STATUS_RSVD1_S	= 16, /* 2 BITS */
+	IECM_RX_BASE_DESC_STATUS_INT_UDP_0_S	= 18,
+	IECM_RX_BASE_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+enum iecm_rx_desc_fltstat_values {
+	IECM_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	IECM_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	IECM_RX_DESC_FLTSTAT_RSV	= 2,
+	IECM_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+enum iecm_rx_base_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	IECM_RX_BASE_DESC_ERROR_RXE_S		= 0,
+	IECM_RX_BASE_DESC_ERROR_ATRAEFAIL_S	= 1,
+	IECM_RX_BASE_DESC_ERROR_HBO_S		= 2,
+	IECM_RX_BASE_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	IECM_RX_BASE_DESC_ERROR_IPE_S		= 3,
+	IECM_RX_BASE_DESC_ERROR_L4E_S		= 4,
+	IECM_RX_BASE_DESC_ERROR_EIPE_S		= 5,
+	IECM_RX_BASE_DESC_ERROR_OVERSIZE_S	= 6,
+	IECM_RX_BASE_DESC_ERROR_RSVD_S		= 7
+};
+
+/* Receive Descriptors */
+/* splitq buf*/
+struct iecm_splitq_rx_buf_desc {
+	struct {
+		__le16  buf_id; /* Buffer Identifier */
+		__le16  rsvd0;
+		__le32  rsvd1;
+	} qword0;
+	__le64  pkt_addr; /* Packet buffer address */
+	__le64  hdr_addr; /* Header buffer address */
+	__le64  rsvd2;
+}; /* read used with buffer queues */
+
+/* singleq buf */
+struct iecm_singleq_rx_buf_desc {
+	__le64  pkt_addr; /* Packet buffer address */
+	__le64  hdr_addr; /* Header buffer address */
+	__le64  rsvd1;
+	__le64  rsvd2;
+}; /* read used with buffer queues */
+
+union iecm_rx_buf_desc {
+	struct iecm_singleq_rx_buf_desc		read;
+	struct iecm_splitq_rx_buf_desc		split_rd;
+};
+
+/* splitq compl */
+struct iecm_flex_rx_desc {
+	/* Qword 0 */
+	u8 rxdid_ucast; /* profile_id=[3:0] */
+			/* rsvd=[5:4] */
+			/* ucast=[7:6] */
+	u8 status_err0_qw0;
+	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
+					/* ip_hdr_err=[10:10] */
+					/* udp_len_err=[11:11] */
+					/* ff0=[15:12] */
+	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
+					/* gen=[14:14]  only in splitq */
+					/* bufq_id=[15:15] only in splitq */
+	__le16 hdrlen_flags;		/* header=[9:0] */
+					/* rsc=[10:10] only in splitq */
+					/* sph=[11:11] only in splitq */
+					/* ext_udp_0=[12:12] */
+					/* int_udp_0=[13:13] */
+					/* trunc_mirr=[14:14] */
+					/* miss_prepend=[15:15] */
+	/* Qword 1 */
+	u8 status_err0_qw1;
+	u8 status_err1;
+	u8 fflags1;
+	u8 ts_low;
+	union {
+		__le16 fmd0;
+		__le16 buf_id; /* only in splitq */
+	} fmd0_bufid;
+	union {
+		__le16 fmd1;
+		__le16 raw_cs;
+		__le16 l2tag1;
+		__le16 rscseglen;
+	} fmd1_misc;
+	/* Qword 2 */
+	union {
+		__le16 fmd2;
+		__le16 hash1;
+	} fmd2_hash1;
+	union {
+		u8 fflags2;
+		u8 mirrorid;
+		u8 hash2;
+	} ff2_mirrid_hash2;
+	u8 hash3;
+	union {
+		__le16 fmd3;
+		__le16 l2tag2;
+	} fmd3_l2tag2;
+	__le16 fmd4;
+	/* Qword 3 */
+	union {
+		__le16 fmd5;
+		__le16 l2tag1;
+	} fmd5_l2tag1;
+	__le16 fmd6;
+	union {
+		struct {
+			__le16 fmd7_0;
+			__le16 fmd7_1;
+		} fmd7;
+		__le32 ts_high;
+	} flex_ts;
+}; /* writeback */
+
+/* singleq wb(compl) */
+struct iecm_singleq_base_rx_desc {
+	struct {
+		struct {
+			__le16 mirroring_status;
+			__le16 l2tag1;
+		} lo_dword;
+		union {
+			__le32 rss; /* RSS Hash */
+			__le32 fd_id; /* Flow Director filter id */
+		} hi_dword;
+	} qword0;
+	struct {
+		/* status/error/PTYPE/length */
+		__le64 status_error_ptype_len;
+	} qword1;
+	struct {
+		__le16 ext_status; /* extended status */
+		__le16 rsvd;
+		__le16 l2tag2_1;
+		__le16 l2tag2_2;
+	} qword2;
+	struct {
+		__le32 reserved;
+		__le32 fd_id;
+	} qword3;
+}; /* writeback */
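+
+/* Editor's sketch (illustrative): qword1 above packs status/error/ptype/length
+ * and is decoded with the IECM_RXD_QW1_* and IECM_RX_BASE_DESC_STATUS_*
+ * defines, e.g.
+ *
+ *	u64 qw1 = le64_to_cpu(rx_desc->qword1.status_error_ptype_len);
+ *	bool dd = qw1 & BIT_ULL(IECM_RX_BASE_DESC_STATUS_DD_S);
+ *	u16 len = (qw1 & IECM_RXD_QW1_LEN_PBUF_M) >> IECM_RXD_QW1_LEN_PBUF_S;
+ *	u16 ptype = (qw1 & IECM_RXD_QW1_PTYPE_M) >> IECM_RXD_QW1_PTYPE_S;
+ *
+ * rx_desc here is a hypothetical pointer to struct iecm_singleq_base_rx_desc.
+ */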
+
+union iecm_rx_desc {
+	struct iecm_singleq_rx_buf_desc	read;
+	struct iecm_singleq_base_rx_desc	base_wb;
+	struct iecm_flex_rx_desc		flex_wb;
+};
+#endif /* _IECM_LAN_TXRX_H_ */
diff --git a/include/linux/net/intel/iecm_txrx.h b/include/linux/net/intel/iecm_txrx.h
new file mode 100644
index 000000000000..acac414eae05
--- /dev/null
+++ b/include/linux/net/intel/iecm_txrx.h
@@ -0,0 +1,582 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Intel Corporation */
+
+#ifndef _IECM_TXRX_H_
+#define _IECM_TXRX_H_
+
+#define IECM_MAX_Q				16
+/* Mailbox Queue */
+#define IECM_MAX_NONQ				1
+#define IECM_MAX_TXQ_DESC			512
+#define IECM_MAX_RXQ_DESC			512
+#define IECM_MIN_TXQ_DESC			128
+#define IECM_MIN_RXQ_DESC			128
+#define IECM_REQ_DESC_MULTIPLE			32
+
+#define IECM_DFLT_SINGLEQ_TX_Q_GROUPS		1
+#define IECM_DFLT_SINGLEQ_RX_Q_GROUPS		1
+#define IECM_DFLT_SINGLEQ_TXQ_PER_GROUP		4
+#define IECM_DFLT_SINGLEQ_RXQ_PER_GROUP		4
+
+#define IECM_COMPLQ_PER_GROUP			1
+#define IECM_BUFQS_PER_RXQ_SET			2
+
+#define IECM_DFLT_SPLITQ_TX_Q_GROUPS		4
+#define IECM_DFLT_SPLITQ_RX_Q_GROUPS		4
+#define IECM_DFLT_SPLITQ_TXQ_PER_GROUP		1
+#define IECM_DFLT_SPLITQ_RXQ_PER_GROUP		1
+
+/* Default vector sharing */
+#define IECM_MAX_NONQ_VEC	1
+#define IECM_MAX_Q_VEC		4 /* For Tx Completion queue and Rx queue */
+#define IECM_MAX_RDMA_VEC	2 /* To share with RDMA */
+#define IECM_MIN_RDMA_VEC	1 /* Minimum vectors to be shared with RDMA */
+#define IECM_MIN_VEC		3 /* One for mailbox, one for data queues, one
+				   * for RDMA
+				   */
+
+#define IECM_DFLT_TX_Q_DESC_COUNT		512
+#define IECM_DFLT_TX_COMPLQ_DESC_COUNT		512
+#define IECM_DFLT_RX_Q_DESC_COUNT		512
+#define IECM_DFLT_RX_BUFQ_DESC_COUNT		512
+
+#define IECM_RX_BUF_WRITE			16 /* Must be power of 2 */
+#define IECM_RX_HDR_SIZE			256
+#define IECM_RX_BUF_2048			2048
+#define IECM_RX_BUF_STRIDE			64
+#define IECM_LOW_WATERMARK			64
+#define IECM_HDR_BUF_SIZE			256
+#define IECM_PACKET_HDR_PAD	\
+	(ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
+#define IECM_MAX_RXBUFFER			9728
+#define IECM_MAX_MTU		\
+	(IECM_MAX_RXBUFFER - IECM_PACKET_HDR_PAD)
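+
+/* Editor's note (illustrative): IECM_PACKET_HDR_PAD works out to
+ * 14 (ETH_HLEN) + 4 (ETH_FCS_LEN) + 2 * 4 (VLAN_HLEN) = 26 bytes, so
+ * IECM_MAX_MTU is 9728 - 26 = 9702 bytes.
+ */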
+
+#define IECM_SINGLEQ_RX_BUF_DESC(R, i)	\
+	(&(((struct iecm_singleq_rx_buf_desc *)((R)->desc_ring))[i]))
+#define IECM_SPLITQ_RX_BUF_DESC(R, i)	\
+	(&(((struct iecm_splitq_rx_buf_desc *)((R)->desc_ring))[i]))
+
+#define IECM_BASE_TX_DESC(R, i)	\
+	(&(((struct iecm_base_tx_desc *)((R)->desc_ring))[i]))
+#define IECM_SPLITQ_TX_COMPLQ_DESC(R, i)	\
+	(&(((struct iecm_splitq_tx_compl_desc *)((R)->desc_ring))[i]))
+
+#define IECM_FLEX_TX_DESC(R, i)	\
+	(&(((union iecm_tx_flex_desc *)((R)->desc_ring))[i]))
+#define IECM_FLEX_TX_CTX_DESC(R, i)	\
+	(&(((union iecm_flex_tx_ctx_desc *)((R)->desc_ring))[i]))
+
+#define IECM_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->desc_count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
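+
+/* Editor's note (illustrative): IECM_DESC_UNUSED() is standard ring math that
+ * keeps one slot unused so a full ring can be told apart from an empty one.
+ * For example, with desc_count = 512, next_to_clean = 2 and next_to_use = 10
+ * it evaluates to 512 + 2 - 10 - 1 = 503 free descriptors.
+ */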
+
+union iecm_tx_flex_desc {
+	struct iecm_flex_tx_desc q; /* queue based scheduling */
+	struct iecm_flex_tx_sched_desc flow; /* flow based scheduling */
+};
+
+struct iecm_tx_buf {
+	struct hlist_node hlist;
+	void *next_to_watch;
+	struct sk_buff *skb;
+	unsigned int bytecount;
+	unsigned short gso_segs;
+#define IECM_TX_FLAGS_TSO	BIT(0)
+	u32 tx_flags;
+	DEFINE_DMA_UNMAP_ADDR(dma);
+	DEFINE_DMA_UNMAP_LEN(len);
+	u16 compl_tag;		/* Unique identifier for buffer; used to
+				 * compare with completion tag returned
+				 * in buffer completion event
+				 */
+};
+
+struct iecm_buf_lifo {
+	u16 top;
+	u16 size;
+	struct iecm_tx_buf **bufs;
+};
+
+struct iecm_tx_offload_params {
+	u16 td_cmd;	/* command field to be inserted into descriptor */
+	u32 tso_len;	/* total length of payload to segment */
+	u16 mss;
+	u8 tso_hdr_len;	/* length of headers to be duplicated */
+
+	/* Flow scheduling offload timestamp, formatting as hw expects it */
+#define IECM_TW_TIME_STAMP_GRAN_512_DIV_S	9
+#define IECM_TW_TIME_STAMP_GRAN_1024_DIV_S	10
+#define IECM_TW_TIME_STAMP_GRAN_2048_DIV_S	11
+#define IECM_TW_TIME_STAMP_GRAN_4096_DIV_S	12
+	u64 desc_ts;
+
+	/* For legacy offloads */
+	u32 hdr_offsets;
+};
+
+struct iecm_tx_splitq_params {
+	/* Descriptor build function pointer */
+	void (*splitq_build_ctb)(union iecm_tx_flex_desc *desc,
+				 struct iecm_tx_splitq_params *params,
+				 u16 td_cmd, u16 size);
+
+	/* General descriptor info */
+	enum iecm_tx_desc_dtype_value dtype;
+	u16 eop_cmd;
+	u16 compl_tag; /* only relevant for flow scheduling */
+
+	struct iecm_tx_offload_params offload;
+};
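+
+/* Editor's sketch (illustrative, hypothetical callback): a flow scheduling
+ * implementation of splitq_build_ctb might fill the descriptor roughly as
+ *
+ *	static void
+ *	iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
+ *				       struct iecm_tx_splitq_params *params,
+ *				       u16 td_cmd, u16 size)
+ *	{
+ *		desc->flow.qw1.cmd_dtype = (u8)(params->dtype | td_cmd);
+ *		desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+ *		desc->flow.qw1.rxr_bufsize = cpu_to_le16(size);
+ *	}
+ *
+ * The function name and exact field usage are an editor's guess, not part of
+ * this patch.
+ */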
+
+#define IECM_TX_COMPLQ_CLEAN_BUDGET	256
+#define IECM_TX_MIN_LEN			17
+#define IECM_TX_DESCS_FOR_SKB_DATA_PTR	1
+#define IECM_TX_MAX_BUF			8
+#define IECM_TX_DESCS_PER_CACHE_LINE	4
+#define IECM_TX_DESCS_FOR_CTX		1
+/* TX descriptors needed, worst case */
+#define IECM_TX_DESC_NEEDED (MAX_SKB_FRAGS + IECM_TX_DESCS_FOR_CTX + \
+			     IECM_TX_DESCS_PER_CACHE_LINE + \
+			     IECM_TX_DESCS_FOR_SKB_DATA_PTR)
+
+/* The size limit for a transmit buffer in a descriptor is (16K - 1).
+ * In order to align with the read requests we will align the value to
+ * the nearest 4K which represents our maximum read request size.
+ */
+#define IECM_TX_MAX_READ_REQ_SIZE	4096
+#define IECM_TX_MAX_DESC_DATA		(16 * 1024 - 1)
+#define IECM_TX_MAX_DESC_DATA_ALIGNED \
+	(~(IECM_TX_MAX_READ_REQ_SIZE - 1) & IECM_TX_MAX_DESC_DATA)
+
+#define IECM_RX_DMA_ATTR \
+	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+#define IECM_RX_DESC(R, i)	\
+	(&(((union iecm_rx_desc *)((R)->desc_ring))[i]))
+
+struct iecm_rx_buf {
+	struct sk_buff *skb;
+	dma_addr_t dma;
+	struct page *page;
+	unsigned int page_offset;
+	u16 pagecnt_bias;
+	u16 buf_id;
+};
+
+/* Packet type non-ip values */
+enum iecm_rx_ptype_l2 {
+	IECM_RX_PTYPE_L2_RESERVED	= 0,
+	IECM_RX_PTYPE_L2_MAC_PAY2	= 1,
+	IECM_RX_PTYPE_L2_TIMESYNC_PAY2	= 2,
+	IECM_RX_PTYPE_L2_FIP_PAY2	= 3,
+	IECM_RX_PTYPE_L2_OUI_PAY2	= 4,
+	IECM_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	IECM_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	IECM_RX_PTYPE_L2_ECP_PAY2	= 7,
+	IECM_RX_PTYPE_L2_EVB_PAY2	= 8,
+	IECM_RX_PTYPE_L2_QCN_PAY2	= 9,
+	IECM_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	IECM_RX_PTYPE_L2_ARP		= 11,
+};
+
+enum iecm_rx_ptype_outer_ip {
+	IECM_RX_PTYPE_OUTER_L2	= 0,
+	IECM_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum iecm_rx_ptype_outer_ip_ver {
+	IECM_RX_PTYPE_OUTER_NONE	= 0,
+	IECM_RX_PTYPE_OUTER_IPV4	= 1,
+	IECM_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum iecm_rx_ptype_outer_fragmented {
+	IECM_RX_PTYPE_NOT_FRAG	= 0,
+	IECM_RX_PTYPE_FRAG	= 1,
+};
+
+enum iecm_rx_ptype_tunnel_type {
+	IECM_RX_PTYPE_TUNNEL_NONE		= 0,
+	IECM_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	IECM_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	IECM_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	IECM_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum iecm_rx_ptype_tunnel_end_prot {
+	IECM_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	IECM_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	IECM_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum iecm_rx_ptype_inner_prot {
+	IECM_RX_PTYPE_INNER_PROT_NONE		= 0,
+	IECM_RX_PTYPE_INNER_PROT_UDP		= 1,
+	IECM_RX_PTYPE_INNER_PROT_TCP		= 2,
+	IECM_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	IECM_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	IECM_RX_PTYPE_INNER_PROT_TIMESYNC	= 5,
+};
+
+enum iecm_rx_ptype_payload_layer {
+	IECM_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	IECM_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+struct iecm_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum iecm_rx_hsplit {
+	IECM_RX_NO_HDR_SPLIT = 0,
+	IECM_RX_HDR_SPLIT = 1,
+	IECM_RX_HDR_SPLIT_PERF = 2,
+};
+
+/* The iecm_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT iecm_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF iecm_ptype_lkup[ptype].outer_ip == IECM_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum iecm_rx_ptype_l2 to decode the packet type
+ * ENDIF
+ */
+/* macro to make the table lines short */
+#define IECM_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		IECM_RX_PTYPE_OUTER_##OUTER_IP, \
+		IECM_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		IECM_RX_PTYPE_##OUTER_FRAG, \
+		IECM_RX_PTYPE_TUNNEL_##T, \
+		IECM_RX_PTYPE_TUNNEL_END_##TE, \
+		IECM_RX_PTYPE_##TEF, \
+		IECM_RX_PTYPE_INNER_PROT_##I, \
+		IECM_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define IECM_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros makes the table fit but are terse */
+#define IECM_RX_PTYPE_NOF		IECM_RX_PTYPE_NOT_FRAG
+#define IECM_RX_PTYPE_FRG		IECM_RX_PTYPE_FRAG
+#define IECM_RX_PTYPE_INNER_PROT_TS	IECM_RX_PTYPE_INNER_PROT_TIMESYNC
+#define IECM_RX_SUPP_PTYPE		18
+#define IECM_RX_MAX_PTYPE		1024
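+
+/* Editor's sketch (illustrative) of the work flow described above:
+ *
+ *	struct iecm_rx_ptype_decoded decoded = iecm_ptype_lkup[ptype];
+ *
+ *	if (!decoded.known)
+ *		return;		// unknown packet type
+ *	if (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) {
+ *		// IP: use outer_ip_ver, tunnel_type, inner_prot, etc.
+ *	} else {
+ *		// non-IP: decode with enum iecm_rx_ptype_l2
+ *	}
+ *
+ * The iecm_ptype_lkup table itself is not defined in this header.
+ */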
+
+#define IECM_INT_NAME_STR_LEN	(IFNAMSIZ + 16)
+
+enum iecm_queue_flags_t {
+	__IECM_Q_GEN_CHK,
+	__IECM_Q_FLOW_SCH_EN,
+	__IECM_Q_SW_MARKER,
+	__IECM_Q_FLAGS_NBITS,
+};
+
+struct iecm_intr_reg {
+	u32 dyn_ctl;
+	u32 dyn_ctl_intena_m;
+	u32 dyn_ctl_clrpba_m;
+	u32 dyn_ctl_itridx_s;
+	u32 dyn_ctl_itridx_m;
+	u32 dyn_ctl_intrvl_s;
+	u32 itr;
+};
+
+struct iecm_q_vector {
+	struct iecm_vport *vport;
+	cpumask_t affinity_mask;
+	struct napi_struct napi;
+	u16 v_idx;		/* index in the vport->q_vector array */
+	u8 itr_countdown;	/* when 0 should adjust ITR */
+	struct iecm_intr_reg intr_reg;
+	int num_txq;
+	struct iecm_queue **tx;
+	int num_rxq;
+	struct iecm_queue **rx;
+	char name[IECM_INT_NAME_STR_LEN];
+};
+
+struct iecm_rx_queue_stats {
+	u64 packets;
+	u64 bytes;
+	u64 csum_complete;
+	u64 csum_unnecessary;
+	u64 csum_err;
+	u64 hsplit;
+	u64 hsplit_hbo;
+};
+
+struct iecm_tx_queue_stats {
+	u64 packets;
+	u64 bytes;
+};
+
+union iecm_queue_stats {
+	struct iecm_rx_queue_stats rx;
+	struct iecm_tx_queue_stats tx;
+};
+
+enum iecm_latency_range {
+	IECM_LOWEST_LATENCY = 0,
+	IECM_LOW_LATENCY = 1,
+	IECM_BULK_LATENCY = 2,
+};
+
+struct iecm_itr {
+	u16 current_itr;
+	u16 target_itr;
+	enum virtchnl_itr_idx itr_idx;
+	union iecm_queue_stats stats; /* will reset to 0 when adjusting ITR */
+	enum iecm_latency_range latency_range;
+	unsigned long next_update;	/* jiffies of last ITR update */
+};
+
+/* indices into GLINT_ITR registers */
+#define IECM_ITR_ADAPTIVE_MIN_INC	0x0002
+#define IECM_ITR_ADAPTIVE_MIN_USECS	0x0002
+#define IECM_ITR_ADAPTIVE_MAX_USECS	0x007e
+#define IECM_ITR_ADAPTIVE_LATENCY	0x8000
+#define IECM_ITR_ADAPTIVE_BULK		0x0000
+#define ITR_IS_BULK(x) (!((x) & IECM_ITR_ADAPTIVE_LATENCY))
+
+#define IECM_ITR_DYNAMIC	0x8000	/* use top bit as a flag */
+#define IECM_ITR_MAX		0x1FE0
+#define IECM_ITR_100K		0x000A
+#define IECM_ITR_50K		0x0014
+#define IECM_ITR_20K		0x0032
+#define IECM_ITR_18K		0x003C
+#define IECM_ITR_GRAN_S		1	/* Assume ITR granularity is 2us */
+#define IECM_ITR_MASK		0x1FFE	/* ITR register value alignment mask */
+#define ITR_REG_ALIGN(setting)	__ALIGN_MASK(setting, ~IECM_ITR_MASK)
+#define IECM_ITR_IS_DYNAMIC(setting) (!!((setting) & IECM_ITR_DYNAMIC))
+#define IECM_ITR_SETTING(setting)	((setting) & ~IECM_ITR_DYNAMIC)
+#define ITR_COUNTDOWN_START	100
+#define IECM_ITR_TX_DEF		IECM_ITR_20K
+#define IECM_ITR_RX_DEF		IECM_ITR_50K
+
+/* queue associated with a vport */
+struct iecm_queue {
+	struct device *dev;		/* Used for DMA mapping */
+	struct iecm_vport *vport;	/* Back reference to associated vport */
+	union {
+		struct iecm_txq_group *txq_grp;
+		struct iecm_rxq_group *rxq_grp;
+	};
+	/* bufq: Used as group id, either 0 or 1; on clean, the buf queue uses
+	 *       this index to determine which group of refill queues to
+	 *       clean.  Bufqs are used in splitq only.
+	 * txq: Index to map between Tx Q group and hot path Tx ptrs stored in
+	 *      vport.  Used in both single Q/split Q.
+	 */
+	u16 idx;
+	/* Used for both Q models single and split. In split Q model relevant
+	 * only to Tx Q and Rx Q
+	 */
+	u8 __iomem *tail;
+	/* Used in both single and split Q.  In single Q, Tx Q uses tx_buf and
+	 * Rx Q uses rx_buf.  In split Q, Tx Q uses tx_buf, Rx Q uses skb, and
+	 * Buf Q uses rx_buf.
+	 */
+	union {
+		struct iecm_tx_buf *tx_buf;
+		struct {
+			struct iecm_rx_buf *buf;
+			struct iecm_rx_buf *hdr_buf;
+		} rx_buf;
+		struct sk_buff *skb;
+	};
+	enum virtchnl_queue_type q_type;
+	/* Queue id(Tx/Tx compl/Rx/Bufq) */
+	u16 q_id;
+	u16 desc_count;		/* Number of descriptors */
+
+	/* Relevant in both split & single Tx Q & Buf Q*/
+	u16 next_to_use;
+	/* In split q model only relevant for Tx Compl Q and Rx Q */
+	u16 next_to_clean;	/* used in interrupt processing */
+	/* Used only for Rx. In split Q model only relevant to Rx Q */
+	u16 next_to_alloc;
+	/* Generation bit check stored, as HW flips the bit at Queue end */
+	DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS);
+
+	union iecm_queue_stats q_stats;
+	struct u64_stats_sync stats_sync;
+
+	enum iecm_rx_hsplit rx_hsplit_en;
+
+	u16 rx_hbuf_size;	/* Header buffer size */
+	u16 rx_buf_size;
+	u16 rx_max_pkt_size;
+	u16 rx_buf_stride;
+	u8 rsc_low_watermark;
+	/* Used for both Q models single and split. In split Q model relevant
+	 * only to Tx compl Q and Rx compl Q
+	 */
+	struct iecm_q_vector *q_vector;	/* Back reference to associated vector */
+	struct iecm_itr itr;
+	unsigned int size;		/* length of descriptor ring in bytes */
+	dma_addr_t dma;			/* physical address of ring */
+	void *desc_ring;		/* Descriptor ring memory */
+
+	struct iecm_buf_lifo buf_stack; /* Stack of empty buffers to store
+					 * buffer info for out of order
+					 * buffer completions
+					 */
+	u16 tx_buf_key;			/* 16 bit unique "identifier" (index)
+					 * to be used as the completion tag when
+					 * queue is using flow based scheduling
+					 */
+	DECLARE_HASHTABLE(sched_buf_hash, 12);
+} ____cacheline_internodealigned_in_smp;
+
+/* Software queues are used in splitq mode to manage buffers between rxq
+ * producer and the bufq consumer.  These are required in order to maintain a
+ * lockless buffer management system and are strictly software only constructs.
+ */
+struct iecm_sw_queue {
+	u16 next_to_clean ____cacheline_aligned_in_smp;
+	u16 next_to_alloc ____cacheline_aligned_in_smp;
+	DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS)
+		____cacheline_aligned_in_smp;
+	u16 *ring ____cacheline_aligned_in_smp;
+	u16 q_entries;
+} ____cacheline_internodealigned_in_smp;
+
+/* Splitq only.  iecm_rxq_set associates an rxq with at most two refillqs.
+ * Each rxq needs a refillq to return used buffers back to the respective bufq.
+ * Bufqs then clean these refillqs for buffers to give to hardware.
+ */
+struct iecm_rxq_set {
+	struct iecm_queue rxq;
+	/* refillqs assoc with bufqX mapped to this rxq */
+	struct iecm_sw_queue *refillq0;
+	struct iecm_sw_queue *refillq1;
+};
+
+/* Splitq only.  iecm_bufq_set associates a bufq to an overflow and array of
+ * refillqs.  In this bufq_set, there will be one refillq for each rxq in this
+ * rxq_group.  Used buffers received by rxqs will be put on refillqs which
+ * bufqs will clean to return new buffers back to hardware.
+ *
+ * Buffers needed by some number of rxqs associated in this rxq_group are
+ * managed by at most two bufqs (depending on performance configuration).
+ */
+struct iecm_bufq_set {
+	struct iecm_queue bufq;
+	struct iecm_sw_queue overflowq;
+	/* This is always equal to num_rxq_sets in iecm_rxq_group */
+	int num_refillqs;
+	struct iecm_sw_queue *refillqs;
+};
+
+/* In singleq mode, an rxq_group is simply an array of rxqs.  In splitq, a
+ * rxq_group contains all the rxqs, bufqs, refillqs, and overflowqs needed to
+ * manage buffers in splitq mode.
+ */
+struct iecm_rxq_group {
+	struct iecm_vport *vport; /* back pointer */
+
+	union {
+		struct {
+			int num_rxq;
+			struct iecm_queue *rxqs;
+		} singleq;
+		struct {
+			int num_rxq_sets;
+			struct iecm_rxq_set *rxq_sets;
+			struct iecm_bufq_set *bufq_sets;
+		} splitq;
+	};
+};
+
+/* Between singleq and splitq, a txq_group is largely the same except for the
+ * complq.  In splitq a single complq is responsible for handling completions
+ * for some number of txqs associated in this txq_group.
+ */
+struct iecm_txq_group {
+	struct iecm_vport *vport; /* back pointer */
+
+	int num_txq;
+	struct iecm_queue *txqs;
+
+	/* splitq only */
+	struct iecm_queue *complq;
+};
+
+int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
+void iecm_vport_init_num_qs(struct iecm_vport *vport,
+			    struct virtchnl_create_vport *vport_msg);
+void iecm_vport_calc_num_q_desc(struct iecm_vport *vport);
+void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg,
+			      int num_req_qs);
+void iecm_vport_calc_num_q_groups(struct iecm_vport *vport);
+int iecm_vport_queues_alloc(struct iecm_vport *vport);
+void iecm_vport_queues_rel(struct iecm_vport *vport);
+void iecm_vport_calc_num_q_vec(struct iecm_vport *vport);
+void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport);
+void iecm_vport_intr_clear_dflt_itr(struct iecm_vport *vport);
+void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector);
+void iecm_vport_intr_deinit(struct iecm_vport *vport);
+int iecm_vport_intr_init(struct iecm_vport *vport);
+irqreturn_t
+iecm_vport_intr_clean_queues(int __always_unused irq, void *data);
+void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport);
+int iecm_config_rss(struct iecm_vport *vport);
+void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list);
+void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list);
+int iecm_init_rss(struct iecm_vport *vport);
+void iecm_deinit_rss(struct iecm_vport *vport);
+void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, bool hsplit,
+			struct iecm_rx_buf *old_buf);
+void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb,
+		      unsigned int size);
+struct sk_buff *iecm_rx_construct_skb(struct iecm_queue *rxq,
+				      struct iecm_rx_buf *rx_buf,
+				      unsigned int size);
+bool iecm_rx_cleanup_headers(struct sk_buff *skb);
+bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit,
+			 struct iecm_rx_buf *rx_buf);
+void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb);
+bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf);
+void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val);
+void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val);
+void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf);
+unsigned int iecm_tx_desc_count_required(struct sk_buff *skb);
+int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size);
+void iecm_tx_timeout(struct net_device *netdev,
+		     unsigned int __always_unused txqueue);
+netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb,
+				 struct net_device *netdev);
+netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb,
+				  struct net_device *netdev);
+bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rxq,
+				      u16 cleaned_count);
+void iecm_get_stats64(struct net_device *netdev,
+		      struct rtnl_link_stats64 *stats);
+#endif /* _IECM_TXRX_H_ */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 04/15] iecm: Common module introduction and function stubs
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (2 preceding siblings ...)
  2020-08-24 17:32 ` [net-next v5 03/15] iecm: Add TX/RX " Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 20:41   ` Jakub Kicinski
  2020-08-24 17:32 ` [net-next v5 05/15] iecm: Add basic netdevice functionality Tony Nguyen
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Introduce function stubs for the framework of the common
module.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 .../net/ethernet/intel/iecm/iecm_controlq.c   |  194 +++
 .../ethernet/intel/iecm/iecm_controlq_setup.c |   83 ++
 .../net/ethernet/intel/iecm/iecm_ethtool.c    |   16 +
 drivers/net/ethernet/intel/iecm/iecm_lib.c    |  440 ++++++
 drivers/net/ethernet/intel/iecm/iecm_main.c   |   38 +
 .../ethernet/intel/iecm/iecm_singleq_txrx.c   |  258 ++++
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   | 1265 +++++++++++++++++
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   |  577 ++++++++
 8 files changed, 2871 insertions(+)
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_ethtool.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_lib.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_main.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_txrx.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_virtchnl.c

diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c
new file mode 100644
index 000000000000..585392e4d72a
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c
@@ -0,0 +1,194 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2020, Intel Corporation. */
+
+#include <linux/net/intel/iecm_controlq.h>
+
+/**
+ * iecm_ctlq_setup_regs - initialize control queue registers
+ * @cq: pointer to the specific control queue
+ * @q_create_info: structs containing info for each queue to be initialized
+ */
+static void
+iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq,
+		     struct iecm_ctlq_create_info *q_create_info)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_init_regs - Initialize control queue registers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ * @is_rxq: true if receive control queue, false otherwise
+ *
+ * Initialize registers. The caller is expected to have already initialized the
+ * descriptor ring memory and buffer memory
+ */
+static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw,
+					    struct iecm_ctlq_info *cq,
+					    bool is_rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
+ * @cq: pointer to the specific Control queue
+ *
+ * Record the address of the receive queue DMA buffers in the descriptors.
+ * The buffers must have been previously allocated.
+ */
+static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_shutdown - shutdown the CQ
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for any control queue
+ */
+static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_add - add one control queue
+ * @hw: pointer to hardware struct
+ * @qinfo: info for queue to be created
+ * @cq_out: (output) double pointer to control queue to be created
+ *
+ * Allocate and initialize a control queue and add it to the control queue list.
+ * The cq_out parameter will be allocated/initialized and passed back to the
+ * caller if no errors occur.
+ *
+ * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add
+ */
+enum iecm_status iecm_ctlq_add(struct iecm_hw *hw,
+			       struct iecm_ctlq_create_info *qinfo,
+			       struct iecm_ctlq_info **cq_out)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_remove - deallocate and remove specified control queue
+ * @hw: pointer to hardware struct
+ * @cq: pointer to control queue to be removed
+ */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+		      struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_init - main initialization routine for all control queues
+ * @hw: pointer to hardware struct
+ * @num_q: number of queues to initialize
+ * @q_info: array of structs containing info for each queue to be initialized
+ *
+ * This initializes any number and any type of control queues. This is an all
+ * or nothing routine; if one fails, all previously allocated queues will be
+ * destroyed. This must be called prior to using the individual add/remove
+ * APIs.
+ */
+enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+				struct iecm_ctlq_create_info *q_info)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_deinit - destroy all control queues
+ * @hw: pointer to hw struct
+ */
+enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_send - send command to Control Queue (CTQ)
+ * @hw: pointer to hw struct
+ * @cq: handle to control queue struct to send on
+ * @num_q_msg: number of messages to send on control queue
+ * @q_msg: pointer to array of queue messages to be sent
+ *
+ * The caller is expected to allocate DMAable buffers and pass them to the
+ * send routine via the q_msg struct / control queue specific data struct.
+ * The control queue will hold a reference to each send message until
+ * the completion for that message has been cleaned.
+ */
+enum iecm_status iecm_ctlq_send(struct iecm_hw *hw,
+				struct iecm_ctlq_info *cq,
+				u16 num_q_msg,
+				struct iecm_ctlq_msg q_msg[])
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors.  The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq,
+				    u16 *clean_count,
+				    struct iecm_ctlq_msg *msg_status[])
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue handle
+ * @buff_count: (input|output) input is number of buffers caller is trying to
+ * return; output is number of buffers that were not posted
+ * @buffs: array of pointers to DMA mem structs to be given to hardware
+ *
+ * Caller uses this function to return DMA buffers to the descriptor ring after
+ * consuming them; buff_count will be the number of buffers.
+ *
+ * Note: this function needs to be called after a receive call even
+ * if there are no DMA buffers to be returned, i.e. buff_count = 0,
+ * buffs = NULL to support direct commands
+ */
+enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
+					 struct iecm_ctlq_info *cq,
+					 u16 *buff_count,
+					 struct iecm_dma_mem **buffs)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_recv - receive control queue message call back
+ * @cq: pointer to control queue handle to receive on
+ * @num_q_msg: (input|output) input number of messages that should be received;
+ * output number of messages actually received
+ * @q_msg: (output) array of received control queue messages on this q;
+ * needs to be pre-allocated by caller for as many messages as requested
+ *
+ * Called by interrupt handler or polling mechanism. Caller is expected
+ * to free buffers
+ */
+enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq,
+				u16 *num_q_msg, struct iecm_ctlq_msg *q_msg)
+{
+	/* stub */
+}
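+
+/* Editor's note (illustrative, not driver code): taken together, the receive
+ * side of this API is expected to be driven roughly as
+ *
+ *	u16 num = budget;			// hypothetical budget
+ *
+ *	iecm_ctlq_recv(cq, &num, q_msgs);	// harvest received messages
+ *	// ... process q_msgs[0..num - 1] ...
+ *	iecm_ctlq_post_rx_buffs(hw, cq, &num, bufs);
+ *	num = budget;
+ *	iecm_ctlq_clean_sq(cq, &num, msg_status);
+ *
+ * with q_msgs, bufs and msg_status allocated by the caller as described in
+ * the kernel-doc above.
+ */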
diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
new file mode 100644
index 000000000000..d4402be5aa7f
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2020, Intel Corporation. */
+
+#include <linux/net/intel/iecm_controlq.h>
+
+/**
+ * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ */
+static enum iecm_status
+iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw,
+			  struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Allocate the buffer head for all control queues, and if it's a receive
+ * queue, allocate DMA buffers
+ */
+static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw,
+					     struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ */
+static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw,
+				     struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_free_bufs - Free CQ buffer info elements
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
+ * queues.  The upper layers are expected to manage freeing of TX DMA buffers
+ */
+static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the memory used by the ring, buffers and other related structures
+ */
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue struct
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not atomic-context safe
+ */
+enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw,
+					  struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c
new file mode 100644
index 000000000000..a6532592f2f4
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c
@@ -0,0 +1,16 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#include <linux/net/intel/iecm.h>
+
+/**
+ * iecm_set_ethtool_ops - Initialize ethtool ops struct
+ * @netdev: network interface device structure
+ *
+ * Sets ethtool ops struct in our netdev so that ethtool can call
+ * our functions.
+ */
+void iecm_set_ethtool_ops(struct net_device *netdev)
+{
+	/* stub */
+}
diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
new file mode 100644
index 000000000000..a5ff9209d290
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -0,0 +1,440 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/net/intel/iecm.h>
+
+static const struct net_device_ops iecm_netdev_ops_splitq;
+static const struct net_device_ops iecm_netdev_ops_singleq;
+
+/**
+ * iecm_mb_intr_rel_irq - Free the IRQ association with the OS
+ * @adapter: adapter structure
+ */
+static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_intr_rel - Release interrupt capabilities and free memory
+ * @adapter: adapter to disable interrupts on
+ */
+static void iecm_intr_rel(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_mb_intr_clean - Interrupt handler for the mailbox
+ * @irq: interrupt number
+ * @data: pointer to the adapter structure
+ */
+static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data)
+{
+	/* stub */
+}
+
+/**
+ * iecm_mb_irq_enable - Enable MSIX interrupt for the mailbox
+ * @adapter: adapter to get the hardware address for register write
+ */
+static void iecm_mb_irq_enable(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_mb_intr_req_irq - Request IRQ for the mailbox interrupt
+ * @adapter: adapter structure to pass to the mailbox IRQ handler
+ */
+static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_get_mb_vec_id - Get vector index for mailbox
+ * @adapter: adapter structure to access the vector chunks
+ *
+ * The first vector id in the requested vector chunks from the CP is for
+ * the mailbox
+ */
+static void iecm_get_mb_vec_id(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_mb_intr_init - Initialize the mailbox interrupt
+ * @adapter: adapter structure to store the mailbox vector
+ */
+static int iecm_mb_intr_init(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_intr_distribute - Distribute MSIX vectors
+ * @adapter: adapter structure to get the vports
+ *
+ * Distribute the MSIX vectors acquired from the OS to the vports based on the
+ * num of vectors requested by each vport
+ */
+static void iecm_intr_distribute(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_intr_req - Request interrupt capabilities
+ * @adapter: adapter to enable interrupts on
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_intr_req(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_cfg_netdev - Allocate, configure and register a netdev
+ * @vport: main vport structure
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int iecm_cfg_netdev(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_cfg_hw - Initialize HW struct
+ * @adapter: adapter to setup hw struct for
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_cfg_hw(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_res_alloc - Allocate vport major memory resources
+ * @vport: virtual port private structure
+ */
+static int iecm_vport_res_alloc(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_res_free - Free vport major memory resources
+ * @vport: virtual port private structure
+ */
+static void iecm_vport_res_free(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_get_free_slot - get the next unused (NULL) location index in array
+ * @array: array to search
+ * @size: size of the array
+ * @curr: last known occupied index to be used as a search hint
+ *
+ * void * is being used to keep the functionality generic. This lets us use this
+ * function on any array of pointers.
+ */
+static int iecm_get_free_slot(void *array, int size, int curr)
+{
+	/* stub */
+}
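+
+/*
+ * Minimal sketch of the search described above (illustrative only, not part
+ * of this patch; the "no free slot" return value is an assumption):
+ *
+ *	void **entries = (void **)array;
+ *	int i;
+ *
+ *	if (curr + 1 < size && !entries[curr + 1])
+ *		return curr + 1;
+ *	for (i = 0; i < size; i++)
+ *		if (!entries[i])
+ *			return i;
+ *	return -ENOSPC;
+ */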
+
+/**
+ * iecm_netdev_to_vport - get a vport handle from a netdev
+ * @netdev: network interface device structure
+ */
+struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_netdev_to_adapter - get an adapter handle from a netdev
+ * @netdev: network interface device structure
+ */
+struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_stop - Disable a vport
+ * @vport: vport to disable
+ */
+static void iecm_vport_stop(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_stop - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * The stop entry point is called when an interface is de-activated by the OS,
+ * and the netdevice enters the DOWN state.  The hardware is still under the
+ * driver's control, but the netdev interface is disabled.
+ *
+ * Returns success only - not allowed to fail
+ */
+static int iecm_stop(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_rel - Delete a vport and free its resources
+ * @vport: the vport being removed
+ */
+static void iecm_vport_rel(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_rel_all - Delete all vports
+ * @adapter: adapter from which all vports are being removed
+ */
+static void iecm_vport_rel_all(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_set_hsplit - enable or disable header split on a given vport
+ * @vport: virtual port
+ * @prog: bpf_program attached to an interface or NULL
+ */
+void iecm_vport_set_hsplit(struct iecm_vport *vport,
+			   struct bpf_prog __always_unused *prog)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_alloc - Allocates the next available struct vport in the adapter
+ * @adapter: board private structure
+ * @vport_id: vport identifier
+ *
+ * returns a pointer to a vport on success, NULL on failure.
+ */
+static struct iecm_vport *
+iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id)
+{
+	/* stub */
+}
+
+/**
+ * iecm_service_task - Delayed task for handling mailbox responses
+ * @work: work_struct handle to our data
+ *
+ */
+static void iecm_service_task(struct work_struct *work)
+{
+	/* stub */
+}
+
+/**
+ * iecm_up_complete - Complete interface up sequence
+ * @vport: virtual port structure
+ */
+static int iecm_up_complete(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_init_buf_tail - Write initial buffer ring tail value
+ * @vport: virtual port struct
+ */
+static void iecm_rx_init_buf_tail(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_open - Bring up a vport
+ * @vport: vport to bring up
+ */
+static int iecm_vport_open(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_init_task - Delayed initialization task
+ * @work: work_struct handle to our data
+ *
+ * Init task finishes up pending work started in probe.  Due to the asynchronous
+ * nature in which the driver communicates with hardware, we may have to wait
+ * several milliseconds to get a response.  Instead of busy polling in probe,
+ * pulling it out into a delayed work task prevents us from bogging down the
+ * whole system waiting for a response from hardware.
+ */
+static void iecm_init_task(struct work_struct *work)
+{
+	/* stub */
+}
+
+/**
+ * iecm_api_init - Initialize and verify device API
+ * @adapter: driver specific private structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_api_init(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_deinit_task - Device deinit routine
+ * @adapter: Driver specific private structure
+ *
+ * Extended remove logic which will be used for
+ * hard reset as well
+ */
+static void iecm_deinit_task(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_init_hard_reset - Initiate a hardware reset
+ * @adapter: Driver specific private structure
+ *
+ * Deallocate the vports and all the resources associated with them and
+ * reallocate. Also reinitialize the mailbox.  Return 0 on success,
+ * negative on failure.
+ */
+static int iecm_init_hard_reset(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vc_event_task - Handle virtchannel event logic
+ * @work: work queue struct
+ */
+static void iecm_vc_event_task(struct work_struct *work)
+{
+	/* stub */
+}
+
+/**
+ * iecm_initiate_soft_reset - Initiate a software reset
+ * @vport: virtual port data struct
+ * @reset_cause: reason for the soft reset
+ *
+ * Soft reset only reallocs vport queue resources. Returns 0 on success,
+ * negative on failure.
+ */
+int iecm_initiate_soft_reset(struct iecm_vport *vport,
+			     enum iecm_flags reset_cause)
+{
+	/* stub */
+}
+
+/**
+ * iecm_probe - Device initialization routine
+ * @pdev: PCI device information struct
+ * @ent: entry in iecm_pci_tbl
+ * @adapter: driver specific private structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+int iecm_probe(struct pci_dev *pdev,
+	       const struct pci_device_id __always_unused *ent,
+	       struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_probe);
+
+/**
+ * iecm_remove - Device removal routine
+ * @pdev: PCI device information struct
+ */
+void iecm_remove(struct pci_dev *pdev)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_remove);
+
+/**
+ * iecm_shutdown - PCI callback for shutting down device
+ * @pdev: PCI device information struct
+ */
+void iecm_shutdown(struct pci_dev *pdev)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_shutdown);
+
+/**
+ * iecm_open - Called when a network interface becomes active
+ * @netdev: network interface device structure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP).  At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the netdev watchdog is enabled,
+ * and the stack is notified that the interface is ready.
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int iecm_open(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_change_mtu - NDO callback to change the MTU
+ * @netdev: network interface device structure
+ * @new_mtu: new value for maximum frame size
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_change_mtu(struct net_device *netdev, int new_mtu)
+{
+	/* stub */
+}
+
+void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size)
+{
+	/* stub */
+}
+
+void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem)
+{
+	/* stub */
+}
+
+static const struct net_device_ops iecm_netdev_ops_splitq = {
+	.ndo_open = iecm_open,
+	.ndo_stop = iecm_stop,
+	.ndo_start_xmit = iecm_tx_splitq_start,
+	.ndo_validate_addr = eth_validate_addr,
+	.ndo_get_stats64 = iecm_get_stats64,
+};
+
+static const struct net_device_ops iecm_netdev_ops_singleq = {
+	.ndo_open = iecm_open,
+	.ndo_stop = iecm_stop,
+	.ndo_start_xmit = iecm_tx_singleq_start,
+	.ndo_validate_addr = eth_validate_addr,
+	.ndo_change_mtu = iecm_change_mtu,
+	.ndo_get_stats64 = iecm_get_stats64,
+};
diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c
new file mode 100644
index 000000000000..68d9f2e6445b
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_main.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/net/intel/iecm.h>
+
+char iecm_drv_name[] = "iecm";
+#define DRV_SUMMARY	"Intel(R) Ethernet Common Module Linux Driver"
+static const char iecm_driver_string[] = DRV_SUMMARY;
+static const char iecm_copyright[] = "Copyright (c) 2020, Intel Corporation.";
+
+MODULE_DESCRIPTION(DRV_SUMMARY);
+MODULE_LICENSE("GPL v2");
+
+/**
+ * iecm_module_init - Driver registration routine
+ *
+ * iecm_module_init is the first routine called when the driver is
+ * loaded. All it does is register with the PCI subsystem.
+ */
+static int __init iecm_module_init(void)
+{
+	/* stub */
+}
+module_init(iecm_module_init);
+
+/**
+ * iecm_module_exit - Driver exit cleanup routine
+ *
+ * iecm_module_exit is called just before the driver is removed
+ * from memory.
+ */
+static void __exit iecm_module_exit(void)
+{
+	/* stub */
+}
+module_exit(iecm_module_exit);
diff --git a/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
new file mode 100644
index 000000000000..063c35274f38
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#include <linux/prefetch.h>
+#include <linux/net/intel/iecm.h>
+
+/**
+ * iecm_tx_singleq_build_ctob - populate command tag offset and size
+ * @td_cmd: Command to be filled in desc
+ * @td_offset: Offset to be filled in desc
+ * @size: Size of the buffer
+ * @td_tag: VLAN tag to be filled
+ *
+ * Returns the 64 bit value populated with the input parameters
+ */
+static __le64
+iecm_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size,
+			   u64 td_tag)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_singleq_csum - Enable Tx checksum offloads
+ * @first: pointer to first descriptor
+ * @off: pointer to struct that holds offload parameters
+ *
+ * Returns 0 on success, or a negative error code if checksum offload fails
+ */
+static
+int iecm_tx_singleq_csum(struct iecm_tx_buf *first,
+			 struct iecm_tx_offload_params *off)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_singleq_map - Build the Tx base descriptor
+ * @tx_q: queue to send buffer on
+ * @first: first buffer info buffer to use
+ * @offloads: pointer to struct that holds offload parameters
+ *
+ * This function loops over the skb data pointed to by *first
+ * and gets a physical address for each memory location and programs
+ * it and the length into the transmit base mode descriptor.
+ */
+static void
+iecm_tx_singleq_map(struct iecm_queue *tx_q, struct iecm_tx_buf *first,
+		    struct iecm_tx_offload_params *offloads)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_singleq_frame - Sends buffer on Tx ring using base descriptors
+ * @skb: send buffer
+ * @tx_q: queue to send buffer on
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+static netdev_tx_t
+iecm_tx_singleq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_singleq_start - Selects the right Tx queue to send buffer
+ * @skb: send buffer
+ * @netdev: network interface device structure
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb,
+				  struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_singleq_clean - Reclaim resources from queue
+ * @tx_q: Tx queue to clean
+ * @napi_budget: Used to determine if we are in netpoll
+ *
+ */
+static bool iecm_tx_singleq_clean(struct iecm_queue *tx_q, int napi_budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_singleq_clean_all - Clean all Tx queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static inline bool
+iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_test_staterr - tests bits in Rx descriptor
+ * status and error fields
+ * @rx_desc: pointer to receive descriptor (in le64 format)
+ * @stat_err_bits: value to mask
+ *
+ * This function does some fast chicanery in order to return the
+ * value of the mask which is really only used for boolean tests.
+ * The status_error_ptype_len doesn't need to be shifted because it begins
+ * at offset zero.
+ */
+static bool
+iecm_rx_singleq_test_staterr(struct iecm_singleq_base_rx_desc *rx_desc,
+			     const u64 stat_err_bits)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_is_non_eop - process handling of non-EOP buffers
+ * @rxq: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ */
+static bool iecm_rx_singleq_is_non_eop(struct iecm_queue *rxq,
+				       struct iecm_singleq_base_rx_desc
+				       *rx_desc, struct sk_buff *skb)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_csum - Indicate in skb if checksum is good
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: skb currently being received and modified
+ * @rx_desc: the receive descriptor
+ * @ptype: the packet type decoded by hardware
+ *
+ * skb->protocol must be set before this function is called
+ */
+static void iecm_rx_singleq_csum(struct iecm_queue *rxq, struct sk_buff *skb,
+				 struct iecm_singleq_base_rx_desc *rx_desc,
+				 u8 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_process_skb_fields - Populate skb header fields from Rx
+ * descriptor
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: descriptor for skb
+ * @ptype: packet type
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
+ */
+static void
+iecm_rx_singleq_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
+				   struct iecm_singleq_base_rx_desc *rx_desc,
+				   u8 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_buf_hw_alloc_all - Replace used receive buffers
+ * @rx_q: queue for which the hw buffers are allocated
+ * @cleaned_count: number of buffers to replace
+ *
+ * Returns false if all allocations were successful, true if any fail
+ */
+bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rx_q,
+				      u16 cleaned_count)
+{
+	/* stub */
+}
+
+/**
+ * iecm_singleq_rx_put_buf - wrapper function to clean and recycle buffers
+ * @rx_bufq: Rx descriptor queue to transact packets on
+ * @rx_buf: Rx buffer to pull data from
+ *
+ * This function will update the next_to_use/next_to_alloc if the current
+ * buffer is recycled.
+ */
+static void iecm_singleq_rx_put_buf(struct iecm_queue *rx_bufq,
+				    struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_singleq_rx_bump_ntc - Bump and wrap q->next_to_clean value
+ * @q: queue to bump
+ */
+static void iecm_singleq_rx_bump_ntc(struct iecm_queue *q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_singleq_rx_get_buf_page - Fetch Rx buffer page and synchronize data
+ * @dev: device struct
+ * @rx_buf: Rx buf to fetch page for
+ * @size: size of buffer to add to skb
+ *
+ * This function will pull an Rx buffer page from the ring and synchronize it
+ * for use by the CPU.
+ */
+static struct sk_buff *
+iecm_singleq_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf,
+			     const unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_clean - Reclaim resources after receive completes
+ * @rx_q: Rx queue to clean
+ * @budget: Total limit on number of packets to process
+ *
+ * Returns amount of work completed
+ */
+static int iecm_rx_singleq_clean(struct iecm_queue *rx_q, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_clean_all - Clean all Rx queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static inline bool
+iecm_rx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget,
+			  int *cleaned)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_singleq_napi_poll - NAPI handler
+ * @napi: struct from which you get q_vector
+ * @budget: budget provided by stack
+ */
+int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
+{
+	/* stub */
+}
diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
new file mode 100644
index 000000000000..6df39810264a
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -0,0 +1,1265 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#include <linux/net/intel/iecm.h>
+
+/**
+ * iecm_buf_lifo_push - push a buffer pointer onto stack
+ * @stack: pointer to stack struct
+ * @buf: pointer to buf to push
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int iecm_buf_lifo_push(struct iecm_buf_lifo *stack,
+			      struct iecm_tx_buf *buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_buf_lifo_pop - pop a buffer pointer from stack
+ * @stack: pointer to stack struct
+ **/
+static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack)
+{
+	/* stub */
+}
+
+/**
+ * iecm_get_stats64 - get statistics for network device structure
+ * @netdev: network interface device structure
+ * @stats: main device statistics structure
+ */
+void iecm_get_stats64(struct net_device *netdev,
+		      struct rtnl_link_stats64 *stats)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_buf_rel - Release a Tx buffer
+ * @tx_q: the queue that owns the buffer
+ * @tx_buf: the buffer to free
+ */
+void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_buf_rel_all - Free any empty Tx buffers
+ * @txq: queue to be cleaned
+ */
+static void iecm_tx_buf_rel_all(struct iecm_queue *txq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_rel - Free Tx resources per queue
+ * @txq: Tx descriptor ring for a specific queue
+ * @bufq: buffer q or completion q
+ *
+ * Free all transmit software resources
+ */
+static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_rel_all - Free Tx Resources for All Queues
+ * @vport: virtual port structure
+ *
+ * Free all transmit software resources
+ */
+static void iecm_tx_desc_rel_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_buf_alloc_all - Allocate memory for all buffer resources
+ * @tx_q: queue for which the buffers are allocated
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_tx_buf_alloc_all(struct iecm_queue *tx_q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_alloc - Allocate the Tx descriptors
+ * @tx_q: the Tx ring to set up
+ * @bufq: buffer or completion queue
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_alloc_all - allocate all queues Tx resources
+ * @vport: virtual port private structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_tx_desc_alloc_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_rel - Release a Rx buffer
+ * @rxq: the queue that owns the buffer
+ * @rx_buf: the buffer to free
+ */
+static void iecm_rx_buf_rel(struct iecm_queue *rxq,
+			    struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_rel_all - Free all Rx buffer resources for a queue
+ * @rxq: queue to be cleaned
+ */
+static void iecm_rx_buf_rel_all(struct iecm_queue *rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_desc_rel - Free a specific Rx q resources
+ * @rxq: queue to clean the resources from
+ * @bufq: buffer q or completion q
+ * @q_model: single or split q model
+ *
+ * Free a specific Rx queue resources
+ */
+static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq,
+			     enum virtchnl_queue_model q_model)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_desc_rel_all - Free Rx Resources for All Queues
+ * @vport: virtual port structure
+ *
+ * Free all Rx queues resources
+ */
+static void iecm_rx_desc_rel_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_hw_update - Store the new tail and head values
+ * @rxq: queue to bump
+ * @val: new head index
+ */
+void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_hw_alloc - recycle or make a new page
+ * @rxq: ring to use
+ * @buf: rx_buffer struct to modify
+ *
+ * Returns true if the page was successfully allocated or
+ * reused.
+ */
+bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_hdr_buf_hw_alloc - recycle or make a new page for header buffer
+ * @rxq: ring to use
+ * @hdr_buf: rx_buffer struct to modify
+ *
+ * Returns true if the page was successfully allocated or
+ * reused.
+ */
+static bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq,
+				     struct iecm_rx_buf *hdr_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_hw_alloc_all - Replace used receive buffers
+ * @rxq: queue for which the hw buffers are allocated
+ * @cleaned_count: number of buffers to replace
+ *
+ * Returns false if all allocations were successful, true if any fail
+ */
+static bool
+iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq,
+			 u16 cleaned_count)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_alloc_all - Allocate memory for all buffer resources
+ * @rxq: queue for which the buffers are allocated
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_rx_buf_alloc_all(struct iecm_queue *rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_desc_alloc - Allocate queue Rx resources
+ * @rxq: Rx queue for which the resources are setup
+ * @bufq: buffer or completion queue
+ * @q_model: single or split queue model
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq,
+			      enum virtchnl_queue_model q_model)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_desc_alloc_all - allocate all RX queues resources
+ * @vport: virtual port structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_rx_desc_alloc_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_txq_group_rel - Release all resources for txq groups
+ * @vport: vport to release txq groups on
+ */
+static void iecm_txq_group_rel(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rxq_group_rel - Release all resources for rxq groups
+ * @vport: vport to release rxq groups on
+ */
+static void iecm_rxq_group_rel(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_queue_grp_rel_all - Release all queue groups
+ * @vport: vport to release queue groups for
+ */
+static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_queues_rel - Free memory for all queues
+ * @vport: virtual port
+ *
+ * Free the memory allocated for queues associated to a vport
+ */
+void iecm_vport_queues_rel(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_init_fast_path_txqs - Initialize fast path txq array
+ * @vport: vport to init txqs on
+ *
+ * We get a queue index from skb->queue_mapping and we need a fast way to
+ * dereference the queue from queue groups.  This allows us to quickly pull a
+ * txq based on a queue index.
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_vport_init_fast_path_txqs(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_init_num_qs - Initialize number of queues
+ * @vport: vport to initialize qs
+ * @vport_msg: data to be filled into vport
+ */
+void iecm_vport_init_num_qs(struct iecm_vport *vport,
+			    struct virtchnl_create_vport *vport_msg)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_calc_num_q_desc - Calculate number of queue descriptors
+ * @vport: vport to calculate q descriptors for
+ */
+void iecm_vport_calc_num_q_desc(struct iecm_vport *vport)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_vport_calc_num_q_desc);
+
+/**
+ * iecm_vport_calc_total_qs - Calculate total number of queues
+ * @vport_msg: message to fill with data
+ * @num_req_qs: user requested queues
+ */
+void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg,
+			      int num_req_qs)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_calc_num_q_groups - Calculate number of queue groups
+ * @vport: vport to calculate q groups for
+ */
+void iecm_vport_calc_num_q_groups(struct iecm_vport *vport)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_vport_calc_num_q_groups);
+
+/**
+ * iecm_vport_calc_numq_per_grp - Calculate number of queues per group
+ * @vport: vport to calculate queues for
+ * @num_txq: return parameter for number of Tx queues per group
+ * @num_rxq: return parameter for number of Rx queues per group
+ */
+static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport,
+					 int *num_txq, int *num_rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_calc_num_q_vec - Calculate total number of vectors required for
+ * this vport
+ * @vport: virtual port
+ *
+ */
+void iecm_vport_calc_num_q_vec(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_txq_group_alloc - Allocate all txq group resources
+ * @vport: vport to allocate txq groups for
+ * @num_txq: number of txqs to allocate for each group
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rxq_group_alloc - Allocate all rxq group resources
+ * @vport: vport to allocate rxq groups for
+ * @num_rxq: number of rxqs to allocate for each group
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_queue_grp_alloc_all - Allocate all queue groups/resources
+ * @vport: vport with qgrps to allocate
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_queues_alloc - Allocate memory for all queues
+ * @vport: virtual port
+ *
+ * Allocate memory for queues associated with a vport.  Returns 0 on success,
+ * negative on failure.
+ */
+int iecm_vport_queues_alloc(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_find_q - Find the Tx q based on q id
+ * @vport: the vport we care about
+ * @q_id: Id of the queue
+ *
+ * Returns queue ptr if found else returns NULL
+ */
+static struct iecm_queue *
+iecm_tx_find_q(struct iecm_vport *vport, int q_id)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_handle_sw_marker - Handle queue marker packet
+ * @tx_q: Tx queue to handle software marker
+ */
+static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_clean_buf - Clean TX buffer resources
+ * @tx_q: Tx queue to clean buffer from
+ * @tx_buf: buffer to be cleaned
+ * @napi_budget: Used to determine if we are in netpoll
+ *
+ * Returns the stats (bytes/packets) cleaned from this buffer
+ */
+static struct iecm_tx_queue_stats
+iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf,
+			 int napi_budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_stash_flow_sch_buffers - store buffer parameter info to be freed at a
+ * later time (only relevant for flow scheduling mode)
+ * @txq: Tx queue to clean
+ * @tx_buf: buffer to store
+ */
+static int
+iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_clean - Reclaim resources from buffer queue
+ * @tx_q: Tx queue to clean
+ * @end: queue index until which it should be cleaned
+ * @napi_budget: Used to determine if we are in netpoll
+ * @descs_only: true if queue is using flow-based scheduling and should
+ * not clean buffers at this time
+ *
+ * Cleans the queue descriptor ring. If the queue is using queue-based
+ * scheduling, the buffers will be cleaned as well and this function will
+ * return the number of bytes/packets cleaned. If the queue is using flow-based
+ * scheduling, only the descriptors are cleaned at this time. Separate packet
+ * completion events will be reported on the completion queue, and the
+ * buffers will be cleaned separately. The stats returned from this function
+ * when using flow-based scheduling are irrelevant.
+ */
+static struct iecm_tx_queue_stats
+iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget,
+		     bool descs_only)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_clean_flow_sch_bufs - clean bufs that were stored for
+ * out of order completions
+ * @txq: queue to clean
+ * @compl_tag: completion tag of packet to clean (from completion descriptor)
+ * @desc_ts: pointer to 3 byte timestamp from descriptor
+ * @budget: Used to determine if we are in netpoll
+ */
+static struct iecm_tx_queue_stats
+iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag,
+			    u8 *desc_ts, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_clean_complq - Reclaim resources on completion queue
+ * @complq: Tx ring to clean
+ * @budget: Used to determine if we are in netpoll
+ *
+ * Returns true if there's any budget left (i.e. the clean is finished)
+ */
+static bool
+iecm_tx_clean_complq(struct iecm_queue *complq, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_build_ctb - populate command tag and size for queue
+ * based scheduling descriptors
+ * @desc: descriptor to populate
+ * @parms: pointer to Tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+static inline void
+iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc,
+			 struct iecm_tx_splitq_params *parms,
+			 u16 td_cmd, u16 size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_build_flow_desc - populate command tag and size for flow
+ * scheduling descriptors
+ * @desc: descriptor to populate
+ * @parms: pointer to Tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+static inline void
+iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
+			       struct iecm_tx_splitq_params *parms,
+			       u16 td_cmd, u16 size)
+{
+	/* stub */
+}
+
+/**
+ * __iecm_tx_maybe_stop - 2nd level check for Tx stop conditions
+ * @tx_q: the queue to be checked
+ * @size: the size buffer we want to assure is available
+ *
+ * Returns -EBUSY if a stop is needed, else 0
+ */
+static int
+__iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_maybe_stop - 1st level check for Tx stop conditions
+ * @tx_q: the queue to be checked
+ * @size: number of descriptors we want to assure is available
+ *
+ * Returns 0 if stop is not needed
+ */
+int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
+{
+	/* stub */
+}
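+
+/*
+ * Sketch of the usual shape of this pair (illustrative only, not part of
+ * this patch; IECM_DESC_UNUSED() and the queue fields used below are
+ * assumptions, not definitions from this series):
+ *
+ *	if (likely(IECM_DESC_UNUSED(tx_q) >= size))
+ *		return 0;
+ *	netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx);
+ *	smp_mb();
+ *	if (IECM_DESC_UNUSED(tx_q) < size)
+ *		return -EBUSY;
+ *	netif_start_subqueue(tx_q->vport->netdev, tx_q->idx);
+ *	return 0;
+ */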
+
+/**
+ * iecm_tx_buf_hw_update - Store the new tail and head values
+ * @tx_q: queue to bump
+ * @val: new head index
+ */
+void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val)
+{
+	/* stub */
+}
+
+/**
+ * __iecm_tx_desc_count_required - Get the number of descriptors needed for Tx
+ * @size: transmit request size in bytes
+ *
+ * Due to hardware alignment restrictions (4K alignment), we need to
+ * assume that we can have no more than 12K of data per descriptor, even
+ * though each descriptor can take up to 16K - 1 bytes of aligned memory.
+ * Thus, we need to divide by 12K. But division is slow! Instead,
+ * we decompose the operation into shifts and one relatively cheap
+ * multiply operation.
+ *
+ * To divide by 12K, we first divide by 4K, then divide by 3:
+ *     To divide by 4K, shift right by 12 bits
+ *     To divide by 3, multiply by 85, then divide by 256
+ *     (Divide by 256 is done by shifting right by 8 bits)
+ * Finally, we add one to round up. Because 256 isn't an exact multiple of
+ * 3, we'll underestimate near each multiple of 12K. This is actually more
+ * accurate as we have 4K - 1 of wiggle room that we can fit into the last
+ * segment. For our purposes this is accurate out to 1M which is orders of
+ * magnitude greater than our largest possible GSO size.
+ *
+ * This would then be implemented as:
+ *     return (((size >> 12) * 85) >> 8) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
+ *
+ * Since multiplication and division are commutative, we can reorder
+ * operations into:
+ *     return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
+ */
+static unsigned int __iecm_tx_desc_count_required(unsigned int size)
+{
+	/* stub */
+}
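+
+/*
+ * Worked example of the math above (illustrative only, assuming
+ * IECM_TX_DESCS_FOR_SKB_DATA_PTR is 1): for a 64K payload,
+ * (65536 * 85) >> 20 = 5, plus 1 gives 6 descriptors, which matches
+ * DIV_ROUND_UP(65536, 12288) = 6.  For a 24K payload,
+ * (24576 * 85) >> 20 = 1, plus 1 gives 2 descriptors.
+ */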
+
+/**
+ * iecm_tx_desc_count_required - calculate number of Tx descriptors needed
+ * @skb: send buffer
+ *
+ * Returns number of data descriptors needed for this skb.
+ */
+unsigned int iecm_tx_desc_count_required(struct sk_buff *skb)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_map - Build the Tx flex descriptor
+ * @tx_q: queue to send buffer on
+ * @off: pointer to offload params struct
+ * @first: first buffer info buffer to use
+ *
+ * This function loops over the skb data pointed to by *first
+ * and gets a physical address for each memory location and programs
+ * it and the length into the transmit flex descriptor.
+ */
+static void
+iecm_tx_splitq_map(struct iecm_queue *tx_q,
+		   struct iecm_tx_offload_params *off,
+		   struct iecm_tx_buf *first)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tso - computes mss and TSO length to prepare for TSO
+ * @first: pointer to struct iecm_tx_buf
+ * @off: pointer to struct that holds offload parameters
+ *
+ * Returns error (negative) if TSO doesn't apply to the given skb,
+ * 0 otherwise.
+ *
+ * Note: this function can be used in the splitq and singleq paths
+ */
+static int iecm_tso(struct iecm_tx_buf *first,
+		    struct iecm_tx_offload_params *off)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors
+ * @skb: send buffer
+ * @tx_q: queue to send buffer on
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+static netdev_tx_t
+iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_start - Selects the right Tx queue to send buffer
+ * @skb: send buffer
+ * @netdev: network interface device structure
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb,
+				 struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ptype_to_htype - get a hash type
+ * @vport: virtual port data
+ * @ptype: the ptype value from the descriptor
+ *
+ * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by
+ * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of
+ * Rx desc.
+ */
+static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport,
+					       u16 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_hash - set the hash value in the skb
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ * @ptype: the packet type decoded by hardware
+ */
+static void
+iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb,
+	     struct iecm_flex_rx_desc *rx_desc, u16 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_csum - Indicate in skb if checksum is good
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ * @ptype: the packet type decoded by hardware
+ *
+ * skb->protocol must be set before this function is called
+ */
+static void
+iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb,
+	     struct iecm_flex_rx_desc *rx_desc, u16 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_rsc - Set the RSC fields in the skb
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ * @ptype: the packet type decoded by hardware
+ *
+ * Populate the skb fields with the total number of RSC segments, RSC payload
+ * length and packet type.
+ */
+static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb,
+			struct iecm_flex_rx_desc *rx_desc, u16 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_hwtstamp - check for an RX timestamp and pass up
+ * the stack
+ * @rx_desc: pointer to Rx descriptor containing timestamp
+ * @skb: skb to put timestamp in
+ */
+static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc,
+			     struct sk_buff __maybe_unused *skb)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
+ */
+static bool
+iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
+			   struct iecm_flex_rx_desc *rx_desc)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_skb - Send a completed packet up the stack
+ * @rxq: Rx ring in play
+ * @skb: packet to send up
+ *
+ * This function sends the completed packet (via skb) up the stack using
+ * GRO receive functions
+ */
+void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_page_is_reserved - check if reuse is possible
+ * @page: page struct to check
+ */
+static bool iecm_rx_page_is_reserved(struct page *page)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_adjust_pg_offset - Prepare Rx buffer for reuse
+ * @rx_buf: Rx buffer to adjust
+ * @size: Size of adjustment
+ *
+ * Update the offset within page so that Rx buf will be ready to be reused.
+ * For systems with PAGE_SIZE < 8192 this function will flip the page offset
+ * so the second half of page assigned to Rx buffer will be used, otherwise
+ * the offset is moved by the @size bytes
+ */
+static void
+iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size)
+{
+	/* stub */
+}
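+
+/*
+ * Sketch of the offset handling described above (illustrative only; the
+ * page_offset field name and the fixed half-page buffer size are
+ * assumptions, not definitions from this series):
+ *
+ *	#if (PAGE_SIZE < 8192)
+ *		// 4K pages: toggle between the two half-page buffers
+ *		rx_buf->page_offset ^= size;
+ *	#else
+ *		// larger pages: just move past the data that was used
+ *		rx_buf->page_offset += size;
+ *	#endif
+ */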
+
+/**
+ * iecm_rx_can_reuse_page - Determine if page can be reused for another Rx
+ * @rx_buf: buffer containing the page
+ *
+ * If page is reusable, we have a green light for calling iecm_reuse_rx_page,
+ * which will assign the current buffer to the buffer that next_to_alloc is
+ * pointing to; otherwise, the DMA mapping needs to be destroyed and
+ * page freed
+ */
+static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_add_frag - Add contents of Rx buffer to sk_buff as a frag
+ * @rx_buf: buffer containing page to add
+ * @skb: sk_buff to place the data into
+ * @size: packet length from rx_desc
+ *
+ * This function will add the data contained in rx_buf->page to the skb.
+ * It will just attach the page as a frag to the skb.
+ * The function will then update the page offset.
+ */
+void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb,
+		      unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_reuse_page - page flip buffer and store it back on the queue
+ * @rx_bufq: Rx descriptor ring to store buffers on
+ * @hsplit: true if header buffer, false otherwise
+ * @old_buf: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ */
+void iecm_rx_reuse_page(struct iecm_queue *rx_bufq,
+			bool hsplit,
+			struct iecm_rx_buf *old_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_get_buf_page - Fetch Rx buffer page and synchronize data for use
+ * @dev: device struct
+ * @rx_buf: Rx buf to fetch page for
+ * @size: size of buffer to add to skb
+ *
+ * This function will pull an Rx buffer page from the ring and synchronize it
+ * for use by the CPU.
+ */
+static void
+iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf,
+		     const unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_construct_skb - Allocate skb and populate it
+ * @rxq: Rx descriptor queue
+ * @rx_buf: Rx buffer to pull data from
+ * @size: the length of the packet
+ *
+ * This function allocates an skb. It then populates it with the page
+ * data from the current receive descriptor, taking care to set up the
+ * skb correctly.
+ */
+struct sk_buff *
+iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf,
+		      unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_cleanup_headers - Correct empty headers
+ * @skb: pointer to current skb being fixed
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ */
+bool iecm_rx_cleanup_headers(struct sk_buff *skb)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_splitq_test_staterr - tests bits in Rx descriptor
+ * status and error fields
+ * @stat_err_field: field from descriptor to test bits in
+ * @stat_err_bits: value to mask
+ *
+ */
+static bool
+iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_splitq_is_non_eop - process handling of non-EOP buffers
+ * @rx_desc: Rx descriptor for current buffer
+ *
+ * If the buffer is an EOP buffer, this function exits returning false,
+ * otherwise return true indicating that this is in fact a non-EOP buffer.
+ */
+static bool
+iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_recycle_buf - Clean up used buffer and either recycle or free
+ * @rx_bufq: Rx descriptor queue to transact packets on
+ * @hsplit: true if buffer is a header buffer
+ * @rx_buf: Rx buffer to pull data from
+ *
+ * This function will clean up the contents of the rx_buf. It will either
+ * recycle the buffer or unmap it and free the associated resources.
+ *
+ * Returns true if the buffer is reused, false if the buffer is freed.
+ */
+bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit,
+			 struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_splitq_put_bufs - wrapper function to clean and recycle buffers
+ * @rx_bufq: Rx descriptor queue to transact packets on
+ * @hdr_buf: Rx header buffer to pull data from
+ * @rx_buf: Rx buffer to pull data from
+ *
+ * This function will update the next_to_use/next_to_alloc if the current
+ * buffer is recycled.
+ */
+static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq,
+				    struct iecm_rx_buf *hdr_buf,
+				    struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_bump_ntc - Bump and wrap q->next_to_clean value
+ * @q: queue to bump
+ */
+static void iecm_rx_bump_ntc(struct iecm_queue *q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_splitq_clean - Clean completed descriptors from Rx queue
+ * @rxq: Rx descriptor queue to retrieve receive buffer queue
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing. The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed
+ */
+static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_clean_queues - MSIX mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a q_vector
+ *
+ */
+irqreturn_t
+iecm_vport_intr_clean_queues(int __always_unused irq, void *data)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport
+ * @vport: main vport structure
+ */
+static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_rel - Free memory allocated for interrupt vectors
+ * @vport: virtual port
+ *
+ * Free the memory allocated for interrupt vectors associated with a vport
+ */
+void iecm_vport_intr_rel(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_rel_irq - Free the IRQ association with the OS
+ * @vport: main vport structure
+ */
+static void iecm_vport_intr_rel_irq(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_dis_irq_all - Disable each interrupt
+ * @vport: main vport structure
+ */
+void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_buildreg_itr - Enable default interrupt generation settings
+ * @q_vector: pointer to q_vector
+ * @type: ITR index
+ * @itr: ITR value
+ */
+static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector,
+					const int type, u16 itr)
+{
+	/* stub */
+}
+
+static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_set_new_itr - update the ITR value based on statistics
+ * @q_vector: structure containing interrupt and ring information
+ * @itr: structure containing queue performance data
+ * @q_type: queue type
+ *
+ * Stores a new ITR value based on packets and byte
+ * counts during the last interrupt.  The advantage of per interrupt
+ * computation is faster updates and more accurate ITR for the current
+ * traffic pattern.  Constants in this function were computed
+ * based on theoretical maximum wire speed and thresholds were set based
+ * on testing data as well as attempting to minimize response time
+ * while increasing bulk throughput.
+ */
+static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector,
+					struct iecm_itr *itr,
+					enum virtchnl_queue_type q_type)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_update_itr_ena_irq - Update ITR and re-enable MSIX interrupt
+ * @q_vector: q_vector for which ITR is being updated and interrupt enabled
+ */
+void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_req_irq - get MSI-X vectors from the OS for the vport
+ * @vport: main vport structure
+ * @basename: name for the vector
+ */
+static int
+iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_ena_irq_all - Enable IRQ for the given vport
+ * @vport: main vport structure
+ */
+void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_deinit - Release all vector associations for the vport
+ * @vport: main vport structure
+ */
+void iecm_vport_intr_deinit(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport
+ * @vport: main vport structure
+ */
+static void
+iecm_vport_intr_napi_ena_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_clean_all - Clean completion queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static inline bool
+iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_splitq_clean_all - Clean all Rx queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static inline bool
+iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget,
+			 int *cleaned)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_splitq_napi_poll - NAPI handler
+ * @napi: struct from which you get q_vector
+ * @budget: budget provided by stack
+ */
+static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_map_vector_to_qs - Map vectors to queues
+ * @vport: virtual port
+ *
+ * Mapping for vectors to queues
+ */
+static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_init_vec_idx - Initialize the vector indexes
+ * @vport: virtual port
+ *
+ * Initialize vector indexes with values returned over mailbox
+ */
+static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_alloc - Allocate memory for interrupt vectors
+ * @vport: virtual port
+ *
+ * We allocate one q_vector per queue interrupt. If allocation fails we
+ * return -ENOMEM.
+ */
+int iecm_vport_intr_alloc(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_intr_init - Setup all vectors for the given vport
+ * @vport: virtual port
+ *
+ * Returns 0 on success or negative on failure
+ */
+int iecm_vport_intr_init(struct iecm_vport *vport)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_vport_calc_num_q_vec);
+
+/**
+ * iecm_config_rss - Prepare for RSS
+ * @vport: virtual port
+ *
+ * Return 0 on success, negative on failure
+ */
+int iecm_config_rss(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_get_rx_qid_list - Create a list of RX QIDs
+ * @vport: virtual port
+ * @qid_list: list of qids
+ *
+ * qid_list must be allocated for maximum entries to prevent buffer overflow.
+ */
+void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list)
+{
+	/* stub */
+}
+
+/**
+ * iecm_fill_dflt_rss_lut - Fill the indirection table with the default values
+ * @vport: virtual port structure
+ * @qid_list: List of the RX qids
+ *
+ * qid_list is created and freed by the caller
+ */
+void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list)
+{
+	/* stub */
+}
+
+/**
+ * iecm_init_rss - Prepare for RSS
+ * @vport: virtual port
+ *
+ * Return 0 on success, negative on failure
+ */
+int iecm_init_rss(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_deinit_rss - Release RSS resources
+ * @vport: virtual port
+ */
+void iecm_deinit_rss(struct iecm_vport *vport)
+{
+	/* stub */
+}
diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
new file mode 100644
index 000000000000..7fa8c06c5d7d
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -0,0 +1,577 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#include <linux/net/intel/iecm.h>
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const
+struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = {
+	/* L2 Packet types */
+	IECM_PTT_UNUSED_ENTRY(0),
+	IECM_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	IECM_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	IECM_PTT_UNUSED_ENTRY(12),
+
+	/* Non Tunneled IPv4 */
+	IECM_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	IECM_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	IECM_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	IECM_PTT_UNUSED_ENTRY(25),
+	IECM_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	IECM_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	IECM_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	IECM_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	IECM_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	IECM_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
+	IECM_PTT_UNUSED_ENTRY(91),
+	IECM_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	IECM_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	IECM_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+};
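+
+/*
+ * Reading the table: HW ptype 26 above, for example, decodes as a
+ * non-fragmented, non-tunneled IPv4 TCP packet with a PAY4 payload layer,
+ * while HW ptypes the driver does not decode (e.g. 25) are marked with
+ * IECM_PTT_UNUSED_ENTRY.
+ */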
+
+/**
+ * iecm_recv_event_msg - Receive virtchnl event message
+ * @vport: virtual port structure
+ *
+ * Receive virtchnl event message
+ */
+static void iecm_recv_event_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_mb_clean - Reclaim the send mailbox queue entries
+ * @adapter: Driver specific private structure
+ *
+ * Reclaim the send mailbox queue entries to be used to send further messages
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_mb_clean(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_mb_msg - Send message over mailbox
+ * @adapter: Driver specific private structure
+ * @op: virtchnl opcode
+ * @msg_size: size of the payload
+ * @msg: pointer to buffer holding the payload
+ *
+ * Will prepare the control queue message and initiate the send API
+ *
+ * Returns 0 on success, negative on failure
+ */
+int iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op,
+		     u16 msg_size, u8 *msg)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_send_mb_msg);
+
+/**
+ * iecm_recv_mb_msg - Receive message over mailbox
+ * @adapter: Driver specific private structure
+ * @op: virtchnl operation code
+ * @msg: Received message holding buffer
+ * @msg_size: message size
+ *
+ * Will receive control queue message and post the receive buffer. Returns 0
+ * on success and negative on failure.
+ */
+int iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op,
+		     void *msg, int msg_size)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_recv_mb_msg);
+
+/**
+ * iecm_send_ver_msg - send virtchnl version message
+ * @adapter: Driver specific private structure
+ *
+ * Send virtchnl version message.  Returns 0 on success, negative on failure.
+ */
+static int iecm_send_ver_msg(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_recv_ver_msg - Receive virtchnl version message
+ * @adapter: Driver specific private structure
+ *
+ * Receive virtchnl version message. Returns 0 on success, negative on failure.
+ */
+static int iecm_recv_ver_msg(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_get_caps_msg - Send virtchnl get capabilities message
+ * @adapter: Driver specific private structure
+ *
+ * Send virtchnl get capabilities message. Returns 0 on success, negative on
+ * failure.
+ */
+int iecm_send_get_caps_msg(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_send_get_caps_msg);
+
+/**
+ * iecm_recv_get_caps_msg - Receive virtchnl get capabilities message
+ * @adapter: Driver specific private structure
+ *
+ * Receive virtchnl get capabilities message.  Returns 0 on success, negative on
+ * failure.
+ */
+static int iecm_recv_get_caps_msg(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_create_vport_msg - Send virtchnl create vport message
+ * @adapter: Driver specific private structure
+ *
+ * Send virtchnl create vport message
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_send_create_vport_msg(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_recv_create_vport_msg - Receive virtchnl create vport message
+ * @adapter: Driver specific private structure
+ * @vport_id: Virtual port identifier
+ *
+ * Receive virtchnl create vport message.  Returns 0 on success, negative on
+ * failure.
+ */
+static int iecm_recv_create_vport_msg(struct iecm_adapter *adapter,
+				      int *vport_id)
+{
+	/* stub */
+}
+
+/**
+ * iecm_wait_for_event - wait for virtchnl response
+ * @adapter: Driver private data structure
+ * @state: state to wait for, with a 500 ms timeout
+ * @err_check: check if this specific error bit is set
+ *
+ * Checks if state is set upon expiry of timeout.  Returns 0 on success,
+ * negative on failure.
+ */
+int iecm_wait_for_event(struct iecm_adapter *adapter,
+			enum iecm_vport_vc_state state,
+			enum iecm_vport_vc_state err_check)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_wait_for_event);
+
+/**
+ * iecm_send_destroy_vport_msg - Send virtchnl destroy vport message
+ * @vport: virtual port data structure
+ *
+ * Send virtchnl destroy vport message.  Returns 0 on success, negative on
+ * failure.
+ */
+int iecm_send_destroy_vport_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_enable_vport_msg - Send virtchnl enable vport message
+ * @vport: virtual port data structure
+ *
+ * Send enable vport virtchnl message.  Returns 0 on success, negative on
+ * failure.
+ */
+int iecm_send_enable_vport_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_disable_vport_msg - Send virtchnl disable vport message
+ * @vport: virtual port data structure
+ *
+ * Send disable vport virtchnl message.  Returns 0 on success, negative on
+ * failure.
+ */
+int iecm_send_disable_vport_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_config_tx_queues_msg - Send virtchnl config Tx queues message
+ * @vport: virtual port data structure
+ *
+ * Send config tx queues virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+int iecm_send_config_tx_queues_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_config_rx_queues_msg - Send virtchnl config Rx queues message
+ * @vport: virtual port data structure
+ *
+ * Send config rx queues virtchnl message.  Returns 0 on success, negative on
+ * failure.
+ */
+int iecm_send_config_rx_queues_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_ena_dis_queues_msg - Send virtchnl enable or disable
+ * queues message
+ * @vport: virtual port data structure
+ * @vc_op: virtchnl op code to send
+ *
+ * Send enable or disable queues virtchnl message. Returns 0 on success,
+ * negative on failure.
+ */
+static int iecm_send_ena_dis_queues_msg(struct iecm_vport *vport,
+					enum virtchnl_ops vc_op)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue
+ * vector message
+ * @vport: virtual port data structure
+ * @map: true for map and false for unmap
+ *
+ * Send map or unmap queue vector virtchnl message.  Returns 0 on success,
+ * negative on failure.
+ */
+static int
+iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_enable_queues_msg - send enable queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send enable queues virtchnl message.  Returns 0 on success, negative on
+ * failure.
+ */
+static int iecm_send_enable_queues_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_disable_queues_msg - send disable queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send disable queues virtchnl message.  Returns 0 on success, negative
+ * on failure.
+ */
+static int iecm_send_disable_queues_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_delete_queues_msg - send delete queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send delete queues virtchnl message. Return 0 on success, negative on
+ * failure.
+ */
+int iecm_send_delete_queues_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_config_queues_msg - Send config queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send config queues virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+static int iecm_send_config_queues_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_add_queues_msg - Send virtchnl add queues message
+ * @vport: Virtual port private data structure
+ * @num_tx_q: number of transmit queues
+ * @num_complq: number of transmit completion queues
+ * @num_rx_q: number of receive queues
+ * @num_rx_bufq: number of receive buffer queues
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q,
+			     u16 num_complq, u16 num_rx_q, u16 num_rx_bufq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_get_stats_msg - Send virtchnl get statistics message
+ * @vport: vport to get stats for
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int iecm_send_get_stats_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_get_set_rss_hash_msg - Send set or get RSS hash message
+ * @vport: virtual port data structure
+ * @get: flag to get or set RSS hash
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message
+ * @vport: virtual port data structure
+ * @get: flag to set or get RSS look up table
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message
+ * @vport: virtual port data structure
+ * @get: flag to set or get the RSS key
+ *
+ * Returns 0 on success, negative on failure
+ */
+int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get)
+{
+	/* stub */
+}
+
+/**
+ * iecm_send_get_rx_ptype_msg - Send virtchnl get Rx ptype info message
+ * @vport: virtual port data structure
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_find_ctlq - Given a type and id, find ctlq info
+ * @hw: hardware struct
+ * @type: type of ctrlq to find
+ * @id: ctlq id to find
+ *
+ * Returns pointer to found ctlq info struct, NULL otherwise.
+ */
+static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw,
+					     enum iecm_ctlq_type type, int id)
+{
+	/* stub */
+}
+
+/**
+ * iecm_deinit_dflt_mbx - Deinitialize the default mailbox
+ * @adapter: adapter info struct
+ */
+void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_init_dflt_mbx - Setup default mailbox parameters and make request
+ * @adapter: adapter info struct
+ *
+ * Returns 0 on success, negative otherwise
+ */
+int iecm_init_dflt_mbx(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_params_buf_alloc - Allocate memory for mailbox resources
+ * @adapter: Driver specific private data structure
+ *
+ * Will allocate memory to hold the vport parameters received over the mailbox
+ */
+int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_params_buf_rel - Release memory for mailbox resources
+ * @adapter: Driver specific private data structure
+ *
+ * Will release the memory holding the vport parameters received over the mailbox
+ */
+void iecm_vport_params_buf_rel(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vc_core_init - Initialize mailbox and get resources
+ * @adapter: Driver specific private structure
+ * @vport_id: Virtual port identifier
+ *
+ * Checks whether the HW is ready (reset complete), initializes the mailbox and
+ * communicates with the master to get all the default vport parameters. Returns 0
+ * on success, negative on failure.
+ */
+int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_vc_core_init);
+
+/**
+ * iecm_vport_init - Initialize virtual port
+ * @vport: virtual port to be initialized
+ * @vport_id: Unique identification number of vport
+ *
+ * Will initialize the vport with the info received earlier through the mailbox
+ */
+static void iecm_vport_init(struct iecm_vport *vport,
+			    __always_unused int vport_id)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_get_vec_ids - Initialize vector id from Mailbox parameters
+ * @vecids: Array of vector ids
+ * @num_vecids: number of vector ids
+ * @chunks: vector ids received over mailbox
+ *
+ * Will initialize all vector ids with the ids received as mailbox parameters.
+ * Returns the number of ids filled.
+ */
+int
+iecm_vport_get_vec_ids(u16 *vecids, int num_vecids,
+		       struct virtchnl_vector_chunks *chunks)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_get_queue_ids - Initialize queue id from Mailbox parameters
+ * @qids: Array of queue ids
+ * @num_qids: number of queue ids
+ * @q_type: queue model
+ * @chunks: queue ids received over mailbox
+ *
+ * Will initialize all queue ids with the ids received as mailbox parameters.
+ * Returns the number of ids filled.
+ */
+static int
+iecm_vport_get_queue_ids(u16 *qids, int num_qids,
+			 enum virtchnl_queue_type q_type,
+			 struct virtchnl_queue_chunks *chunks)
+{
+	/* stub */
+}
+
+/**
+ * __iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
+ * @vport: virtual port for which the queues ids are initialized
+ * @qids: queue ids
+ * @num_qids: number of queue ids
+ * @q_type: type of queue
+ *
+ * Will initialize all queue ids with ids received as mailbox
+ * parameters. Returns number of queue ids initialized.
+ */
+static int
+__iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids,
+			    int num_qids, enum virtchnl_queue_type q_type)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
+ * @vport: virtual port for which the queues ids are initialized
+ *
+ * Will initialize all queue ids with ids received as mailbox parameters.
+ * Returns 0 on success, negative if all the queues are not initialized.
+ */
+static int iecm_vport_queue_ids_init(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_adjust_qs - Adjust to new requested queues
+ * @vport: virtual port data struct
+ *
+ * Renegotiate queues after a change in the requested number of queues.
+ */
+void iecm_vport_adjust_qs(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_is_capability_ena - Default implementation of capability checking
+ * @adapter: Private data struct
+ * @flag: flag to check
+ *
+ * Return true if capability is supported, false otherwise
+ */
+static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vc_ops_init - Initialize virtchnl common API
+ * @adapter: Driver specific private structure
+ *
+ * Initialize the function pointers with the extended feature set functions
+ * as APF will deal only with the new set of opcodes.
+ */
+void iecm_vc_ops_init(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+EXPORT_SYMBOL(iecm_vc_ops_init);
-- 
2.26.2
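
A caller-side sketch of how the send and wait helpers stubbed out above are
expected to be chained once they are implemented; the reply itself is drained
asynchronously by iecm_recv_mb_msg().  VIRTCHNL_OP_FOO and the
IECM_VC_FOO/IECM_VC_FOO_ERR state bits below are illustrative placeholders,
not opcodes or states defined by this series:

/* Sketch only: the opcode and vc_state names are placeholders */
static int example_send_foo_and_wait(struct iecm_adapter *adapter,
				     u8 *buf, u16 len)
{
	int err;

	/* Queue the request on the send mailbox control queue */
	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_FOO, len, buf);
	if (err)
		return err;

	/* The reply is consumed asynchronously (the service task calls
	 * iecm_recv_mb_msg()); wait for the matching state bit, or its
	 * error bit, to be set within the 500 ms timeout.
	 */
	return iecm_wait_for_event(adapter, IECM_VC_FOO, IECM_VC_FOO_ERR);
}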



* [net-next v5 05/15] iecm: Add basic netdevice functionality
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (3 preceding siblings ...)
  2020-08-24 17:32 ` [net-next v5 04/15] iecm: Common module introduction and function stubs Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 06/15] iecm: Implement mailbox functionality Tony Nguyen
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

This implements probe, interface up/down, and netdev_ops.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/iecm/iecm_lib.c    | 420 +++++++++++++++++-
 drivers/net/ethernet/intel/iecm/iecm_main.c   |   7 +-
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   |   6 +-
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   |  71 ++-
 4 files changed, 479 insertions(+), 25 deletions(-)
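
For context on the probe path added below: iecm_probe() expects the
device-specific driver to pass in an adapter whose dev_ops.reg_ops_init hook
is already set, and iecm_api_init() rejects the device otherwise
(dev_ops.vc_ops_init is optional and falls back to iecm_vc_ops_init()).  A
rough sketch of that contract from the device driver side; the my_* names are
illustrative only and not part of this series:

/* Assumed to fill every reg_ops hook iecm_api_init() checks:
 * ctlq_reg_init, vportq_reg_init, intr_reg_init, mb_intr_reg_init,
 * reset_reg_init and trigger_reset.
 */
static void my_reg_ops_init(struct iecm_adapter *adapter);

static int my_pci_probe(struct pci_dev *pdev,
			const struct pci_device_id *ent)
{
	struct iecm_adapter *adapter;
	int err;

	err = pcim_enable_device(pdev);
	if (err)
		return err;

	adapter = devm_kzalloc(&pdev->dev, sizeof(*adapter), GFP_KERNEL);
	if (!adapter)
		return -ENOMEM;

	adapter->dev_ops.reg_ops_init = my_reg_ops_init;
	/* dev_ops.vc_ops_init left NULL: iecm_vc_ops_init() is the default */

	return iecm_probe(pdev, ent, adapter);
}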

diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index a5ff9209d290..23eef4a9c7ca 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -23,7 +23,17 @@ static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)
  */
 static void iecm_intr_rel(struct iecm_adapter *adapter)
 {
-	/* stub */
+	if (!adapter->msix_entries)
+		return;
+	clear_bit(__IECM_MB_INTR_MODE, adapter->flags);
+	clear_bit(__IECM_MB_INTR_TRIGGER, adapter->flags);
+	iecm_mb_intr_rel_irq(adapter);
+
+	pci_free_irq_vectors(adapter->pdev);
+	kfree(adapter->msix_entries);
+	adapter->msix_entries = NULL;
+	kfree(adapter->req_vec_chunks);
+	adapter->req_vec_chunks = NULL;
 }
 
 /**
@@ -95,7 +105,53 @@ static void iecm_intr_distribute(struct iecm_adapter *adapter)
  */
 static int iecm_intr_req(struct iecm_adapter *adapter)
 {
-	/* stub */
+	int min_vectors, max_vectors, err = 0;
+	unsigned int vector;
+	int num_vecs;
+	int v_actual;
+
+	num_vecs = adapter->vports[0]->num_q_vectors +
+		   IECM_MAX_NONQ_VEC + IECM_MAX_RDMA_VEC;
+
+	min_vectors = IECM_MIN_VEC;
+#define IECM_MAX_EVV_MAPPED_VEC 16
+	max_vectors = min(num_vecs, IECM_MAX_EVV_MAPPED_VEC);
+
+	v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors,
+					 max_vectors, PCI_IRQ_MSIX);
+	if (v_actual < 0) {
+		dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n",
+			v_actual);
+		return v_actual;
+	}
+
+	adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry),
+					GFP_KERNEL);
+
+	if (!adapter->msix_entries) {
+		pci_free_irq_vectors(adapter->pdev);
+		return -ENOMEM;
+	}
+
+	for (vector = 0; vector < v_actual; vector++) {
+		adapter->msix_entries[vector].entry = vector;
+		adapter->msix_entries[vector].vector =
+			pci_irq_vector(adapter->pdev, vector);
+	}
+	adapter->num_msix_entries = v_actual;
+	adapter->num_req_msix = num_vecs;
+
+	iecm_intr_distribute(adapter);
+
+	err = iecm_mb_intr_init(adapter);
+	if (err)
+		goto intr_rel;
+	iecm_mb_irq_enable(adapter);
+	return err;
+
+intr_rel:
+	iecm_intr_rel(adapter);
+	return err;
 }
 
 /**
@@ -117,7 +173,18 @@ static int iecm_cfg_netdev(struct iecm_vport *vport)
  */
 static int iecm_cfg_hw(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct pci_dev *pdev = adapter->pdev;
+	struct iecm_hw *hw = &adapter->hw;
+
+	hw->hw_addr_len = pci_resource_len(pdev, 0);
+	hw->hw_addr = ioremap(pci_resource_start(pdev, 0), hw->hw_addr_len);
+
+	if (!hw->hw_addr)
+		return -EIO;
+
+	hw->back = adapter;
+
+	return 0;
 }
 
 /**
@@ -126,7 +193,15 @@ static int iecm_cfg_hw(struct iecm_adapter *adapter)
  */
 static int iecm_vport_res_alloc(struct iecm_vport *vport)
 {
-	/* stub */
+	if (iecm_vport_queues_alloc(vport))
+		return -ENOMEM;
+
+	if (iecm_vport_intr_alloc(vport)) {
+		iecm_vport_queues_rel(vport);
+		return -ENOMEM;
+	}
+
+	return 0;
 }
 
 /**
@@ -135,7 +210,8 @@ static int iecm_vport_res_alloc(struct iecm_vport *vport)
  */
 static void iecm_vport_res_free(struct iecm_vport *vport)
 {
-	/* stub */
+	iecm_vport_intr_rel(vport);
+	iecm_vport_queues_rel(vport);
 }
 
 /**
@@ -149,7 +225,22 @@ static void iecm_vport_res_free(struct iecm_vport *vport)
  */
 static int iecm_get_free_slot(void *array, int size, int curr)
 {
-	/* stub */
+	int **tmp_array = (int **)array;
+	int next;
+
+	if (curr < (size - 1) && !tmp_array[curr + 1]) {
+		next = curr + 1;
+	} else {
+		int i = 0;
+
+		while ((i < size) && (tmp_array[i]))
+			i++;
+		if (i == size)
+			next = IECM_NO_FREE_SLOT;
+		else
+			next = i;
+	}
+	return next;
 }
 
 /**
@@ -158,7 +249,9 @@ static int iecm_get_free_slot(void *array, int size, int curr)
  */
 struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_netdev_priv *np = netdev_priv(netdev);
+
+	return np->vport;
 }
 
 /**
@@ -167,7 +260,9 @@ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev)
  */
 struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_netdev_priv *np = netdev_priv(netdev);
+
+	return np->vport->adapter;
 }
 
 /**
@@ -200,7 +295,17 @@ static int iecm_stop(struct net_device *netdev)
  */
 static void iecm_vport_rel(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+
+	iecm_vport_stop(vport);
+	iecm_vport_res_free(vport);
+	iecm_deinit_rss(vport);
+	unregister_netdev(vport->netdev);
+	free_netdev(vport->netdev);
+	vport->netdev = NULL;
+	if (adapter->dev_ops.vc_ops.destroy_vport)
+		adapter->dev_ops.vc_ops.destroy_vport(vport);
+	kfree(vport);
 }
 
 /**
@@ -209,7 +314,19 @@ static void iecm_vport_rel(struct iecm_vport *vport)
  */
 static void iecm_vport_rel_all(struct iecm_adapter *adapter)
 {
-	/* stub */
+	int i;
+
+	if (!adapter->vports)
+		return;
+
+	for (i = 0; i < adapter->num_alloc_vport; i++) {
+		if (!adapter->vports[i])
+			continue;
+
+		iecm_vport_rel(adapter->vports[i]);
+		adapter->vports[i] = NULL;
+	}
+	adapter->num_alloc_vport = 0;
 }
 
 /**
@@ -233,7 +350,47 @@ void iecm_vport_set_hsplit(struct iecm_vport *vport,
 static struct iecm_vport *
 iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id)
 {
-	/* stub */
+	struct iecm_vport *vport = NULL;
+
+	if (adapter->next_vport == IECM_NO_FREE_SLOT)
+		return vport;
+
+	/* Need to protect the allocation of the vports at the adapter level */
+	mutex_lock(&adapter->sw_mutex);
+
+	vport = kzalloc(sizeof(*vport), GFP_KERNEL);
+	if (!vport)
+		goto unlock_adapter;
+
+	vport->adapter = adapter;
+	vport->idx = adapter->next_vport;
+	vport->compln_clean_budget = IECM_TX_COMPLQ_CLEAN_BUDGET;
+	adapter->num_alloc_vport++;
+	adapter->dev_ops.vc_ops.vport_init(vport, vport_id);
+
+	/* Setup default MSIX irq handler for the vport */
+	vport->irq_q_handler = iecm_vport_intr_clean_queues;
+	vport->q_vector_base = IECM_MAX_NONQ_VEC;
+
+	/* fill vport slot in the adapter struct */
+	adapter->vports[adapter->next_vport] = vport;
+	if (iecm_cfg_netdev(vport))
+		goto cfg_netdev_fail;
+
+	/* prepare adapter->next_vport for next use */
+	adapter->next_vport = iecm_get_free_slot(adapter->vports,
+						 adapter->num_alloc_vport,
+						 adapter->next_vport);
+
+	goto unlock_adapter;
+
+cfg_netdev_fail:
+	adapter->vports[adapter->next_vport] = NULL;
+	kfree(vport);
+	vport = NULL;
+unlock_adapter:
+	mutex_unlock(&adapter->sw_mutex);
+	return vport;
 }
 
 /**
@@ -243,7 +400,22 @@ iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id)
  */
 static void iecm_service_task(struct work_struct *work)
 {
-	/* stub */
+	struct iecm_adapter *adapter = container_of(work,
+						    struct iecm_adapter,
+						    serv_task.work);
+
+	if (test_bit(__IECM_MB_INTR_MODE, adapter->flags)) {
+		if (test_and_clear_bit(__IECM_MB_INTR_TRIGGER,
+				       adapter->flags)) {
+			iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0);
+			iecm_mb_irq_enable(adapter);
+		}
+	} else {
+		iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0);
+	}
+
+	queue_delayed_work(adapter->serv_wq, &adapter->serv_task,
+			   msecs_to_jiffies(300));
 }
 
 /**
@@ -285,7 +457,50 @@ static int iecm_vport_open(struct iecm_vport *vport)
  */
 static void iecm_init_task(struct work_struct *work)
 {
-	/* stub */
+	struct iecm_adapter *adapter = container_of(work,
+						    struct iecm_adapter,
+						    init_task.work);
+	struct iecm_vport *vport;
+	struct pci_dev *pdev;
+	int vport_id, err;
+
+	err = adapter->dev_ops.vc_ops.core_init(adapter, &vport_id);
+	if (err)
+		return;
+
+	pdev = adapter->pdev;
+	vport = iecm_vport_alloc(adapter, vport_id);
+	if (!vport) {
+		err = -EFAULT;
+		dev_err(&pdev->dev, "probe failed on vport setup:%d\n",
+			err);
+		return;
+	}
+	/* Start the service task before requesting vectors. This will ensure
+	 * the vector information response from the mailbox is handled
+	 */
+	queue_delayed_work(adapter->serv_wq, &adapter->serv_task,
+			   msecs_to_jiffies(5 * (pdev->devfn & 0x07)));
+	err = iecm_intr_req(adapter);
+	if (err) {
+		dev_err(&pdev->dev, "failed to enable interrupt vectors: %d\n",
+			err);
+		iecm_vport_rel(vport);
+		return;
+	}
+	/* Deal with major memory allocations for vport resources */
+	err = iecm_vport_res_alloc(vport);
+	if (err) {
+		dev_err(&pdev->dev, "failed to allocate resources: %d\n",
+			err);
+		iecm_vport_rel(vport);
+		return;
+	}
+
+	/* Once state is put into DOWN, driver is ready for dev_open */
+	adapter->state = __IECM_DOWN;
+	if (test_and_clear_bit(__IECM_UP_REQUESTED, adapter->flags))
+		iecm_vport_open(vport);
 }
 
 /**
@@ -296,7 +511,46 @@ static void iecm_init_task(struct work_struct *work)
  */
 static int iecm_api_init(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct iecm_reg_ops *reg_ops = &adapter->dev_ops.reg_ops;
+	struct pci_dev *pdev = adapter->pdev;
+
+	if (!adapter->dev_ops.reg_ops_init) {
+		dev_err(&pdev->dev, "Invalid device, register API init not defined\n");
+		return -EINVAL;
+	}
+	adapter->dev_ops.reg_ops_init(adapter);
+	if (!(reg_ops->ctlq_reg_init && reg_ops->vportq_reg_init &&
+	      reg_ops->intr_reg_init && reg_ops->mb_intr_reg_init &&
+	      reg_ops->reset_reg_init && reg_ops->trigger_reset)) {
+		dev_err(&pdev->dev, "Invalid device, missing one or more register functions\n");
+		return -EINVAL;
+	}
+
+	if (adapter->dev_ops.vc_ops_init) {
+		struct iecm_virtchnl_ops *vc_ops;
+
+		adapter->dev_ops.vc_ops_init(adapter);
+		vc_ops = &adapter->dev_ops.vc_ops;
+		if (!(vc_ops->core_init &&
+		      vc_ops->vport_init &&
+		      vc_ops->vport_queue_ids_init &&
+		      vc_ops->get_caps &&
+		      vc_ops->config_queues &&
+		      vc_ops->enable_queues &&
+		      vc_ops->disable_queues &&
+		      vc_ops->irq_map_unmap &&
+		      vc_ops->get_set_rss_lut &&
+		      vc_ops->get_set_rss_hash &&
+		      vc_ops->adjust_qs &&
+		      vc_ops->get_ptype)) {
+			dev_err(&pdev->dev, "Invalid device, missing one or more virtchnl functions\n");
+			return -EINVAL;
+		}
+	} else {
+		iecm_vc_ops_init(adapter);
+	}
+
+	return 0;
 }
 
 /**
@@ -308,7 +562,11 @@ static int iecm_api_init(struct iecm_adapter *adapter)
  */
 static void iecm_deinit_task(struct iecm_adapter *adapter)
 {
-	/* stub */
+	iecm_vport_rel_all(adapter);
+	cancel_delayed_work_sync(&adapter->serv_task);
+	iecm_deinit_dflt_mbx(adapter);
+	iecm_vport_params_buf_rel(adapter);
+	iecm_intr_rel(adapter);
 }
 
 /**
@@ -330,7 +588,13 @@ static int iecm_init_hard_reset(struct iecm_adapter *adapter)
  */
 static void iecm_vc_event_task(struct work_struct *work)
 {
-	/* stub */
+	struct iecm_adapter *adapter = container_of(work,
+						    struct iecm_adapter,
+						    vc_event_task.work);
+
+	if (test_bit(__IECM_HR_CORE_RESET, adapter->flags) ||
+	    test_bit(__IECM_HR_FUNC_RESET, adapter->flags))
+		iecm_init_hard_reset(adapter);
 }
 
 /**
@@ -359,7 +623,104 @@ int iecm_probe(struct pci_dev *pdev,
 	       const struct pci_device_id __always_unused *ent,
 	       struct iecm_adapter *adapter)
 {
-	/* stub */
+	int err;
+
+	adapter->pdev = pdev;
+	err = iecm_api_init(adapter);
+	if (err) {
+		dev_err(&pdev->dev, "Device API is incorrectly configured\n");
+		return err;
+	}
+
+	err = pcim_iomap_regions(pdev, BIT(IECM_BAR0), pci_name(pdev));
+	if (err) {
+		dev_err(&pdev->dev, "BAR0 I/O map error %d\n", err);
+		return err;
+	}
+
+	/* set up for high or low DMA */
+	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (err)
+		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	if (err) {
+		dev_err(&pdev->dev, "DMA configuration failed: 0x%x\n", err);
+		return err;
+	}
+
+	pci_enable_pcie_error_reporting(pdev);
+	pci_set_master(pdev);
+	pci_set_drvdata(pdev, adapter);
+
+	adapter->init_wq =
+		alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME);
+	if (!adapter->init_wq) {
+		dev_err(&pdev->dev, "Failed to allocate workqueue\n");
+		err = -ENOMEM;
+		goto err_wq_alloc;
+	}
+
+	adapter->serv_wq =
+		alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME);
+	if (!adapter->serv_wq) {
+		dev_err(&pdev->dev, "Failed to allocate workqueue\n");
+		err = -ENOMEM;
+		goto err_mbx_wq_alloc;
+	}
+	/* setup msglvl */
+	adapter->msg_enable = netif_msg_init(adapter->debug_msk,
+					     IECM_AVAIL_NETIF_M);
+
+	adapter->vports = kcalloc(IECM_MAX_NUM_VPORTS,
+				  sizeof(*adapter->vports), GFP_KERNEL);
+	if (!adapter->vports) {
+		err = -ENOMEM;
+		goto err_vport_alloc;
+	}
+
+	err = iecm_vport_params_buf_alloc(adapter);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to alloc vport params buffer: %d\n",
+			err);
+		goto err_mb_res;
+	}
+
+	err = iecm_cfg_hw(adapter);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure HW structure for adapter: %d\n",
+			err);
+		goto err_cfg_hw;
+	}
+
+	mutex_init(&adapter->sw_mutex);
+	mutex_init(&adapter->vc_msg_lock);
+	mutex_init(&adapter->reset_lock);
+	init_waitqueue_head(&adapter->vchnl_wq);
+
+	INIT_DELAYED_WORK(&adapter->serv_task, iecm_service_task);
+	INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task);
+	INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task);
+
+	mutex_lock(&adapter->reset_lock);
+	set_bit(__IECM_HR_DRV_LOAD, adapter->flags);
+	err = iecm_init_hard_reset(adapter);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to reset device: %d\n", err);
+		goto err_mb_init;
+	}
+
+	return 0;
+err_mb_init:
+err_cfg_hw:
+	iecm_vport_params_buf_rel(adapter);
+err_mb_res:
+	kfree(adapter->vports);
+err_vport_alloc:
+	destroy_workqueue(adapter->serv_wq);
+err_mbx_wq_alloc:
+	destroy_workqueue(adapter->init_wq);
+err_wq_alloc:
+	pci_disable_pcie_error_reporting(pdev);
+	return err;
 }
 EXPORT_SYMBOL(iecm_probe);
 
@@ -369,7 +730,22 @@ EXPORT_SYMBOL(iecm_probe);
  */
 void iecm_remove(struct pci_dev *pdev)
 {
-	/* stub */
+	struct iecm_adapter *adapter = pci_get_drvdata(pdev);
+
+	if (!adapter)
+		return;
+
+	iecm_deinit_task(adapter);
+	cancel_delayed_work_sync(&adapter->vc_event_task);
+	destroy_workqueue(adapter->serv_wq);
+	destroy_workqueue(adapter->init_wq);
+	kfree(adapter->vports);
+	kfree(adapter->vport_params_recvd);
+	kfree(adapter->vport_params_reqd);
+	mutex_destroy(&adapter->sw_mutex);
+	mutex_destroy(&adapter->vc_msg_lock);
+	mutex_destroy(&adapter->reset_lock);
+	pci_disable_pcie_error_reporting(pdev);
 }
 EXPORT_SYMBOL(iecm_remove);
 
@@ -379,7 +755,13 @@ EXPORT_SYMBOL(iecm_remove);
  */
 void iecm_shutdown(struct pci_dev *pdev)
 {
-	/* stub */
+	struct iecm_adapter *adapter;
+
+	adapter = pci_get_drvdata(pdev);
+	adapter->state = __IECM_REMOVE;
+
+	if (system_state == SYSTEM_POWER_OFF)
+		pci_set_power_state(pdev, PCI_D3hot);
 }
 EXPORT_SYMBOL(iecm_shutdown);
 
diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c
index 68d9f2e6445b..3f26fbdf96f6 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_main.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_main.c
@@ -21,7 +21,10 @@ MODULE_LICENSE("GPL v2");
  */
 static int __init iecm_module_init(void)
 {
-	/* stub */
+	pr_info("%s\n", iecm_driver_string);
+	pr_info("%s\n", iecm_copyright);
+
+	return 0;
 }
 module_init(iecm_module_init);
 
@@ -33,6 +36,6 @@ module_init(iecm_module_init);
  */
 static void __exit iecm_module_exit(void)
 {
-	/* stub */
+	pr_info("module unloaded\n");
 }
 module_exit(iecm_module_exit);
diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
index 6df39810264a..e2da7dbc2ced 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -998,7 +998,11 @@ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget)
 irqreturn_t
 iecm_vport_intr_clean_queues(int __always_unused irq, void *data)
 {
-	/* stub */
+	struct iecm_q_vector *q_vector = (struct iecm_q_vector *)data;
+
+	napi_schedule(&q_vector->napi);
+
+	return IRQ_HANDLED;
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
index 7fa8c06c5d7d..838bdeba8e61 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -424,7 +424,46 @@ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter)
  */
 int iecm_init_dflt_mbx(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct iecm_ctlq_create_info ctlq_info[] = {
+		{
+			.type = IECM_CTLQ_TYPE_MAILBOX_TX,
+			.id = IECM_DFLT_MBX_ID,
+			.len = IECM_DFLT_MBX_Q_LEN,
+			.buf_size = IECM_DFLT_MBX_BUF_SIZE
+		},
+		{
+			.type = IECM_CTLQ_TYPE_MAILBOX_RX,
+			.id = IECM_DFLT_MBX_ID,
+			.len = IECM_DFLT_MBX_Q_LEN,
+			.buf_size = IECM_DFLT_MBX_BUF_SIZE
+		}
+	};
+	struct iecm_hw *hw = &adapter->hw;
+	enum iecm_status status;
+
+	adapter->dev_ops.reg_ops.ctlq_reg_init(ctlq_info);
+
+#define NUM_Q 2
+	status = iecm_ctlq_init(hw, NUM_Q, ctlq_info);
+	if (status)
+		return -EINVAL;
+
+	hw->asq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_TX,
+				 IECM_DFLT_MBX_ID);
+	hw->arq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_RX,
+				 IECM_DFLT_MBX_ID);
+
+	if (!hw->asq || !hw->arq) {
+		iecm_ctlq_deinit(hw);
+		return -ENOENT;
+	}
+	adapter->state = __IECM_STARTUP;
+	/* Skew the delay for init tasks for each function based on fn number
+	 * to prevent every function from making the same call simultaneously.
+	 */
+	queue_delayed_work(adapter->init_wq, &adapter->init_task,
+			   msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07)));
+	return 0;
 }
 
 /**
@@ -446,7 +485,15 @@ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter)
  */
 void iecm_vport_params_buf_rel(struct iecm_adapter *adapter)
 {
-	/* stub */
+	int i = 0;
+
+	for (i = 0; i < IECM_MAX_NUM_VPORTS; i++) {
+		kfree(adapter->vport_params_recvd[i]);
+		kfree(adapter->vport_params_reqd[i]);
+	}
+
+	kfree(adapter->caps);
+	kfree(adapter->config_data.req_qs_chunks);
 }
 
 /**
@@ -572,6 +619,24 @@ static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag)
  */
 void iecm_vc_ops_init(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct iecm_virtchnl_ops *vc_ops = &adapter->dev_ops.vc_ops;
+
+	vc_ops->core_init = iecm_vc_core_init;
+	vc_ops->vport_init = iecm_vport_init;
+	vc_ops->vport_queue_ids_init = iecm_vport_queue_ids_init;
+	vc_ops->get_caps = iecm_send_get_caps_msg;
+	vc_ops->is_cap_ena = iecm_is_capability_ena;
+	vc_ops->config_queues = iecm_send_config_queues_msg;
+	vc_ops->enable_queues = iecm_send_enable_queues_msg;
+	vc_ops->disable_queues = iecm_send_disable_queues_msg;
+	vc_ops->irq_map_unmap = iecm_send_map_unmap_queue_vector_msg;
+	vc_ops->enable_vport = iecm_send_enable_vport_msg;
+	vc_ops->disable_vport = iecm_send_disable_vport_msg;
+	vc_ops->destroy_vport = iecm_send_destroy_vport_msg;
+	vc_ops->get_ptype = iecm_send_get_rx_ptype_msg;
+	vc_ops->get_set_rss_lut = iecm_send_get_set_rss_lut_msg;
+	vc_ops->get_set_rss_hash = iecm_send_get_set_rss_hash_msg;
+	vc_ops->adjust_qs = iecm_vport_adjust_qs;
+	vc_ops->recv_mbx_msg = NULL;
 }
 EXPORT_SYMBOL(iecm_vc_ops_init);
-- 
2.26.2
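
A small detail in the init flow above that is easy to read past: the
delayed-work skew.  pdev->devfn packs the PCI slot and function number, and
(devfn & 0x07) extracts just the function number (0 through 7), so
msecs_to_jiffies(5 * (pdev->devfn & 0x07)) staggers the functions of a single
device at 0, 5, 10, ... 35 ms.  That keeps every function from issuing its
first mailbox request at the same instant; the same expression is used both
when iecm_init_dflt_mbx() queues the init task and when the init task kicks
off the service task.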



* [net-next v5 06/15] iecm: Implement mailbox functionality
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (4 preceding siblings ...)
  2020-08-24 17:32 ` [net-next v5 05/15] iecm: Add basic netdevice functionality Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 20:35   ` Brady, Alan
  2020-08-24 17:32 ` [net-next v5 07/15] iecm: Implement virtchnl commands Tony Nguyen
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Implement mailbox setup, teardown, and commands.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 .../net/ethernet/intel/iecm/iecm_controlq.c   | 498 +++++++++++++++++-
 .../ethernet/intel/iecm/iecm_controlq_setup.c | 105 +++-
 drivers/net/ethernet/intel/iecm/iecm_lib.c    |  94 +++-
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   | 414 ++++++++++++++-
 4 files changed, 1082 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c
index 585392e4d72a..05451d96ee10 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_controlq.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c
@@ -12,7 +12,15 @@ static void
 iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq,
 		     struct iecm_ctlq_create_info *q_create_info)
 {
-	/* stub */
+	/* set head and tail registers in our local struct */
+	cq->reg.head = q_create_info->reg.head;
+	cq->reg.tail = q_create_info->reg.tail;
+	cq->reg.len = q_create_info->reg.len;
+	cq->reg.bah = q_create_info->reg.bah;
+	cq->reg.bal = q_create_info->reg.bal;
+	cq->reg.len_mask = q_create_info->reg.len_mask;
+	cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask;
+	cq->reg.head_mask = q_create_info->reg.head_mask;
 }
 
 /**
@@ -28,7 +36,32 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw,
 					    struct iecm_ctlq_info *cq,
 					    bool is_rxq)
 {
-	/* stub */
+	u32 reg = 0;
+
+	if (is_rxq)
+		/* Update tail to post pre-allocated buffers for Rx queues */
+		wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));
+	else
+		wr32(hw, cq->reg.tail, 0);
+
+	/* For non-mailbox control queues, only TAIL needs to be set */
+	if (cq->q_id != -1)
+		return 0;
+
+	/* Clear Head for both send or receive */
+	wr32(hw, cq->reg.head, 0);
+
+	/* set starting point */
+	wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa));
+	wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa));
+	wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, cq->reg.bah);
+	if (reg != upper_32_bits(cq->desc_ring.pa))
+		return IECM_ERR_CTLQ_ERROR;
+
+	return 0;
 }
 
 /**
@@ -40,7 +73,30 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw,
  */
 static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	int i = 0;
+
+	for (i = 0; i < cq->ring_size; i++) {
+		struct iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i);
+		struct iecm_dma_mem *bi = cq->bi.rx_buff[i];
+
+		/* No buffer to post to descriptor, continue */
+		if (!bi)
+			continue;
+
+		desc->flags =
+			cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+		desc->opcode = 0;
+		desc->datalen = (__le16)cpu_to_le16(bi->size);
+		desc->ret_val = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.indirect.addr_high =
+			cpu_to_le32(upper_32_bits(bi->pa));
+		desc->params.indirect.addr_low =
+			cpu_to_le32(lower_32_bits(bi->pa));
+		desc->params.indirect.param0 = 0;
+		desc->params.indirect.param1 = 0;
+	}
 }
 
 /**
@@ -52,7 +108,20 @@ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
  */
 static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	mutex_lock(&cq->cq_lock);
+
+	if (!cq->ring_size)
+		goto shutdown_sq_out;
+
+	/* free ring buffers and the ring itself */
+	iecm_ctlq_dealloc_ring_res(hw, cq);
+
+	/* Set ring_size to 0 to indicate uninitialized queue */
+	cq->ring_size = 0;
+
+shutdown_sq_out:
+	mutex_unlock(&cq->cq_lock);
+	mutex_destroy(&cq->cq_lock);
 }
 
 /**
@@ -71,7 +140,74 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw,
 			       struct iecm_ctlq_create_info *qinfo,
 			       struct iecm_ctlq_info **cq_out)
 {
-	/* stub */
+	enum iecm_status status = 0;
+	bool is_rxq = false;
+
+	if (!qinfo->len || !qinfo->buf_size ||
+	    qinfo->len > IECM_CTLQ_MAX_RING_SIZE ||
+	    qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN)
+		return IECM_ERR_CFG;
+
+	*cq_out = kcalloc(1, sizeof(struct iecm_ctlq_info), GFP_KERNEL);
+	if (!(*cq_out))
+		return IECM_ERR_NO_MEMORY;
+
+	(*cq_out)->cq_type = qinfo->type;
+	(*cq_out)->q_id = qinfo->id;
+	(*cq_out)->buf_size = qinfo->buf_size;
+	(*cq_out)->ring_size = qinfo->len;
+
+	(*cq_out)->next_to_use = 0;
+	(*cq_out)->next_to_clean = 0;
+	(*cq_out)->next_to_post = (*cq_out)->ring_size - 1;
+
+	switch (qinfo->type) {
+	case IECM_CTLQ_TYPE_MAILBOX_RX:
+		is_rxq = true;
+		fallthrough;
+	case IECM_CTLQ_TYPE_MAILBOX_TX:
+		status = iecm_ctlq_alloc_ring_res(hw, *cq_out);
+		break;
+	default:
+		status = IECM_ERR_PARAM;
+		break;
+	}
+
+	if (status)
+		goto init_free_q;
+
+	if (is_rxq) {
+		iecm_ctlq_init_rxq_bufs(*cq_out);
+	} else {
+		/* Allocate the array of msg pointers for TX queues */
+		(*cq_out)->bi.tx_msg = kcalloc(qinfo->len,
+					       sizeof(struct iecm_ctlq_msg *),
+					       GFP_KERNEL);
+		if (!(*cq_out)->bi.tx_msg) {
+			status = IECM_ERR_NO_MEMORY;
+			goto init_dealloc_q_mem;
+		}
+	}
+
+	iecm_ctlq_setup_regs(*cq_out, qinfo);
+
+	status = iecm_ctlq_init_regs(hw, *cq_out, is_rxq);
+	if (status)
+		goto init_dealloc_q_mem;
+
+	mutex_init(&(*cq_out)->cq_lock);
+
+	list_add(&(*cq_out)->cq_list, &hw->cq_list_head);
+
+	return status;
+
+init_dealloc_q_mem:
+	/* free ring buffers and the ring itself */
+	iecm_ctlq_dealloc_ring_res(hw, *cq_out);
+init_free_q:
+	kfree(*cq_out);
+
+	return status;
 }
 
 /**
@@ -82,7 +218,9 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw,
 void iecm_ctlq_remove(struct iecm_hw *hw,
 		      struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	list_del(&cq->cq_list);
+	iecm_ctlq_shutdown(hw, cq);
+	kfree(cq);
 }
 
 /**
@@ -99,7 +237,27 @@ void iecm_ctlq_remove(struct iecm_hw *hw,
 enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
 				struct iecm_ctlq_create_info *q_info)
 {
-	/* stub */
+	struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+	enum iecm_status ret_code = 0;
+	int i = 0;
+
+	INIT_LIST_HEAD(&hw->cq_list_head);
+
+	for (i = 0; i < num_q; i++) {
+		struct iecm_ctlq_create_info *qinfo = q_info + i;
+
+		ret_code = iecm_ctlq_add(hw, qinfo, &cq);
+		if (ret_code)
+			goto init_destroy_qs;
+	}
+
+	return ret_code;
+
+init_destroy_qs:
+	list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
+		iecm_ctlq_remove(hw, cq);
+
+	return ret_code;
 }
 
 /**
@@ -108,7 +266,13 @@ enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
  */
 enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw)
 {
-	/* stub */
+	struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+	enum iecm_status ret_code = 0;
+
+	list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
+		iecm_ctlq_remove(hw, cq);
+
+	return ret_code;
 }
 
 /**
@@ -128,7 +292,79 @@ enum iecm_status iecm_ctlq_send(struct iecm_hw *hw,
 				u16 num_q_msg,
 				struct iecm_ctlq_msg q_msg[])
 {
-	/* stub */
+	enum iecm_status status = 0;
+	struct iecm_ctlq_desc *desc;
+	int num_desc_avail = 0;
+	int i = 0;
+
+	if (!cq || !cq->ring_size)
+		return IECM_ERR_CTLQ_EMPTY;
+
+	mutex_lock(&cq->cq_lock);
+
+	/* Ensure there are enough descriptors to send all messages */
+	num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq);
+	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
+		status = IECM_ERR_CTLQ_FULL;
+		goto sq_send_command_out;
+	}
+
+	for (i = 0; i < num_q_msg; i++) {
+		struct iecm_ctlq_msg *msg = &q_msg[i];
+		u64 msg_cookie;
+
+		desc = IECM_CTLQ_DESC(cq, cq->next_to_use);
+
+		desc->opcode = cpu_to_le16(msg->opcode);
+		desc->pfid_vfid = cpu_to_le16(msg->func_id);
+
+		msg_cookie = *(u64 *)&msg->cookie;
+		desc->cookie_high =
+			cpu_to_le32(upper_32_bits(msg_cookie));
+		desc->cookie_low =
+			cpu_to_le32(lower_32_bits(msg_cookie));
+
+		if (msg->data_len) {
+			struct iecm_dma_mem *buff = msg->ctx.indirect.payload;
+
+			desc->datalen = cpu_to_le16(msg->data_len);
+			desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_BUF);
+			desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_RD);
+
+			/* Update the address values in the desc with the pa
+			 * value for respective buffer
+			 */
+			desc->params.indirect.addr_high =
+				cpu_to_le32(upper_32_bits(buff->pa));
+			desc->params.indirect.addr_low =
+				cpu_to_le32(lower_32_bits(buff->pa));
+
+			memcpy(&desc->params, msg->ctx.indirect.context,
+			       IECM_INDIRECT_CTX_SIZE);
+		} else {
+			memcpy(&desc->params, msg->ctx.direct,
+			       IECM_DIRECT_CTX_SIZE);
+		}
+
+		/* Store buffer info */
+		cq->bi.tx_msg[cq->next_to_use] = msg;
+
+		(cq->next_to_use)++;
+		if (cq->next_to_use == cq->ring_size)
+			cq->next_to_use = 0;
+	}
+
+	/* Force memory write to complete before letting hardware
+	 * know that there are new descriptors to fetch.
+	 */
+	dma_wmb();
+
+	wr32(hw, cq->reg.tail, cq->next_to_use);
+
+sq_send_command_out:
+	mutex_unlock(&cq->cq_lock);
+
+	return status;
 }
 
 /**
@@ -150,7 +386,58 @@ enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq,
 				    u16 *clean_count,
 				    struct iecm_ctlq_msg *msg_status[])
 {
-	/* stub */
+	enum iecm_status ret = 0;
+	struct iecm_ctlq_desc *desc;
+	u16 i = 0, num_to_clean;
+	u16 ntc, desc_err;
+
+	if (!cq || !cq->ring_size)
+		return IECM_ERR_CTLQ_EMPTY;
+
+	if (*clean_count == 0)
+		return 0;
+	if (*clean_count > cq->ring_size)
+		return IECM_ERR_PARAM;
+
+	mutex_lock(&cq->cq_lock);
+
+	ntc = cq->next_to_clean;
+
+	num_to_clean = *clean_count;
+
+	for (i = 0; i < num_to_clean; i++) {
+		/* Fetch next descriptor and check if marked as done */
+		desc = IECM_CTLQ_DESC(cq, ntc);
+		if (!(le16_to_cpu(desc->flags) & IECM_CTLQ_FLAG_DD))
+			break;
+
+		desc_err = le16_to_cpu(desc->ret_val);
+		if (desc_err) {
+			/* strip off FW internal code */
+			desc_err &= 0xff;
+		}
+
+		msg_status[i] = cq->bi.tx_msg[ntc];
+		msg_status[i]->status = desc_err;
+
+		cq->bi.tx_msg[ntc] = NULL;
+
+		/* Zero out any stale data */
+		memset(desc, 0, sizeof(*desc));
+
+		ntc++;
+		if (ntc == cq->ring_size)
+			ntc = 0;
+	}
+
+	cq->next_to_clean = ntc;
+
+	mutex_unlock(&cq->cq_lock);
+
+	/* Return number of descriptors actually cleaned */
+	*clean_count = i;
+
+	return ret;
 }
 
 /**
@@ -173,7 +460,112 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
 					 u16 *buff_count,
 					 struct iecm_dma_mem **buffs)
 {
-	/* stub */
+	enum iecm_status status = 0;
+	struct iecm_ctlq_desc *desc;
+	u16 ntp = cq->next_to_post;
+	bool buffs_avail = false;
+	u16 tbp = ntp + 1;
+	int i = 0;
+
+	if (*buff_count > cq->ring_size)
+		return IECM_ERR_PARAM;
+
+	if (*buff_count > 0)
+		buffs_avail = true;
+
+	mutex_lock(&cq->cq_lock);
+
+	if (tbp >= cq->ring_size)
+		tbp = 0;
+
+	if (tbp == cq->next_to_clean)
+		/* Nothing to do */
+		goto post_buffs_out;
+
+	/* Post buffers for as many as provided or up until the last one used */
+	while (ntp != cq->next_to_clean) {
+		desc = IECM_CTLQ_DESC(cq, ntp);
+
+		if (cq->bi.rx_buff[ntp])
+			goto fill_desc;
+		if (!buffs_avail) {
+			/* If the caller hasn't given us any buffers or
+			 * there are none left, search the ring itself
+			 * for an available buffer to move to this
+			 * entry starting at the next entry in the ring
+			 */
+			tbp = ntp + 1;
+
+			/* Wrap ring if necessary */
+			if (tbp >= cq->ring_size)
+				tbp = 0;
+
+			while (tbp != cq->next_to_clean) {
+				if (cq->bi.rx_buff[tbp]) {
+					cq->bi.rx_buff[ntp] =
+						cq->bi.rx_buff[tbp];
+					cq->bi.rx_buff[tbp] = NULL;
+
+					/* Found a buffer, no need to
+					 * search anymore
+					 */
+					break;
+				}
+
+				/* Wrap ring if necessary */
+				tbp++;
+				if (tbp >= cq->ring_size)
+					tbp = 0;
+			}
+
+			if (tbp == cq->next_to_clean)
+				goto post_buffs_out;
+		} else {
+			/* Give back pointer to DMA buffer */
+			cq->bi.rx_buff[ntp] = buffs[i];
+			i++;
+
+			if (i >= *buff_count)
+				buffs_avail = false;
+		}
+
+fill_desc:
+		desc->flags =
+			cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+
+		/* Post buffers to descriptor */
+		desc->datalen = cpu_to_le16(cq->bi.rx_buff[ntp]->size);
+		desc->params.indirect.addr_high =
+			cpu_to_le32(upper_32_bits(cq->bi.rx_buff[ntp]->pa));
+		desc->params.indirect.addr_low =
+			cpu_to_le32(lower_32_bits(cq->bi.rx_buff[ntp]->pa));
+
+		ntp++;
+		if (ntp == cq->ring_size)
+			ntp = 0;
+	}
+
+post_buffs_out:
+	/* Only update tail if buffers were actually posted */
+	if (cq->next_to_post != ntp) {
+		if (ntp)
+			/* Update next_to_post to ntp - 1 since current ntp
+			 * will not have a buffer
+			 */
+			cq->next_to_post = ntp - 1;
+		else
+			/* Wrap to end of the ring since current ntp is 0 */
+			cq->next_to_post = cq->ring_size - 1;
+
+		wr32(hw, cq->reg.tail, cq->next_to_post);
+	}
+
+	mutex_unlock(&cq->cq_lock);
+
+	/* return the number of buffers that were not posted */
+	*buff_count = *buff_count - i;
+
+	return status;
 }
 
 /**
@@ -190,5 +582,87 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
 enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq,
 				u16 *num_q_msg, struct iecm_ctlq_msg *q_msg)
 {
-	/* stub */
+	u16 num_to_clean, ntc, ret_val, flags;
+	enum iecm_status ret_code = 0;
+	struct iecm_ctlq_desc *desc;
+	u16 i = 0;
+
+	if (!cq || !cq->ring_size)
+		return IECM_ERR_CTLQ_EMPTY;
+
+	if (*num_q_msg == 0)
+		return 0;
+	else if (*num_q_msg > cq->ring_size)
+		return IECM_ERR_PARAM;
+
+	/* take the lock before we start messing with the ring */
+	mutex_lock(&cq->cq_lock);
+
+	ntc = cq->next_to_clean;
+
+	num_to_clean = *num_q_msg;
+
+	for (i = 0; i < num_to_clean; i++) {
+		u64 msg_cookie;
+
+		/* Fetch next descriptor and check if marked as done */
+		desc = IECM_CTLQ_DESC(cq, ntc);
+		flags = le16_to_cpu(desc->flags);
+
+		if (!(flags & IECM_CTLQ_FLAG_DD))
+			break;
+
+		ret_val = le16_to_cpu(desc->ret_val);
+
+		q_msg[i].vmvf_type = (flags &
+				      (IECM_CTLQ_FLAG_FTYPE_VM |
+				       IECM_CTLQ_FLAG_FTYPE_PF)) >>
+				      IECM_CTLQ_FLAG_FTYPE_S;
+
+		if (flags & IECM_CTLQ_FLAG_ERR)
+			ret_code = IECM_ERR_CTLQ_ERROR;
+
+		msg_cookie = (u64)le32_to_cpu(desc->cookie_high) << 32;
+		msg_cookie |= (u64)le32_to_cpu(desc->cookie_low);
+		memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64));
+
+		q_msg[i].opcode = le16_to_cpu(desc->opcode);
+		q_msg[i].data_len = le16_to_cpu(desc->datalen);
+		q_msg[i].status = ret_val;
+
+		if (desc->datalen) {
+			memcpy(q_msg[i].ctx.indirect.context,
+			       &desc->params.indirect, IECM_INDIRECT_CTX_SIZE);
+
+			/* Assign pointer to DMA buffer to ctlq_msg array
+			 * to be given to upper layer
+			 */
+			q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
+
+			/* Zero out pointer to DMA buffer info;
+			 * will be repopulated by post buffers API
+			 */
+			cq->bi.rx_buff[ntc] = NULL;
+		} else {
+			memcpy(q_msg[i].ctx.direct, desc->params.raw,
+			       IECM_DIRECT_CTX_SIZE);
+		}
+
+		/* Zero out stale data in descriptor */
+		memset(desc, 0, sizeof(struct iecm_ctlq_desc));
+
+		ntc++;
+		if (ntc == cq->ring_size)
+			ntc = 0;
+	}
+
+	cq->next_to_clean = ntc;
+
+	mutex_unlock(&cq->cq_lock);
+
+	*num_q_msg = i;
+	if (*num_q_msg == 0)
+		ret_code = IECM_ERR_CTLQ_NO_WORK;
+
+	return ret_code;
 }
diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
index d4402be5aa7f..c6f5c935f803 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
@@ -12,7 +12,13 @@ static enum iecm_status
 iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw,
 			  struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc);
+
+	cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size);
+	if (!cq->desc_ring.va)
+		return IECM_ERR_NO_MEMORY;
+
+	return 0;
 }
 
 /**
@@ -26,7 +32,52 @@ iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw,
 static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw,
 					     struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	int i = 0;
+
+	/* Do not allocate DMA buffers for transmit queues */
+	if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+		return 0;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->bi.rx_buff = kcalloc(cq->ring_size, sizeof(struct iecm_dma_mem *),
+				 GFP_KERNEL);
+	if (!cq->bi.rx_buff)
+		return IECM_ERR_NO_MEMORY;
+
+	/* allocate the mapped buffers (except for the last one) */
+	for (i = 0; i < cq->ring_size - 1; i++) {
+		struct iecm_dma_mem *bi;
+		int num = 1; /* number of iecm_dma_mem to be allocated */
+
+		cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct iecm_dma_mem),
+					    GFP_KERNEL);
+		if (!cq->bi.rx_buff[i])
+			goto unwind_alloc_cq_bufs;
+
+		bi = cq->bi.rx_buff[i];
+
+		bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size);
+		if (!bi->va) {
+			/* unwind will not free the failed entry */
+			kfree(cq->bi.rx_buff[i]);
+			goto unwind_alloc_cq_bufs;
+		}
+	}
+
+	return 0;
+
+unwind_alloc_cq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--) {
+		iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+		kfree(cq->bi.rx_buff[i]);
+	}
+	kfree(cq->bi.rx_buff);
+
+	return IECM_ERR_NO_MEMORY;
 }
 
 /**
@@ -40,7 +91,7 @@ static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw,
 static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw,
 				     struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	iecm_free_dma_mem(hw, &cq->desc_ring);
 }
 
 /**
@@ -53,7 +104,26 @@ static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw,
  */
 static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	void *bi;
+
+	if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) {
+		int i;
+
+		/* free DMA buffers for Rx queues */
+		for (i = 0; i < cq->ring_size; i++) {
+			if (cq->bi.rx_buff[i]) {
+				iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+				kfree(cq->bi.rx_buff[i]);
+			}
+		}
+
+		bi = (void *)cq->bi.rx_buff;
+	} else {
+		bi = (void *)cq->bi.tx_msg;
+	}
+
+	/* free the buffer header */
+	kfree(bi);
 }
 
 /**
@@ -65,7 +135,9 @@ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
  */
 void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	/* free ring buffers and the ring itself */
+	iecm_ctlq_free_bufs(hw, cq);
+	iecm_ctlq_free_desc_ring(hw, cq);
 }
 
 /**
@@ -79,5 +151,26 @@ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
 enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw,
 					  struct iecm_ctlq_info *cq)
 {
-	/* stub */
+	enum iecm_status ret_code;
+
+	/* verify input for valid configuration */
+	if (!cq->ring_size || !cq->buf_size)
+		return IECM_ERR_CFG;
+
+	/* allocate the ring memory */
+	ret_code = iecm_ctlq_alloc_desc_ring(hw, cq);
+	if (ret_code)
+		return ret_code;
+
+	/* allocate buffers in the rings */
+	ret_code = iecm_ctlq_alloc_bufs(hw, cq);
+	if (ret_code)
+		goto iecm_init_cq_free_ring;
+
+	/* success! */
+	return 0;
+
+iecm_init_cq_free_ring:
+	iecm_free_dma_mem(hw, &cq->desc_ring);
+	return ret_code;
 }
diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index 23eef4a9c7ca..7151aa5fa95c 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -162,7 +162,82 @@ static int iecm_intr_req(struct iecm_adapter *adapter)
  */
 static int iecm_cfg_netdev(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	netdev_features_t dflt_features;
+	netdev_features_t offloads = 0;
+	struct iecm_netdev_priv *np;
+	struct net_device *netdev;
+	int err;
+
+	netdev = alloc_etherdev_mqs(sizeof(struct iecm_netdev_priv),
+				    IECM_MAX_Q, IECM_MAX_Q);
+	if (!netdev)
+		return -ENOMEM;
+	vport->netdev = netdev;
+	np = netdev_priv(netdev);
+	np->vport = vport;
+
+	if (!is_valid_ether_addr(vport->default_mac_addr)) {
+		eth_hw_addr_random(netdev);
+		ether_addr_copy(vport->default_mac_addr, netdev->dev_addr);
+	} else {
+		ether_addr_copy(netdev->dev_addr, vport->default_mac_addr);
+		ether_addr_copy(netdev->perm_addr, vport->default_mac_addr);
+	}
+
+	/* assign netdev_ops */
+	if (iecm_is_queue_model_split(vport->txq_model))
+		netdev->netdev_ops = &iecm_netdev_ops_splitq;
+	else
+		netdev->netdev_ops = &iecm_netdev_ops_singleq;
+
+	/* setup watchdog timeout value to be 5 second */
+	netdev->watchdog_timeo = 5 * HZ;
+
+	/* configure default MTU size */
+	netdev->min_mtu = ETH_MIN_MTU;
+	netdev->max_mtu = vport->max_mtu;
+
+	dflt_features = NETIF_F_SG	|
+			NETIF_F_HIGHDMA	|
+			NETIF_F_RXHASH;
+
+	if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_STATELESS_OFFLOADS)) {
+		dflt_features |=
+			NETIF_F_RXCSUM |
+			NETIF_F_IP_CSUM |
+			NETIF_F_IPV6_CSUM |
+			0;
+		offloads |= NETIF_F_TSO |
+			    NETIF_F_TSO6;
+	}
+	if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_UDP_SEG_OFFLOAD))
+		offloads |= NETIF_F_GSO_UDP_L4;
+
+	netdev->features |= dflt_features;
+	netdev->hw_features |= dflt_features | offloads;
+	netdev->hw_enc_features |= dflt_features | offloads;
+
+	iecm_set_ethtool_ops(netdev);
+	SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
+
+	/* carrier off on init to avoid Tx hangs */
+	netif_carrier_off(netdev);
+
+	/* make sure transmit queues start off as stopped */
+	netif_tx_stop_all_queues(netdev);
+
+	/* register last */
+	err = register_netdev(netdev);
+	if (err)
+		goto err;
+
+	return 0;
+err:
+	free_netdev(vport->netdev);
+	vport->netdev = NULL;
+
+	return err;
 }
 
 /**
@@ -796,12 +871,25 @@ static int iecm_change_mtu(struct net_device *netdev, int new_mtu)
 
 void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size)
 {
-	/* stub */
+	size_t sz = ALIGN(size, 4096);
+	struct iecm_adapter *pf = hw->back;
+
+	mem->va = dma_alloc_coherent(&pf->pdev->dev, sz,
+				     &mem->pa, GFP_KERNEL | __GFP_ZERO);
+	mem->size = size;
+
+	return mem->va;
 }
 
 void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem)
 {
-	/* stub */
+	struct iecm_adapter *pf = hw->back;
+
+	dma_free_coherent(&pf->pdev->dev, mem->size,
+			  mem->va, mem->pa);
+	mem->size = 0;
+	mem->va = NULL;
+	mem->pa = 0;
 }
 
 static const struct net_device_ops iecm_netdev_ops_splitq = {
diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
index 838bdeba8e61..86de33c308e3 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -39,7 +39,42 @@ struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = {
  */
 static void iecm_recv_event_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	enum virtchnl_link_speed link_speed;
+	struct virtchnl_pf_event *vpe;
+
+	vpe = (struct virtchnl_pf_event *)vport->adapter->vc_msg;
+
+	switch (vpe->event) {
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		link_speed = vpe->event_data.link_event.link_speed;
+		adapter->link_speed = link_speed;
+		if (adapter->link_up !=
+		    vpe->event_data.link_event.link_status) {
+			adapter->link_up =
+				vpe->event_data.link_event.link_status;
+			if (adapter->link_up) {
+				netif_tx_start_all_queues(vport->netdev);
+				netif_carrier_on(vport->netdev);
+			} else {
+				netif_tx_stop_all_queues(vport->netdev);
+				netif_carrier_off(vport->netdev);
+			}
+		}
+		break;
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		mutex_lock(&adapter->reset_lock);
+		set_bit(__IECM_HR_CORE_RESET, adapter->flags);
+		queue_delayed_work(adapter->vc_event_wq,
+				   &adapter->vc_event_task,
+				   msecs_to_jiffies(10));
+		break;
+	default:
+		dev_err(&vport->adapter->pdev->dev,
+			"Unknown event %d from PF\n", vpe->event);
+		break;
+	}
+	mutex_unlock(&vport->adapter->vc_msg_lock);
 }
 
 /**
@@ -52,7 +87,29 @@ static void iecm_recv_event_msg(struct iecm_vport *vport)
  */
 static int iecm_mb_clean(struct iecm_adapter *adapter)
 {
-	/* stub */
+	u16 i, num_q_msg = IECM_DFLT_MBX_Q_LEN;
+	struct iecm_ctlq_msg **q_msg;
+	struct iecm_dma_mem *dma_mem;
+	int err = 0;
+
+	q_msg = kcalloc(num_q_msg, sizeof(struct iecm_ctlq_msg *), GFP_KERNEL);
+	if (!q_msg)
+		return -ENOMEM;
+
+	err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+	if (err)
+		goto error;
+
+	for (i = 0; i < num_q_msg; i++) {
+		dma_mem = q_msg[i]->ctx.indirect.payload;
+		if (dma_mem)
+			dmam_free_coherent(&adapter->pdev->dev, dma_mem->size,
+					   dma_mem->va, dma_mem->pa);
+		kfree(q_msg[i]);
+	}
+error:
+	kfree(q_msg);
+	return err;
 }
 
 /**
@@ -69,7 +126,53 @@ static int iecm_mb_clean(struct iecm_adapter *adapter)
 int iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op,
 		     u16 msg_size, u8 *msg)
 {
-	/* stub */
+	struct iecm_ctlq_msg *ctlq_msg;
+	struct iecm_dma_mem *dma_mem;
+	int err = 0;
+
+	err = iecm_mb_clean(adapter);
+	if (err)
+		return err;
+
+	ctlq_msg = kzalloc(sizeof(*ctlq_msg), GFP_KERNEL);
+	if (!ctlq_msg)
+		return -ENOMEM;
+
+	dma_mem = kzalloc(sizeof(*dma_mem), GFP_KERNEL);
+	if (!dma_mem) {
+		err = -ENOMEM;
+		goto dma_mem_error;
+	}
+
+	ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_cp;
+	ctlq_msg->func_id = 0;
+	ctlq_msg->data_len = msg_size;
+	ctlq_msg->cookie.mbx.chnl_opcode = op;
+	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+	dma_mem->size = IECM_DFLT_MBX_BUF_SIZE;
+	dma_mem->va = dmam_alloc_coherent(&adapter->pdev->dev, dma_mem->size,
+					  &dma_mem->pa, GFP_KERNEL);
+	if (!dma_mem->va) {
+		err = -ENOMEM;
+		goto dma_alloc_error;
+	}
+	memcpy(dma_mem->va, msg, msg_size);
+	ctlq_msg->ctx.indirect.payload = dma_mem;
+
+	err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+	if (err)
+		goto send_error;
+
+	return 0;
+send_error:
+	dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va,
+			   dma_mem->pa);
+dma_alloc_error:
+	kfree(dma_mem);
+dma_mem_error:
+	kfree(ctlq_msg);
+	return err;
 }
 EXPORT_SYMBOL(iecm_send_mb_msg);
 
@@ -86,7 +189,264 @@ EXPORT_SYMBOL(iecm_send_mb_msg);
 int iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op,
 		     void *msg, int msg_size)
 {
-	/* stub */
+	struct iecm_ctlq_msg ctlq_msg;
+	struct iecm_dma_mem *dma_mem;
+	struct iecm_vport *vport;
+	bool work_done = false;
+	int payload_size = 0;
+	int num_retry = 10;
+	u16 num_q_msg;
+	int err = 0;
+
+	vport = adapter->vports[0];
+	while (1) {
+		/* Try to get one message */
+		num_q_msg = 1;
+		dma_mem = NULL;
+		err = iecm_ctlq_recv(adapter->hw.arq, &num_q_msg, &ctlq_msg);
+		/* If no message then decide if we have to retry based on
+		 * opcode
+		 */
+		if (err || !num_q_msg) {
+			if (op && num_retry--) {
+				msleep(20);
+				continue;
+			} else {
+				break;
+			}
+		}
+
+		/* A message was received. Check whether we are waiting for a
+		 * specific opcode; if this message's opcode differs, ignore
+		 * it and just post buffers
+		 */
+		if (op && ctlq_msg.cookie.mbx.chnl_opcode != op)
+			goto post_buffs;
+
+		if (ctlq_msg.data_len)
+			payload_size = ctlq_msg.ctx.indirect.payload->size;
+
+		/* All conditions are met. Either a message requested is
+		 * received or we received a message to be processed
+		 */
+		switch (ctlq_msg.cookie.mbx.chnl_opcode) {
+		case VIRTCHNL_OP_VERSION:
+		case VIRTCHNL_OP_GET_CAPS:
+		case VIRTCHNL_OP_CREATE_VPORT:
+			if (msg)
+				memcpy(msg, ctlq_msg.ctx.indirect.payload->va,
+				       min_t(int,
+					     payload_size, msg_size));
+			work_done = true;
+			break;
+		case VIRTCHNL_OP_ENABLE_VPORT:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_ENA_VPORT_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_ENA_VPORT, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_DISABLE_VPORT:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_DIS_VPORT_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_DIS_VPORT, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_DESTROY_VPORT:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_DESTROY_VPORT_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_DESTROY_VPORT, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_CONFIG_TX_QUEUES:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_CONFIG_TXQ_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_CONFIG_TXQ, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_CONFIG_RX_QUEUES:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_CONFIG_RXQ_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_CONFIG_RXQ, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_ENA_QUEUES_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_ENA_QUEUES, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_DIS_QUEUES_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_DIS_QUEUES, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_ADD_QUEUES:
+			if (ctlq_msg.cookie.mbx.chnl_retval) {
+				set_bit(IECM_VC_ADD_QUEUES_ERR,
+					adapter->vc_state);
+			} else {
+				mutex_lock(&adapter->vc_msg_lock);
+				memcpy(adapter->vc_msg,
+				       ctlq_msg.ctx.indirect.payload->va,
+				       min((int)
+					   ctlq_msg.ctx.indirect.payload->size,
+					   IECM_DFLT_MBX_BUF_SIZE));
+			}
+			set_bit(IECM_VC_ADD_QUEUES, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_DEL_QUEUES:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_DEL_QUEUES_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_DEL_QUEUES, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_MAP_IRQ_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_MAP_IRQ, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_UNMAP_QUEUE_VECTOR:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_UNMAP_IRQ_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_UNMAP_IRQ, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_GET_STATS:
+			if (ctlq_msg.cookie.mbx.chnl_retval) {
+				set_bit(IECM_VC_GET_STATS_ERR,
+					adapter->vc_state);
+			} else {
+				mutex_lock(&adapter->vc_msg_lock);
+				memcpy(adapter->vc_msg,
+				       ctlq_msg.ctx.indirect.payload->va,
+				       min_t(int,
+					     payload_size,
+					     IECM_DFLT_MBX_BUF_SIZE));
+			}
+			set_bit(IECM_VC_GET_STATS, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_GET_RSS_HASH:
+			if (ctlq_msg.cookie.mbx.chnl_retval) {
+				set_bit(IECM_VC_GET_RSS_HASH_ERR,
+					adapter->vc_state);
+			} else {
+				mutex_lock(&adapter->vc_msg_lock);
+				memcpy(adapter->vc_msg,
+				       ctlq_msg.ctx.indirect.payload->va,
+				       min_t(int,
+					     payload_size,
+					     IECM_DFLT_MBX_BUF_SIZE));
+			}
+			set_bit(IECM_VC_GET_RSS_HASH, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_SET_RSS_HASH:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_SET_RSS_HASH_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_SET_RSS_HASH, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_GET_RSS_LUT:
+			if (ctlq_msg.cookie.mbx.chnl_retval) {
+				set_bit(IECM_VC_GET_RSS_LUT_ERR,
+					adapter->vc_state);
+			} else {
+				mutex_lock(&adapter->vc_msg_lock);
+				memcpy(adapter->vc_msg,
+				       ctlq_msg.ctx.indirect.payload->va,
+				       min_t(int,
+					     payload_size,
+					     IECM_DFLT_MBX_BUF_SIZE));
+			}
+			set_bit(IECM_VC_GET_RSS_LUT, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_SET_RSS_LUT:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_SET_RSS_LUT_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_SET_RSS_LUT, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_GET_RSS_KEY:
+			if (ctlq_msg.cookie.mbx.chnl_retval) {
+				set_bit(IECM_VC_GET_RSS_KEY_ERR,
+					adapter->vc_state);
+			} else {
+				mutex_lock(&adapter->vc_msg_lock);
+				memcpy(adapter->vc_msg,
+				       ctlq_msg.ctx.indirect.payload->va,
+				       min_t(int,
+					     payload_size,
+					     IECM_DFLT_MBX_BUF_SIZE));
+			}
+			set_bit(IECM_VC_GET_RSS_KEY, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_CONFIG_RSS_KEY:
+			if (ctlq_msg.cookie.mbx.chnl_retval)
+				set_bit(IECM_VC_CONFIG_RSS_KEY_ERR,
+					adapter->vc_state);
+			set_bit(IECM_VC_CONFIG_RSS_KEY, adapter->vc_state);
+			wake_up(&adapter->vchnl_wq);
+			break;
+		case VIRTCHNL_OP_EVENT:
+			mutex_lock(&adapter->vc_msg_lock);
+			memcpy(adapter->vc_msg,
+			       ctlq_msg.ctx.indirect.payload->va,
+			       min_t(int,
+				     payload_size,
+				     IECM_DFLT_MBX_BUF_SIZE));
+			iecm_recv_event_msg(vport);
+			break;
+		default:
+			if (adapter->dev_ops.vc_ops.recv_mbx_msg)
+				err =
+				adapter->dev_ops.vc_ops.recv_mbx_msg(adapter,
+								   msg,
+								   msg_size,
+								   &ctlq_msg,
+								   &work_done);
+			else
+				dev_warn(&adapter->pdev->dev,
+					 "Unhandled virtchnl response %d\n",
+					 ctlq_msg.cookie.mbx.chnl_opcode);
+			break;
+		} /* switch v_opcode */
+post_buffs:
+		if (ctlq_msg.data_len)
+			dma_mem = ctlq_msg.ctx.indirect.payload;
+		else
+			num_q_msg = 0;
+
+		err = iecm_ctlq_post_rx_buffs(&adapter->hw, adapter->hw.arq,
+					      &num_q_msg, &dma_mem);
+
+		/* If post failed clear the only buffer we supplied */
+		if (err && dma_mem)
+			dmam_free_coherent(&adapter->pdev->dev, dma_mem->size,
+					   dma_mem->va, dma_mem->pa);
+		/* Applies only if we are looking for a specific opcode */
+		if (work_done)
+			break;
+	}
+
+	return err;
 }
 EXPORT_SYMBOL(iecm_recv_mb_msg);
 
@@ -177,7 +537,23 @@ int iecm_wait_for_event(struct iecm_adapter *adapter,
 			enum iecm_vport_vc_state state,
 			enum iecm_vport_vc_state err_check)
 {
-	/* stub */
+	int event;
+
+	event = wait_event_timeout(adapter->vchnl_wq,
+				   test_and_clear_bit(state, adapter->vc_state),
+				   msecs_to_jiffies(500));
+	if (event) {
+		if (test_and_clear_bit(err_check, adapter->vc_state)) {
+			dev_err(&adapter->pdev->dev,
+				"VC response error %d\n", err_check);
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	/* Timeout occurred */
+	dev_err(&adapter->pdev->dev, "VC timeout, state = %u\n", state);
+	return -ETIMEDOUT;
 }
 EXPORT_SYMBOL(iecm_wait_for_event);
 
@@ -404,7 +780,14 @@ int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport)
 static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw,
 					     enum iecm_ctlq_type type, int id)
 {
-	/* stub */
+	struct iecm_ctlq_info *cq;
+
+	list_for_each_entry(cq, &hw->cq_list_head, cq_list) {
+		if (cq->q_id == id && cq->cq_type == type)
+			return cq;
+	}
+
+	return NULL;
 }
 
 /**
@@ -413,7 +796,8 @@ static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw,
  */
 void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter)
 {
-	/* stub */
+	cancel_delayed_work_sync(&adapter->init_task);
+	iecm_ctlq_deinit(&adapter->hw);
 }
 
 /**
@@ -474,7 +858,21 @@ int iecm_init_dflt_mbx(struct iecm_adapter *adapter)
  */
 int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter)
 {
-	/* stub */
+	adapter->vport_params_reqd = kcalloc(IECM_MAX_NUM_VPORTS,
+					     sizeof(*adapter->vport_params_reqd),
+					     GFP_KERNEL);
+	if (!adapter->vport_params_reqd)
+		return -ENOMEM;
+
+	adapter->vport_params_recvd = kcalloc(IECM_MAX_NUM_VPORTS,
+					      sizeof(*adapter->vport_params_recvd),
+					      GFP_KERNEL);
+	if (!adapter->vport_params_recvd) {
+		kfree(adapter->vport_params_reqd);
+		return -ENOMEM;
+	}
+
+	return 0;
 }
 
 /**
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 07/15] iecm: Implement virtchnl commands
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (5 preceding siblings ...)
  2020-08-24 17:32 ` [net-next v5 06/15] iecm: Implement mailbox functionality Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 17:32 ` [net-next v5 08/15] iecm: Implement vector allocation Tony Nguyen
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Implement various virtchnl commands that enable
communication with hardware.
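
Most of these commands follow the same request/response pattern: the
opcode and payload are posted to the mailbox with iecm_send_mb_msg(),
and the caller then blocks in iecm_wait_for_event() until
iecm_recv_mb_msg() sets the matching completion (or error) bit and
wakes vchnl_wq.  Purely as a minimal sketch of that pattern (an
illustrative enable-vport helper, not the exact code in this patch):

	static int example_send_enable_vport_msg(struct iecm_vport *vport)
	{
		struct iecm_adapter *adapter = vport->adapter;
		struct virtchnl_vport v_id;
		int err;

		v_id.vport_id = vport->vport_id;

		/* post the request on the mailbox send queue */
		err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_ENABLE_VPORT,
				       sizeof(v_id), (u8 *)&v_id);
		if (err)
			return err;

		/* sleep until the response handler sets IECM_VC_ENA_VPORT,
		 * or fail if IECM_VC_ENA_VPORT_ERR was set instead
		 */
		return iecm_wait_for_event(adapter, IECM_VC_ENA_VPORT,
					   IECM_VC_ENA_VPORT_ERR);
	}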

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   | 1151 ++++++++++++++++-
 1 file changed, 1124 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
index 86de33c308e3..2f716ec42cf2 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -458,7 +458,13 @@ EXPORT_SYMBOL(iecm_recv_mb_msg);
  */
 static int iecm_send_ver_msg(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct virtchnl_version_info vvi;
+
+	vvi.major = VIRTCHNL_VERSION_MAJOR;
+	vvi.minor = VIRTCHNL_VERSION_MINOR;
+
+	return iecm_send_mb_msg(adapter, VIRTCHNL_OP_VERSION, sizeof(vvi),
+				(u8 *)&vvi);
 }
 
 /**
@@ -469,7 +475,19 @@ static int iecm_send_ver_msg(struct iecm_adapter *adapter)
  */
 static int iecm_recv_ver_msg(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct virtchnl_version_info vvi;
+	int err = 0;
+
+	err = iecm_recv_mb_msg(adapter, VIRTCHNL_OP_VERSION, &vvi, sizeof(vvi));
+	if (err)
+		return err;
+
+	if (vvi.major > VIRTCHNL_VERSION_MAJOR ||
+	    (vvi.major == VIRTCHNL_VERSION_MAJOR &&
+	     vvi.minor > VIRTCHNL_VERSION_MINOR))
+		dev_warn(&adapter->pdev->dev, "Virtchnl version mismatch\n");
+
+	return 0;
 }
 
 /**
@@ -481,7 +499,25 @@ static int iecm_recv_ver_msg(struct iecm_adapter *adapter)
  */
 int iecm_send_get_caps_msg(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct virtchnl_get_capabilities caps = {0};
+	int buf_size;
+
+	buf_size = sizeof(struct virtchnl_get_capabilities);
+	adapter->caps = kzalloc(buf_size, GFP_KERNEL);
+	if (!adapter->caps)
+		return -ENOMEM;
+
+	caps.cap_flags = VIRTCHNL_CAP_STATELESS_OFFLOADS |
+	       VIRTCHNL_CAP_UDP_SEG_OFFLOAD |
+	       VIRTCHNL_CAP_RSS |
+	       VIRTCHNL_CAP_TCP_RSC |
+	       VIRTCHNL_CAP_HEADER_SPLIT |
+	       VIRTCHNL_CAP_RDMA |
+	       VIRTCHNL_CAP_SRIOV |
+	       VIRTCHNL_CAP_EDT;
+
+	return iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_CAPS, sizeof(caps),
+				(u8 *)&caps);
 }
 EXPORT_SYMBOL(iecm_send_get_caps_msg);
 
@@ -494,7 +530,8 @@ EXPORT_SYMBOL(iecm_send_get_caps_msg);
  */
 static int iecm_recv_get_caps_msg(struct iecm_adapter *adapter)
 {
-	/* stub */
+	return iecm_recv_mb_msg(adapter, VIRTCHNL_OP_GET_CAPS, adapter->caps,
+				sizeof(struct virtchnl_get_capabilities));
 }
 
 /**
@@ -507,7 +544,25 @@ static int iecm_recv_get_caps_msg(struct iecm_adapter *adapter)
  */
 static int iecm_send_create_vport_msg(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct virtchnl_create_vport *vport_msg;
+	int buf_size;
+
+	buf_size = sizeof(struct virtchnl_create_vport);
+	if (!adapter->vport_params_reqd[0]) {
+		adapter->vport_params_reqd[0] = kzalloc(buf_size, GFP_KERNEL);
+		if (!adapter->vport_params_reqd[0])
+			return -ENOMEM;
+	}
+
+	vport_msg = (struct virtchnl_create_vport *)
+			adapter->vport_params_reqd[0];
+	vport_msg->vport_type = VIRTCHNL_VPORT_TYPE_DEFAULT;
+	vport_msg->txq_model = VIRTCHNL_QUEUE_MODEL_SPLIT;
+	vport_msg->rxq_model = VIRTCHNL_QUEUE_MODEL_SPLIT;
+	iecm_vport_calc_total_qs(vport_msg, 0);
+
+	return iecm_send_mb_msg(adapter, VIRTCHNL_OP_CREATE_VPORT, buf_size,
+				(u8 *)vport_msg);
 }
 
 /**
@@ -521,7 +576,25 @@ static int iecm_send_create_vport_msg(struct iecm_adapter *adapter)
 static int iecm_recv_create_vport_msg(struct iecm_adapter *adapter,
 				      int *vport_id)
 {
-	/* stub */
+	struct virtchnl_create_vport *vport_msg;
+	int err;
+
+	if (!adapter->vport_params_recvd[0]) {
+		adapter->vport_params_recvd[0] = kzalloc(IECM_DFLT_MBX_BUF_SIZE,
+							 GFP_KERNEL);
+		if (!adapter->vport_params_recvd[0])
+			return -ENOMEM;
+	}
+
+	vport_msg = (struct virtchnl_create_vport *)
+			adapter->vport_params_recvd[0];
+
+	err = iecm_recv_mb_msg(adapter, VIRTCHNL_OP_CREATE_VPORT, vport_msg,
+			       IECM_DFLT_MBX_BUF_SIZE);
+	if (err)
+		return err;
+	*vport_id = vport_msg->vport_id;
+	return 0;
 }
 
 /**
@@ -566,7 +639,19 @@ EXPORT_SYMBOL(iecm_wait_for_event);
  */
 int iecm_send_destroy_vport_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_vport v_id;
+	int err;
+
+	v_id.vport_id = vport->vport_id;
+
+	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_DESTROY_VPORT,
+			       sizeof(v_id), (u8 *)&v_id);
+
+	if (err)
+		return err;
+	return iecm_wait_for_event(adapter, IECM_VC_DESTROY_VPORT,
+				   IECM_VC_DESTROY_VPORT_ERR);
 }
 
 /**
@@ -602,7 +687,119 @@ int iecm_send_disable_vport_msg(struct iecm_vport *vport)
  */
 int iecm_send_config_tx_queues_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct virtchnl_config_tx_queues *ctq = NULL;
+	struct virtchnl_txq_info_v2 *qi;
+	int totqs, num_msgs;
+	int num_qs, err = 0;
+	int i, k = 0;
+
+	totqs = vport->num_txq + vport->num_complq;
+	qi = kcalloc(totqs, sizeof(struct virtchnl_txq_info_v2), GFP_KERNEL);
+	if (!qi)
+		return -ENOMEM;
+
+	/* Populate the queue info buffer with all queue context info */
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+		int j;
+
+		for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+			qi[k].queue_id = tx_qgrp->txqs[j].q_id;
+			qi[k].model = vport->txq_model;
+			qi[k].type = tx_qgrp->txqs[j].q_type;
+			qi[k].ring_len = tx_qgrp->txqs[j].desc_count;
+			qi[k].dma_ring_addr = tx_qgrp->txqs[j].dma;
+			if (iecm_is_queue_model_split(vport->txq_model)) {
+				struct iecm_queue *q = &tx_qgrp->txqs[j];
+
+				qi[k].tx_compl_queue_id = tx_qgrp->complq->q_id;
+				qi[k].desc_profile =
+					VIRTCHNL_TXQ_DESC_PROFILE_NATIVE;
+
+				if (test_bit(__IECM_Q_FLOW_SCH_EN, q->flags))
+					qi[k].sched_mode =
+						VIRTCHNL_TXQ_SCHED_MODE_FLOW;
+				else
+					qi[k].sched_mode =
+						VIRTCHNL_TXQ_SCHED_MODE_QUEUE;
+			} else {
+				qi[k].sched_mode =
+					VIRTCHNL_TXQ_SCHED_MODE_QUEUE;
+				qi[k].desc_profile =
+					VIRTCHNL_TXQ_DESC_PROFILE_BASE;
+			}
+		}
+
+		if (iecm_is_queue_model_split(vport->txq_model)) {
+			qi[k].queue_id = tx_qgrp->complq->q_id;
+			qi[k].model = vport->txq_model;
+			qi[k].type = tx_qgrp->complq->q_type;
+			qi[k].desc_profile = VIRTCHNL_TXQ_DESC_PROFILE_NATIVE;
+			qi[k].ring_len = tx_qgrp->complq->desc_count;
+			qi[k].dma_ring_addr = tx_qgrp->complq->dma;
+			k++;
+		}
+	}
+
+	/* Make sure accounting agrees */
+	if (k != totqs) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	/* Chunk up the queue contexts into multiple messages to avoid
+	 * sending a control queue message buffer that is too large
+	 */
+	if (totqs < IECM_NUM_QCTX_PER_MSG)
+		num_qs = totqs;
+	else
+		num_qs = IECM_NUM_QCTX_PER_MSG;
+
+	num_msgs = DIV_ROUND_UP(totqs, IECM_NUM_QCTX_PER_MSG);
+
+	for (i = 0, k = 0; i < num_msgs || num_qs; i++) {
+		int buf_size = sizeof(struct virtchnl_config_tx_queues) +
+			   (sizeof(struct virtchnl_txq_info_v2) * (num_qs - 1));
+		if (!ctq || num_qs != IECM_NUM_QCTX_PER_MSG) {
+			kfree(ctq);
+			ctq = kzalloc(buf_size, GFP_KERNEL);
+			if (!ctq) {
+				err = -ENOMEM;
+				goto error;
+			}
+		} else {
+			memset(ctq, 0, buf_size);
+		}
+
+		ctq->vport_id = vport->vport_id;
+		ctq->num_qinfo = num_qs;
+		memcpy(ctq->qinfo, &qi[k],
+		       sizeof(struct virtchnl_txq_info_v2) * num_qs);
+
+		err = iecm_send_mb_msg(vport->adapter,
+				       VIRTCHNL_OP_CONFIG_TX_QUEUES,
+				       buf_size, (u8 *)ctq);
+		if (err)
+			goto mbx_error;
+
+		err = iecm_wait_for_event(vport->adapter, IECM_VC_CONFIG_TXQ,
+					  IECM_VC_CONFIG_TXQ_ERR);
+		if (err)
+			goto mbx_error;
+
+		k += num_qs;
+		totqs -= num_qs;
+		if (totqs < IECM_NUM_QCTX_PER_MSG)
+			num_qs = totqs;
+	}
+
+mbx_error:
+	kfree(ctq);
+error:
+	kfree(qi);
+	return err;
 }
 
 /**
@@ -614,7 +811,145 @@ int iecm_send_config_tx_queues_msg(struct iecm_vport *vport)
  */
 int iecm_send_config_rx_queues_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct virtchnl_config_rx_queues *crq = NULL;
+	int totqs, num_msgs, num_qs, err = 0;
+	struct virtchnl_rxq_info_v2 *qi;
+	int i, k = 0;
+
+	totqs = vport->num_rxq + vport->num_bufq;
+	qi = kcalloc(totqs, sizeof(struct virtchnl_rxq_info_v2), GFP_KERNEL);
+	if (!qi)
+		return -ENOMEM;
+
+	/* Populate the queue info buffer with all queue context info */
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+		int num_rxq;
+		int j;
+
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			num_rxq = rx_qgrp->splitq.num_rxq_sets;
+		else
+			num_rxq = rx_qgrp->singleq.num_rxq;
+
+		for (j = 0; j < num_rxq; j++, k++) {
+			struct iecm_queue *rxq;
+
+			if (iecm_is_queue_model_split(vport->rxq_model)) {
+				rxq = &rx_qgrp->splitq.rxq_sets[j].rxq;
+				qi[k].rx_bufq1_id =
+				  rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id;
+				qi[k].rx_bufq2_id =
+				  rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id;
+				qi[k].hdr_buffer_size = rxq->rx_hbuf_size;
+				qi[k].rsc_low_watermark =
+					rxq->rsc_low_watermark;
+
+				if (rxq->rx_hsplit_en) {
+					qi[k].queue_flags =
+						VIRTCHNL_RXQ_HDR_SPLIT;
+					qi[k].hdr_buffer_size =
+						rxq->rx_hbuf_size;
+				}
+				if (iecm_is_feature_ena(vport, NETIF_F_GRO_HW))
+					qi[k].queue_flags |= VIRTCHNL_RXQ_RSC;
+			} else {
+				rxq = &rx_qgrp->singleq.rxqs[j];
+			}
+
+			qi[k].queue_id = rxq->q_id;
+			qi[k].model = vport->rxq_model;
+			qi[k].type = rxq->q_type;
+			qi[k].desc_profile = VIRTCHNL_TXQ_DESC_PROFILE_BASE;
+			qi[k].desc_size = VIRTCHNL_RXQ_DESC_SIZE_32BYTE;
+			qi[k].ring_len = rxq->desc_count;
+			qi[k].dma_ring_addr = rxq->dma;
+			qi[k].max_pkt_size = rxq->rx_max_pkt_size;
+			qi[k].data_buffer_size = rxq->rx_buf_size;
+		}
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++, k++) {
+				struct iecm_queue *bufq =
+					&rx_qgrp->splitq.bufq_sets[j].bufq;
+
+				qi[k].queue_id = bufq->q_id;
+				qi[k].model = vport->rxq_model;
+				qi[k].type = bufq->q_type;
+				qi[k].desc_profile =
+					VIRTCHNL_TXQ_DESC_PROFILE_NATIVE;
+				qi[k].desc_size =
+					VIRTCHNL_RXQ_DESC_SIZE_32BYTE;
+				qi[k].ring_len = bufq->desc_count;
+				qi[k].dma_ring_addr = bufq->dma;
+				qi[k].data_buffer_size = bufq->rx_buf_size;
+				qi[k].buffer_notif_stride =
+					bufq->rx_buf_stride;
+				qi[k].rsc_low_watermark =
+					bufq->rsc_low_watermark;
+			}
+		}
+	}
+
+	/* Make sure accounting agrees */
+	if (k != totqs) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	/* Chunk up the queue contexts into multiple messages to avoid
+	 * sending a control queue message buffer that is too large
+	 */
+	if (totqs < IECM_NUM_QCTX_PER_MSG)
+		num_qs = totqs;
+	else
+		num_qs = IECM_NUM_QCTX_PER_MSG;
+
+	num_msgs = DIV_ROUND_UP(totqs, IECM_NUM_QCTX_PER_MSG);
+
+	for (i = 0, k = 0; i < num_msgs || num_qs; i++) {
+		int buf_size = sizeof(struct virtchnl_config_rx_queues) +
+			   (sizeof(struct virtchnl_rxq_info_v2) * (num_qs - 1));
+		if (!crq || num_qs != IECM_NUM_QCTX_PER_MSG) {
+			kfree(crq);
+			crq = kzalloc(buf_size, GFP_KERNEL);
+			if (!crq) {
+				err = -ENOMEM;
+				goto error;
+			}
+		} else {
+			memset(crq, 0, buf_size);
+		}
+
+		crq->vport_id = vport->vport_id;
+		crq->num_qinfo = num_qs;
+		memcpy(crq->qinfo, &qi[k],
+		       sizeof(struct virtchnl_rxq_info_v2) * num_qs);
+
+		err = iecm_send_mb_msg(vport->adapter,
+				       VIRTCHNL_OP_CONFIG_RX_QUEUES,
+				       buf_size, (u8 *)crq);
+		if (err)
+			goto mbx_error;
+
+		err = iecm_wait_for_event(vport->adapter, IECM_VC_CONFIG_RXQ,
+					  IECM_VC_CONFIG_RXQ_ERR);
+		if (err)
+			goto mbx_error;
+
+		k += num_qs;
+		totqs -= num_qs;
+		if (totqs < IECM_NUM_QCTX_PER_MSG)
+			num_qs = totqs;
+	}
+
+mbx_error:
+	kfree(crq);
+error:
+	kfree(qi);
+	return err;
 }
 
 /**
@@ -629,7 +964,113 @@ int iecm_send_config_rx_queues_msg(struct iecm_vport *vport)
 static int iecm_send_ena_dis_queues_msg(struct iecm_vport *vport,
 					enum virtchnl_ops vc_op)
 {
-	/* stub */
+	int num_txq, num_rxq, num_q, buf_size, err = 0;
+	struct virtchnl_del_ena_dis_queues *eq;
+	struct virtchnl_queue_chunk *qc;
+	int i, j, k = 0;
+
+	/* validate virtchnl op */
+	switch (vc_op) {
+	case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+	case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	num_txq = vport->num_txq + vport->num_complq;
+	num_rxq = vport->num_rxq + vport->num_bufq;
+	num_q = num_txq + num_rxq;
+	buf_size = sizeof(struct virtchnl_del_ena_dis_queues) +
+		       (sizeof(struct virtchnl_queue_chunk) * (num_q - 1));
+	eq = kzalloc(buf_size, GFP_KERNEL);
+	if (!eq)
+		return -ENOMEM;
+
+	eq->vport_id = vport->vport_id;
+	eq->chunks.num_chunks = num_q;
+	qc = eq->chunks.chunks;
+
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+		for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+			qc[k].type = tx_qgrp->txqs[j].q_type;
+			qc[k].start_queue_id = tx_qgrp->txqs[j].q_id;
+			qc[k].num_queues = 1;
+		}
+	}
+	if (vport->num_txq != k) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	if (iecm_is_queue_model_split(vport->txq_model)) {
+		for (i = 0; i < vport->num_txq_grp; i++, k++) {
+			struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+			qc[k].type = tx_qgrp->complq->q_type;
+			qc[k].start_queue_id = tx_qgrp->complq->q_id;
+			qc[k].num_queues = 1;
+		}
+		if (vport->num_complq != (k - vport->num_txq)) {
+			err = -EINVAL;
+			goto error;
+		}
+	}
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			num_rxq = rx_qgrp->splitq.num_rxq_sets;
+		else
+			num_rxq = rx_qgrp->singleq.num_rxq;
+
+		for (j = 0; j < num_rxq; j++, k++) {
+			if (iecm_is_queue_model_split(vport->rxq_model)) {
+				qc[k].start_queue_id =
+					rx_qgrp->splitq.rxq_sets[j].rxq.q_id;
+				qc[k].type =
+					rx_qgrp->splitq.rxq_sets[j].rxq.q_type;
+			} else {
+				qc[k].start_queue_id =
+					rx_qgrp->singleq.rxqs[j].q_id;
+				qc[k].type =
+					rx_qgrp->singleq.rxqs[j].q_type;
+			}
+			qc[k].num_queues = 1;
+		}
+	}
+	if (vport->num_rxq != k - (vport->num_txq + vport->num_complq)) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	if (iecm_is_queue_model_split(vport->rxq_model)) {
+		for (i = 0; i < vport->num_rxq_grp; i++) {
+			struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+			struct iecm_queue *q;
+
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++, k++) {
+				q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+				qc[k].type = q->q_type;
+				qc[k].start_queue_id = q->q_id;
+				qc[k].num_queues = 1;
+			}
+		}
+		if (vport->num_bufq != k - (vport->num_txq +
+					    vport->num_complq +
+					    vport->num_rxq)) {
+			err = -EINVAL;
+			goto error;
+		}
+	}
+
+	err = iecm_send_mb_msg(vport->adapter, vc_op, buf_size, (u8 *)eq);
+error:
+	kfree(eq);
+	return err;
 }
 
 /**
@@ -644,7 +1085,104 @@ static int iecm_send_ena_dis_queues_msg(struct iecm_vport *vport,
 static int
 iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_queue_vector_maps *vqvm;
+	struct virtchnl_queue_vector *vqv;
+	int buf_size, num_q, err = 0;
+	int i, j, k = 0;
+
+	num_q = vport->num_txq + vport->num_rxq;
+
+	buf_size = sizeof(struct virtchnl_queue_vector_maps) +
+		   (sizeof(struct virtchnl_queue_vector) * (num_q - 1));
+	vqvm = kzalloc(buf_size, GFP_KERNEL);
+	if (!vqvm)
+		return -ENOMEM;
+
+	vqvm->vport_id = vport->vport_id;
+	vqvm->num_maps = num_q;
+	vqv = vqvm->qv_maps;
+
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+		for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+			vqv[k].queue_type = tx_qgrp->txqs[j].q_type;
+			vqv[k].queue_id = tx_qgrp->txqs[j].q_id;
+
+			if (iecm_is_queue_model_split(vport->txq_model)) {
+				vqv[k].vector_id =
+					tx_qgrp->complq->q_vector->v_idx;
+				vqv[k].itr_idx =
+					tx_qgrp->complq->itr.itr_idx;
+			} else {
+				vqv[k].vector_id =
+					tx_qgrp->txqs[j].q_vector->v_idx;
+				vqv[k].itr_idx =
+					tx_qgrp->txqs[j].itr.itr_idx;
+			}
+		}
+	}
+
+	if (vport->num_txq != k) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+		int num_rxq;
+
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			num_rxq = rx_qgrp->splitq.num_rxq_sets;
+		else
+			num_rxq = rx_qgrp->singleq.num_rxq;
+
+		for (j = 0; j < num_rxq; j++, k++) {
+			struct iecm_queue *rxq;
+
+			if (iecm_is_queue_model_split(vport->rxq_model))
+				rxq = &rx_qgrp->splitq.rxq_sets[j].rxq;
+			else
+				rxq = &rx_qgrp->singleq.rxqs[j];
+
+			vqv[k].queue_type = rxq->q_type;
+			vqv[k].queue_id = rxq->q_id;
+			vqv[k].vector_id = rxq->q_vector->v_idx;
+			vqv[k].itr_idx = rxq->itr.itr_idx;
+		}
+	}
+
+	if (iecm_is_queue_model_split(vport->txq_model)) {
+		if (vport->num_rxq != k - vport->num_complq) {
+			err = -EINVAL;
+			goto error;
+		}
+	} else {
+		if (vport->num_rxq != k - vport->num_txq) {
+			err = -EINVAL;
+			goto error;
+		}
+	}
+
+	if (map) {
+		err = iecm_send_mb_msg(adapter,
+				       VIRTCHNL_OP_MAP_QUEUE_VECTOR,
+				       buf_size, (u8 *)vqvm);
+		if (!err)
+			err = iecm_wait_for_event(adapter, IECM_VC_MAP_IRQ,
+						  IECM_VC_MAP_IRQ_ERR);
+	} else {
+		err = iecm_send_mb_msg(adapter,
+				       VIRTCHNL_OP_UNMAP_QUEUE_VECTOR,
+				       buf_size, (u8 *)vqvm);
+		if (!err)
+			err = iecm_wait_for_event(adapter, IECM_VC_UNMAP_IRQ,
+						  IECM_VC_UNMAP_IRQ_ERR);
+	}
+error:
+	kfree(vqvm);
+	return err;
 }
 
 /**
@@ -656,7 +1194,16 @@ iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map)
  */
 static int iecm_send_enable_queues_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	int err;
+
+	err = iecm_send_ena_dis_queues_msg(vport,
+					   VIRTCHNL_OP_ENABLE_QUEUES_V2);
+	if (err)
+		return err;
+
+	return iecm_wait_for_event(adapter, IECM_VC_ENA_QUEUES,
+				   IECM_VC_ENA_QUEUES_ERR);
 }
 
 /**
@@ -668,7 +1215,16 @@ static int iecm_send_enable_queues_msg(struct iecm_vport *vport)
  */
 static int iecm_send_disable_queues_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	int err;
+
+	err = iecm_send_ena_dis_queues_msg(vport,
+					   VIRTCHNL_OP_DISABLE_QUEUES_V2);
+	if (err)
+		return err;
+
+	return iecm_wait_for_event(adapter, IECM_VC_DIS_QUEUES,
+				   IECM_VC_DIS_QUEUES_ERR);
 }
 
 /**
@@ -680,7 +1236,48 @@ static int iecm_send_disable_queues_msg(struct iecm_vport *vport)
  */
 int iecm_send_delete_queues_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_create_vport *vport_params;
+	struct virtchnl_del_ena_dis_queues *eq;
+	struct virtchnl_queue_chunks *chunks;
+	int buf_size, num_chunks, err;
+
+	if (vport->adapter->config_data.req_qs_chunks) {
+		struct virtchnl_add_queues *vc_aq =
+			(struct virtchnl_add_queues *)
+			vport->adapter->config_data.req_qs_chunks;
+		chunks = &vc_aq->chunks;
+	} else {
+		vport_params = (struct virtchnl_create_vport *)
+				vport->adapter->vport_params_recvd[0];
+		chunks = &vport_params->chunks;
+	}
+
+	num_chunks = chunks->num_chunks;
+	buf_size = sizeof(struct virtchnl_del_ena_dis_queues) +
+		   (sizeof(struct virtchnl_queue_chunk) *
+		    (num_chunks - 1));
+
+	eq = kzalloc(buf_size, GFP_KERNEL);
+	if (!eq)
+		return -ENOMEM;
+
+	eq->vport_id = vport->vport_id;
+	eq->chunks.num_chunks = num_chunks;
+
+	memcpy(eq->chunks.chunks, chunks->chunks, num_chunks *
+	       sizeof(struct virtchnl_queue_chunk));
+
+	err = iecm_send_mb_msg(vport->adapter, VIRTCHNL_OP_DEL_QUEUES,
+			       buf_size, (u8 *)eq);
+	if (err)
+		goto error;
+
+	err = iecm_wait_for_event(adapter, IECM_VC_DEL_QUEUES,
+				  IECM_VC_DEL_QUEUES_ERR);
+error:
+	kfree(eq);
+	return err;
 }
 
 /**
@@ -692,7 +1289,13 @@ int iecm_send_delete_queues_msg(struct iecm_vport *vport)
  */
 static int iecm_send_config_queues_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	int err;
+
+	err = iecm_send_config_tx_queues_msg(vport);
+	if (err)
+		return err;
+
+	return iecm_send_config_rx_queues_msg(vport);
 }
 
 /**
@@ -708,7 +1311,56 @@ static int iecm_send_config_queues_msg(struct iecm_vport *vport)
 int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q,
 			     u16 num_complq, u16 num_rx_q, u16 num_rx_bufq)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_add_queues aq = {0};
+	struct virtchnl_add_queues *vc_msg;
+	int size, err;
+
+	vc_msg = (struct virtchnl_add_queues *)adapter->vc_msg;
+
+	aq.vport_id = vport->vport_id;
+	aq.num_tx_q = num_tx_q;
+	aq.num_tx_complq = num_complq;
+	aq.num_rx_q = num_rx_q;
+	aq.num_rx_bufq = num_rx_bufq;
+
+	err = iecm_send_mb_msg(adapter,
+			       VIRTCHNL_OP_ADD_QUEUES,
+			       sizeof(struct virtchnl_add_queues), (u8 *)&aq);
+	if (err)
+		return err;
+
+	err = iecm_wait_for_event(adapter, IECM_VC_ADD_QUEUES,
+				  IECM_VC_ADD_QUEUES_ERR);
+	if (err)
+		return err;
+
+	kfree(adapter->config_data.req_qs_chunks);
+	adapter->config_data.req_qs_chunks = NULL;
+
+	/* compare vc_msg num queues with vport num queues */
+	if (vc_msg->num_tx_q != num_tx_q ||
+	    vc_msg->num_rx_q != num_rx_q ||
+	    vc_msg->num_tx_complq != num_complq ||
+	    vc_msg->num_rx_bufq != num_rx_bufq) {
+		err = -EINVAL;
+		goto error;
+	}
+
+	size = sizeof(struct virtchnl_add_queues) +
+			((vc_msg->chunks.num_chunks - 1) *
+			sizeof(struct virtchnl_queue_chunk));
+	adapter->config_data.req_qs_chunks =
+		kzalloc(size, GFP_KERNEL);
+	if (!adapter->config_data.req_qs_chunks) {
+		err = -ENOMEM;
+		goto error;
+	}
+	memcpy(adapter->config_data.req_qs_chunks,
+	       adapter->vc_msg, size);
+error:
+	mutex_unlock(&adapter->vc_msg_lock);
+	return err;
 }
 
 /**
@@ -719,7 +1371,47 @@ int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q,
  */
 int iecm_send_get_stats_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_eth_stats *stats;
+	struct virtchnl_queue_select vqs;
+	int err;
+
+	stats = (struct virtchnl_eth_stats *)adapter->vc_msg;
+
+	/* Don't send get_stats message if one is pending or the
+	 * link is down
+	 */
+	if (test_bit(IECM_VC_GET_STATS, adapter->vc_state) ||
+	    adapter->state <= __IECM_DOWN)
+		return 0;
+
+	vqs.vsi_id = vport->vport_id;
+
+	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_STATS,
+			       sizeof(vqs), (u8 *)&vqs);
+
+	if (err)
+		return err;
+
+	err = iecm_wait_for_event(adapter, IECM_VC_GET_STATS,
+				  IECM_VC_GET_STATS_ERR);
+	if (err)
+		return err;
+
+	vport->netstats.rx_packets = stats->rx_unicast +
+				     stats->rx_multicast +
+				     stats->rx_broadcast;
+	vport->netstats.tx_packets = stats->tx_unicast +
+				     stats->tx_multicast +
+				     stats->tx_broadcast;
+	vport->netstats.rx_bytes = stats->rx_bytes;
+	vport->netstats.tx_bytes = stats->tx_bytes;
+	vport->netstats.tx_errors = stats->tx_errors;
+	vport->netstats.rx_dropped = stats->rx_discards;
+	vport->netstats.tx_dropped = stats->tx_discards;
+	mutex_unlock(&adapter->vc_msg_lock);
+
+	return 0;
 }
 
 /**
@@ -731,7 +1423,40 @@ int iecm_send_get_stats_msg(struct iecm_vport *vport)
  */
 int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_rss_hash rh = {0};
+	int err;
+
+	rh.vport_id = vport->vport_id;
+	rh.hash = adapter->rss_data.rss_hash;
+
+	if (get) {
+		err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_RSS_HASH,
+				       sizeof(rh), (u8 *)&rh);
+		if (err)
+			return err;
+
+		err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_HASH,
+					  IECM_VC_GET_RSS_HASH_ERR);
+		if (err)
+			return err;
+
+		memcpy(&rh, adapter->vc_msg, sizeof(rh));
+		adapter->rss_data.rss_hash = rh.hash;
+		/* Leave the buffer clean for next message */
+		memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE);
+		mutex_unlock(&adapter->vc_msg_lock);
+
+		return 0;
+	}
+
+	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_SET_RSS_HASH,
+			       sizeof(rh), (u8 *)&rh);
+	if (err)
+		return err;
+
+	return iecm_wait_for_event(adapter, IECM_VC_SET_RSS_HASH,
+				   IECM_VC_SET_RSS_HASH_ERR);
 }
 
 /**
@@ -743,7 +1468,74 @@ int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get)
  */
 int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_rss_lut_v2 *recv_rl;
+	struct virtchnl_rss_lut_v2 *rl;
+	int buf_size, lut_buf_size;
+	int i, err = 0;
+
+	buf_size = sizeof(struct virtchnl_rss_lut_v2) +
+		       (sizeof(u16) * (adapter->rss_data.rss_lut_size - 1));
+	rl = kzalloc(buf_size, GFP_KERNEL);
+	if (!rl)
+		return -ENOMEM;
+
+	if (!get) {
+		rl->lut_entries = adapter->rss_data.rss_lut_size;
+		for (i = 0; i < adapter->rss_data.rss_lut_size; i++)
+			rl->lut[i] = adapter->rss_data.rss_lut[i];
+	}
+	rl->vport_id = vport->vport_id;
+
+	if (get) {
+		err = iecm_send_mb_msg(vport->adapter, VIRTCHNL_OP_GET_RSS_LUT,
+				       buf_size, (u8 *)rl);
+		if (err)
+			goto error;
+
+		err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_LUT,
+					  IECM_VC_GET_RSS_LUT_ERR);
+		if (err)
+			goto error;
+
+		recv_rl = (struct virtchnl_rss_lut_v2 *)adapter->vc_msg;
+		if (adapter->rss_data.rss_lut_size !=
+		    recv_rl->lut_entries) {
+			adapter->rss_data.rss_lut_size =
+				recv_rl->lut_entries;
+			kfree(adapter->rss_data.rss_lut);
+
+			lut_buf_size = adapter->rss_data.rss_lut_size *
+					sizeof(u16);
+			adapter->rss_data.rss_lut = kzalloc(lut_buf_size,
+							    GFP_KERNEL);
+			if (!adapter->rss_data.rss_lut) {
+				adapter->rss_data.rss_lut_size = 0;
+				/* Leave the buffer clean */
+				memset(adapter->vc_msg, 0,
+				       IECM_DFLT_MBX_BUF_SIZE);
+				mutex_unlock(&adapter->vc_msg_lock);
+				err = -ENOMEM;
+				goto error;
+			}
+		}
+		memcpy(adapter->rss_data.rss_lut, adapter->vc_msg,
+		       adapter->rss_data.rss_lut_size);
+		/* Leave the buffer clean for next message */
+		memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE);
+		mutex_unlock(&adapter->vc_msg_lock);
+	} else {
+		err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_SET_RSS_LUT,
+				       buf_size, (u8 *)rl);
+		if (err)
+			goto error;
+
+		err = iecm_wait_for_event(adapter, IECM_VC_SET_RSS_LUT,
+					  IECM_VC_SET_RSS_LUT_ERR);
+	}
+error:
+	kfree(rl);
+	return err;
 }
 
 /**
@@ -755,7 +1547,70 @@ int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get)
  */
 int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_rss_key *recv_rk;
+	struct virtchnl_rss_key *rk;
+	int i, buf_size, err = 0;
+
+	buf_size = sizeof(struct virtchnl_rss_key) +
+		   (sizeof(u8) * (adapter->rss_data.rss_key_size - 1));
+	rk = kzalloc(buf_size, GFP_KERNEL);
+	if (!rk)
+		return -ENOMEM;
+	rk->vsi_id = vport->vport_id;
+
+	if (get) {
+		err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_RSS_KEY,
+				       buf_size, (u8 *)rk);
+		if (err)
+			goto error;
+
+		err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_KEY,
+					  IECM_VC_GET_RSS_KEY_ERR);
+		if (err)
+			goto error;
+
+		recv_rk = (struct virtchnl_rss_key *)adapter->vc_msg;
+		if (adapter->rss_data.rss_key_size !=
+		    recv_rk->key_len) {
+			adapter->rss_data.rss_key_size =
+				min_t(u16, NETDEV_RSS_KEY_LEN,
+				      recv_rk->key_len);
+			kfree(adapter->rss_data.rss_key);
+			adapter->rss_data.rss_key =
+				kzalloc(adapter->rss_data.rss_key_size,
+					GFP_KERNEL);
+			if (!adapter->rss_data.rss_key) {
+				adapter->rss_data.rss_key_size = 0;
+				/* Leave the buffer clean */
+				memset(adapter->vc_msg, 0,
+				       IECM_DFLT_MBX_BUF_SIZE);
+				mutex_unlock(&adapter->vc_msg_lock);
+				err = -ENOMEM;
+				goto error;
+			}
+		}
+		memcpy(adapter->rss_data.rss_key, adapter->vc_msg,
+		       adapter->rss_data.rss_key_size);
+		/* Leave the buffer clean for next message */
+		memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE);
+		mutex_unlock(&adapter->vc_msg_lock);
+	} else {
+		rk->key_len = adapter->rss_data.rss_key_size;
+		for (i = 0; i < adapter->rss_data.rss_key_size; i++)
+			rk->key[i] = adapter->rss_data.rss_key[i];
+
+		err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				       buf_size, (u8 *)rk);
+		if (err)
+			goto error;
+
+		err = iecm_wait_for_event(adapter, IECM_VC_CONFIG_RSS_KEY,
+					  IECM_VC_CONFIG_RSS_KEY_ERR);
+	}
+error:
+	kfree(rk);
+	return err;
 }
 
 /**
@@ -766,7 +1621,26 @@ int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get)
  */
 int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_rx_ptype_decoded *rx_ptype_lkup = vport->rx_ptype_lkup;
+	static const int ptype_list[IECM_RX_SUPP_PTYPE] = {
+		0, 1,
+		11, 12,
+		22, 23, 24, 25, 26, 27, 28,
+		88, 89, 90, 91, 92, 93, 94
+	};
+	int i;
+
+	for (i = 0; i < IECM_RX_MAX_PTYPE; i++)
+		rx_ptype_lkup[i] = iecm_rx_ptype_lkup[0];
+
+	for (i = 0; i < IECM_RX_SUPP_PTYPE; i++) {
+		int j = ptype_list[i];
+
+		rx_ptype_lkup[j] = iecm_rx_ptype_lkup[i];
+		rx_ptype_lkup[j].ptype = ptype_list[i];
+	}
+
+	return 0;
 }
 
 /**
@@ -905,7 +1779,53 @@ void iecm_vport_params_buf_rel(struct iecm_adapter *adapter)
  */
 int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id)
 {
-	/* stub */
+	switch (adapter->state) {
+	case __IECM_STARTUP:
+		if (iecm_send_ver_msg(adapter))
+			goto init_failed;
+		adapter->state = __IECM_VER_CHECK;
+		goto restart;
+	case __IECM_VER_CHECK:
+		if (iecm_recv_ver_msg(adapter))
+			goto init_failed;
+		adapter->state = __IECM_GET_CAPS;
+		if (adapter->dev_ops.vc_ops.get_caps(adapter))
+			goto init_failed;
+		goto restart;
+	case __IECM_GET_CAPS:
+		if (iecm_recv_get_caps_msg(adapter))
+			goto init_failed;
+		if (iecm_send_create_vport_msg(adapter))
+			goto init_failed;
+		adapter->state = __IECM_GET_DFLT_VPORT_PARAMS;
+		goto restart;
+	case __IECM_GET_DFLT_VPORT_PARAMS:
+		if (iecm_recv_create_vport_msg(adapter, vport_id))
+			goto init_failed;
+		adapter->state = __IECM_INIT_SW;
+		break;
+	case __IECM_INIT_SW:
+		break;
+	default:
+		dev_err(&adapter->pdev->dev, "Device is in bad state: %d\n",
+			adapter->state);
+		goto init_failed;
+	}
+
+	return 0;
+restart:
+	queue_delayed_work(adapter->init_wq, &adapter->init_task,
+			   msecs_to_jiffies(30));
+	/* Not an error; return -EAGAIN so the state machine keeps running */
+	return -EAGAIN;
+init_failed:
+	if (++adapter->mb_wait_count > IECM_MB_MAX_ERR) {
+		dev_err(&adapter->pdev->dev, "Failed to establish mailbox communications with hardware\n");
+		return -EFAULT;
+	}
+	adapter->state = __IECM_STARTUP;
+	queue_delayed_work(adapter->init_wq, &adapter->init_task, HZ);
+	return -EAGAIN;
 }
 EXPORT_SYMBOL(iecm_vc_core_init);
 
@@ -953,7 +1873,32 @@ iecm_vport_get_queue_ids(u16 *qids, int num_qids,
 			 enum virtchnl_queue_type q_type,
 			 struct virtchnl_queue_chunks *chunks)
 {
-	/* stub */
+	int num_chunks = chunks->num_chunks;
+	struct virtchnl_queue_chunk *chunk;
+	int num_q_id_filled = 0;
+	int start_q_id;
+	int num_q;
+	int i;
+
+	while (num_chunks) {
+		chunk = &chunks->chunks[num_chunks - 1];
+		if (chunk->type == q_type) {
+			num_q = chunk->num_queues;
+			start_q_id = chunk->start_queue_id;
+			for (i = 0; i < num_q; i++) {
+				if ((num_q_id_filled + i) < num_qids) {
+					qids[num_q_id_filled + i] = start_q_id;
+					start_q_id++;
+				} else {
+					break;
+				}
+			}
+			num_q_id_filled = num_q_id_filled + i;
+		}
+		num_chunks--;
+	}
+
+	return num_q_id_filled;
 }
 
 /**
@@ -970,7 +1915,80 @@ static int
 __iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids,
 			    int num_qids, enum virtchnl_queue_type q_type)
 {
-	/* stub */
+	struct iecm_queue *q;
+	int i, j, k = 0;
+
+	switch (q_type) {
+	case VIRTCHNL_QUEUE_TYPE_TX:
+		for (i = 0; i < vport->num_txq_grp; i++) {
+			struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+			for (j = 0; j < tx_qgrp->num_txq; j++) {
+				if (k < num_qids) {
+					tx_qgrp->txqs[j].q_id = qids[k];
+					tx_qgrp->txqs[j].q_type =
+						VIRTCHNL_QUEUE_TYPE_TX;
+					k++;
+				} else {
+					break;
+				}
+			}
+		}
+		break;
+	case VIRTCHNL_QUEUE_TYPE_RX:
+		for (i = 0; i < vport->num_rxq_grp; i++) {
+			struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+			int num_rxq;
+
+			if (iecm_is_queue_model_split(vport->rxq_model))
+				num_rxq = rx_qgrp->splitq.num_rxq_sets;
+			else
+				num_rxq = rx_qgrp->singleq.num_rxq;
+
+			for (j = 0; j < num_rxq && k < num_qids; j++, k++) {
+				if (iecm_is_queue_model_split(vport->rxq_model))
+					q = &rx_qgrp->splitq.rxq_sets[j].rxq;
+				else
+					q = &rx_qgrp->singleq.rxqs[j];
+				q->q_id = qids[k];
+				q->q_type = VIRTCHNL_QUEUE_TYPE_RX;
+			}
+		}
+		break;
+	case VIRTCHNL_QUEUE_TYPE_TX_COMPLETION:
+		for (i = 0; i < vport->num_txq_grp; i++) {
+			struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+			if (k < num_qids) {
+				tx_qgrp->complq->q_id = qids[k];
+				tx_qgrp->complq->q_type =
+					VIRTCHNL_QUEUE_TYPE_TX_COMPLETION;
+				k++;
+			} else {
+				break;
+			}
+		}
+		break;
+	case VIRTCHNL_QUEUE_TYPE_RX_BUFFER:
+		for (i = 0; i < vport->num_rxq_grp; i++) {
+			struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) {
+				if (k < num_qids) {
+					q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+					q->q_id = qids[k];
+					q->q_type =
+						VIRTCHNL_QUEUE_TYPE_RX_BUFFER;
+					k++;
+				} else {
+					break;
+				}
+			}
+		}
+		break;
+	}
+
+	return k;
 }
 
 /**
@@ -982,7 +2000,76 @@ __iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids,
  */
 static int iecm_vport_queue_ids_init(struct iecm_vport *vport)
 {
-	/* stub */
+	struct virtchnl_create_vport *vport_params;
+	struct virtchnl_queue_chunks *chunks;
+	enum virtchnl_queue_type q_type;
+	/* We should never deal with more than 256 queues of the same type */
+#define IECM_MAX_QIDS	256
+	u16 qids[IECM_MAX_QIDS];
+	int num_ids;
+
+	if (vport->adapter->config_data.num_req_qs) {
+		struct virtchnl_add_queues *vc_aq =
+			(struct virtchnl_add_queues *)
+			vport->adapter->config_data.req_qs_chunks;
+		chunks = &vc_aq->chunks;
+	} else {
+		vport_params = (struct virtchnl_create_vport *)
+				vport->adapter->vport_params_recvd[0];
+		chunks = &vport_params->chunks;
+		/* compare vport_params num queues with vport num queues */
+		if (vport_params->num_tx_q != vport->num_txq ||
+		    vport_params->num_rx_q != vport->num_rxq ||
+		    vport_params->num_tx_complq != vport->num_complq ||
+		    vport_params->num_rx_bufq != vport->num_bufq)
+			return -EINVAL;
+	}
+
+	num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS,
+					   VIRTCHNL_QUEUE_TYPE_TX,
+					   chunks);
+	if (num_ids != vport->num_txq)
+		return -EINVAL;
+	num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids,
+					      VIRTCHNL_QUEUE_TYPE_TX);
+	if (num_ids != vport->num_txq)
+		return -EINVAL;
+	num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS,
+					   VIRTCHNL_QUEUE_TYPE_RX,
+					   chunks);
+	if (num_ids != vport->num_rxq)
+		return -EINVAL;
+	num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids,
+					      VIRTCHNL_QUEUE_TYPE_RX);
+	if (num_ids != vport->num_rxq)
+		return -EINVAL;
+
+	if (iecm_is_queue_model_split(vport->txq_model)) {
+		q_type = VIRTCHNL_QUEUE_TYPE_TX_COMPLETION;
+		num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, q_type,
+						   chunks);
+		if (num_ids != vport->num_complq)
+			return -EINVAL;
+		num_ids = __iecm_vport_queue_ids_init(vport, qids,
+						      num_ids,
+						      q_type);
+		if (num_ids != vport->num_complq)
+			return -EINVAL;
+	}
+
+	if (iecm_is_queue_model_split(vport->rxq_model)) {
+		q_type = VIRTCHNL_QUEUE_TYPE_RX_BUFFER;
+		num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, q_type,
+						   chunks);
+		if (num_ids != vport->num_bufq)
+			return -EINVAL;
+		num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids,
+						      q_type);
+		if (num_ids != vport->num_bufq)
+			return -EINVAL;
+	}
+
+	return 0;
 }
 
 /**
@@ -993,7 +2080,16 @@ static int iecm_vport_queue_ids_init(struct iecm_vport *vport)
  */
 void iecm_vport_adjust_qs(struct iecm_vport *vport)
 {
-	/* stub */
+	struct virtchnl_create_vport vport_msg;
+
+	vport_msg.txq_model = vport->txq_model;
+	vport_msg.rxq_model = vport->rxq_model;
+	iecm_vport_calc_total_qs(&vport_msg,
+				 vport->adapter->config_data.num_req_qs);
+
+	iecm_vport_init_num_qs(vport, &vport_msg);
+	iecm_vport_calc_num_q_groups(vport);
+	iecm_vport_calc_num_q_vec(vport);
 }
 
 /**
@@ -1005,7 +2101,8 @@ void iecm_vport_adjust_qs(struct iecm_vport *vport)
  */
 static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag)
 {
-	/* stub */
+	return ((struct virtchnl_get_capabilities *)adapter->caps)->cap_flags &
+	       flag;
 }
 
 /**
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 08/15] iecm: Implement vector allocation
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (6 preceding siblings ...)
  2020-08-24 17:32 ` [net-next v5 07/15] iecm: Implement virtchnl commands Tony Nguyen
@ 2020-08-24 17:32 ` Tony Nguyen
  2020-08-24 20:41   ` Jakub Kicinski
  2020-08-24 17:33 ` [net-next v5 09/15] iecm: Init and allocate vport Tony Nguyen
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:32 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

This allocates PCI MSI-X vectors and maps them to the driver's
interrupt handling routines.
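
The allocation side follows the usual Linux MSI-X pattern (the exact
call is not shown in this hunk); how many vectors are requested depends
on the queue and RDMA vector accounting in iecm_intr_req() and
iecm_intr_distribute().  Purely as a hedged sketch of that generic
flow, with an illustrative helper name and parameters rather than the
driver's actual code:

	#include <linux/pci.h>
	#include <linux/interrupt.h>

	static int example_request_vectors(struct pci_dev *pdev, int want,
					   irq_handler_t mb_handler, void *ctx)
	{
		int got, irq;

		/* ask for up to 'want' MSI-X vectors, settle for as few as
		 * two (one for the mailbox plus one queue vector)
		 */
		got = pci_alloc_irq_vectors(pdev, 2, want, PCI_IRQ_MSIX);
		if (got < 0)
			return got;

		/* vector 0 is the mailbox; translate the index into the
		 * Linux IRQ number that request_irq()/free_irq() operate on
		 */
		irq = pci_irq_vector(pdev, 0);
		return request_irq(irq, mb_handler, 0, "example-mailbox", ctx);
	}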

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/iecm/iecm_lib.c    |  63 +-
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   | 605 +++++++++++++++++-
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   |  24 +-
 3 files changed, 668 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index 7151aa5fa95c..4037d13e4512 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -14,7 +14,11 @@ static const struct net_device_ops iecm_netdev_ops_singleq;
  */
 static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)
 {
-	/* stub */
+	int irq_num;
+
+	irq_num = adapter->msix_entries[0].vector;
+	synchronize_irq(irq_num);
+	free_irq(irq_num, adapter);
 }
 
 /**
@@ -43,7 +47,12 @@ static void iecm_intr_rel(struct iecm_adapter *adapter)
  */
 static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data)
 {
-	/* stub */
+	struct iecm_adapter *adapter = (struct iecm_adapter *)data;
+
+	set_bit(__IECM_MB_INTR_TRIGGER, adapter->flags);
+	queue_delayed_work(adapter->serv_wq, &adapter->serv_task,
+			   msecs_to_jiffies(0));
+	return IRQ_HANDLED;
 }
 
 /**
@@ -52,7 +61,12 @@ static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data)
  */
 static void iecm_mb_irq_enable(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct iecm_intr_reg *intr = &adapter->mb_vector.intr_reg;
+	struct iecm_hw *hw = &adapter->hw;
+	u32 val;
+
+	val = intr->dyn_ctl_intena_m | intr->dyn_ctl_itridx_m;
+	writel_relaxed(val, hw->hw_addr + intr->dyn_ctl);
 }
 
 /**
@@ -61,7 +75,22 @@ static void iecm_mb_irq_enable(struct iecm_adapter *adapter)
  */
 static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct iecm_q_vector *mb_vector = &adapter->mb_vector;
+	int irq_num, mb_vidx = 0, err;
+
+	irq_num = adapter->msix_entries[mb_vidx].vector;
+	snprintf(mb_vector->name, sizeof(mb_vector->name) - 1,
+		 "%s-%s-%d", dev_driver_string(&adapter->pdev->dev),
+		 "Mailbox", mb_vidx);
+	err = request_irq(irq_num, adapter->irq_mb_handler, 0,
+			  mb_vector->name, adapter);
+	if (err) {
+		dev_err(&adapter->pdev->dev,
+			"Request_irq for mailbox failed, error: %d\n", err);
+		return err;
+	}
+	set_bit(__IECM_MB_INTR_MODE, adapter->flags);
+	return 0;
 }
 
 /**
@@ -73,7 +102,16 @@ static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter)
  */
 static void iecm_get_mb_vec_id(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct virtchnl_vector_chunks *vchunks;
+	struct virtchnl_vector_chunk *chunk;
+
+	if (adapter->req_vec_chunks) {
+		vchunks = &adapter->req_vec_chunks->vchunks;
+		chunk = &vchunks->num_vchunk[0];
+		adapter->mb_vector.v_idx = chunk->start_vector_id;
+	} else {
+		adapter->mb_vector.v_idx = 0;
+	}
 }
 
 /**
@@ -82,7 +120,13 @@ static void iecm_get_mb_vec_id(struct iecm_adapter *adapter)
  */
 static int iecm_mb_intr_init(struct iecm_adapter *adapter)
 {
-	/* stub */
+	iecm_get_mb_vec_id(adapter);
+	adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter);
+	adapter->irq_mb_handler = iecm_mb_intr_clean;
+
+	return iecm_mb_intr_req_irq(adapter);
 }
 
 /**
@@ -94,7 +138,12 @@ static int iecm_mb_intr_init(struct iecm_adapter *adapter)
  */
 static void iecm_intr_distribute(struct iecm_adapter *adapter)
 {
-	/* stub */
+	struct iecm_vport *vport;
+
+	vport = adapter->vports[0];
+	if (adapter->num_msix_entries != adapter->num_req_msix)
+		vport->num_q_vectors = adapter->num_msix_entries -
+				       IECM_MAX_NONQ_VEC - IECM_MIN_RDMA_VEC;
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
index e2da7dbc2ced..8214de2506af 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -1011,7 +1011,16 @@ iecm_vport_intr_clean_queues(int __always_unused irq, void *data)
  */
 static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport)
 {
-	/* stub */
+	int q_idx;
+
+	if (!vport->netdev)
+		return;
+
+	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx];
+
+		napi_disable(&q_vector->napi);
+	}
 }
 
 /**
@@ -1022,7 +1031,44 @@ static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport)
  */
 void iecm_vport_intr_rel(struct iecm_vport *vport)
 {
-	/* stub */
+	int i, j, v_idx;
+
+	if (!vport->netdev)
+		return;
+
+	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[v_idx];
+
+		if (q_vector)
+			netif_napi_del(&q_vector->napi);
+	}
+
+	/* Clean up the mapping of queues to vectors */
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++)
+				rx_qgrp->splitq.rxq_sets[j].rxq.q_vector =
+									   NULL;
+		} else {
+			for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+				rx_qgrp->singleq.rxqs[j].q_vector = NULL;
+		}
+	}
+
+	if (iecm_is_queue_model_split(vport->txq_model)) {
+		for (i = 0; i < vport->num_txq_grp; i++)
+			vport->txq_grps[i].complq->q_vector = NULL;
+	} else {
+		for (i = 0; i < vport->num_txq_grp; i++) {
+			for (j = 0; j < vport->txq_grps[i].num_txq; j++)
+				vport->txq_grps[i].txqs[j].q_vector = NULL;
+		}
+	}
+
+	kfree(vport->q_vectors);
+	vport->q_vectors = NULL;
 }
 
 /**
@@ -1031,7 +1077,25 @@ void iecm_vport_intr_rel(struct iecm_vport *vport)
  */
 static void iecm_vport_intr_rel_irq(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	int vector;
+
+	for (vector = 0; vector < vport->num_q_vectors; vector++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[vector];
+		int irq_num, vidx;
+
+		/* free only the IRQs that were actually requested */
+		if (!q_vector)
+			continue;
+
+		vidx = vector + vport->q_vector_base;
+		irq_num = adapter->msix_entries[vidx].vector;
+
+		/* clear the affinity_mask in the IRQ descriptor */
+		irq_set_affinity_hint(irq_num, NULL);
+		synchronize_irq(irq_num);
+		free_irq(irq_num, q_vector);
+	}
 }
 
 /**
@@ -1040,7 +1104,13 @@ static void iecm_vport_intr_rel_irq(struct iecm_vport *vport)
  */
 void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_q_vector *q_vector = vport->q_vectors;
+	struct iecm_hw *hw = &vport->adapter->hw;
+	int q_idx;
+
+	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++)
+		writel_relaxed(0, hw->hw_addr +
+				  q_vector[q_idx].intr_reg.dyn_ctl);
 }
 
 /**
@@ -1052,12 +1122,42 @@ void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport)
 static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector,
 					const int type, u16 itr)
 {
-	/* stub */
+	u32 itr_val;
+
+	itr &= IECM_ITR_MASK;
+	/* Don't clear PBA because that can cause lost interrupts that
+	 * came in while we were cleaning/polling
+	 */
+	itr_val = q_vector->intr_reg.dyn_ctl_intena_m |
+		  (type << q_vector->intr_reg.dyn_ctl_itridx_s) |
+		  (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1));
+
+	return itr_val;
 }
 
 static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector)
 {
-	/* stub */
+	unsigned int divisor;
+
+	switch (q_vector->vport->adapter->link_speed) {
+	case VIRTCHNL_LINK_SPEED_40GB:
+		divisor = IECM_ITR_ADAPTIVE_MIN_INC * 1024;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+	case VIRTCHNL_LINK_SPEED_20GB:
+		divisor = IECM_ITR_ADAPTIVE_MIN_INC * 512;
+		break;
+	default:
+	case VIRTCHNL_LINK_SPEED_10GB:
+		divisor = IECM_ITR_ADAPTIVE_MIN_INC * 256;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+	case VIRTCHNL_LINK_SPEED_100MB:
+		divisor = IECM_ITR_ADAPTIVE_MIN_INC * 32;
+		break;
+	}
+
+	return divisor;
 }
 
 /**
@@ -1078,7 +1178,206 @@ static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector,
 					struct iecm_itr *itr,
 					enum virtchnl_queue_type q_type)
 {
-	/* stub */
+	unsigned int avg_wire_size, packets = 0, bytes = 0, new_itr;
+	unsigned long next_update = jiffies;
+
+	/* If dynamic ITR is not enabled for this queue, leave the current
+	 * setting alone so we take ourselves out of the equation.
+	 */
+	if (!IECM_ITR_IS_DYNAMIC(itr->target_itr))
+		return;
+
+	/* For Rx we want to push the delay up and default to low latency.
+	 * For Tx we want to pull the delay down and default to high latency.
+	 */
+	new_itr = q_type == VIRTCHNL_QUEUE_TYPE_RX ?
+	      IECM_ITR_ADAPTIVE_MIN_USECS | IECM_ITR_ADAPTIVE_LATENCY :
+	      IECM_ITR_ADAPTIVE_MAX_USECS | IECM_ITR_ADAPTIVE_LATENCY;
+
+	/* If we didn't update within up to 1 - 2 jiffies we can assume
+	 * that either packets are coming in so slow there hasn't been
+	 * any work, or that there is so much work that NAPI is dealing
+	 * with interrupt moderation and we don't need to do anything.
+	 */
+	if (time_after(next_update, itr->next_update))
+		goto clear_counts;
+
+	/* If itr_countdown is set it means we programmed an ITR within
+	 * the last 4 interrupt cycles. This has a side effect of us
+	 * potentially firing an early interrupt. In order to work around
+	 * this we need to throw out any data received for a few
+	 * interrupts following the update.
+	 */
+	if (q_vector->itr_countdown) {
+		new_itr = itr->target_itr;
+		goto clear_counts;
+	}
+
+	if (q_type == VIRTCHNL_QUEUE_TYPE_TX) {
+		packets = itr->stats.tx.packets;
+		bytes = itr->stats.tx.bytes;
+	}
+
+	if (q_type == VIRTCHNL_QUEUE_TYPE_RX) {
+		packets = itr->stats.rx.packets;
+		bytes = itr->stats.rx.bytes;
+
+		/* If there are 1 to 4 RX packets and bytes are less than
+		 * 9000 assume insufficient data to use bulk rate limiting
+		 * approach unless Tx is already in bulk rate limiting. We
+		 * are likely latency driven.
+		 */
+		if (packets && packets < 4 && bytes < 9000 &&
+		    (q_vector->tx[0]->itr.target_itr &
+		     IECM_ITR_ADAPTIVE_LATENCY)) {
+			new_itr = IECM_ITR_ADAPTIVE_LATENCY;
+			goto adjust_by_size;
+		}
+	} else if (packets < 4) {
+		/* If we have Tx and Rx ITR maxed and Tx ITR is running in
+		 * bulk mode and we are receiving 4 or fewer packets just
+		 * reset the ITR_ADAPTIVE_LATENCY bit for latency mode so
+		 * that the Rx can relax.
+		 */
+		if (itr->target_itr == IECM_ITR_ADAPTIVE_MAX_USECS &&
+		    ((q_vector->rx[0]->itr.target_itr & IECM_ITR_MASK) ==
+		     IECM_ITR_ADAPTIVE_MAX_USECS))
+			goto clear_counts;
+	} else if (packets > 32) {
+		/* If we have processed over 32 packets in a single interrupt
+		 * for Tx assume we need to switch over to "bulk" mode.
+		 */
+		itr->target_itr &= ~IECM_ITR_ADAPTIVE_LATENCY;
+	}
+
+	/* We have no packets to actually measure against. This means
+	 * either one of the other queues on this vector is active or
+	 * we are a Tx queue doing TSO with too high of an interrupt rate.
+	 *
+	 * Between 4 and 56 we can assume that our current interrupt delay
+	 * is only slightly too low. As such we should increase it by a small
+	 * fixed amount.
+	 */
+	if (packets < 56) {
+		new_itr = itr->target_itr + IECM_ITR_ADAPTIVE_MIN_INC;
+		if ((new_itr & IECM_ITR_MASK) > IECM_ITR_ADAPTIVE_MAX_USECS) {
+			new_itr &= IECM_ITR_ADAPTIVE_LATENCY;
+			new_itr += IECM_ITR_ADAPTIVE_MAX_USECS;
+		}
+		goto clear_counts;
+	}
+
+	if (packets <= 256) {
+		new_itr = min(q_vector->tx[0]->itr.current_itr,
+			      q_vector->rx[0]->itr.current_itr);
+		new_itr &= IECM_ITR_MASK;
+
+		/* Between 56 and 112 is our "goldilocks" zone where we are
+		 * working out "just right". Just report that our current
+		 * ITR is good for us.
+		 */
+		if (packets <= 112)
+			goto clear_counts;
+
+		/* If packet count is 128 or greater we are likely looking
+		 * at a slight overrun of the delay we want. Try halving
+		 * our delay to see if that will cut the number of packets
+		 * in half per interrupt.
+		 */
+		new_itr /= 2;
+		new_itr &= IECM_ITR_MASK;
+		if (new_itr < IECM_ITR_ADAPTIVE_MIN_USECS)
+			new_itr = IECM_ITR_ADAPTIVE_MIN_USECS;
+
+		goto clear_counts;
+	}
+
+	/* The paths below assume we are dealing with a bulk ITR since
+	 * number of packets is greater than 256. We are just going to have
+	 * to compute a value and try to bring the count under control,
+	 * though for smaller packet sizes there isn't much we can do as
+	 * NAPI polling will likely be kicking in sooner rather than later.
+	 */
+	new_itr = IECM_ITR_ADAPTIVE_BULK;
+
+adjust_by_size:
+	/* If packet counts are 256 or greater we can assume we have a gross
+	 * overestimation of what the rate should be. Instead of trying to fine
+	 * tune it just use the formula below to try and dial in an exact value
+	 * given the current packet size of the frame.
+	 */
+	avg_wire_size = bytes / packets;
+
+	/* The following is a crude approximation of:
+	 *  wmem_default / (size + overhead) = desired_pkts_per_int
+	 *  rate / bits_per_byte / (size + Ethernet overhead) = pkt_rate
+	 *  (desired_pkt_rate / pkt_rate) * usecs_per_sec = ITR value
+	 *
+	 * Assuming wmem_default is 212992 and overhead is 640 bytes per
+	 * packet, (256 skb, 64 headroom, 320 shared info), we can reduce the
+	 * formula down to
+	 *
+	 *  (170 * (size + 24)) / (size + 640) = ITR
+	 *
+	 * We first do some math on the packet size and then finally bit shift
+	 * by 8 after rounding up. We also have to account for PCIe link speed
+	 * difference as ITR scales based on this.
+	 */
+	if (avg_wire_size <= 60) {
+		/* Start at 250k ints/sec */
+		avg_wire_size = 4096;
+	} else if (avg_wire_size <= 380) {
+		/* 250K ints/sec to 60K ints/sec */
+		avg_wire_size *= 40;
+		avg_wire_size += 1696;
+	} else if (avg_wire_size <= 1084) {
+		/* 60K ints/sec to 36K ints/sec */
+		avg_wire_size *= 15;
+		avg_wire_size += 11452;
+	} else if (avg_wire_size <= 1980) {
+		/* 36K ints/sec to 30K ints/sec */
+		avg_wire_size *= 5;
+		avg_wire_size += 22420;
+	} else {
+		/* plateau at a limit of 30K ints/sec */
+		avg_wire_size = 32256;
+	}
+
+	/* If we are in low latency mode halve our delay which doubles the
+	 * rate to somewhere between 100K to 16K ints/sec
+	 */
+	if (new_itr & IECM_ITR_ADAPTIVE_LATENCY)
+		avg_wire_size /= 2;
+
+	/* Resultant value is 256 times larger than it needs to be. This
+	 * gives us room to adjust the value as needed to either increase
+	 * or decrease the value based on link speeds of 10G, 2.5G, 1G, etc.
+	 *
+	 * Use addition as we have already recorded the new latency flag
+	 * for the ITR value.
+	 */
+	new_itr += DIV_ROUND_UP(avg_wire_size, iecm_itr_divisor(q_vector)) *
+		   IECM_ITR_ADAPTIVE_MIN_INC;
+
+	if ((new_itr & IECM_ITR_MASK) > IECM_ITR_ADAPTIVE_MAX_USECS) {
+		new_itr &= IECM_ITR_ADAPTIVE_LATENCY;
+		new_itr += IECM_ITR_ADAPTIVE_MAX_USECS;
+	}
+
+clear_counts:
+	/* write back value */
+	itr->target_itr = new_itr;
+
+	/* next update should occur within next jiffy */
+	itr->next_update = next_update + 1;
+
+	if (q_type == VIRTCHNL_QUEUE_TYPE_RX) {
+		itr->stats.rx.bytes = 0;
+		itr->stats.rx.packets = 0;
+	} else if (q_type == VIRTCHNL_QUEUE_TYPE_TX) {
+		itr->stats.tx.bytes = 0;
+		itr->stats.tx.packets = 0;
+	}
 }
 
 /**
@@ -1087,7 +1386,59 @@ static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector,
  */
 void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector)
 {
-	/* stub */
+	struct iecm_hw *hw = &q_vector->vport->adapter->hw;
+	struct iecm_itr *tx_itr = &q_vector->tx[0]->itr;
+	struct iecm_itr *rx_itr = &q_vector->rx[0]->itr;
+	u32 intval;
+
+	/* These will do nothing if dynamic updates are not enabled */
+	iecm_vport_intr_set_new_itr(q_vector, tx_itr, q_vector->tx[0]->q_type);
+	iecm_vport_intr_set_new_itr(q_vector, rx_itr, q_vector->rx[0]->q_type);
+
+	/* This block of logic allows us to get away with only updating
+	 * one ITR value with each interrupt. The idea is to perform a
+	 * pseudo-lazy update with the following criteria.
+	 *
+	 * 1. Rx is given higher priority than Tx if both are in same state
+	 * 2. If we must reduce an ITR that is given highest priority.
+	 * 3. We then give priority to increasing ITR based on amount.
+	 */
+	if (rx_itr->target_itr < rx_itr->current_itr) {
+		/* Rx ITR needs to be reduced, this is highest priority */
+		intval = iecm_vport_intr_buildreg_itr(q_vector,
+						      rx_itr->itr_idx,
+						      rx_itr->target_itr);
+		rx_itr->current_itr = rx_itr->target_itr;
+		q_vector->itr_countdown = ITR_COUNTDOWN_START;
+	} else if ((tx_itr->target_itr < tx_itr->current_itr) ||
+		   ((rx_itr->target_itr - rx_itr->current_itr) <
+		    (tx_itr->target_itr - tx_itr->current_itr))) {
+		/* Tx ITR needs to be reduced, this is second priority
+		 * Tx ITR needs to be increased more than Rx, fourth priority
+		 */
+		intval = iecm_vport_intr_buildreg_itr(q_vector,
+						      tx_itr->itr_idx,
+						      tx_itr->target_itr);
+		tx_itr->current_itr = tx_itr->target_itr;
+		q_vector->itr_countdown = ITR_COUNTDOWN_START;
+	} else if (rx_itr->current_itr != rx_itr->target_itr) {
+		/* Rx ITR needs to be increased, third priority */
+		intval = iecm_vport_intr_buildreg_itr(q_vector,
+						      rx_itr->itr_idx,
+						      rx_itr->target_itr);
+		rx_itr->current_itr = rx_itr->target_itr;
+		q_vector->itr_countdown = ITR_COUNTDOWN_START;
+	} else {
+		/* No ITR update, lowest priority */
+		intval = iecm_vport_intr_buildreg_itr(q_vector,
+						      VIRTCHNL_ITR_IDX_NO_ITR,
+						      0);
+		if (q_vector->itr_countdown)
+			q_vector->itr_countdown--;
+	}
+
+	writel_relaxed(intval, hw->hw_addr +
+			       q_vector->intr_reg.dyn_ctl);
 }
 
 /**
@@ -1098,7 +1449,40 @@ void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector)
 static int
 iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	int vector, err, irq_num, vidx;
+
+	for (vector = 0; vector < vport->num_q_vectors; vector++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[vector];
+
+		vidx = vector + vport->q_vector_base;
+		irq_num = adapter->msix_entries[vidx].vector;
+
+		snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+			 "%s-%s-%d", basename, "TxRx", vidx);
+
+		err = request_irq(irq_num, vport->irq_q_handler, 0,
+				  q_vector->name, q_vector);
+		if (err) {
+			netdev_err(vport->netdev,
+				   "Request_irq failed, error: %d\n", err);
+			goto free_q_irqs;
+		}
+		/* assign the mask for this IRQ */
+		irq_set_affinity_hint(irq_num, &q_vector->affinity_mask);
+	}
+
+	return 0;
+
+free_q_irqs:
+	while (vector) {
+		vector--;
+		vidx = vector + vport->q_vector_base;
+		irq_num = adapter->msix_entries[vidx].vector;
+		free_irq(irq_num,
+			 &vport->q_vectors[vector]);
+	}
+	return err;
 }
 
 /**
@@ -1107,7 +1491,14 @@ iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename)
  */
 void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport)
 {
-	/* stub */
+	int q_idx;
+
+	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx];
+
+		if (q_vector->num_txq || q_vector->num_rxq)
+			iecm_vport_intr_update_itr_ena_irq(q_vector);
+	}
 }
 
 /**
@@ -1116,7 +1507,9 @@ void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport)
  */
 void iecm_vport_intr_deinit(struct iecm_vport *vport)
 {
-	/* stub */
+	iecm_vport_intr_napi_dis_all(vport);
+	iecm_vport_intr_dis_irq_all(vport);
+	iecm_vport_intr_rel_irq(vport);
 }
 
 /**
@@ -1126,7 +1519,16 @@ void iecm_vport_intr_deinit(struct iecm_vport *vport)
 static void
 iecm_vport_intr_napi_ena_all(struct iecm_vport *vport)
 {
-	/* stub */
+	int q_idx;
+
+	if (!vport->netdev)
+		return;
+
+	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx];
+
+		napi_enable(&q_vector->napi);
+	}
 }
 
 /**
@@ -1175,7 +1577,65 @@ static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
  */
 static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport)
 {
-	/* stub */
+	int i, j, k = 0, num_rxq, num_txq;
+	struct iecm_rxq_group *rx_qgrp;
+	struct iecm_txq_group *tx_qgrp;
+	struct iecm_queue *q;
+	int q_index;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		rx_qgrp = &vport->rxq_grps[i];
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			num_rxq = rx_qgrp->splitq.num_rxq_sets;
+		else
+			num_rxq = rx_qgrp->singleq.num_rxq;
+
+		for (j = 0; j < num_rxq; j++) {
+			if (k >= vport->num_q_vectors)
+				k = k % vport->num_q_vectors;
+
+			if (iecm_is_queue_model_split(vport->rxq_model))
+				q = &rx_qgrp->splitq.rxq_sets[j].rxq;
+			else
+				q = &rx_qgrp->singleq.rxqs[j];
+			q->q_vector = &vport->q_vectors[k];
+			q_index = q->q_vector->num_rxq;
+			q->q_vector->rx[q_index] = q;
+			q->q_vector->num_rxq++;
+
+			k++;
+		}
+	}
+	k = 0;
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		tx_qgrp = &vport->txq_grps[i];
+		num_txq = tx_qgrp->num_txq;
+
+		if (iecm_is_queue_model_split(vport->txq_model)) {
+			if (k >= vport->num_q_vectors)
+				k = k % vport->num_q_vectors;
+
+			q = tx_qgrp->complq;
+			q->q_vector = &vport->q_vectors[k];
+			q_index = q->q_vector->num_txq;
+			q->q_vector->tx[q_index] = q;
+			q->q_vector->num_txq++;
+			k++;
+		} else {
+			for (j = 0; j < num_txq; j++) {
+				if (k >= vport->num_q_vectors)
+					k = k % vport->num_q_vectors;
+
+				q = &tx_qgrp->txqs[j];
+				q->q_vector = &vport->q_vectors[k];
+				q_index = q->q_vector->num_txq;
+				q->q_vector->tx[q_index] = q;
+				q->q_vector->num_txq++;
+
+				k++;
+			}
+		}
+	}
 }
 
 /**
@@ -1186,7 +1646,38 @@ static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport)
  */
 static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct iecm_q_vector *q_vector;
+	int i;
+
+	if (adapter->req_vec_chunks) {
+		struct virtchnl_vector_chunks *vchunks;
+		struct virtchnl_alloc_vectors *ac;
+		/* We may never deal with more than 256 same-type vectors */
+#define IECM_MAX_VECIDS	256
+		u16 vecids[IECM_MAX_VECIDS];
+		int num_ids;
+
+		ac = adapter->req_vec_chunks;
+		vchunks = &ac->vchunks;
+
+		num_ids = iecm_vport_get_vec_ids(vecids, IECM_MAX_VECIDS,
+						 vchunks);
+		if (num_ids != adapter->num_msix_entries)
+			return -EFAULT;
+
+		for (i = 0; i < vport->num_q_vectors; i++) {
+			q_vector = &vport->q_vectors[i];
+			q_vector->v_idx = vecids[i + vport->q_vector_base];
+		}
+	} else {
+		for (i = 0; i < vport->num_q_vectors; i++) {
+			q_vector = &vport->q_vectors[i];
+			q_vector->v_idx = i + vport->q_vector_base;
+		}
+	}
+
+	return 0;
 }
 
 /**
@@ -1198,7 +1689,64 @@ static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport)
  */
 int iecm_vport_intr_alloc(struct iecm_vport *vport)
 {
-	/* stub */
+	int txqs_per_vector, rxqs_per_vector;
+	struct iecm_q_vector *q_vector;
+	int v_idx, err = 0;
+
+	vport->q_vectors = kcalloc(vport->num_q_vectors,
+				   sizeof(struct iecm_q_vector), GFP_KERNEL);
+
+	if (!vport->q_vectors)
+		return -ENOMEM;
+
+	txqs_per_vector = DIV_ROUND_UP(vport->num_txq, vport->num_q_vectors);
+	rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq, vport->num_q_vectors);
+
+	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+		q_vector = &vport->q_vectors[v_idx];
+		q_vector->vport = vport;
+		q_vector->itr_countdown = ITR_COUNTDOWN_START;
+
+		q_vector->tx = kcalloc(txqs_per_vector,
+				       sizeof(struct iecm_queue *),
+				       GFP_KERNEL);
+		if (!q_vector->tx) {
+			err = -ENOMEM;
+			goto free_vport_q_vec;
+		}
+
+		q_vector->rx = kcalloc(rxqs_per_vector,
+				       sizeof(struct iecm_queue *),
+				       GFP_KERNEL);
+		if (!q_vector->rx) {
+			err = -ENOMEM;
+			goto free_vport_q_vec_tx;
+		}
+
+		/* only set affinity_mask if the CPU is online */
+		if (cpu_online(v_idx))
+			cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
+
+		/* Register the NAPI handler */
+		if (vport->netdev) {
+			if (iecm_is_queue_model_split(vport->txq_model))
+				netif_napi_add(vport->netdev, &q_vector->napi,
+					       iecm_vport_splitq_napi_poll,
+					       NAPI_POLL_WEIGHT);
+			else
+				netif_napi_add(vport->netdev, &q_vector->napi,
+					       iecm_vport_singleq_napi_poll,
+					       NAPI_POLL_WEIGHT);
+		}
+	}
+
+	return 0;
+free_vport_q_vec_tx:
+	kfree(q_vector->tx);
+free_vport_q_vec:
+	kfree(vport->q_vectors);
+
+	return err;
 }
 
 /**
@@ -1209,7 +1757,32 @@ int iecm_vport_intr_alloc(struct iecm_vport *vport)
  */
 int iecm_vport_intr_init(struct iecm_vport *vport)
 {
-	/* stub */
+	char int_name[IECM_INT_NAME_STR_LEN];
+	int err = 0;
+
+	err = iecm_vport_intr_init_vec_idx(vport);
+	if (err)
+		goto handle_err;
+
+	iecm_vport_intr_map_vector_to_qs(vport);
+	iecm_vport_intr_napi_ena_all(vport);
+
+	vport->adapter->dev_ops.reg_ops.intr_reg_init(vport);
+
+	snprintf(int_name, sizeof(int_name) - 1, "%s-%s",
+		 dev_driver_string(&vport->adapter->pdev->dev),
+		 vport->netdev->name);
+
+	err = iecm_vport_intr_req_irq(vport, int_name);
+	if (err)
+		goto unroll_vectors_alloc;
+
+	iecm_vport_intr_ena_irq_all(vport);
+	goto handle_err;
+unroll_vectors_alloc:
+	iecm_vport_intr_rel(vport);
+handle_err:
+	return err;
 }
 EXPORT_SYMBOL(iecm_vport_calc_num_q_vec);
 
diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
index 2f716ec42cf2..b8cf8eb15052 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -1855,7 +1855,29 @@ int
 iecm_vport_get_vec_ids(u16 *vecids, int num_vecids,
 		       struct virtchnl_vector_chunks *chunks)
 {
-	/* stub */
+	int num_chunks = chunks->num_vector_chunks;
+	struct virtchnl_vector_chunk *chunk;
+	int num_vecid_filled = 0;
+	int start_vecid;
+	int num_vec;
+	int i, j;
+
+	for (j = 0; j < num_chunks; j++) {
+		chunk = &chunks->num_vchunk[j];
+		num_vec = chunk->num_vectors;
+		start_vecid = chunk->start_vector_id;
+		for (i = 0; i < num_vec; i++) {
+			if ((num_vecid_filled + i) < num_vecids) {
+				vecids[num_vecid_filled + i] = start_vecid;
+				start_vecid++;
+			} else {
+				break;
+			}
+		}
+		num_vecid_filled = num_vecid_filled + i;
+	}
+
+	return num_vecid_filled;
 }
 
 /**
-- 
2.26.2


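For reference, below is a minimal user-space sketch of the "adjust by size"
approximation used in iecm_vport_intr_set_new_itr() above. The granularity
step and the divisor passed in main() are assumed values for illustration
(the driver derives the real divisor from link speed in iecm_itr_divisor());
only the piecewise wire-size mapping mirrors the hunk.

#include <stdio.h>
#include <stdbool.h>

#define ITR_ADAPTIVE_MIN_INC	2	/* assumed ITR granularity step */

static unsigned int itr_scale_by_wire_size(unsigned int avg_wire_size,
					   bool low_latency,
					   unsigned int divisor)
{
	/* Piecewise approximation of (170 * (size + 24)) / (size + 640),
	 * kept 256 times larger than needed, as in the patch above.
	 */
	if (avg_wire_size <= 60)
		avg_wire_size = 4096;			/* ~250K ints/sec */
	else if (avg_wire_size <= 380)
		avg_wire_size = avg_wire_size * 40 + 1696;
	else if (avg_wire_size <= 1084)
		avg_wire_size = avg_wire_size * 15 + 11452;
	else if (avg_wire_size <= 1980)
		avg_wire_size = avg_wire_size * 5 + 22420;
	else
		avg_wire_size = 32256;			/* ~30K ints/sec cap */

	/* Low latency mode halves the delay, doubling the interrupt rate */
	if (low_latency)
		avg_wire_size /= 2;

	/* Round up against the link-speed divisor, scale to ITR increments */
	return ((avg_wire_size + divisor - 1) / divisor) * ITR_ADAPTIVE_MIN_INC;
}

int main(void)
{
	/* e.g. 1500-byte frames with an assumed 10G-class divisor of 512 */
	printf("ITR adjustment: %u\n",
	       itr_scale_by_wire_size(1500, false, 512));
	return 0;
}

The same mapping feeds the final clamp against IECM_ITR_ADAPTIVE_MAX_USECS in
the patch; the sketch stops at the scaled increment.
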
^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 09/15] iecm: Init and allocate vport
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (7 preceding siblings ...)
  2020-08-24 17:32 ` [net-next v5 08/15] iecm: Implement vector allocation Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-24 17:33 ` [net-next v5 10/15] iecm: Deinit vport Tony Nguyen
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Initialize vport and allocate queue resources.
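
As background for the RSS setup added below, here is a minimal user-space
sketch of the default LUT fill done by iecm_fill_dflt_rss_lut(): the lookup
table is populated by repeating the RX queue ID list end to end. The queue
IDs and LUT size in main() are made-up values for illustration only.

#include <stdio.h>
#include <stdint.h>

static void fill_dflt_rss_lut(uint16_t *lut, int lut_size,
			      const uint16_t *qid_list, int num_rxq)
{
	int num_lut_segs = lut_size / num_rxq;
	int lut_seg, i, k = 0;

	/* Repeat the queue ID list across the whole LUT */
	for (lut_seg = 0; lut_seg < num_lut_segs; lut_seg++) {
		for (i = 0; i < num_rxq; i++)
			lut[k++] = qid_list[i];
	}
}

int main(void)
{
	uint16_t qid_list[] = { 34, 35, 36, 37 };	/* hypothetical queue IDs */
	uint16_t lut[16] = { 0 };
	int i;

	fill_dflt_rss_lut(lut, 16, qid_list, 4);
	for (i = 0; i < 16; i++)
		printf("%u ", (unsigned int)lut[i]);
	printf("\n");
	return 0;
}

When rss_lut_size is not a multiple of num_rxq, the patch first seeds every
entry with a default queue ID so no slot is left uninitialized.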

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/iecm/iecm_lib.c    | 125 ++-
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   | 782 +++++++++++++++++-
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   |  36 +-
 3 files changed, 911 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index 4037d13e4512..8c5e90f275c3 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -461,7 +461,15 @@ static void iecm_vport_rel_all(struct iecm_adapter *adapter)
 void iecm_vport_set_hsplit(struct iecm_vport *vport,
 			   struct bpf_prog __always_unused *prog)
 {
-	/* stub */
+	if (prog) {
+		vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT;
+		return;
+	}
+	if (iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_HEADER_SPLIT) &&
+	    iecm_is_queue_model_split(vport->rxq_model))
+		vport->rx_hsplit_en = IECM_RX_HDR_SPLIT;
+	else
+		vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT;
 }
 
 /**
@@ -548,7 +556,19 @@ static void iecm_service_task(struct work_struct *work)
  */
 static int iecm_up_complete(struct iecm_vport *vport)
 {
-	/* stub */
+	int err;
+
+	err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq);
+	if (err)
+		return err;
+	err = netif_set_real_num_tx_queues(vport->netdev, vport->num_txq);
+	if (err)
+		return err;
+	netif_carrier_on(vport->netdev);
+	netif_tx_start_all_queues(vport->netdev);
+
+	vport->adapter->state = __IECM_UP;
+	return 0;
 }
 
 /**
@@ -557,7 +577,27 @@ static int iecm_up_complete(struct iecm_vport *vport)
  */
 static void iecm_rx_init_buf_tail(struct iecm_vport *vport)
 {
-	/* stub */
+	int i, j;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *grp = &vport->rxq_grps[i];
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) {
+				struct iecm_queue *q =
+					&grp->splitq.bufq_sets[j].bufq;
+
+				writel_relaxed(q->next_to_alloc, q->tail);
+			}
+		} else {
+			for (j = 0; j < grp->singleq.num_rxq; j++) {
+				struct iecm_queue *q =
+					&grp->singleq.rxqs[j];
+
+				writel_relaxed(q->next_to_alloc, q->tail);
+			}
+		}
+	}
 }
 
 /**
@@ -566,7 +606,80 @@ static void iecm_rx_init_buf_tail(struct iecm_vport *vport)
  */
 static int iecm_vport_open(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	int err;
+
+	if (vport->adapter->state != __IECM_DOWN)
+		return -EBUSY;
+
+	/* we do not allow interface up just yet */
+	netif_carrier_off(vport->netdev);
+
+	if (adapter->dev_ops.vc_ops.enable_vport) {
+		err = adapter->dev_ops.vc_ops.enable_vport(vport);
+		if (err)
+			return -EAGAIN;
+	}
+
+	err = adapter->dev_ops.vc_ops.vport_queue_ids_init(vport);
+	if (err) {
+		dev_err(&vport->adapter->pdev->dev,
+			"Call to queue ids init returned %d\n", err);
+		return err;
+	}
+
+	adapter->dev_ops.reg_ops.vportq_reg_init(vport);
+	iecm_rx_init_buf_tail(vport);
+
+	err = iecm_vport_intr_init(vport);
+	if (err) {
+		dev_err(&vport->adapter->pdev->dev,
+			"Call to vport interrupt init returned %d\n", err);
+		return err;
+	}
+
+	err = vport->adapter->dev_ops.vc_ops.config_queues(vport);
+	if (err)
+		goto unroll_config_queues;
+	err = vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, true);
+	if (err) {
+		dev_err(&vport->adapter->pdev->dev,
+			"Call to irq_map_unmap returned %d\n", err);
+		goto unroll_config_queues;
+	}
+	err = vport->adapter->dev_ops.vc_ops.enable_queues(vport);
+	if (err)
+		goto unroll_enable_queues;
+
+	err = vport->adapter->dev_ops.vc_ops.get_ptype(vport);
+	if (err)
+		goto unroll_get_ptype;
+
+	if (adapter->rss_data.rss_lut)
+		err = iecm_config_rss(vport);
+	else
+		err = iecm_init_rss(vport);
+	if (err)
+		goto unroll_init_rss;
+	err = iecm_up_complete(vport);
+	if (err)
+		goto unroll_up_comp;
+
+	netif_info(vport->adapter, hw, vport->netdev, "%s\n", __func__);
+
+	return 0;
+unroll_up_comp:
+	iecm_deinit_rss(vport);
+unroll_init_rss:
+	adapter->dev_ops.vc_ops.disable_vport(vport);
+unroll_get_ptype:
+	vport->adapter->dev_ops.vc_ops.disable_queues(vport);
+unroll_enable_queues:
+	vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, false);
+unroll_config_queues:
+	iecm_vport_intr_deinit(vport);
+
+	return err;
 }
 
 /**
@@ -903,7 +1016,9 @@ EXPORT_SYMBOL(iecm_shutdown);
  */
 static int iecm_open(struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_netdev_priv *np = netdev_priv(netdev);
+
+	return iecm_vport_open(np->vport);
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
index 8214de2506af..781942f934df 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -86,7 +86,38 @@ static void iecm_tx_desc_rel_all(struct iecm_vport *vport)
  */
 static int iecm_tx_buf_alloc_all(struct iecm_queue *tx_q)
 {
-	/* stub */
+	int buf_size;
+	int i = 0;
+
+	/* Allocate bookkeeping buffers only. Buffers to be supplied to HW
+	 * are allocated by kernel network stack and received as part of skb
+	 */
+	buf_size = sizeof(struct iecm_tx_buf) * tx_q->desc_count;
+	tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL);
+	if (!tx_q->tx_buf)
+		return -ENOMEM;
+
+	/* Initialize Tx buf stack for out-of-order completions if
+	 * flow scheduling offload is enabled
+	 */
+	tx_q->buf_stack.bufs =
+		kcalloc(tx_q->desc_count, sizeof(struct iecm_tx_buf *),
+			GFP_KERNEL);
+	if (!tx_q->buf_stack.bufs)
+		return -ENOMEM;
+
+	for (i = 0; i < tx_q->desc_count; i++) {
+		tx_q->buf_stack.bufs[i] =
+			kzalloc(sizeof(*tx_q->buf_stack.bufs[i]),
+				GFP_KERNEL);
+		if (!tx_q->buf_stack.bufs[i])
+			return -ENOMEM;
+	}
+
+	tx_q->buf_stack.size = tx_q->desc_count;
+	tx_q->buf_stack.top = tx_q->desc_count;
+
+	return 0;
 }
 
 /**
@@ -98,7 +129,40 @@ static int iecm_tx_buf_alloc_all(struct iecm_queue *tx_q)
  */
 static int iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq)
 {
-	/* stub */
+	struct device *dev = tx_q->dev;
+	int err = 0;
+
+	if (bufq) {
+		err = iecm_tx_buf_alloc_all(tx_q);
+		if (err)
+			goto err_alloc;
+		tx_q->size = tx_q->desc_count *
+				sizeof(struct iecm_base_tx_desc);
+	} else {
+		tx_q->size = tx_q->desc_count *
+				sizeof(struct iecm_splitq_tx_compl_desc);
+	}
+
+	/* Allocate descriptors and also round up to nearest 4K */
+	tx_q->size = ALIGN(tx_q->size, 4096);
+	tx_q->desc_ring = dmam_alloc_coherent(dev, tx_q->size, &tx_q->dma,
+					      GFP_KERNEL);
+	if (!tx_q->desc_ring) {
+		dev_info(dev, "Unable to allocate memory for the Tx descriptor ring, size=%d\n",
+			 tx_q->size);
+		err = -ENOMEM;
+		goto err_alloc;
+	}
+
+	tx_q->next_to_alloc = 0;
+	tx_q->next_to_use = 0;
+	tx_q->next_to_clean = 0;
+	set_bit(__IECM_Q_GEN_CHK, tx_q->flags);
+
+err_alloc:
+	if (err)
+		iecm_tx_desc_rel(tx_q, bufq);
+	return err;
 }
 
 /**
@@ -109,7 +173,41 @@ static int iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq)
  */
 static int iecm_tx_desc_alloc_all(struct iecm_vport *vport)
 {
-	/* stub */
+	struct pci_dev *pdev = vport->adapter->pdev;
+	int err = 0;
+	int i, j;
+
+	/* Setup buffer queues. In the single queue model, buffer and
+	 * completion queues are the same.
+	 */
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		for (j = 0; j < vport->txq_grps[i].num_txq; j++) {
+			err = iecm_tx_desc_alloc(&vport->txq_grps[i].txqs[j],
+						 true);
+			if (err) {
+				dev_err(&pdev->dev,
+					"Allocation for Tx Queue %u failed\n",
+					i);
+				goto err_out;
+			}
+		}
+
+		if (iecm_is_queue_model_split(vport->txq_model)) {
+			/* Setup completion queues */
+			err = iecm_tx_desc_alloc(vport->txq_grps[i].complq,
+						 false);
+			if (err) {
+				dev_err(&pdev->dev,
+					"Allocation for Tx Completion Queue %u failed\n",
+					i);
+				goto err_out;
+			}
+		}
+	}
+err_out:
+	if (err)
+		iecm_tx_desc_rel_all(vport);
+	return err;
 }
 
 /**
@@ -164,7 +262,19 @@ static void iecm_rx_desc_rel_all(struct iecm_vport *vport)
  */
 void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val)
 {
-	/* stub */
+	/* update next to alloc since we have filled the ring */
+	rxq->next_to_alloc = val;
+
+	rxq->next_to_use = val;
+	if (!rxq->tail)
+		return;
+	/* Force memory writes to complete before letting h/w
+	 * know there are new descriptors to fetch.  (Only
+	 * applicable for weak-ordered memory model archs,
+	 * such as IA-64).
+	 */
+	wmb();
+	writel_relaxed(val, rxq->tail);
 }
 
 /**
@@ -177,7 +287,34 @@ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val)
  */
 bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf)
 {
-	/* stub */
+	struct page *page = buf->page;
+	dma_addr_t dma;
+
+	/* since we are recycling buffers we should seldom need to alloc */
+	if (likely(page))
+		return true;
+
+	/* alloc new page for storage */
+	page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!page))
+		return false;
+
+	/* map page for use */
+	dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	/* if mapping failed free memory back to system since
+	 * there isn't much point in holding memory we can't use
+	 */
+	if (dma_mapping_error(rxq->dev, dma)) {
+		__free_pages(page, 0);
+		return false;
+	}
+
+	buf->dma = dma;
+	buf->page = page;
+	buf->page_offset = iecm_rx_offset(rxq);
+
+	return true;
 }
 
 /**
@@ -191,7 +328,34 @@ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf)
 static bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq,
 				     struct iecm_rx_buf *hdr_buf)
 {
-	/* stub */
+	struct page *page = hdr_buf->page;
+	dma_addr_t dma;
+
+	/* since we are recycling buffers we should seldom need to alloc */
+	if (likely(page))
+		return true;
+
+	/* alloc new page for storage */
+	page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!page))
+		return false;
+
+	/* map page for use */
+	dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+
+	/* if mapping failed free memory back to system since
+	 * there isn't much point in holding memory we can't use
+	 */
+	if (dma_mapping_error(rxq->dev, dma)) {
+		__free_pages(page, 0);
+		return false;
+	}
+
+	hdr_buf->dma = dma;
+	hdr_buf->page = page;
+	hdr_buf->page_offset = 0;
+
+	return true;
 }
 
 /**
@@ -205,7 +369,59 @@ static bool
 iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq,
 			 u16 cleaned_count)
 {
-	/* stub */
+	struct iecm_splitq_rx_buf_desc *splitq_rx_desc = NULL;
+	struct iecm_rx_buf *hdr_buf = NULL;
+	u16 nta = rxq->next_to_alloc;
+	struct iecm_rx_buf *buf;
+
+	/* do nothing if no valid netdev defined */
+	if (!rxq->vport->netdev || !cleaned_count)
+		return false;
+
+	splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, nta);
+
+	buf = &rxq->rx_buf.buf[nta];
+	if (rxq->rx_hsplit_en)
+		hdr_buf = &rxq->rx_buf.hdr_buf[nta];
+
+	do {
+		if (rxq->rx_hsplit_en) {
+			if (!iecm_rx_hdr_buf_hw_alloc(rxq, hdr_buf))
+				break;
+
+			splitq_rx_desc->hdr_addr =
+				cpu_to_le64(hdr_buf->dma +
+					    hdr_buf->page_offset);
+			hdr_buf++;
+		}
+
+		if (!iecm_rx_buf_hw_alloc(rxq, buf))
+			break;
+
+		/* Refresh the desc even if buffer_addrs didn't change
+		 * because each write-back erases this info.
+		 */
+		splitq_rx_desc->pkt_addr =
+			cpu_to_le64(buf->dma + buf->page_offset);
+		splitq_rx_desc->qword0.buf_id = cpu_to_le16(nta);
+
+		splitq_rx_desc++;
+		buf++;
+		nta++;
+		if (unlikely(nta == rxq->desc_count)) {
+			splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, 0);
+			buf = rxq->rx_buf.buf;
+			hdr_buf = rxq->rx_buf.hdr_buf;
+			nta = 0;
+		}
+
+		cleaned_count--;
+	} while (cleaned_count);
+
+	if (rxq->next_to_alloc != nta)
+		iecm_rx_buf_hw_update(rxq, nta);
+
+	return !!cleaned_count;
 }
 
 /**
@@ -216,7 +432,44 @@ iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq,
  */
 static int iecm_rx_buf_alloc_all(struct iecm_queue *rxq)
 {
-	/* stub */
+	int err = 0;
+
+	/* Allocate bookkeeping buffers */
+	rxq->rx_buf.buf = kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf),
+				  GFP_KERNEL);
+	if (!rxq->rx_buf.buf) {
+		err = -ENOMEM;
+		goto rx_buf_alloc_all_out;
+	}
+
+	if (rxq->rx_hsplit_en) {
+		rxq->rx_buf.hdr_buf =
+			kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf),
+				GFP_KERNEL);
+		if (!rxq->rx_buf.hdr_buf) {
+			err = -ENOMEM;
+			goto rx_buf_alloc_all_out;
+		}
+	} else {
+		rxq->rx_buf.hdr_buf = NULL;
+	}
+
+	/* Allocate buffers to be given to HW. Allocate one less than
+	 * total descriptor count as RX splits 4k buffers to 2K and recycles
+	 */
+	if (iecm_is_queue_model_split(rxq->vport->rxq_model)) {
+		if (iecm_rx_buf_hw_alloc_all(rxq,
+					     rxq->desc_count - 1))
+			err = -ENOMEM;
+	} else if (iecm_rx_singleq_buf_hw_alloc_all(rxq,
+						    rxq->desc_count - 1)) {
+		err = -ENOMEM;
+	}
+
+rx_buf_alloc_all_out:
+	if (err)
+		iecm_rx_buf_rel_all(rxq);
+	return err;
 }
 
 /**
@@ -230,7 +483,48 @@ static int iecm_rx_buf_alloc_all(struct iecm_queue *rxq)
 static int iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq,
 			      enum virtchnl_queue_model q_model)
 {
-	/* stub */
+	struct device *dev = rxq->dev;
+
+	/* As both single and split descriptors are 32 bytes, the memory size
+	 * is the same for all three (singleq_base Rx, buf., splitq_base Rx),
+	 * so pick any one of them for sizing.
+	 */
+	if (bufq) {
+		rxq->size = rxq->desc_count *
+			sizeof(struct iecm_splitq_rx_buf_desc);
+	} else {
+		rxq->size = rxq->desc_count *
+			sizeof(union iecm_rx_desc);
+	}
+
+	/* Allocate descriptors and also round up to nearest 4K */
+	rxq->size = ALIGN(rxq->size, 4096);
+	rxq->desc_ring = dmam_alloc_coherent(dev, rxq->size,
+					     &rxq->dma, GFP_KERNEL);
+	if (!rxq->desc_ring) {
+		dev_info(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n",
+			 rxq->size);
+		return -ENOMEM;
+	}
+
+	rxq->next_to_alloc = 0;
+	rxq->next_to_clean = 0;
+	rxq->next_to_use = 0;
+	set_bit(__IECM_Q_GEN_CHK, rxq->flags);
+
+	/* Allocate buffers for a Rx queue if the q_model is single OR if it
+	 * is a buffer queue in split queue model
+	 */
+	if (bufq || !iecm_is_queue_model_split(q_model)) {
+		int err = 0;
+
+		err = iecm_rx_buf_alloc_all(rxq);
+		if (err) {
+			iecm_rx_desc_rel(rxq, bufq, q_model);
+			return err;
+		}
+	}
+	return 0;
 }
 
 /**
@@ -241,7 +535,48 @@ static int iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq,
  */
 static int iecm_rx_desc_alloc_all(struct iecm_vport *vport)
 {
-	/* stub */
+	struct device *dev = &vport->adapter->pdev->dev;
+	struct iecm_queue *q;
+	int i, j, num_rxq;
+	int err = 0;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			num_rxq = vport->rxq_grps[i].splitq.num_rxq_sets;
+		else
+			num_rxq = vport->rxq_grps[i].singleq.num_rxq;
+
+		for (j = 0; j < num_rxq; j++) {
+			if (iecm_is_queue_model_split(vport->rxq_model))
+				q = &vport->rxq_grps[i].splitq.rxq_sets[j].rxq;
+			else
+				q = &vport->rxq_grps[i].singleq.rxqs[j];
+			err = iecm_rx_desc_alloc(q, false, vport->rxq_model);
+			if (err) {
+				dev_err(dev, "Memory allocation for Rx Queue %u failed\n",
+					i);
+				goto err_out;
+			}
+		}
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) {
+				q =
+				  &vport->rxq_grps[i].splitq.bufq_sets[j].bufq;
+				err = iecm_rx_desc_alloc(q, true,
+							 vport->rxq_model);
+				if (err) {
+					dev_err(dev, "Memory allocation for Rx Buffer Queue %u failed\n",
+						i);
+					goto err_out;
+				}
+			}
+		}
+	}
+err_out:
+	if (err)
+		iecm_rx_desc_rel_all(vport);
+	return err;
 }
 
 /**
@@ -294,7 +629,23 @@ void iecm_vport_queues_rel(struct iecm_vport *vport)
  */
 static int iecm_vport_init_fast_path_txqs(struct iecm_vport *vport)
 {
-	/* stub */
+	int i, j, k = 0;
+
+	vport->txqs = kcalloc(vport->num_txq, sizeof(struct iecm_queue *),
+			      GFP_KERNEL);
+
+	if (!vport->txqs)
+		return -ENOMEM;
+
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		struct iecm_txq_group *tx_grp = &vport->txq_grps[i];
+
+		for (j = 0; j < tx_grp->num_txq; j++, k++) {
+			vport->txqs[k] = &tx_grp->txqs[j];
+			vport->txqs[k]->idx = k;
+		}
+	}
+	return 0;
 }
 
 /**
@@ -305,7 +656,12 @@ static int iecm_vport_init_fast_path_txqs(struct iecm_vport *vport)
 void iecm_vport_init_num_qs(struct iecm_vport *vport,
 			    struct virtchnl_create_vport *vport_msg)
 {
-	/* stub */
+	vport->num_txq = vport_msg->num_tx_q;
+	vport->num_rxq = vport_msg->num_rx_q;
+	if (iecm_is_queue_model_split(vport->txq_model))
+		vport->num_complq = vport_msg->num_tx_complq;
+	if (iecm_is_queue_model_split(vport->rxq_model))
+		vport->num_bufq = vport_msg->num_rx_bufq;
 }
 
 /**
@@ -314,7 +670,32 @@ void iecm_vport_init_num_qs(struct iecm_vport *vport,
  */
 void iecm_vport_calc_num_q_desc(struct iecm_vport *vport)
 {
-	/* stub */
+	int num_req_txq_desc = vport->adapter->config_data.num_req_txq_desc;
+	int num_req_rxq_desc = vport->adapter->config_data.num_req_rxq_desc;
+
+	vport->complq_desc_count = 0;
+	vport->bufq_desc_count = 0;
+	if (num_req_txq_desc) {
+		vport->txq_desc_count = num_req_txq_desc;
+		if (iecm_is_queue_model_split(vport->txq_model))
+			vport->complq_desc_count = num_req_txq_desc;
+	} else {
+		vport->txq_desc_count =
+			IECM_DFLT_TX_Q_DESC_COUNT;
+		if (iecm_is_queue_model_split(vport->txq_model)) {
+			vport->complq_desc_count =
+				IECM_DFLT_TX_COMPLQ_DESC_COUNT;
+		}
+	}
+	if (num_req_rxq_desc) {
+		vport->rxq_desc_count = num_req_rxq_desc;
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			vport->bufq_desc_count = num_req_rxq_desc;
+	} else {
+		vport->rxq_desc_count = IECM_DFLT_RX_Q_DESC_COUNT;
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			vport->bufq_desc_count = IECM_DFLT_RX_BUFQ_DESC_COUNT;
+	}
 }
 EXPORT_SYMBOL(iecm_vport_calc_num_q_desc);
 
@@ -326,7 +707,51 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_desc);
 void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg,
 			      int num_req_qs)
 {
-	/* stub */
+	int dflt_splitq_txq_grps, dflt_singleq_txqs;
+	int dflt_splitq_rxq_grps, dflt_singleq_rxqs;
+	int num_txq_grps, num_rxq_grps;
+	int num_cpus;
+
+	/* Restrict the number of queues to the number of online CPUs as a
+	 * default configuration to give the best performance. The user can
+	 * always override this up to the max number of queues via ethtool.
+	 */
+	num_cpus = num_online_cpus();
+	dflt_splitq_txq_grps = min_t(int, IECM_DFLT_SPLITQ_TX_Q_GROUPS,
+				     num_cpus);
+	dflt_singleq_txqs = min_t(int, IECM_DFLT_SINGLEQ_TXQ_PER_GROUP,
+				  num_cpus);
+	dflt_splitq_rxq_grps = min_t(int, IECM_DFLT_SPLITQ_RX_Q_GROUPS,
+				     num_cpus);
+	dflt_singleq_rxqs = min_t(int, IECM_DFLT_SINGLEQ_RXQ_PER_GROUP,
+				  num_cpus);
+
+	if (iecm_is_queue_model_split(vport_msg->txq_model)) {
+		num_txq_grps = num_req_qs ? num_req_qs : dflt_splitq_txq_grps;
+		vport_msg->num_tx_complq = num_txq_grps *
+			IECM_COMPLQ_PER_GROUP;
+		vport_msg->num_tx_q = num_txq_grps *
+				      IECM_DFLT_SPLITQ_TXQ_PER_GROUP;
+	} else {
+		num_txq_grps = IECM_DFLT_SINGLEQ_TX_Q_GROUPS;
+		vport_msg->num_tx_q = num_txq_grps *
+				      (num_req_qs ? num_req_qs :
+				       dflt_singleq_txqs);
+		vport_msg->num_tx_complq = 0;
+	}
+	if (iecm_is_queue_model_split(vport_msg->rxq_model)) {
+		num_rxq_grps = num_req_qs ? num_req_qs : dflt_splitq_rxq_grps;
+		vport_msg->num_rx_bufq = num_rxq_grps *
+					 IECM_BUFQS_PER_RXQ_SET;
+		vport_msg->num_rx_q = num_rxq_grps *
+				      IECM_DFLT_SPLITQ_RXQ_PER_GROUP;
+	} else {
+		num_rxq_grps = IECM_DFLT_SINGLEQ_RX_Q_GROUPS;
+		vport_msg->num_rx_bufq = 0;
+		vport_msg->num_rx_q = num_rxq_grps *
+				      (num_req_qs ? num_req_qs :
+				       dflt_singleq_rxqs);
+	}
 }
 
 /**
@@ -335,7 +760,15 @@ void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg,
  */
 void iecm_vport_calc_num_q_groups(struct iecm_vport *vport)
 {
-	/* stub */
+	if (iecm_is_queue_model_split(vport->txq_model))
+		vport->num_txq_grp = vport->num_txq;
+	else
+		vport->num_txq_grp = IECM_DFLT_SINGLEQ_TX_Q_GROUPS;
+
+	if (iecm_is_queue_model_split(vport->rxq_model))
+		vport->num_rxq_grp = vport->num_rxq;
+	else
+		vport->num_rxq_grp = IECM_DFLT_SINGLEQ_RX_Q_GROUPS;
 }
 EXPORT_SYMBOL(iecm_vport_calc_num_q_groups);
 
@@ -348,7 +781,15 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_groups);
 static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport,
 					 int *num_txq, int *num_rxq)
 {
-	/* stub */
+	if (iecm_is_queue_model_split(vport->txq_model))
+		*num_txq = IECM_DFLT_SPLITQ_TXQ_PER_GROUP;
+	else
+		*num_txq = vport->num_txq;
+
+	if (iecm_is_queue_model_split(vport->rxq_model))
+		*num_rxq = IECM_DFLT_SPLITQ_RXQ_PER_GROUP;
+	else
+		*num_rxq = vport->num_rxq;
 }
 
 /**
@@ -359,7 +800,10 @@ static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport,
  */
 void iecm_vport_calc_num_q_vec(struct iecm_vport *vport)
 {
-	/* stub */
+	if (iecm_is_queue_model_split(vport->txq_model))
+		vport->num_q_vectors = vport->num_txq_grp;
+	else
+		vport->num_q_vectors = vport->num_txq;
 }
 
 /**
@@ -371,7 +815,68 @@ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport)
  */
 static int iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq)
 {
-	/* stub */
+	struct iecm_itr tx_itr = { 0 };
+	int err = 0;
+	int i;
+
+	vport->txq_grps = kcalloc(vport->num_txq_grp,
+				  sizeof(*vport->txq_grps), GFP_KERNEL);
+	if (!vport->txq_grps)
+		return -ENOMEM;
+
+	tx_itr.target_itr = IECM_ITR_TX_DEF;
+	tx_itr.itr_idx = VIRTCHNL_ITR_IDX_1;
+	tx_itr.next_update = jiffies + 1;
+
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i];
+		int j;
+
+		tx_qgrp->vport = vport;
+		tx_qgrp->num_txq = num_txq;
+		tx_qgrp->txqs = kcalloc(num_txq, sizeof(*tx_qgrp->txqs),
+					GFP_KERNEL);
+		if (!tx_qgrp->txqs) {
+			err = -ENOMEM;
+			goto err_alloc;
+		}
+
+		for (j = 0; j < tx_qgrp->num_txq; j++) {
+			struct iecm_queue *q = &tx_qgrp->txqs[j];
+
+			q->dev = &vport->adapter->pdev->dev;
+			q->desc_count = vport->txq_desc_count;
+			q->vport = vport;
+			q->txq_grp = tx_qgrp;
+			hash_init(q->sched_buf_hash);
+
+			if (!iecm_is_queue_model_split(vport->txq_model))
+				q->itr = tx_itr;
+		}
+
+		if (!iecm_is_queue_model_split(vport->txq_model))
+			continue;
+
+		tx_qgrp->complq = kcalloc(IECM_COMPLQ_PER_GROUP,
+					  sizeof(*tx_qgrp->complq),
+					  GFP_KERNEL);
+		if (!tx_qgrp->complq) {
+			err = -ENOMEM;
+			goto err_alloc;
+		}
+
+		tx_qgrp->complq->dev = &vport->adapter->pdev->dev;
+		tx_qgrp->complq->desc_count = vport->complq_desc_count;
+		tx_qgrp->complq->vport = vport;
+		tx_qgrp->complq->txq_grp = tx_qgrp;
+
+		tx_qgrp->complq->itr = tx_itr;
+	}
+
+err_alloc:
+	if (err)
+		iecm_txq_group_rel(vport);
+	return err;
 }
 
 /**
@@ -383,7 +888,115 @@ static int iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq)
  */
 static int iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq)
 {
-	/* stub */
+	struct iecm_itr rx_itr = {0};
+	struct iecm_queue *q;
+	int i, err = 0;
+
+	vport->rxq_grps = kcalloc(vport->num_rxq_grp,
+				  sizeof(struct iecm_rxq_group), GFP_KERNEL);
+	if (!vport->rxq_grps)
+		return -ENOMEM;
+
+	rx_itr.target_itr = IECM_ITR_RX_DEF;
+	rx_itr.itr_idx = VIRTCHNL_ITR_IDX_0;
+	rx_itr.next_update = jiffies + 1;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+		int j;
+
+		rx_qgrp->vport = vport;
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			rx_qgrp->splitq.num_rxq_sets = num_rxq;
+			rx_qgrp->splitq.rxq_sets =
+				kcalloc(num_rxq,
+					sizeof(struct iecm_rxq_set),
+					GFP_KERNEL);
+			if (!rx_qgrp->splitq.rxq_sets) {
+				err = -ENOMEM;
+				goto err_alloc;
+			}
+
+			rx_qgrp->splitq.bufq_sets =
+				kcalloc(IECM_BUFQS_PER_RXQ_SET,
+					sizeof(struct iecm_bufq_set),
+					GFP_KERNEL);
+			if (!rx_qgrp->splitq.bufq_sets) {
+				err = -ENOMEM;
+				goto err_alloc;
+			}
+
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) {
+				int swq_size = sizeof(struct iecm_sw_queue);
+
+				q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+				q->dev = &vport->adapter->pdev->dev;
+				q->desc_count = vport->bufq_desc_count;
+				q->vport = vport;
+				q->rxq_grp = rx_qgrp;
+				q->idx = j;
+				q->rx_buf_size = IECM_RX_BUF_2048;
+				q->rsc_low_watermark = IECM_LOW_WATERMARK;
+				q->rx_buf_stride = IECM_RX_BUF_STRIDE;
+				q->itr = rx_itr;
+
+				if (vport->rx_hsplit_en) {
+					q->rx_hsplit_en = vport->rx_hsplit_en;
+					q->rx_hbuf_size = IECM_HDR_BUF_SIZE;
+				}
+
+				rx_qgrp->splitq.bufq_sets[j].num_refillqs =
+					num_rxq;
+				rx_qgrp->splitq.bufq_sets[j].refillqs =
+					kcalloc(num_rxq, swq_size, GFP_KERNEL);
+				if (!rx_qgrp->splitq.bufq_sets[j].refillqs) {
+					err = -ENOMEM;
+					goto err_alloc;
+				}
+			}
+		} else {
+			rx_qgrp->singleq.num_rxq = num_rxq;
+			rx_qgrp->singleq.rxqs = kcalloc(num_rxq,
+							sizeof(struct iecm_queue),
+							GFP_KERNEL);
+			if (!rx_qgrp->singleq.rxqs)  {
+				err = -ENOMEM;
+				goto err_alloc;
+			}
+		}
+
+		for (j = 0; j < num_rxq; j++) {
+			if (iecm_is_queue_model_split(vport->rxq_model)) {
+				q = &rx_qgrp->splitq.rxq_sets[j].rxq;
+				rx_qgrp->splitq.rxq_sets[j].refillq0 =
+				      &rx_qgrp->splitq.bufq_sets[0].refillqs[j];
+				rx_qgrp->splitq.rxq_sets[j].refillq1 =
+				      &rx_qgrp->splitq.bufq_sets[1].refillqs[j];
+
+				if (vport->rx_hsplit_en) {
+					q->rx_hsplit_en = vport->rx_hsplit_en;
+					q->rx_hbuf_size = IECM_HDR_BUF_SIZE;
+				}
+
+			} else {
+				q = &rx_qgrp->singleq.rxqs[j];
+			}
+			q->dev = &vport->adapter->pdev->dev;
+			q->desc_count = vport->rxq_desc_count;
+			q->vport = vport;
+			q->rxq_grp = rx_qgrp;
+			q->idx = (i * num_rxq) + j;
+			q->rx_buf_size = IECM_RX_BUF_2048;
+			q->rsc_low_watermark = IECM_LOW_WATERMARK;
+			q->rx_max_pkt_size = vport->netdev->mtu +
+					     IECM_PACKET_HDR_PAD;
+			q->itr = rx_itr;
+		}
+	}
+err_alloc:
+	if (err)
+		iecm_rxq_group_rel(vport);
+	return err;
 }
 
 /**
@@ -394,7 +1007,20 @@ static int iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq)
  */
 static int iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport)
 {
-	/* stub */
+	int num_txq, num_rxq;
+	int err;
+
+	iecm_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq);
+
+	err = iecm_txq_group_alloc(vport, num_txq);
+	if (err)
+		goto err_out;
+
+	err = iecm_rxq_group_alloc(vport, num_rxq);
+err_out:
+	if (err)
+		iecm_vport_queue_grp_rel_all(vport);
+	return err;
 }
 
 /**
@@ -406,7 +1032,28 @@ static int iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport)
  */
 int iecm_vport_queues_alloc(struct iecm_vport *vport)
 {
-	/* stub */
+	int err;
+
+	err = iecm_vport_queue_grp_alloc_all(vport);
+	if (err)
+		goto err_out;
+
+	err = iecm_tx_desc_alloc_all(vport);
+	if (err)
+		goto err_out;
+
+	err = iecm_rx_desc_alloc_all(vport);
+	if (err)
+		goto err_out;
+
+	err = iecm_vport_init_fast_path_txqs(vport);
+	if (err)
+		goto err_out;
+
+	return 0;
+err_out:
+	iecm_vport_queues_rel(vport);
+	return err;
 }
 
 /**
@@ -1794,7 +2441,16 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_vec);
  */
 int iecm_config_rss(struct iecm_vport *vport)
 {
-	/* stub */
+	int err = iecm_send_get_set_rss_key_msg(vport, false);
+
+	if (!err)
+		err = vport->adapter->dev_ops.vc_ops.get_set_rss_lut(vport,
+								     false);
+	if (!err)
+		err = vport->adapter->dev_ops.vc_ops.get_set_rss_hash(vport,
+								      false);
+
+	return err;
 }
 
 /**
@@ -1806,7 +2462,20 @@ int iecm_config_rss(struct iecm_vport *vport)
  */
 void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list)
 {
-	/* stub */
+	int i, j, k = 0;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++)
+				qid_list[k++] =
+					rx_qgrp->splitq.rxq_sets[j].rxq.q_id;
+		} else {
+			for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+				qid_list[k++] = rx_qgrp->singleq.rxqs[j].q_id;
+		}
+	}
 }
 
 /**
@@ -1818,7 +2487,13 @@ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list)
  */
 void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list)
 {
-	/* stub */
+	int num_lut_segs, lut_seg, i, k = 0;
+
+	num_lut_segs = vport->adapter->rss_data.rss_lut_size / vport->num_rxq;
+	for (lut_seg = 0; lut_seg < num_lut_segs; lut_seg++) {
+		for (i = 0; i < vport->num_rxq; i++)
+			vport->adapter->rss_data.rss_lut[k++] = qid_list[i];
+	}
 }
 
 /**
@@ -1829,7 +2504,64 @@ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list)
  */
 int iecm_init_rss(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	u16 *qid_list;
+
+	adapter->rss_data.rss_key = kzalloc(adapter->rss_data.rss_key_size,
+					    GFP_KERNEL);
+	if (!adapter->rss_data.rss_key)
+		return -ENOMEM;
+	adapter->rss_data.rss_lut = kzalloc(adapter->rss_data.rss_lut_size,
+					    GFP_KERNEL);
+	if (!adapter->rss_data.rss_lut) {
+		kfree(adapter->rss_data.rss_key);
+		adapter->rss_data.rss_key = NULL;
+		return -ENOMEM;
+	}
+
+	/* Initialize default rss key */
+	netdev_rss_key_fill((void *)adapter->rss_data.rss_key,
+			    adapter->rss_data.rss_key_size);
+
+	/* Initialize default rss lut */
+	if (adapter->rss_data.rss_lut_size % vport->num_rxq) {
+		u16 dflt_qid;
+		int i;
+
+		/* Set all entries to a default RX queue if the algorithm below
+		 * won't fill all entries
+		 */
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			dflt_qid =
+				vport->rxq_grps[0].splitq.rxq_sets[0].rxq.q_id;
+		else
+			dflt_qid =
+				vport->rxq_grps[0].singleq.rxqs[0].q_id;
+
+		for (i = 0; i < adapter->rss_data.rss_lut_size; i++)
+			adapter->rss_data.rss_lut[i] = dflt_qid;
+	}
+
+	qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL);
+	if (!qid_list) {
+		kfree(adapter->rss_data.rss_lut);
+		adapter->rss_data.rss_lut = NULL;
+		kfree(adapter->rss_data.rss_key);
+		adapter->rss_data.rss_key = NULL;
+		return -ENOMEM;
+	}
+
+	iecm_get_rx_qid_list(vport, qid_list);
+
+	/* Fill the default RSS LUT values */
+	iecm_fill_dflt_rss_lut(vport, qid_list);
+
+	kfree(qid_list);
+
+	/* Initialize default RSS hash */
+	adapter->rss_data.rss_hash = IECM_DEFAULT_RSS_HASH_EXPANDED;
+
+	return iecm_config_rss(vport);
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
index b8cf8eb15052..36b284aa8be2 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -663,7 +663,19 @@ int iecm_send_destroy_vport_msg(struct iecm_vport *vport)
  */
 int iecm_send_enable_vport_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_vport v_id;
+	int err;
+
+	v_id.vport_id = vport->vport_id;
+
+	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_ENABLE_VPORT,
+			       sizeof(v_id), (u8 *)&v_id);
+	if (err)
+		return err;
+
+	return iecm_wait_for_event(adapter, IECM_VC_ENA_VPORT,
+				   IECM_VC_ENA_VPORT_ERR);
 }
 
 /**
@@ -1839,7 +1851,27 @@ EXPORT_SYMBOL(iecm_vc_core_init);
 static void iecm_vport_init(struct iecm_vport *vport,
 			    __always_unused int vport_id)
 {
-	/* stub */
+	struct virtchnl_create_vport *vport_msg;
+
+	vport_msg = (struct virtchnl_create_vport *)
+				vport->adapter->vport_params_recvd[0];
+	vport->txq_model = vport_msg->txq_model;
+	vport->rxq_model = vport_msg->rxq_model;
+	vport->vport_type = (u16)vport_msg->vport_type;
+	vport->vport_id = vport_msg->vport_id;
+	vport->adapter->rss_data.rss_key_size = min_t(u16, NETDEV_RSS_KEY_LEN,
+						      vport_msg->rss_key_size);
+	vport->adapter->rss_data.rss_lut_size = vport_msg->rss_lut_size;
+	ether_addr_copy(vport->default_mac_addr, vport_msg->default_mac_addr);
+	vport->max_mtu = IECM_MAX_MTU;
+
+	iecm_vport_set_hsplit(vport, NULL);
+
+	init_waitqueue_head(&vport->sw_marker_wq);
+	iecm_vport_init_num_qs(vport, vport_msg);
+	iecm_vport_calc_num_q_desc(vport);
+	iecm_vport_calc_num_q_groups(vport);
+	iecm_vport_calc_num_q_vec(vport);
 }
 
 /**
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 10/15] iecm: Deinit vport
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (8 preceding siblings ...)
  2020-08-24 17:33 ` [net-next v5 09/15] iecm: Init and allocate vport Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-24 17:33 ` [net-next v5 11/15] iecm: Add splitq TX/RX Tony Nguyen
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Implement vport teardown and release of its queue resources.
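
The ordering in the stop path matters: DMA and interrupts are quiesced
before any queue memory is released. A minimal sketch of that ordering
(illustrative only, not part of this patch; the example_* names and the
example_vport struct are hypothetical placeholders, only netif_carrier_off()
and netif_tx_disable() are real kernel calls):

  #include <linux/netdevice.h>

  struct example_vport {
          struct net_device *netdev;
          /* ... vectors, queue groups, etc. ... */
  };

  /* hypothetical helpers standing in for the driver's own teardown steps */
  void example_irq_unmap(struct example_vport *v);
  void example_disable_queues(struct example_vport *v);
  void example_intr_deinit(struct example_vport *v);
  void example_disable_vport(struct example_vport *v);

  static void example_vport_stop(struct example_vport *v)
  {
          example_irq_unmap(v);           /* detach vectors from the queues */
          example_disable_queues(v);      /* device stops DMA on the rings */
          netif_carrier_off(v->netdev);   /* stack sees the link as down */
          netif_tx_disable(v->netdev);    /* no new transmits are queued */
          example_intr_deinit(v);         /* release vectors and NAPI */
          example_disable_vport(v);       /* finally disable the vport itself */
  }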

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/iecm/iecm_lib.c    |  28 ++-
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   | 218 ++++++++++++++++--
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   |  14 +-
 3 files changed, 244 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index 8c5e90f275c3..16f92dc4c77a 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -395,7 +395,26 @@ struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev)
  */
 static void iecm_vport_stop(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+
+	if (adapter->state <= __IECM_DOWN)
+		return;
+	adapter->dev_ops.vc_ops.irq_map_unmap(vport, false);
+	adapter->dev_ops.vc_ops.disable_queues(vport);
+	/* Normally we ask for queues in create_vport, but if we're changing
+	 * number of requested queues we do a delete then add instead of
+	 * deleting and reallocating the vport.
+	 */
+	if (test_and_clear_bit(__IECM_DEL_QUEUES,
+			       vport->adapter->flags))
+		iecm_send_delete_queues_msg(vport);
+	netif_carrier_off(vport->netdev);
+	netif_tx_disable(vport->netdev);
+	adapter->link_up = false;
+	iecm_vport_intr_deinit(vport);
+	if (adapter->dev_ops.vc_ops.disable_vport)
+		adapter->dev_ops.vc_ops.disable_vport(vport);
+	adapter->state = __IECM_DOWN;
 }
 
 /**
@@ -410,7 +429,11 @@ static void iecm_vport_stop(struct iecm_vport *vport)
  */
 static int iecm_stop(struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_netdev_priv *np = netdev_priv(netdev);
+
+	iecm_vport_stop(np->vport);
+
+	return 0;
 }
 
 /**
@@ -506,6 +529,7 @@ iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id)
 
 	/* fill vport slot in the adapter struct */
 	adapter->vports[adapter->next_vport] = vport;
+
 	if (iecm_cfg_netdev(vport))
 		goto cfg_netdev_fail;
 
diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
index 781942f934df..5a7e31790f97 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -43,7 +43,23 @@ void iecm_get_stats64(struct net_device *netdev,
  */
 void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf)
 {
-	/* stub */
+	if (tx_buf->skb) {
+		dev_kfree_skb_any(tx_buf->skb);
+		if (dma_unmap_len(tx_buf, len))
+			dma_unmap_single(tx_q->dev,
+					 dma_unmap_addr(tx_buf, dma),
+					 dma_unmap_len(tx_buf, len),
+					 DMA_TO_DEVICE);
+	} else if (dma_unmap_len(tx_buf, len)) {
+		dma_unmap_page(tx_q->dev,
+			       dma_unmap_addr(tx_buf, dma),
+			       dma_unmap_len(tx_buf, len),
+			       DMA_TO_DEVICE);
+	}
+
+	tx_buf->next_to_watch = NULL;
+	tx_buf->skb = NULL;
+	dma_unmap_len_set(tx_buf, len, 0);
 }
 
 /**
@@ -52,7 +68,26 @@ void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf)
  */
 static void iecm_tx_buf_rel_all(struct iecm_queue *txq)
 {
-	/* stub */
+	u16 i;
+
+	/* Buffers already cleared, nothing to do */
+	if (!txq->tx_buf)
+		return;
+
+	/* Free all the Tx buffer sk_buffs */
+	for (i = 0; i < txq->desc_count; i++)
+		iecm_tx_buf_rel(txq, &txq->tx_buf[i]);
+
+	kfree(txq->tx_buf);
+	txq->tx_buf = NULL;
+
+	if (txq->buf_stack.bufs) {
+		for (i = 0; i < txq->buf_stack.size; i++) {
+			iecm_tx_buf_rel(txq, txq->buf_stack.bufs[i]);
+			kfree(txq->buf_stack.bufs[i]);
+		}
+		kfree(txq->buf_stack.bufs);
+	}
 }
 
 /**
@@ -64,7 +99,17 @@ static void iecm_tx_buf_rel_all(struct iecm_queue *txq)
  */
 static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq)
 {
-	/* stub */
+	if (bufq)
+		iecm_tx_buf_rel_all(txq);
+
+	if (txq->desc_ring) {
+		dmam_free_coherent(txq->dev, txq->size,
+				   txq->desc_ring, txq->dma);
+		txq->desc_ring = NULL;
+		txq->next_to_alloc = 0;
+		txq->next_to_use = 0;
+		txq->next_to_clean = 0;
+	}
 }
 
 /**
@@ -75,7 +120,24 @@ static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq)
  */
 static void iecm_tx_desc_rel_all(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_queue *txq;
+	int i, j;
+
+	if (!vport->txq_grps)
+		return;
+
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		for (j = 0; j < vport->txq_grps[i].num_txq; j++) {
+			if (vport->txq_grps[i].txqs) {
+				txq = &vport->txq_grps[i].txqs[j];
+				iecm_tx_desc_rel(txq, true);
+			}
+		}
+		if (iecm_is_queue_model_split(vport->txq_model)) {
+			txq = vport->txq_grps[i].complq;
+			iecm_tx_desc_rel(txq, false);
+		}
+	}
 }
 
 /**
@@ -218,7 +280,21 @@ static int iecm_tx_desc_alloc_all(struct iecm_vport *vport)
 static void iecm_rx_buf_rel(struct iecm_queue *rxq,
 			    struct iecm_rx_buf *rx_buf)
 {
-	/* stub */
+	struct device *dev = rxq->dev;
+
+	if (!rx_buf->page)
+		return;
+
+	if (rx_buf->skb) {
+		dev_kfree_skb_any(rx_buf->skb);
+		rx_buf->skb = NULL;
+	}
+
+	dma_unmap_page(dev, rx_buf->dma, PAGE_SIZE, DMA_FROM_DEVICE);
+	__free_pages(rx_buf->page, 0);
+
+	rx_buf->page = NULL;
+	rx_buf->page_offset = 0;
 }
 
 /**
@@ -227,7 +303,23 @@ static void iecm_rx_buf_rel(struct iecm_queue *rxq,
  */
 static void iecm_rx_buf_rel_all(struct iecm_queue *rxq)
 {
-	/* stub */
+	u16 i;
+
+	/* queue already cleared, nothing to do */
+	if (!rxq->rx_buf.buf)
+		return;
+
+	/* Free all the bufs allocated and given to HW on Rx queue */
+	for (i = 0; i < rxq->desc_count; i++) {
+		iecm_rx_buf_rel(rxq, &rxq->rx_buf.buf[i]);
+		if (rxq->rx_hsplit_en)
+			iecm_rx_buf_rel(rxq, &rxq->rx_buf.hdr_buf[i]);
+	}
+
+	kfree(rxq->rx_buf.buf);
+	rxq->rx_buf.buf = NULL;
+	kfree(rxq->rx_buf.hdr_buf);
+	rxq->rx_buf.hdr_buf = NULL;
 }
 
 /**
@@ -241,7 +333,25 @@ static void iecm_rx_buf_rel_all(struct iecm_queue *rxq)
 static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq,
 			     enum virtchnl_queue_model q_model)
 {
-	/* stub */
+	if (!rxq)
+		return;
+
+	if (!bufq && iecm_is_queue_model_split(q_model) && rxq->skb) {
+		dev_kfree_skb_any(rxq->skb);
+		rxq->skb = NULL;
+	}
+
+	if (bufq || !iecm_is_queue_model_split(q_model))
+		iecm_rx_buf_rel_all(rxq);
+
+	if (rxq->desc_ring) {
+		dmam_free_coherent(rxq->dev, rxq->size,
+				   rxq->desc_ring, rxq->dma);
+		rxq->desc_ring = NULL;
+		rxq->next_to_alloc = 0;
+		rxq->next_to_clean = 0;
+		rxq->next_to_use = 0;
+	}
 }
 
 /**
@@ -252,7 +362,49 @@ static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq,
  */
 static void iecm_rx_desc_rel_all(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_rxq_group *rx_qgrp;
+	struct iecm_queue *q;
+	int i, j, num_rxq;
+
+	if (!vport->rxq_grps)
+		return;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		rx_qgrp = &vport->rxq_grps[i];
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			if (rx_qgrp->splitq.rxq_sets) {
+				num_rxq = rx_qgrp->splitq.num_rxq_sets;
+				for (j = 0; j < num_rxq; j++) {
+					q = &rx_qgrp->splitq.rxq_sets[j].rxq;
+					iecm_rx_desc_rel(q, false,
+							 vport->rxq_model);
+				}
+			}
+
+			if (!rx_qgrp->splitq.bufq_sets)
+				continue;
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) {
+				struct iecm_bufq_set *bufq_set =
+					&rx_qgrp->splitq.bufq_sets[j];
+
+				q = &bufq_set->bufq;
+				iecm_rx_desc_rel(q, true, vport->rxq_model);
+				if (!bufq_set->refillqs)
+					continue;
+				kfree(bufq_set->refillqs);
+				bufq_set->refillqs = NULL;
+			}
+		} else {
+			if (rx_qgrp->singleq.rxqs) {
+				for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) {
+					q = &rx_qgrp->singleq.rxqs[j];
+					iecm_rx_desc_rel(q, false,
+							 vport->rxq_model);
+				}
+			}
+		}
+	}
 }
 
 /**
@@ -585,7 +737,18 @@ static int iecm_rx_desc_alloc_all(struct iecm_vport *vport)
  */
 static void iecm_txq_group_rel(struct iecm_vport *vport)
 {
-	/* stub */
+	if (vport->txq_grps) {
+		int i;
+
+		for (i = 0; i < vport->num_txq_grp; i++) {
+			kfree(vport->txq_grps[i].txqs);
+			vport->txq_grps[i].txqs = NULL;
+			kfree(vport->txq_grps[i].complq);
+			vport->txq_grps[i].complq = NULL;
+		}
+		kfree(vport->txq_grps);
+		vport->txq_grps = NULL;
+	}
 }
 
 /**
@@ -594,7 +757,25 @@ static void iecm_txq_group_rel(struct iecm_vport *vport)
  */
 static void iecm_rxq_group_rel(struct iecm_vport *vport)
 {
-	/* stub */
+	if (vport->rxq_grps) {
+		int i;
+
+		for (i = 0; i < vport->num_rxq_grp; i++) {
+			struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+			if (iecm_is_queue_model_split(vport->rxq_model)) {
+				kfree(rx_qgrp->splitq.rxq_sets);
+				rx_qgrp->splitq.rxq_sets = NULL;
+				kfree(rx_qgrp->splitq.bufq_sets);
+				rx_qgrp->splitq.bufq_sets = NULL;
+			} else {
+				kfree(rx_qgrp->singleq.rxqs);
+				vport->rxq_grps[i].singleq.rxqs = NULL;
+			}
+		}
+		kfree(vport->rxq_grps);
+		vport->rxq_grps = NULL;
+	}
 }
 
 /**
@@ -603,7 +784,8 @@ static void iecm_rxq_group_rel(struct iecm_vport *vport)
  */
 static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport)
 {
-	/* stub */
+	iecm_txq_group_rel(vport);
+	iecm_rxq_group_rel(vport);
 }
 
 /**
@@ -614,7 +796,12 @@ static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport)
  */
 void iecm_vport_queues_rel(struct iecm_vport *vport)
 {
-	/* stub */
+	iecm_tx_desc_rel_all(vport);
+	iecm_rx_desc_rel_all(vport);
+	iecm_vport_queue_grp_rel_all(vport);
+
+	kfree(vport->txqs);
+	vport->txqs = NULL;
 }
 
 /**
@@ -2570,5 +2757,10 @@ int iecm_init_rss(struct iecm_vport *vport)
  */
 void iecm_deinit_rss(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+
+	kfree(adapter->rss_data.rss_key);
+	adapter->rss_data.rss_key = NULL;
+	kfree(adapter->rss_data.rss_lut);
+	adapter->rss_data.rss_lut = NULL;
 }
diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
index 36b284aa8be2..c6b5dcbe59c3 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
@@ -687,7 +687,19 @@ int iecm_send_enable_vport_msg(struct iecm_vport *vport)
  */
 int iecm_send_disable_vport_msg(struct iecm_vport *vport)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	struct virtchnl_vport v_id;
+	int err;
+
+	v_id.vport_id = vport->vport_id;
+
+	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_DISABLE_VPORT,
+			       sizeof(v_id), (u8 *)&v_id);
+	if (err)
+		return err;
+
+	return iecm_wait_for_event(adapter, IECM_VC_DIS_VPORT,
+				   IECM_VC_DIS_VPORT_ERR);
 }
 
 /**
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 11/15] iecm: Add splitq TX/RX
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (9 preceding siblings ...)
  2020-08-24 17:33 ` [net-next v5 10/15] iecm: Deinit vport Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-24 20:40   ` Jakub Kicinski
  2020-08-24 17:33 ` [net-next v5 12/15] iecm: Add singleq TX/RX Tony Nguyen
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Implement the main TX/RX flows for the split queue model, where TX
completions are reported on separate completion queues.
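
In the split model, software tracks completion queue progress with a
generation (GEN) bit rather than a head pointer: hardware flips the GEN
value it writes each time it wraps the ring, and software flips its
expected value when next_to_clean wraps, so descriptors left over from
the previous lap never appear done. A self-contained sketch of that
pattern (illustrative only, not part of this patch; all names and bit
positions are hypothetical):

  #include <stdbool.h>
  #include <stdint.h>

  #define GEN_BIT 0x8000                  /* hypothetical bit position */

  struct compl_desc { uint16_t qid_comptype_gen; };

  struct complq {
          struct compl_desc *ring;
          uint16_t count;                 /* descriptors in the ring */
          uint16_t next_to_clean;
          bool gen;                       /* generation value we expect */
  };

  static int complq_clean(struct complq *cq, int budget)
  {
          int cleaned = 0;

          while (budget--) {
                  struct compl_desc *d = &cq->ring[cq->next_to_clean];

                  /* descriptor still belongs to the previous lap: stop */
                  if (!!(d->qid_comptype_gen & GEN_BIT) != cq->gen)
                          break;

                  /* ... process the completion here ... */
                  cleaned++;

                  if (++cq->next_to_clean == cq->count) {
                          cq->next_to_clean = 0;
                          cq->gen = !cq->gen; /* new lap: flip expectation */
                  }
          }

          return cleaned;
  }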

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/iecm/iecm_txrx.c | 1259 ++++++++++++++++++-
 1 file changed, 1202 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
index 5a7e31790f97..2347e1f0b730 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -13,7 +13,12 @@
 static int iecm_buf_lifo_push(struct iecm_buf_lifo *stack,
 			      struct iecm_tx_buf *buf)
 {
-	/* stub */
+	if (stack->top == stack->size)
+		return -ENOSPC;
+
+	stack->bufs[stack->top++] = buf;
+
+	return 0;
 }
 
 /**
@@ -22,7 +27,10 @@ static int iecm_buf_lifo_push(struct iecm_buf_lifo *stack,
  **/
 static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack)
 {
-	/* stub */
+	if (!stack->top)
+		return NULL;
+
+	return stack->bufs[--stack->top];
 }
 
 /**
@@ -33,7 +41,16 @@ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack)
 void iecm_get_stats64(struct net_device *netdev,
 		      struct rtnl_link_stats64 *stats)
 {
-	/* stub */
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	iecm_send_get_stats_msg(vport);
+	stats->rx_packets = vport->netstats.rx_packets;
+	stats->tx_packets = vport->netstats.tx_packets;
+	stats->rx_bytes = vport->netstats.rx_bytes;
+	stats->tx_bytes = vport->netstats.tx_bytes;
+	stats->tx_errors = vport->netstats.tx_errors;
+	stats->rx_dropped = vport->netstats.rx_dropped;
+	stats->tx_dropped = vport->netstats.tx_dropped;
 }
 
 /**
@@ -1253,7 +1270,16 @@ int iecm_vport_queues_alloc(struct iecm_vport *vport)
 static struct iecm_queue *
 iecm_tx_find_q(struct iecm_vport *vport, int q_id)
 {
-	/* stub */
+	int i;
+
+	for (i = 0; i < vport->num_txq; i++) {
+		struct iecm_queue *tx_q = vport->txqs[i];
+
+		if (tx_q->q_id == q_id)
+			return tx_q;
+	}
+
+	return NULL;
 }
 
 /**
@@ -1262,7 +1288,22 @@ iecm_tx_find_q(struct iecm_vport *vport, int q_id)
  */
 static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q)
 {
-	/* stub */
+	struct iecm_vport *vport = tx_q->vport;
+	bool drain_complete = true;
+	int i;
+
+	clear_bit(__IECM_Q_SW_MARKER, tx_q->flags);
+	/* Hardware must write marker packets to all queues associated with
+	 * completion queues, so check whether all queues have received them
+	 */
+	for (i = 0; i < vport->num_txq; i++) {
+		if (test_bit(__IECM_Q_SW_MARKER, vport->txqs[i]->flags))
+			drain_complete = false;
+	}
+	if (drain_complete) {
+		set_bit(__IECM_VPORT_SW_MARKER, vport->flags);
+		wake_up(&vport->sw_marker_wq);
+	}
 }
 
 /**
@@ -1277,7 +1318,30 @@ static struct iecm_tx_queue_stats
 iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf,
 			 int napi_budget)
 {
-	/* stub */
+	struct iecm_tx_queue_stats cleaned = {0};
+	struct netdev_queue *nq;
+
+	/* update the statistics for this packet */
+	cleaned.bytes = tx_buf->bytecount;
+	cleaned.packets = tx_buf->gso_segs;
+
+	/* free the skb */
+	napi_consume_skb(tx_buf->skb, napi_budget);
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	netdev_tx_completed_queue(nq, cleaned.packets,
+				  cleaned.bytes);
+
+	/* unmap skb header data */
+	dma_unmap_single(tx_q->dev,
+			 dma_unmap_addr(tx_buf, dma),
+			 dma_unmap_len(tx_buf, len),
+			 DMA_TO_DEVICE);
+
+	/* clear tx_buf data */
+	tx_buf->skb = NULL;
+	dma_unmap_len_set(tx_buf, len, 0);
+
+	return cleaned;
 }
 
 /**
@@ -1289,7 +1353,33 @@ iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf,
 static int
 iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf)
 {
-	/* stub */
+	struct iecm_adapter *adapter = txq->vport->adapter;
+	struct iecm_tx_buf *shadow_buf;
+
+	shadow_buf = iecm_buf_lifo_pop(&txq->buf_stack);
+	if (!shadow_buf) {
+		dev_err(&adapter->pdev->dev,
+			"No out-of-order TX buffers left!\n");
+		return -ENOMEM;
+	}
+
+	/* Store buffer params in shadow buffer */
+	shadow_buf->skb = tx_buf->skb;
+	shadow_buf->bytecount = tx_buf->bytecount;
+	shadow_buf->gso_segs = tx_buf->gso_segs;
+	shadow_buf->dma = tx_buf->dma;
+	shadow_buf->len = tx_buf->len;
+	shadow_buf->compl_tag = tx_buf->compl_tag;
+
+	/* Add buffer to buf_hash table to be freed
+	 * later
+	 */
+	hash_add(txq->sched_buf_hash, &shadow_buf->hlist,
+		 shadow_buf->compl_tag);
+
+	memset(tx_buf, 0, sizeof(struct iecm_tx_buf));
+
+	return 0;
 }
 
 /**
@@ -1312,7 +1402,91 @@ static struct iecm_tx_queue_stats
 iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget,
 		     bool descs_only)
 {
-	/* stub */
+	union iecm_tx_flex_desc *next_pending_desc = NULL;
+	struct iecm_tx_queue_stats cleaned_stats = {0};
+	union iecm_tx_flex_desc *tx_desc;
+	s16 ntc = tx_q->next_to_clean;
+	struct iecm_tx_buf *tx_buf;
+
+	tx_desc = IECM_FLEX_TX_DESC(tx_q, ntc);
+	next_pending_desc = IECM_FLEX_TX_DESC(tx_q, end);
+	tx_buf = &tx_q->tx_buf[ntc];
+	ntc -= tx_q->desc_count;
+
+	while (tx_desc != next_pending_desc) {
+		union iecm_tx_flex_desc *eop_desc =
+			(union iecm_tx_flex_desc *)tx_buf->next_to_watch;
+
+		/* clear next_to_watch to prevent false hangs */
+		tx_buf->next_to_watch = NULL;
+
+		if (descs_only) {
+			if (iecm_stash_flow_sch_buffers(tx_q, tx_buf))
+				goto tx_splitq_clean_out;
+
+			while (tx_desc != eop_desc) {
+				tx_buf++;
+				tx_desc++;
+				ntc++;
+				if (unlikely(!ntc)) {
+					ntc -= tx_q->desc_count;
+					tx_buf = tx_q->tx_buf;
+					tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+				}
+
+				if (dma_unmap_len(tx_buf, len)) {
+					if (iecm_stash_flow_sch_buffers(tx_q,
+									tx_buf))
+						goto tx_splitq_clean_out;
+				}
+			}
+		} else {
+			struct iecm_tx_queue_stats buf_stats = {0};
+
+			buf_stats = iecm_tx_splitq_clean_buf(tx_q, tx_buf,
+							     napi_budget);
+
+			/* update the statistics for this packet */
+			cleaned_stats.bytes += buf_stats.bytes;
+			cleaned_stats.packets += buf_stats.packets;
+
+			/* unmap remaining buffers */
+			while (tx_desc != eop_desc) {
+				tx_buf++;
+				tx_desc++;
+				ntc++;
+				if (unlikely(!ntc)) {
+					ntc -= tx_q->desc_count;
+					tx_buf = tx_q->tx_buf;
+					tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+				}
+
+				/* unmap any remaining paged data */
+				if (dma_unmap_len(tx_buf, len)) {
+					dma_unmap_page(tx_q->dev,
+						       dma_unmap_addr(tx_buf, dma),
+						       dma_unmap_len(tx_buf, len),
+						       DMA_TO_DEVICE);
+					dma_unmap_len_set(tx_buf, len, 0);
+				}
+			}
+		}
+
+		tx_buf++;
+		tx_desc++;
+		ntc++;
+		if (unlikely(!ntc)) {
+			ntc -= tx_q->desc_count;
+			tx_buf = tx_q->tx_buf;
+			tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+		}
+	}
+
+tx_splitq_clean_out:
+	ntc += tx_q->desc_count;
+	tx_q->next_to_clean = ntc;
+
+	return cleaned_stats;
 }
 
 /**
@@ -1327,7 +1501,34 @@ static struct iecm_tx_queue_stats
 iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag,
 			    u8 *desc_ts, int budget)
 {
-	/* stub */
+	struct iecm_tx_queue_stats cleaned_stats = {0};
+	struct hlist_node *tmp_buf = NULL;
+	struct iecm_tx_buf *tx_buf = NULL;
+
+	/* Buffer completion */
+	hash_for_each_possible_safe(txq->sched_buf_hash, tx_buf, tmp_buf,
+				    hlist, compl_tag) {
+		if (tx_buf->compl_tag != compl_tag)
+			continue;
+
+		if (likely(tx_buf->skb)) {
+			cleaned_stats = iecm_tx_splitq_clean_buf(txq, tx_buf,
+								 budget);
+		} else if (dma_unmap_len(tx_buf, len)) {
+			dma_unmap_page(txq->dev,
+				       dma_unmap_addr(tx_buf, dma),
+				       dma_unmap_len(tx_buf, len),
+				       DMA_TO_DEVICE);
+			dma_unmap_len_set(tx_buf, len, 0);
+		}
+
+		/* Push shadow buf back onto stack */
+		iecm_buf_lifo_push(&txq->buf_stack, tx_buf);
+
+		hash_del(&tx_buf->hlist);
+	}
+
+	return cleaned_stats;
 }
 
 /**
@@ -1340,7 +1541,109 @@ iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag,
 static bool
 iecm_tx_clean_complq(struct iecm_queue *complq, int budget)
 {
-	/* stub */
+	struct iecm_splitq_tx_compl_desc *tx_desc;
+	struct iecm_vport *vport = complq->vport;
+	s16 ntc = complq->next_to_clean;
+	unsigned int complq_budget;
+
+	complq_budget = vport->compln_clean_budget;
+	tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, ntc);
+	ntc -= complq->desc_count;
+
+	do {
+		struct iecm_tx_queue_stats cleaned_stats = {0};
+		bool descs_only = false;
+		struct iecm_queue *tx_q;
+		u16 compl_tag, hw_head;
+		int tx_qid;
+		u8 ctype;	/* completion type */
+		u16 gen;
+
+		/* if the descriptor isn't done, no work yet to do */
+		gen = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+		      IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
+		if (test_bit(__IECM_Q_GEN_CHK, complq->flags) != gen)
+			break;
+
+		/* Find necessary info of TX queue to clean buffers */
+		tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+			 IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
+		tx_q = iecm_tx_find_q(vport, tx_qid);
+		if (!tx_q) {
+			dev_err(&complq->vport->adapter->pdev->dev,
+				"TxQ #%d not found\n", tx_qid);
+			goto fetch_next_desc;
+		}
+
+		/* Determine completion type */
+		ctype = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+			IECM_TXD_COMPLQ_COMPL_TYPE_M) >>
+			IECM_TXD_COMPLQ_COMPL_TYPE_S;
+		switch (ctype) {
+		case IECM_TXD_COMPLT_RE:
+			hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
+
+			cleaned_stats = iecm_tx_splitq_clean(tx_q, hw_head,
+							     budget,
+							     descs_only);
+			break;
+		case IECM_TXD_COMPLT_RS:
+			if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) {
+				compl_tag =
+				le16_to_cpu(tx_desc->q_head_compl_tag.compl_tag);
+
+				cleaned_stats =
+					iecm_tx_clean_flow_sch_bufs(tx_q,
+								    compl_tag,
+								    tx_desc->ts,
+								    budget);
+			} else {
+				hw_head =
+				le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
+
+				cleaned_stats = iecm_tx_splitq_clean(tx_q,
+								     hw_head,
+								     budget,
+								     false);
+			}
+
+			break;
+		case IECM_TXD_COMPLT_SW_MARKER:
+			iecm_tx_handle_sw_marker(tx_q);
+			break;
+		default:
+			dev_err(&tx_q->vport->adapter->pdev->dev,
+				"Unknown TX completion type: %d\n",
+				ctype);
+			goto fetch_next_desc;
+		}
+
+		tx_q->itr.stats.tx.packets += cleaned_stats.packets;
+		tx_q->itr.stats.tx.bytes += cleaned_stats.bytes;
+		u64_stats_update_begin(&tx_q->stats_sync);
+		tx_q->q_stats.tx.packets += cleaned_stats.packets;
+		tx_q->q_stats.tx.bytes += cleaned_stats.bytes;
+		u64_stats_update_end(&tx_q->stats_sync);
+
+fetch_next_desc:
+		tx_desc++;
+		ntc++;
+		if (unlikely(!ntc)) {
+			ntc -= complq->desc_count;
+			tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, 0);
+			change_bit(__IECM_Q_GEN_CHK, complq->flags);
+		}
+
+		prefetch(tx_desc);
+
+		/* update budget accounting */
+		complq_budget--;
+	} while (likely(complq_budget));
+
+	ntc += complq->desc_count;
+	complq->next_to_clean = ntc;
+
+	return !!complq_budget;
 }
 
 /**
@@ -1356,7 +1659,12 @@ iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc,
 			 struct iecm_tx_splitq_params *parms,
 			 u16 td_cmd, u16 size)
 {
-	/* stub */
+	desc->q.qw1.cmd_dtype =
+		cpu_to_le16(parms->dtype & IECM_FLEX_TXD_QW1_DTYPE_M);
+	desc->q.qw1.cmd_dtype |=
+		cpu_to_le16((td_cmd << IECM_FLEX_TXD_QW1_CMD_S) &
+			    IECM_FLEX_TXD_QW1_CMD_M);
+	desc->q.qw1.buf_size = cpu_to_le16((u16)size);
 }
 
 /**
@@ -1372,7 +1680,13 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
 			       struct iecm_tx_splitq_params *parms,
 			       u16 td_cmd, u16 size)
 {
-	/* stub */
+	desc->flow.qw1.cmd_dtype = (u16)parms->dtype | td_cmd;
+	desc->flow.qw1.rxr_bufsize = cpu_to_le16((u16)size);
+	desc->flow.qw1.compl_tag = cpu_to_le16(parms->compl_tag);
+
+	desc->flow.qw1.ts[0] = parms->offload.desc_ts & 0xff;
+	desc->flow.qw1.ts[1] = (parms->offload.desc_ts >> 8) & 0xff;
+	desc->flow.qw1.ts[2] = (parms->offload.desc_ts >> 16) & 0xff;
 }
 
 /**
@@ -1385,7 +1699,19 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
 static int
 __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
 {
-	/* stub */
+	netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx);
+
+	/* Memory barrier before checking head and tail */
+	smp_mb();
+
+	/* Check again in a case another CPU has just made room available. */
+	if (likely(IECM_DESC_UNUSED(tx_q) < size))
+		return -EBUSY;
+
+	/* A reprieve! - use start_subqueue because it doesn't call schedule */
+	netif_start_subqueue(tx_q->vport->netdev, tx_q->idx);
+
+	return 0;
 }
 
 /**
@@ -1397,7 +1723,10 @@ __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
  */
 int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
 {
-	/* stub */
+	if (likely(IECM_DESC_UNUSED(tx_q) >= size))
+		return 0;
+
+	return __iecm_tx_maybe_stop(tx_q, size);
 }
 
 /**
@@ -1407,7 +1736,23 @@ int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
  */
 void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val)
 {
-	/* stub */
+	struct netdev_queue *nq;
+
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	tx_q->next_to_use = val;
+
+	iecm_tx_maybe_stop(tx_q, IECM_TX_DESC_NEEDED);
+
+	/* Force memory writes to complete before letting h/w
+	 * know there are new descriptors to fetch.  (Only
+	 * applicable for weak-ordered memory model archs,
+	 * such as IA-64).
+	 */
+	wmb();
+
+	/* notify HW of packet */
+	if (netif_xmit_stopped(nq) || !netdev_xmit_more())
+		writel_relaxed(val, tx_q->tail);
 }
 
 /**
@@ -1440,7 +1785,7 @@ void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val)
  */
 static unsigned int __iecm_tx_desc_count_required(unsigned int size)
 {
-	/* stub */
+	return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
 }
 
 /**
@@ -1451,13 +1796,26 @@ static unsigned int __iecm_tx_desc_count_required(unsigned int size)
  */
 unsigned int iecm_tx_desc_count_required(struct sk_buff *skb)
 {
-	/* stub */
+	const skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+	unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+	unsigned int count = 0, size = skb_headlen(skb);
+
+	for (;;) {
+		count += __iecm_tx_desc_count_required(size);
+
+		if (!nr_frags--)
+			break;
+
+		size = skb_frag_size(frag++);
+	}
+
+	return count;
 }
 
 /**
  * iecm_tx_splitq_map - Build the Tx flex descriptor
  * @tx_q: queue to send buffer on
- * @off: pointer to offload params struct
+ * @parms: pointer to splitq params struct
  * @first: first buffer info buffer to use
  *
  * This function loops over the skb data pointed to by *first
@@ -1466,10 +1824,130 @@ unsigned int iecm_tx_desc_count_required(struct sk_buff *skb)
  */
 static void
 iecm_tx_splitq_map(struct iecm_queue *tx_q,
-		   struct iecm_tx_offload_params *off,
+		   struct iecm_tx_splitq_params *parms,
 		   struct iecm_tx_buf *first)
 {
-	/* stub */
+	union iecm_tx_flex_desc *tx_desc;
+	unsigned int data_len, size;
+	struct iecm_tx_buf *tx_buf;
+	u16 i = tx_q->next_to_use;
+	struct netdev_queue *nq;
+	struct sk_buff *skb;
+	skb_frag_t *frag;
+	u16 td_cmd = 0;
+	dma_addr_t dma;
+
+	skb = first->skb;
+
+	td_cmd = parms->offload.td_cmd;
+	parms->compl_tag = tx_q->tx_buf_key;
+
+	data_len = skb->data_len;
+	size = skb_headlen(skb);
+
+	tx_desc = IECM_FLEX_TX_DESC(tx_q, i);
+
+	dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+
+	tx_buf = first;
+
+	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+		unsigned int max_data = IECM_TX_MAX_DESC_DATA_ALIGNED;
+
+		if (dma_mapping_error(tx_q->dev, dma))
+			goto dma_error;
+
+		/* record length, and DMA address */
+		dma_unmap_len_set(tx_buf, len, size);
+		dma_unmap_addr_set(tx_buf, dma, dma);
+
+		/* align size to end of page */
+		max_data += -dma & (IECM_TX_MAX_READ_REQ_SIZE - 1);
+
+		/* buf_addr is in same location for both desc types */
+		tx_desc->q.buf_addr = cpu_to_le64(dma);
+
+		/* account for data chunks larger than the hardware
+		 * can handle
+		 */
+		while (unlikely(size > IECM_TX_MAX_DESC_DATA)) {
+			parms->splitq_build_ctb(tx_desc, parms, td_cmd, size);
+
+			tx_desc++;
+			i++;
+
+			if (i == tx_q->desc_count) {
+				tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+				i = 0;
+			}
+
+			dma += max_data;
+			size -= max_data;
+
+			max_data = IECM_TX_MAX_DESC_DATA_ALIGNED;
+			/* buf_addr is in same location for both desc types */
+			tx_desc->q.buf_addr = cpu_to_le64(dma);
+		}
+
+		if (likely(!data_len))
+			break;
+		parms->splitq_build_ctb(tx_desc, parms, td_cmd, size);
+		tx_desc++;
+		i++;
+
+		if (i == tx_q->desc_count) {
+			tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+			i = 0;
+		}
+
+		size = skb_frag_size(frag);
+		data_len -= size;
+
+		dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
+				       DMA_TO_DEVICE);
+
+		tx_buf->compl_tag = parms->compl_tag;
+		tx_buf = &tx_q->tx_buf[i];
+	}
+
+	/* record bytecount for BQL */
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	netdev_tx_sent_queue(nq, first->bytecount);
+
+	/* record SW timestamp if HW timestamp is not available */
+	skb_tx_timestamp(first->skb);
+
+	/* write last descriptor with RS and EOP bits */
+	td_cmd |= parms->eop_cmd;
+	parms->splitq_build_ctb(tx_desc, parms, td_cmd, size);
+	i++;
+	if (i == tx_q->desc_count)
+		i = 0;
+
+	/* set next_to_watch value indicating a packet is present */
+	first->next_to_watch = tx_desc;
+	tx_buf->compl_tag = parms->compl_tag++;
+
+	iecm_tx_buf_hw_update(tx_q, i);
+
+	/* Update TXQ Completion Tag key for next buffer */
+	tx_q->tx_buf_key = parms->compl_tag;
+
+	return;
+
+dma_error:
+	/* clear DMA mappings for failed tx_buf map */
+	for (;;) {
+		tx_buf = &tx_q->tx_buf[i];
+		iecm_tx_buf_rel(tx_q, tx_buf);
+		if (tx_buf == first)
+			break;
+		if (i == 0)
+			i = tx_q->desc_count;
+		i--;
+	}
+
+	tx_q->next_to_use = i;
 }
 
 /**
@@ -1485,7 +1963,79 @@ iecm_tx_splitq_map(struct iecm_queue *tx_q,
 static int iecm_tso(struct iecm_tx_buf *first,
 		    struct iecm_tx_offload_params *off)
 {
-	/* stub */
+	struct sk_buff *skb = first->skb;
+	union {
+		struct iphdr *v4;
+		struct ipv6hdr *v6;
+		unsigned char *hdr;
+	} ip;
+	union {
+		struct tcphdr *tcp;
+		struct udphdr *udp;
+		unsigned char *hdr;
+	} l4;
+	u32 paylen, l4_start;
+	int err;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
+
+	if (!skb_is_gso(skb))
+		return 0;
+
+	err = skb_cow_head(skb, 0);
+	if (err < 0)
+		return err;
+
+	ip.hdr = skb_network_header(skb);
+	l4.hdr = skb_transport_header(skb);
+
+	/* initialize outer IP header fields */
+	if (ip.v4->version == 4) {
+		ip.v4->tot_len = 0;
+		ip.v4->check = 0;
+	} else {
+		ip.v6->payload_len = 0;
+	}
+
+	/* determine offset of transport header */
+	l4_start = l4.hdr - skb->data;
+
+	/* remove payload length from checksum */
+	paylen = skb->len - l4_start;
+
+	switch (skb_shinfo(skb)->gso_type) {
+	case SKB_GSO_TCPV4:
+	case SKB_GSO_TCPV6:
+		csum_replace_by_diff(&l4.tcp->check,
+				     (__force __wsum)htonl(paylen));
+
+		/* compute length of segmentation header */
+		off->tso_hdr_len = (l4.tcp->doff * 4) + l4_start;
+		break;
+	case SKB_GSO_UDP_L4:
+		csum_replace_by_diff(&l4.udp->check,
+				     (__force __wsum)htonl(paylen));
+		/* compute length of segmentation header */
+		off->tso_hdr_len = sizeof(struct udphdr) + l4_start;
+		l4.udp->len =
+			htons(skb_shinfo(skb)->gso_size +
+			      sizeof(struct udphdr));
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	off->tso_len = skb->len - off->tso_hdr_len;
+	off->mss = skb_shinfo(skb)->gso_size;
+
+	/* update gso_segs and bytecount */
+	first->gso_segs = skb_shinfo(skb)->gso_segs;
+	first->bytecount += (first->gso_segs - 1) * off->tso_hdr_len;
+
+	first->tx_flags |= IECM_TX_FLAGS_TSO;
+
+	return 0;
 }
 
 /**
@@ -1498,7 +2048,84 @@ static int iecm_tso(struct iecm_tx_buf *first,
 static netdev_tx_t
 iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
 {
-	/* stub */
+	struct iecm_tx_splitq_params tx_parms = { NULL, 0, 0, 0, {0} };
+	struct iecm_tx_buf *first;
+	unsigned int count;
+
+	count = iecm_tx_desc_count_required(skb);
+
+	/* need: 1 descriptor per page * PAGE_SIZE/IECM_MAX_DATA_PER_TXD,
+	 *       + 1 desc for skb_head_len/IECM_MAX_DATA_PER_TXD,
+	 *       + 4 desc gap to avoid the cache line where head is,
+	 *       + 1 desc for context descriptor,
+	 * otherwise try next time
+	 */
+	if (iecm_tx_maybe_stop(tx_q, count + IECM_TX_DESCS_PER_CACHE_LINE +
+			       IECM_TX_DESCS_FOR_CTX)) {
+		return NETDEV_TX_BUSY;
+	}
+
+	/* record the location of the first descriptor for this packet */
+	first = &tx_q->tx_buf[tx_q->next_to_use];
+	first->skb = skb;
+	first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+	first->gso_segs = 1;
+	first->tx_flags = 0;
+
+	if (iecm_tso(first, &tx_parms.offload) < 0) {
+		/* If tso returns an error, drop the packet */
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
+	}
+
+	if (first->tx_flags & IECM_TX_FLAGS_TSO) {
+		/* If TSO is needed, set up context desc */
+		union iecm_flex_tx_ctx_desc *ctx_desc;
+		int i = tx_q->next_to_use;
+
+		/* grab the next descriptor */
+		ctx_desc = IECM_FLEX_TX_CTX_DESC(tx_q, i);
+		i++;
+		tx_q->next_to_use = (i < tx_q->desc_count) ? i : 0;
+
+		ctx_desc->tso.qw1.cmd_dtype |=
+				cpu_to_le16(IECM_TX_DESC_DTYPE_FLEX_TSO_CTX |
+					    IECM_TX_FLEX_CTX_DESC_CMD_TSO);
+		ctx_desc->tso.qw0.flex_tlen =
+				cpu_to_le32(tx_parms.offload.tso_len &
+					    IECM_TXD_FLEX_CTX_TLEN_M);
+		ctx_desc->tso.qw0.mss_rt =
+				cpu_to_le16(tx_parms.offload.mss &
+					    IECM_TXD_FLEX_CTX_MSS_RT_M);
+		ctx_desc->tso.qw0.hdr_len = tx_parms.offload.tso_hdr_len;
+	}
+
+	if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) {
+		s64 ts_ns = first->skb->skb_mstamp_ns;
+
+		tx_parms.offload.desc_ts =
+			ts_ns >> IECM_TW_TIME_STAMP_GRAN_512_DIV_S;
+
+		tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
+		tx_parms.splitq_build_ctb = iecm_tx_splitq_build_flow_desc;
+		tx_parms.eop_cmd =
+			IECM_TXD_FLEX_FLOW_CMD_EOP | IECM_TXD_FLEX_FLOW_CMD_RE;
+
+		if (skb->ip_summed == CHECKSUM_PARTIAL)
+			tx_parms.offload.td_cmd |= IECM_TXD_FLEX_FLOW_CMD_CS_EN;
+
+	} else {
+		tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_DATA;
+		tx_parms.splitq_build_ctb = iecm_tx_splitq_build_ctb;
+		tx_parms.eop_cmd = IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS;
+
+		if (skb->ip_summed == CHECKSUM_PARTIAL)
+			tx_parms.offload.td_cmd |= IECM_TX_FLEX_DESC_CMD_CS_EN;
+	}
+
+	iecm_tx_splitq_map(tx_q, &tx_parms, first);
+
+	return NETDEV_TX_OK;
 }
 
 /**
@@ -1511,7 +2138,18 @@ iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
 netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb,
 				 struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	struct iecm_queue *tx_q;
+
+	tx_q = vport->txqs[skb->queue_mapping];
+
+	/* hardware can't handle really short frames, hardware padding works
+	 * beyond this point
+	 */
+	if (skb_put_padto(skb, IECM_TX_MIN_LEN))
+		return NETDEV_TX_OK;
+
+	return iecm_tx_splitq_frame(skb, tx_q);
 }
 
 /**
@@ -1526,7 +2164,18 @@ netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb,
 static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport,
 					       u16 ptype)
 {
-	/* stub */
+	struct iecm_rx_ptype_decoded decoded = vport->rx_ptype_lkup[ptype];
+
+	if (!decoded.known)
+		return PKT_HASH_TYPE_NONE;
+	if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4)
+		return PKT_HASH_TYPE_L4;
+	if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3)
+		return PKT_HASH_TYPE_L3;
+	if (decoded.outer_ip == IECM_RX_PTYPE_OUTER_L2)
+		return PKT_HASH_TYPE_L2;
+
+	return PKT_HASH_TYPE_NONE;
 }
 
 /**
@@ -1540,7 +2189,17 @@ static void
 iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb,
 	     struct iecm_flex_rx_desc *rx_desc, u16 ptype)
 {
-	/* stub */
+	u32 hash;
+
+	if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXHASH))
+		return;
+
+	hash = rx_desc->status_err1 |
+	       (rx_desc->fflags1 << 8) |
+	       (rx_desc->ts_low << 16) |
+	       (rx_desc->ff2_mirrid_hash2.hash2 << 24);
+
+	skb_set_hash(skb, hash, iecm_ptype_to_htype(rxq->vport, ptype));
 }
 
 /**
@@ -1556,7 +2215,62 @@ static void
 iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb,
 	     struct iecm_flex_rx_desc *rx_desc, u16 ptype)
 {
-	/* stub */
+	struct iecm_rx_ptype_decoded decoded;
+	u8 rx_status_0_qw1, rx_status_0_qw0;
+	bool ipv4, ipv6;
+
+	/* Start with CHECKSUM_NONE and by default csum_level = 0 */
+	skb->ip_summed = CHECKSUM_NONE;
+
+	/* check if Rx checksum is enabled */
+	if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXCSUM))
+		return;
+
+	rx_status_0_qw1 = rx_desc->status_err0_qw1;
+	/* check if HW has decoded the packet and checksum */
+	if (!(rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_L3L4P_S)))
+		return;
+
+	decoded = rxq->vport->rx_ptype_lkup[ptype];
+	if (!(decoded.known && decoded.outer_ip))
+		return;
+
+	ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4);
+	ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6);
+
+	if (ipv4 && (rx_status_0_qw1 &
+		     (BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |
+		      BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S))))
+		goto checksum_fail;
+
+	rx_status_0_qw0 = rx_desc->status_err0_qw0;
+	if (ipv6 && (rx_status_0_qw0 &
+		     (BIT(IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S))))
+		return;
+
+	/* check for L4 errors and handle packets that were not able to be
+	 * checksummed
+	 */
+	if (rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S))
+		goto checksum_fail;
+
+	/* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */
+	switch (decoded.inner_prot) {
+	case IECM_RX_PTYPE_INNER_PROT_ICMP:
+	case IECM_RX_PTYPE_INNER_PROT_TCP:
+	case IECM_RX_PTYPE_INNER_PROT_UDP:
+	case IECM_RX_PTYPE_INNER_PROT_SCTP:
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		rxq->q_stats.rx.csum_unnecessary++;
+	default:
+		break;
+	}
+	return;
+
+checksum_fail:
+	rxq->q_stats.rx.csum_err++;
 }
 
 /**
@@ -1572,19 +2286,74 @@ iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb,
 static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb,
 			struct iecm_flex_rx_desc *rx_desc, u16 ptype)
 {
-	/* stub */
-}
+	struct iecm_rx_ptype_decoded decoded;
+	u16 rsc_segments, rsc_payload_len;
+	struct ipv6hdr *ipv6h;
+	struct tcphdr *tcph;
+	struct iphdr *ipv4h;
+	bool ipv4, ipv6;
+	u16 hdr_len;
+
+	rsc_payload_len = le16_to_cpu(rx_desc->fmd1_misc.rscseglen);
+	if (!rsc_payload_len)
+		goto rsc_err;
+
+	decoded = rxq->vport->rx_ptype_lkup[ptype];
+	if (!(decoded.known && decoded.outer_ip))
+		goto rsc_err;
+
+	ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+		(decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4);
+	ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+		(decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6);
+
+	if (!(ipv4 ^ ipv6))
+		goto rsc_err;
+
+	if (ipv4)
+		hdr_len = ETH_HLEN + sizeof(struct tcphdr) +
+			  sizeof(struct iphdr);
+	else
+		hdr_len = ETH_HLEN + sizeof(struct tcphdr) +
+			  sizeof(struct ipv6hdr);
 
-/**
- * iecm_rx_hwtstamp - check for an RX timestamp and pass up
- * the stack
- * @rx_desc: pointer to Rx descriptor containing timestamp
- * @skb: skb to put timestamp in
- */
-static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc,
-			     struct sk_buff __maybe_unused *skb)
-{
-	/* stub */
+	rsc_segments = DIV_ROUND_UP(skb->len - hdr_len, rsc_payload_len);
+
+	NAPI_GRO_CB(skb)->count = rsc_segments;
+	skb_shinfo(skb)->gso_size = rsc_payload_len;
+
+	skb_reset_network_header(skb);
+
+	if (ipv4) {
+		ipv4h = ip_hdr(skb);
+		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+
+		/* Reset and set transport header offset in skb */
+		skb_set_transport_header(skb, sizeof(struct iphdr));
+		tcph = tcp_hdr(skb);
+
+		/* Compute the TCP pseudo-header checksum */
+		tcph->check =
+			~tcp_v4_check(skb->len - skb_transport_offset(skb),
+				      ipv4h->saddr, ipv4h->daddr, 0);
+	} else {
+		ipv6h = ipv6_hdr(skb);
+		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+		skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+		tcph = tcp_hdr(skb);
+		tcph->check =
+			~tcp_v6_check(skb->len - skb_transport_offset(skb),
+				      &ipv6h->saddr, &ipv6h->daddr, 0);
+	}
+
+	tcp_gro_complete(skb);
+
+	/* Map the Rx qid to the skb */
+	skb_record_rx_queue(skb, rxq->q_id);
+
+	return true;
+rsc_err:
+	return false;
 }
 
 /**
@@ -1601,7 +2370,24 @@ static bool
 iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
 			   struct iecm_flex_rx_desc *rx_desc)
 {
-	/* stub */
+	bool err = false;
+	u16 rx_ptype;
+	bool rsc;
+
+	rx_ptype = le16_to_cpu(rx_desc->ptype_err_fflags0) &
+		   IECM_RXD_FLEX_PTYPE_M;
+
+	/* modifies the skb - consumes the enet header */
+	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
+	iecm_rx_csum(rxq, skb, rx_desc, rx_ptype);
+	/* process RSS/hash */
+	iecm_rx_hash(rxq, skb, rx_desc, rx_ptype);
+
+	rsc = le16_to_cpu(rx_desc->hdrlen_flags) & IECM_RXD_FLEX_RSC_M;
+	if (rsc)
+		err = iecm_rx_rsc(rxq, skb, rx_desc, rx_ptype);
+
+	return err;
 }
 
 /**
@@ -1614,7 +2400,7 @@ iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
  */
 void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb)
 {
-	/* stub */
+	napi_gro_receive(&rxq->q_vector->napi, skb);
 }
 
 /**
@@ -1623,7 +2409,7 @@ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb)
  */
 static bool iecm_rx_page_is_reserved(struct page *page)
 {
-	/* stub */
+	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
 /**
@@ -1639,7 +2425,13 @@ static bool iecm_rx_page_is_reserved(struct page *page)
 static void
 iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size)
 {
-	/* stub */
+#if (PAGE_SIZE < 8192)
+	/* flip page offset to other buffer */
+	rx_buf->page_offset ^= size;
+#else
+	/* move offset up to the next cache line */
+	rx_buf->page_offset += size;
+#endif
 }
 
 /**
@@ -1653,7 +2445,35 @@ iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size)
  */
 static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf)
 {
-	/* stub */
+#if (PAGE_SIZE >= 8192)
+	unsigned int last_offset = PAGE_SIZE - IECM_RX_BUF_2048;
+#endif
+	unsigned int pagecnt_bias = rx_buf->pagecnt_bias;
+	struct page *page = rx_buf->page;
+
+	/* avoid re-using remote pages */
+	if (unlikely(iecm_rx_page_is_reserved(page)))
+		return false;
+
+#if (PAGE_SIZE < 8192)
+	/* if we are only owner of page we can reuse it */
+	if (unlikely((page_count(page) - pagecnt_bias) > 1))
+		return false;
+#else
+	if (rx_buf->page_offset > last_offset)
+		return false;
+#endif /* PAGE_SIZE < 8192 */
+
+	/* If we have drained the page fragment pool we need to update
+	 * the pagecnt_bias and page count so that we fully restock the
+	 * number of references the driver holds.
+	 */
+	if (unlikely(pagecnt_bias == 1)) {
+		page_ref_add(page, USHRT_MAX - 1);
+		rx_buf->pagecnt_bias = USHRT_MAX;
+	}
+
+	return true;
 }
 
 /**
@@ -1669,7 +2489,17 @@ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf)
 void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb,
 		      unsigned int size)
 {
-	/* stub */
+#if (PAGE_SIZE >= 8192)
+	unsigned int truesize = SKB_DATA_ALIGN(size);
+#else
+	unsigned int truesize = IECM_RX_BUF_2048;
+#endif
+
+	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
+			rx_buf->page_offset, size, truesize);
+
+	/* page is being used so we must update the page offset */
+	iecm_rx_buf_adjust_pg_offset(rx_buf, truesize);
 }
 
 /**
@@ -1684,7 +2514,22 @@ void iecm_rx_reuse_page(struct iecm_queue *rx_bufq,
 			bool hsplit,
 			struct iecm_rx_buf *old_buf)
 {
-	/* stub */
+	u16 ntu = rx_bufq->next_to_use;
+	struct iecm_rx_buf *new_buf;
+
+	if (hsplit)
+		new_buf = &rx_bufq->rx_buf.hdr_buf[ntu];
+	else
+		new_buf = &rx_bufq->rx_buf.buf[ntu];
+
+	/* Transfer page from old buffer to new buffer.
+	 * Move each member individually to avoid possible store
+	 * forwarding stalls and unnecessary copy of skb.
+	 */
+	new_buf->dma = old_buf->dma;
+	new_buf->page = old_buf->page;
+	new_buf->page_offset = old_buf->page_offset;
+	new_buf->pagecnt_bias = old_buf->pagecnt_bias;
 }
 
 /**
@@ -1701,7 +2546,15 @@ static void
 iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf,
 		     const unsigned int size)
 {
-	/* stub */
+	prefetch(rx_buf->page);
+
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(dev, rx_buf->dma,
+				      rx_buf->page_offset, size,
+				      DMA_FROM_DEVICE);
+
+	/* We have pulled a buffer for use, so decrement pagecnt_bias */
+	rx_buf->pagecnt_bias--;
 }
 
 /**
@@ -1718,7 +2571,52 @@ struct sk_buff *
 iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf,
 		      unsigned int size)
 {
-	/* stub */
+	void *va = page_address(rx_buf->page) + rx_buf->page_offset;
+	unsigned int headlen;
+	struct sk_buff *skb;
+
+	/* prefetch first cache line of first page */
+	prefetch(va);
+#if L1_CACHE_BYTES < 128
+	prefetch((u8 *)va + L1_CACHE_BYTES);
+#endif /* L1_CACHE_BYTES < 128 */
+	/* allocate a skb to store the frags */
+	skb = __napi_alloc_skb(&rxq->q_vector->napi, IECM_RX_HDR_SIZE,
+			       GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_record_rx_queue(skb, rxq->idx);
+
+	/* Determine available headroom for copy */
+	headlen = size;
+	if (headlen > IECM_RX_HDR_SIZE)
+		headlen = eth_get_headlen(skb->dev, va, IECM_RX_HDR_SIZE);
+
+	/* align pull length to size of long to optimize memcpy performance */
+	memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));
+
+	/* if we exhaust the linear part then add what is left as a frag */
+	size -= headlen;
+	if (size) {
+#if (PAGE_SIZE >= 8192)
+		unsigned int truesize = SKB_DATA_ALIGN(size);
+#else
+		unsigned int truesize = IECM_RX_BUF_2048;
+#endif
+		skb_add_rx_frag(skb, 0, rx_buf->page,
+				rx_buf->page_offset + headlen, size, truesize);
+		/* buffer is used by skb, update page_offset */
+		iecm_rx_buf_adjust_pg_offset(rx_buf, truesize);
+	} else {
+		/* buffer is unused, reset bias back to rx_buf; data was copied
+		 * onto skb's linear part so there's no need for adjusting
+		 * page offset and we can reuse this buffer as-is
+		 */
+		rx_buf->pagecnt_bias++;
+	}
+
+	return skb;
 }
 
 /**
@@ -1735,7 +2633,11 @@ iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf,
  */
 bool iecm_rx_cleanup_headers(struct sk_buff *skb)
 {
-	/* stub */
+	/* if eth_skb_pad returns an error the skb was freed */
+	if (eth_skb_pad(skb))
+		return true;
+
+	return false;
 }
 
 /**
@@ -1748,7 +2650,7 @@ bool iecm_rx_cleanup_headers(struct sk_buff *skb)
 static bool
 iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits)
 {
-	/* stub */
+	return !!(stat_err_field & stat_err_bits);
 }
 
 /**
@@ -1761,7 +2663,13 @@ iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits)
 static bool
 iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc)
 {
-	/* stub */
+	/* if we are the last buffer then there is nothing else to do */
+#define IECM_RXD_EOF BIT(IECM_RX_FLEX_DESC_STATUS0_EOF_S)
+	if (likely(iecm_rx_splitq_test_staterr(rx_desc->status_err0_qw1,
+					       IECM_RXD_EOF)))
+		return false;
+
+	return true;
 }
 
 /**
@@ -1778,7 +2686,24 @@ iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc)
 bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit,
 			 struct iecm_rx_buf *rx_buf)
 {
-	/* stub */
+	bool recycled = false;
+
+	if (iecm_rx_can_reuse_page(rx_buf)) {
+		/* hand second half of page back to the queue */
+		iecm_rx_reuse_page(rx_bufq, hsplit, rx_buf);
+		recycled = true;
+	} else {
+		/* we are not reusing the buffer so unmap it */
+		dma_unmap_page_attrs(rx_bufq->dev, rx_buf->dma, PAGE_SIZE,
+				     DMA_FROM_DEVICE, IECM_RX_DMA_ATTR);
+		__page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias);
+	}
+
+	/* clear contents of buffer_info */
+	rx_buf->page = NULL;
+	rx_buf->skb = NULL;
+
+	return recycled;
 }
 
 /**
@@ -1794,7 +2719,19 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq,
 				    struct iecm_rx_buf *hdr_buf,
 				    struct iecm_rx_buf *rx_buf)
 {
-	/* stub */
+	u16 ntu = rx_bufq->next_to_use;
+	bool recycled = false;
+
+	if (likely(hdr_buf))
+		recycled = iecm_rx_recycle_buf(rx_bufq, true, hdr_buf);
+	if (likely(rx_buf))
+		recycled = iecm_rx_recycle_buf(rx_bufq, false, rx_buf);
+
+	/* update and store next_to_use if a buffer was recycled */
+	if (recycled) {
+		ntu++;
+		rx_bufq->next_to_use = (ntu < rx_bufq->desc_count) ? ntu : 0;
+	}
 }
 
 /**
@@ -1803,7 +2740,14 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq,
  */
 static void iecm_rx_bump_ntc(struct iecm_queue *q)
 {
-	/* stub */
+	u16 ntc = q->next_to_clean + 1;
+	/* fetch, update, and store next to clean */
+	if (ntc < q->desc_count) {
+		q->next_to_clean = ntc;
+	} else {
+		q->next_to_clean = 0;
+		change_bit(__IECM_Q_GEN_CHK, q->flags);
+	}
 }
 
 /**
@@ -1820,7 +2764,158 @@ static void iecm_rx_bump_ntc(struct iecm_queue *q)
  */
 static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget)
 {
-	/* stub */
+	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+	u16 cleaned_count[IECM_BUFQS_PER_RXQ_SET] = {0};
+	struct iecm_queue *rx_bufq = NULL;
+	struct sk_buff *skb = rxq->skb;
+	bool failure = false;
+	int i;
+
+	/* Process Rx packets bounded by budget */
+	while (likely(total_rx_pkts < (unsigned int)budget)) {
+		struct iecm_flex_rx_desc *splitq_flex_rx_desc;
+		union iecm_rx_desc *rx_desc;
+		struct iecm_rx_buf *hdr_buf = NULL;
+		struct iecm_rx_buf *rx_buf = NULL;
+		unsigned int pkt_len = 0;
+		unsigned int hdr_len = 0;
+		u16 gen_id, buf_id;
+		u8 stat_err0_qw0;
+		u8 stat_err_bits;
+		/* Header buffer overflow is only valid for header split */
+		bool hbo = false;
+		int bufq_id;
+
+		/* get the Rx desc from Rx queue based on 'next_to_clean' */
+		rx_desc = IECM_RX_DESC(rxq, rxq->next_to_clean);
+		splitq_flex_rx_desc = (struct iecm_flex_rx_desc *)rx_desc;
+
+		/* This memory barrier is needed to keep us from reading any
+		 * other fields out of the rx_desc until it is written back
+		 */
+		dma_rmb();
+
+		/* if the descriptor isn't done, no work yet to do */
+		gen_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id);
+		gen_id = (gen_id & IECM_RXD_FLEX_GEN_M) >> IECM_RXD_FLEX_GEN_S;
+		if (test_bit(__IECM_Q_GEN_CHK, rxq->flags) != gen_id)
+			break;
+
+		pkt_len = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id) &
+			  IECM_RXD_FLEX_LEN_PBUF_M;
+
+		hbo = splitq_flex_rx_desc->status_err0_qw1 &
+		      BIT(IECM_RX_FLEX_DESC_STATUS0_HBO_S);
+
+		if (unlikely(hbo)) {
+			rxq->q_stats.rx.hsplit_hbo++;
+			goto bypass_hsplit;
+		}
+
+		hdr_len =
+			le16_to_cpu(splitq_flex_rx_desc->hdrlen_flags) &
+			IECM_RXD_FLEX_LEN_HDR_M;
+
+bypass_hsplit:
+		bufq_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id);
+		bufq_id = (bufq_id & IECM_RXD_FLEX_BUFQ_ID_M) >>
+			  IECM_RXD_FLEX_BUFQ_ID_S;
+		/* retrieve buffer from the rxq */
+		rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq;
+
+		buf_id = le16_to_cpu(splitq_flex_rx_desc->fmd0_bufid.buf_id);
+
+		if (pkt_len) {
+			rx_buf = &rx_bufq->rx_buf.buf[buf_id];
+			iecm_rx_get_buf_page(rx_bufq->dev, rx_buf, pkt_len);
+		}
+
+		if (hdr_len) {
+			hdr_buf = &rx_bufq->rx_buf.hdr_buf[buf_id];
+			iecm_rx_get_buf_page(rx_bufq->dev, hdr_buf,
+					     hdr_len);
+
+			skb = iecm_rx_construct_skb(rxq, hdr_buf, hdr_len);
+		}
+
+		if (skb && pkt_len)
+			iecm_rx_add_frag(rx_buf, skb, pkt_len);
+		else if (pkt_len)
+			skb = iecm_rx_construct_skb(rxq, rx_buf, pkt_len);
+
+		/* exit if we failed to retrieve a buffer */
+		if (!skb) {
+			/* If we fetched a buffer but didn't use it,
+			 * undo the pagecnt_bias decrement
+			 */
+			if (rx_buf)
+				rx_buf->pagecnt_bias++;
+			break;
+		}
+
+		iecm_rx_splitq_put_bufs(rx_bufq, hdr_buf, rx_buf);
+		iecm_rx_bump_ntc(rxq);
+		cleaned_count[bufq_id]++;
+
+		/* skip if it is non EOP desc */
+		if (iecm_rx_splitq_is_non_eop(splitq_flex_rx_desc))
+			continue;
+
+		stat_err_bits = BIT(IECM_RX_FLEX_DESC_STATUS0_RXE_S);
+		stat_err0_qw0 = splitq_flex_rx_desc->status_err0_qw0;
+		if (unlikely(iecm_rx_splitq_test_staterr(stat_err0_qw0,
+							 stat_err_bits))) {
+			dev_kfree_skb_any(skb);
+			skb = NULL;
+			continue;
+		}
+
+		/* correct empty headers and pad skb if needed (to make a valid
+		 * Ethernet frame)
+		 */
+		if (iecm_rx_cleanup_headers(skb)) {
+			skb = NULL;
+			continue;
+		}
+
+		/* probably a little skewed due to removing CRC */
+		total_rx_bytes += skb->len;
+
+		/* protocol */
+		if (unlikely(iecm_rx_process_skb_fields(rxq, skb,
+							splitq_flex_rx_desc))) {
+			dev_kfree_skb_any(skb);
+			skb = NULL;
+			continue;
+		}
+
+		/* send completed skb up the stack */
+		iecm_rx_skb(rxq, skb);
+		skb = NULL;
+
+		/* update budget accounting */
+		total_rx_pkts++;
+	}
+	for (i = 0; i < IECM_BUFQS_PER_RXQ_SET; i++) {
+		if (cleaned_count[i]) {
+			rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[i].bufq;
+			failure = iecm_rx_buf_hw_alloc_all(rx_bufq,
+							   cleaned_count[i]) ||
+				  failure;
+		}
+	}
+
+	rxq->skb = skb;
+	u64_stats_update_begin(&rxq->stats_sync);
+	rxq->q_stats.rx.packets += total_rx_pkts;
+	rxq->q_stats.rx.bytes += total_rx_bytes;
+	u64_stats_update_end(&rxq->stats_sync);
+
+	rxq->itr.stats.rx.packets += total_rx_pkts;
+	rxq->itr.stats.rx.bytes += total_rx_bytes;
+
+	/* guarantee a trip back through this routine if there was a failure */
+	return failure ? budget : (int)total_rx_pkts;
 }
 
 /**
@@ -2375,7 +3470,15 @@ iecm_vport_intr_napi_ena_all(struct iecm_vport *vport)
 static inline bool
 iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget)
 {
-	/* stub */
+	bool clean_complete = true;
+	int i, budget_per_q;
+
+	budget_per_q = max(budget / q_vec->num_txq, 1);
+	for (i = 0; i < q_vec->num_txq; i++) {
+		if (!iecm_tx_clean_complq(q_vec->tx[i], budget_per_q))
+			clean_complete = false;
+	}
+	return clean_complete;
 }
 
 /**
@@ -2390,7 +3493,22 @@ static inline bool
 iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget,
 			 int *cleaned)
 {
-	/* stub */
+	bool clean_complete = true;
+	int pkts_cleaned_per_q;
+	int i, budget_per_q;
+
+	budget_per_q = max(budget / q_vec->num_rxq, 1);
+	for (i = 0; i < q_vec->num_rxq; i++) {
+		pkts_cleaned_per_q  = iecm_rx_splitq_clean(q_vec->rx[i],
+							   budget_per_q);
+		/* if we clean as many as budgeted, we must not
+		 * be done
+		 */
+		if (pkts_cleaned_per_q >= budget_per_q)
+			clean_complete = false;
+		*cleaned += pkts_cleaned_per_q;
+	}
+	return clean_complete;
 }
 
 /**
@@ -2400,7 +3518,34 @@ iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget,
  */
 static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
 {
-	/* stub */
+	struct iecm_q_vector *q_vector =
+				container_of(napi, struct iecm_q_vector, napi);
+	bool clean_complete;
+	int work_done = 0;
+
+	clean_complete = iecm_tx_splitq_clean_all(q_vector, budget);
+
+	/* Handle case where we are called by netpoll with a budget of 0 */
+	if (budget <= 0)
+		return budget;
+
+	/* We attempt to distribute budget to each Rx queue fairly, but don't
+	 * allow the budget to go below 1 because that would exit polling early.
+	 */
+	clean_complete |= iecm_rx_splitq_clean_all(q_vector, budget,
+						   &work_done);
+
+	/* If work not completed, return budget and polling will return */
+	if (!clean_complete)
+		return budget;
+
+	/* Exit the polling mode, but don't re-enable interrupts if stack might
+	 * poll us due to busy-polling
+	 */
+	if (likely(napi_complete_done(napi, work_done)))
+		iecm_vport_intr_update_itr_ena_irq(q_vector);
+
+	return min_t(int, work_done, budget - 1);
 }
 
 /**
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 12/15] iecm: Add singleq TX/RX
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (10 preceding siblings ...)
  2020-08-24 17:33 ` [net-next v5 11/15] iecm: Add splitq TX/RX Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-24 17:33 ` [net-next v5 13/15] iecm: Add ethtool Tony Nguyen
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Implement legacy single queue model for TX/RX flows.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 .../ethernet/intel/iecm/iecm_singleq_txrx.c   | 670 +++++++++++++++++-
 1 file changed, 652 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
index 063c35274f38..9a63b089288f 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
@@ -17,7 +17,11 @@ static __le64
 iecm_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size,
 			   u64 td_tag)
 {
-	/* stub */
+	return cpu_to_le64(IECM_TX_DESC_DTYPE_DATA |
+			   (td_cmd    << IECM_TXD_QW1_CMD_S) |
+			   (td_offset << IECM_TXD_QW1_OFFSET_S) |
+			   ((u64)size << IECM_TXD_QW1_TX_BUF_SZ_S) |
+			   (td_tag    << IECM_TXD_QW1_L2TAG1_S));
 }
 
 /**
@@ -31,7 +35,93 @@ static
 int iecm_tx_singleq_csum(struct iecm_tx_buf *first,
 			 struct iecm_tx_offload_params *off)
 {
-	/* stub */
+	u32 l4_len = 0, l3_len = 0, l2_len = 0;
+	struct sk_buff *skb = first->skb;
+	union {
+		struct iphdr *v4;
+		struct ipv6hdr *v6;
+		unsigned char *hdr;
+	} ip;
+	union {
+		struct tcphdr *tcp;
+		unsigned char *hdr;
+	} l4;
+	__be16 frag_off, protocol;
+	unsigned char *exthdr;
+	u32 offset, cmd = 0;
+	u8 l4_proto = 0;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
+
+	if (skb->encapsulation)
+		return -1;
+
+	ip.hdr = skb_network_header(skb);
+	l4.hdr = skb_transport_header(skb);
+
+	/* compute outer L2 header size */
+	l2_len = ip.hdr - skb->data;
+	offset = (l2_len / 2) << IECM_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable IP checksum offloads */
+	protocol = vlan_get_protocol(skb);
+	if (protocol == htons(ETH_P_IP)) {
+		l4_proto = ip.v4->protocol;
+		/* the stack computes the IP header already, the only time we
+		 * need the hardware to recompute it is in the case of TSO.
+		 */
+		if (first->tx_flags & IECM_TX_FLAGS_TSO)
+			cmd |= IECM_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		else
+			cmd |= IECM_TX_DESC_CMD_IIPT_IPV4;
+
+	} else if (protocol == htons(ETH_P_IPV6)) {
+		cmd |= IECM_TX_DESC_CMD_IIPT_IPV6;
+		exthdr = ip.hdr + sizeof(struct ipv6hdr);
+		l4_proto = ip.v6->nexthdr;
+		if (l4.hdr != exthdr)
+			ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_proto,
+					 &frag_off);
+	} else {
+		return -1;
+	}
+
+	/* compute inner L3 header size */
+	l3_len = l4.hdr - ip.hdr;
+	offset |= (l3_len / 4) << IECM_TX_DESC_LEN_IPLEN_S;
+
+	/* Enable L4 checksum offloads */
+	switch (l4_proto) {
+	case IPPROTO_TCP:
+		/* enable checksum offloads */
+		cmd |= IECM_TX_DESC_CMD_L4T_EOFT_TCP;
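+		/* doff is already in units of 32-bit words */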
+		l4_len = l4.tcp->doff;
+		offset |= l4_len << IECM_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case IPPROTO_UDP:
+		/* enable UDP checksum offload */
+		cmd |= IECM_TX_DESC_CMD_L4T_EOFT_UDP;
+		l4_len = (sizeof(struct udphdr) >> 2);
+		offset |= l4_len << IECM_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case IPPROTO_SCTP:
+		/* enable SCTP checksum offload */
+		cmd |= IECM_TX_DESC_CMD_L4T_EOFT_SCTP;
+		l4_len = sizeof(struct sctphdr) >> 2;
+		offset |= l4_len << IECM_TX_DESC_LEN_L4_LEN_S;
+		break;
+
+	default:
+		if (first->tx_flags & IECM_TX_FLAGS_TSO)
+			return -1;
+		skb_checksum_help(skb);
+		return 0;
+	}
+
+	off->td_cmd |= cmd;
+	off->hdr_offsets |= offset;
+	return 1;
 }
 
 /**
@@ -48,7 +138,125 @@ static void
 iecm_tx_singleq_map(struct iecm_queue *tx_q, struct iecm_tx_buf *first,
 		    struct iecm_tx_offload_params *offloads)
 {
-	/* stub */
+	u32 offsets = offloads->hdr_offsets;
+	struct iecm_base_tx_desc *tx_desc;
+	u64  td_cmd = offloads->td_cmd;
+	unsigned int data_len, size;
+	struct iecm_tx_buf *tx_buf;
+	u16 i = tx_q->next_to_use;
+	struct netdev_queue *nq;
+	struct sk_buff *skb;
+	skb_frag_t *frag;
+	dma_addr_t dma;
+
+	skb = first->skb;
+
+	data_len = skb->data_len;
+	size = skb_headlen(skb);
+
+	tx_desc = IECM_BASE_TX_DESC(tx_q, i);
+
+	dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+
+	tx_buf = first;
+
+	/* write each descriptor with CRC bit */
+	if (tx_q->vport->adapter->dev_ops.crc_enable)
+		tx_q->vport->adapter->dev_ops.crc_enable(&td_cmd);
+
+	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+		unsigned int max_data = IECM_TX_MAX_DESC_DATA_ALIGNED;
+
+		if (dma_mapping_error(tx_q->dev, dma))
+			goto dma_error;
+
+		/* record length, and DMA address */
+		dma_unmap_len_set(tx_buf, len, size);
+		dma_unmap_addr_set(tx_buf, dma, dma);
+
+		/* align size to end of page */
+		max_data += -dma & (IECM_TX_MAX_READ_REQ_SIZE - 1);
+		tx_desc->buf_addr = cpu_to_le64(dma);
+
+		/* account for data chunks larger than the hardware
+		 * can handle
+		 */
+		while (unlikely(size > IECM_TX_MAX_DESC_DATA)) {
+			tx_desc->qw1 = iecm_tx_singleq_build_ctob(td_cmd,
+								  offsets,
+								  size, 0);
+			tx_desc++;
+			i++;
+
+			if (i == tx_q->desc_count) {
+				tx_desc = IECM_BASE_TX_DESC(tx_q, 0);
+				i = 0;
+			}
+
+			dma += max_data;
+			size -= max_data;
+
+			max_data = IECM_TX_MAX_DESC_DATA_ALIGNED;
+			tx_desc->buf_addr = cpu_to_le64(dma);
+		}
+
+		if (likely(!data_len))
+			break;
+		tx_desc->qw1 = iecm_tx_singleq_build_ctob(td_cmd, offsets,
+							  size, 0);
+		tx_desc++;
+		i++;
+
+		if (i == tx_q->desc_count) {
+			tx_desc = IECM_BASE_TX_DESC(tx_q, 0);
+			i = 0;
+		}
+
+		size = skb_frag_size(frag);
+		data_len -= size;
+
+		dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
+				       DMA_TO_DEVICE);
+
+		tx_buf = &tx_q->tx_buf[i];
+	}
+
+	/* record bytecount for BQL */
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	netdev_tx_sent_queue(nq, first->bytecount);
+
+	/* record SW timestamp if HW timestamp is not available */
+	skb_tx_timestamp(first->skb);
+
+	/* write last descriptor with RS and EOP bits */
+	td_cmd |= (u64)(IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS);
+
+	tx_desc->qw1 = iecm_tx_singleq_build_ctob(td_cmd, offsets, size, 0);
+
+	i++;
+	if (i == tx_q->desc_count)
+		i = 0;
+
+	/* set next_to_watch value indicating a packet is present */
+	first->next_to_watch = tx_desc;
+
+	iecm_tx_buf_hw_update(tx_q, i);
+
+	return;
+
+dma_error:
+	/* clear DMA mappings for failed tx_buf map */
+	for (;;) {
+		tx_buf = &tx_q->tx_buf[i];
+		iecm_tx_buf_rel(tx_q, tx_buf);
+		if (tx_buf == first)
+			break;
+		if (i == 0)
+			i = tx_q->desc_count;
+		i--;
+	}
+
+	tx_q->next_to_use = i;
 }
 
 /**
@@ -61,7 +269,42 @@ iecm_tx_singleq_map(struct iecm_queue *tx_q, struct iecm_tx_buf *first,
 static netdev_tx_t
 iecm_tx_singleq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
 {
-	/* stub */
+	struct iecm_tx_offload_params offload = {0};
+	struct iecm_tx_buf *first;
+	unsigned int count;
+	int csum;
+
+	count = iecm_tx_desc_count_required(skb);
+
+	/* need: 1 descriptor per page * PAGE_SIZE/IECM_MAX_DATA_PER_TXD,
+	 *       + 1 desc for skb_head_len/IECM_MAX_DATA_PER_TXD,
+	 *       + 4 desc gap to avoid the cache line where head is,
+	 *       + 1 desc for context descriptor,
+	 * otherwise try next time
+	 */
+	if (iecm_tx_maybe_stop(tx_q, count + IECM_TX_DESCS_PER_CACHE_LINE +
+			       IECM_TX_DESCS_FOR_CTX)) {
+		return NETDEV_TX_BUSY;
+	}
+
+	/* record the location of the first descriptor for this packet */
+	first = &tx_q->tx_buf[tx_q->next_to_use];
+	first->skb = skb;
+	first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+	first->gso_segs = 1;
+	first->tx_flags = 0;
+
+	csum = iecm_tx_singleq_csum(first, &offload);
+	if (csum < 0)
+		goto out_drop;
+
+	iecm_tx_singleq_map(tx_q, first, &offload);
+
+	return NETDEV_TX_OK;
+
+out_drop:
+	dev_kfree_skb_any(skb);
+	return NETDEV_TX_OK;
 }
 
 /**
@@ -74,7 +317,18 @@ iecm_tx_singleq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
 netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb,
 				  struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	struct iecm_queue *tx_q;
+
+	tx_q = vport->txqs[skb->queue_mapping];
+
+	/* hardware can't handle really short frames, hardware padding works
+	 * beyond this point
+	 */
+	if (skb_put_padto(skb, IECM_TX_MIN_LEN))
+		return NETDEV_TX_OK;
+
+	return iecm_tx_singleq_frame(skb, tx_q);
 }
 
 /**
@@ -85,7 +339,98 @@ netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb,
  */
 static bool iecm_tx_singleq_clean(struct iecm_queue *tx_q, int napi_budget)
 {
-	/* stub */
+	unsigned int budget = tx_q->vport->compln_clean_budget;
+	unsigned int total_bytes = 0, total_pkts = 0;
+	struct iecm_base_tx_desc *tx_desc;
+	s16 ntc = tx_q->next_to_clean;
+	struct iecm_tx_buf *tx_buf;
+	struct netdev_queue *nq;
+
+	tx_desc = IECM_BASE_TX_DESC(tx_q, ntc);
+	tx_buf = &tx_q->tx_buf[ntc];
+	ntc -= tx_q->desc_count;
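+	/* ntc is kept negative (offset by -desc_count) so the !ntc
+	 * checks below detect a ring wrap with a cheap compare to zero
+	 */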
+
+	do {
+		struct iecm_base_tx_desc *eop_desc = tx_buf->next_to_watch;
+
+		/* if next_to_watch is not set then no work pending */
+		if (!eop_desc)
+			break;
+
+		/* prevent any other reads prior to eop_desc */
+		smp_rmb();
+
+		/* if the descriptor isn't done, no work yet to do */
+		if (!(eop_desc->qw1 &
+		      cpu_to_le64(IECM_TX_DESC_DTYPE_DESC_DONE)))
+			break;
+
+		/* clear next_to_watch to prevent false hangs */
+		tx_buf->next_to_watch = NULL;
+
+		/* update the statistics for this packet */
+		total_bytes += tx_buf->bytecount;
+		total_pkts += tx_buf->gso_segs;
+
+		/* free the skb */
+		napi_consume_skb(tx_buf->skb, napi_budget);
+
+		/* unmap skb header data */
+		dma_unmap_single(tx_q->dev,
+				 dma_unmap_addr(tx_buf, dma),
+				 dma_unmap_len(tx_buf, len),
+				 DMA_TO_DEVICE);
+
+		/* clear tx_buf data */
+		tx_buf->skb = NULL;
+		dma_unmap_len_set(tx_buf, len, 0);
+
+		/* unmap remaining buffers */
+		while (tx_desc != eop_desc) {
+			tx_buf++;
+			tx_desc++;
+			ntc++;
+			if (unlikely(!ntc)) {
+				ntc -= tx_q->desc_count;
+				tx_buf = tx_q->tx_buf;
+				tx_desc = IECM_BASE_TX_DESC(tx_q, 0);
+			}
+
+			/* unmap any remaining paged data */
+			if (dma_unmap_len(tx_buf, len)) {
+				dma_unmap_page(tx_q->dev,
+					       dma_unmap_addr(tx_buf, dma),
+					       dma_unmap_len(tx_buf, len),
+					       DMA_TO_DEVICE);
+				dma_unmap_len_set(tx_buf, len, 0);
+			}
+		}
+
+		tx_buf++;
+		tx_desc++;
+		ntc++;
+		if (unlikely(!ntc)) {
+			ntc -= tx_q->desc_count;
+			tx_buf = tx_q->tx_buf;
+			tx_desc = IECM_BASE_TX_DESC(tx_q, 0);
+		}
+		/* update budget */
+		budget--;
+	} while (likely(budget));
+
+	ntc += tx_q->desc_count;
+	tx_q->next_to_clean = ntc;
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	netdev_tx_completed_queue(nq, total_pkts, total_bytes);
+	tx_q->itr.stats.tx.packets += total_pkts;
+	tx_q->itr.stats.tx.bytes += total_bytes;
+
+	u64_stats_update_begin(&tx_q->stats_sync);
+	tx_q->q_stats.tx.packets += total_pkts;
+	tx_q->q_stats.tx.bytes += total_bytes;
+	u64_stats_update_end(&tx_q->stats_sync);
+
+	return !!budget;
 }
 
 /**
@@ -98,7 +443,16 @@ static bool iecm_tx_singleq_clean(struct iecm_queue *tx_q, int napi_budget)
 static inline bool
 iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget)
 {
-	/* stub */
+	bool clean_complete = true;
+	int i, budget_per_q;
+
+	budget_per_q = max(budget / q_vec->num_txq, 1);
+	for (i = 0; i < q_vec->num_txq; i++) {
+		if (!iecm_tx_singleq_clean(q_vec->tx[i], budget_per_q))
+			clean_complete = false;
+	}
+
+	return clean_complete;
 }
 
 /**
@@ -116,7 +470,8 @@ static bool
 iecm_rx_singleq_test_staterr(struct iecm_singleq_base_rx_desc *rx_desc,
 			     const u64 stat_err_bits)
 {
-	/* stub */
+	return !!(rx_desc->qword1.status_error_ptype_len &
+		  cpu_to_le64(stat_err_bits));
 }
 
 /**
@@ -129,7 +484,15 @@ static bool iecm_rx_singleq_is_non_eop(struct iecm_queue *rxq,
 				       struct iecm_singleq_base_rx_desc
 				       *rx_desc, struct sk_buff *skb)
 {
-	/* stub */
+	/* if we are the last buffer then there is nothing else to do */
+#define IECM_RXD_EOF BIT(IECM_RX_BASE_DESC_STATUS_EOF_S)
+	if (likely(iecm_rx_singleq_test_staterr(rx_desc, IECM_RXD_EOF)))
+		return false;
+
+	/* place skb in next buffer to be received */
+	rxq->rx_buf.buf[rxq->next_to_clean].skb = skb;
+
+	return true;
 }
 
 /**
@@ -145,7 +508,63 @@ static void iecm_rx_singleq_csum(struct iecm_queue *rxq, struct sk_buff *skb,
 				 struct iecm_singleq_base_rx_desc *rx_desc,
 				 u8 ptype)
 {
-	/* stub */
+	u64 qw1 = le64_to_cpu(rx_desc->qword1.status_error_ptype_len);
+	struct iecm_rx_ptype_decoded decoded;
+	bool ipv4, ipv6;
+	u32 rx_status;
+	u8 rx_error;
+
+	/* Start with CHECKSUM_NONE and by default csum_level = 0 */
+	skb->ip_summed = CHECKSUM_NONE;
+	skb_checksum_none_assert(skb);
+
+	/* check if Rx checksum is enabled */
+	if (!(rxq->vport->netdev->features & NETIF_F_RXCSUM))
+		return;
+
+	rx_status = ((qw1 & IECM_RXD_QW1_STATUS_M) >> IECM_RXD_QW1_STATUS_S);
+	rx_error = ((qw1 & IECM_RXD_QW1_ERROR_M) >> IECM_RXD_QW1_ERROR_S);
+
+	/* check if HW has decoded the packet and checksum */
+	if (!(rx_status & BIT(IECM_RX_BASE_DESC_STATUS_L3L4P_S)))
+		return;
+
+	decoded = rxq->vport->rx_ptype_lkup[ptype];
+	if (!(decoded.known && decoded.outer_ip))
+		return;
+
+	ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4);
+	ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6);
+
+	if (ipv4 && (rx_error & (BIT(IECM_RX_BASE_DESC_ERROR_IPE_S) |
+				 BIT(IECM_RX_BASE_DESC_ERROR_EIPE_S))))
+		goto checksum_fail;
+	else if (ipv6 && (rx_status &
+		 (BIT(IECM_RX_BASE_DESC_STATUS_IPV6EXADD_S))))
+		goto checksum_fail;
+
+	/* check for L4 errors and handle packets that were not able to be
+	 * checksummed due to arrival speed
+	 */
+	if (rx_error & BIT(IECM_RX_BASE_DESC_ERROR_L3L4E_S))
+		goto checksum_fail;
+
+	/* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */
+	switch (decoded.inner_prot) {
+	case IECM_RX_PTYPE_INNER_PROT_ICMP:
+	case IECM_RX_PTYPE_INNER_PROT_TCP:
+	case IECM_RX_PTYPE_INNER_PROT_UDP:
+	case IECM_RX_PTYPE_INNER_PROT_SCTP:
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		break;
+	default:
+		break;
+	}
+	return;
+
+checksum_fail:
+	rxq->q_stats.rx.csum_err++;
 }
 
 /**
@@ -165,7 +584,10 @@ iecm_rx_singleq_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
 				   struct iecm_singleq_base_rx_desc *rx_desc,
 				   u8 ptype)
 {
-	/* stub */
+	/* modifies the skb - consumes the enet header */
+	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
+
+	iecm_rx_singleq_csum(rxq, skb, rx_desc, ptype);
 }
 
 /**
@@ -178,7 +600,44 @@ iecm_rx_singleq_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
 bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rx_q,
 				      u16 cleaned_count)
 {
-	/* stub */
+	struct iecm_singleq_rx_buf_desc *singleq_rx_desc = NULL;
+	u16 nta = rx_q->next_to_alloc;
+	struct iecm_rx_buf *buf;
+
+	/* do nothing if no valid netdev defined */
+	if (!rx_q->vport->netdev || !cleaned_count)
+		return false;
+
+	singleq_rx_desc = IECM_SINGLEQ_RX_BUF_DESC(rx_q, nta);
+	buf = &rx_q->rx_buf.buf[nta];
+
+	do {
+		if (!iecm_rx_buf_hw_alloc(rx_q, buf))
+			break;
+
+		/* Refresh the desc even if buffer_addrs didn't change
+		 * because each write-back erases this info.
+		 */
+		singleq_rx_desc->pkt_addr =
+			cpu_to_le64(buf->dma + buf->page_offset);
+		singleq_rx_desc->hdr_addr = 0;
+		singleq_rx_desc++;
+
+		buf++;
+		nta++;
+		if (unlikely(nta == rx_q->desc_count)) {
+			singleq_rx_desc = IECM_SINGLEQ_RX_BUF_DESC(rx_q, 0);
+			buf = rx_q->rx_buf.buf;
+			nta = 0;
+		}
+
+		cleaned_count--;
+	} while (cleaned_count);
+
+	if (rx_q->next_to_alloc != nta)
+		iecm_rx_buf_hw_update(rx_q, nta);
+
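+	/* cleaned_count is only non-zero here if a buffer allocation failed */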
+	return !!cleaned_count;
 }
 
 /**
@@ -192,7 +651,16 @@ bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rx_q,
 static void iecm_singleq_rx_put_buf(struct iecm_queue *rx_bufq,
 				    struct iecm_rx_buf *rx_buf)
 {
-	/* stub */
+	u16 ntu = rx_bufq->next_to_use;
+	bool recycled = false;
+
+	recycled = iecm_rx_recycle_buf(rx_bufq, false, rx_buf);
+
+	/* update, and store next to alloc if the buffer was recycled */
+	if (recycled) {
+		ntu++;
+		rx_bufq->next_to_use = (ntu < rx_bufq->desc_count) ? ntu : 0;
+	}
 }
 
 /**
@@ -201,7 +669,12 @@ static void iecm_singleq_rx_put_buf(struct iecm_queue *rx_bufq,
  */
 static void iecm_singleq_rx_bump_ntc(struct iecm_queue *q)
 {
-	/* stub */
+	u16 ntc = q->next_to_clean + 1;
+	/* fetch, update, and store next to clean */
+	if (ntc < q->desc_count)
+		q->next_to_clean = ntc;
+	else
+		q->next_to_clean = 0;
 }
 
 /**
@@ -217,7 +690,17 @@ static struct sk_buff *
 iecm_singleq_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf,
 			     const unsigned int size)
 {
-	/* stub */
+	prefetch(rx_buf->page);
+
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(dev, rx_buf->dma,
+				      rx_buf->page_offset, size,
+				      DMA_FROM_DEVICE);
+
+	/* We have pulled a buffer for use, so decrement pagecnt_bias */
+	rx_buf->pagecnt_bias--;
+
+	return rx_buf->skb;
 }
 
 /**
@@ -229,7 +712,116 @@ iecm_singleq_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf,
  */
 static int iecm_rx_singleq_clean(struct iecm_queue *rx_q, int budget)
 {
-	/* stub */
+	struct iecm_singleq_base_rx_desc *singleq_base_rx_desc;
+	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+	u16 cleaned_count = 0;
+	bool failure = false;
+
+	/* Process Rx packets bounded by budget */
+	while (likely(total_rx_pkts < (unsigned int)budget)) {
+		union iecm_rx_desc *rx_desc;
+		struct sk_buff *skb = NULL;
+		struct iecm_rx_buf *rx_buf;
+		unsigned int size;
+		u8 rx_ptype;
+		u64 qword;
+
+		/* get the Rx desc from Rx queue based on 'next_to_clean' */
+		rx_desc = IECM_RX_DESC(rx_q, rx_q->next_to_clean);
+		singleq_base_rx_desc = (struct iecm_singleq_base_rx_desc *)
+					rx_desc;
+		/* status_error_ptype_len will always be zero for unused
+		 * descriptors because it's cleared in cleanup and overlaps
+		 * with hdr_addr, which is always zero because packet split
+		 * isn't used. If the hardware wrote DD then the length will
+		 * be non-zero.
+		 */
+		qword =
+		le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
+
+		/* This memory barrier is needed to keep us from reading
+		 * any other fields out of the rx_desc
+		 */
+		dma_rmb();
+#define IECM_RXD_DD BIT(IECM_RX_BASE_DESC_STATUS_DD_S)
+		if (!iecm_rx_singleq_test_staterr(singleq_base_rx_desc,
+						  IECM_RXD_DD))
+			break;
+
+		size = (qword & IECM_RXD_QW1_LEN_PBUF_M) >>
+		       IECM_RXD_QW1_LEN_PBUF_S;
+		if (!size)
+			break;
+
+		rx_buf = &rx_q->rx_buf.buf[rx_q->next_to_clean];
+		skb = iecm_singleq_rx_get_buf_page(rx_q->dev, rx_buf, size);
+
+		if (skb)
+			iecm_rx_add_frag(rx_buf, skb, size);
+		else
+			skb = iecm_rx_construct_skb(rx_q, rx_buf, size);
+
+		/* exit if we failed to retrieve a buffer */
+		if (!skb) {
+			rx_buf->pagecnt_bias++;
+			break;
+		}
+
+		iecm_singleq_rx_put_buf(rx_q, rx_buf);
+		iecm_singleq_rx_bump_ntc(rx_q);
+
+		cleaned_count++;
+
+		/* skip if it is non EOP desc */
+		if (iecm_rx_singleq_is_non_eop(rx_q, singleq_base_rx_desc,
+					       skb))
+			continue;
+
+#define IECM_RXD_ERR_S BIT(IECM_RXD_QW1_ERROR_S)
+		if (unlikely(iecm_rx_singleq_test_staterr(singleq_base_rx_desc,
+							  IECM_RXD_ERR_S))) {
+			dev_kfree_skb_any(skb);
+			skb = NULL;
+			continue;
+		}
+
+		/* correct empty headers and pad skb if needed (to make a
+		 * valid ethernet frame)
+		 */
+		if (iecm_rx_cleanup_headers(skb)) {
+			skb = NULL;
+			continue;
+		}
+
+		/* probably a little skewed due to removing CRC */
+		total_rx_bytes += skb->len;
+
+		rx_ptype = (qword & IECM_RXD_QW1_PTYPE_M) >>
+				IECM_RXD_QW1_PTYPE_S;
+
+		/* protocol */
+		iecm_rx_singleq_process_skb_fields(rx_q, skb,
+						   singleq_base_rx_desc,
+						   rx_ptype);
+
+		/* send completed skb up the stack */
+		iecm_rx_skb(rx_q, skb);
+
+		/* update budget accounting */
+		total_rx_pkts++;
+	}
+	if (cleaned_count)
+		failure = iecm_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count);
+
+	rx_q->itr.stats.rx.packets += total_rx_pkts;
+	rx_q->itr.stats.rx.bytes += total_rx_bytes;
+	u64_stats_update_begin(&rx_q->stats_sync);
+	rx_q->q_stats.rx.packets += total_rx_pkts;
+	rx_q->q_stats.rx.bytes += total_rx_bytes;
+	u64_stats_update_end(&rx_q->stats_sync);
+
+	/* guarantee a trip back through this routine if there was a failure */
+	return failure ? budget : (int)total_rx_pkts;
 }
 
 /**
@@ -244,7 +836,22 @@ static inline bool
 iecm_rx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget,
 			  int *cleaned)
 {
-	/* stub */
+	bool clean_complete = true;
+	int pkts_cleaned_per_q;
+	int budget_per_q, i;
+
+	budget_per_q = max(budget / q_vec->num_rxq, 1);
+	for (i = 0; i < q_vec->num_rxq; i++) {
+		pkts_cleaned_per_q = iecm_rx_singleq_clean(q_vec->rx[i],
+							   budget_per_q);
+
+		/* if we clean as many as budgeted, we must not be done */
+		if (pkts_cleaned_per_q >= budget_per_q)
+			clean_complete = false;
+		*cleaned += pkts_cleaned_per_q;
+	}
+
+	return clean_complete;
 }
 
 /**
@@ -254,5 +861,32 @@ iecm_rx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget,
  */
 int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
 {
-	/* stub */
+	struct iecm_q_vector *q_vector =
+				container_of(napi, struct iecm_q_vector, napi);
+	bool clean_complete;
+	int work_done = 0;
+
+	clean_complete = iecm_tx_singleq_clean_all(q_vector, budget);
+
+	/* Handle case where we are called by netpoll with a budget of 0 */
+	if (budget <= 0)
+		return budget;
+
+	/* We attempt to distribute budget to each Rx queue fairly, but don't
+	 * allow the budget to go below 1 because that would exit polling early.
+	 */
+	clean_complete |= iecm_rx_singleq_clean_all(q_vector, budget,
+						    &work_done);
+
+	/* If work not completed, return budget and polling will return */
+	if (!clean_complete)
+		return budget;
+
+	/* Exit the polling mode, but don't re-enable interrupts if stack might
+	 * poll us due to busy-polling
+	 */
+	if (likely(napi_complete_done(napi, work_done)))
+		iecm_vport_intr_update_itr_ena_irq(q_vector);
+
+	return min_t(int, work_done, budget - 1);
 }
-- 
2.26.2



* [net-next v5 13/15] iecm: Add ethtool
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (11 preceding siblings ...)
  2020-08-24 17:33 ` [net-next v5 12/15] iecm: Add singleq TX/RX Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-24 21:44   ` Michal Kubecek
  2020-08-24 17:33 ` [net-next v5 14/15] iecm: Add iecm to the kernel build system Tony Nguyen
  2020-08-24 17:33 ` [net-next v5 15/15] idpf: Introduce idpf driver Tony Nguyen
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

Implement ethtool interface for the common module.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
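Not part of the patch, just a quick way to poke the ethtool ops wired up below
from user space: a minimal sketch (assuming only the standard uapi headers)
that invokes the driver's .get_drvinfo op, i.e. iecm_get_drvinfo(), through the
SIOCETHTOOL ioctl. The interface name is taken from argv[1].

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
	struct ethtool_drvinfo drvinfo = { .cmd = ETHTOOL_GDRVINFO };
	struct ifreq ifr = { 0 };
	int fd, ret;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
		return 1;
	}

	strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&drvinfo;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return 1;

	/* kernel dispatches this to the netdev's ethtool_ops->get_drvinfo */
	ret = ioctl(fd, SIOCETHTOOL, &ifr);
	if (ret < 0)
		perror("SIOCETHTOOL");
	else
		printf("driver: %s, bus: %s\n", drvinfo.driver,
		       drvinfo.bus_info);

	close(fd);
	return ret < 0;
}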
 .../net/ethernet/intel/iecm/iecm_ethtool.c    | 1034 ++++++++++++++++-
 drivers/net/ethernet/intel/iecm/iecm_lib.c    |  143 ++-
 2 files changed, 1173 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c
index a6532592f2f4..9b23541f81c4 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c
@@ -3,6 +3,1038 @@
 
 #include <linux/net/intel/iecm.h>
 
+/**
+ * iecm_get_rxnfc - command to get RX flow classification rules
+ * @netdev: network interface device structure
+ * @cmd: ethtool rxnfc command
+ * @rule_locs: pointer to store rule locations
+ *
+ * Returns 0 if the command is supported, -EOPNOTSUPP otherwise.
+ */
+static int iecm_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
+			  u32 __always_unused *rule_locs)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	int ret = -EOPNOTSUPP;
+
+	switch (cmd->cmd) {
+	case ETHTOOL_GRXRINGS:
+		cmd->data = vport->num_rxq;
+		ret = 0;
+		break;
+	case ETHTOOL_GRXFH:
+		netdev_info(netdev, "RSS hash info is not available\n");
+		break;
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+/**
+ * iecm_get_rxfh_key_size - get the RSS hash key size
+ * @netdev: network interface device structure
+ *
+ * Returns the RSS hash key size.
+ */
+static u32 iecm_get_rxfh_key_size(struct net_device *netdev)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	if (!iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_RSS)) {
+		dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n");
+		return 0;
+	}
+
+	return vport->adapter->rss_data.rss_key_size;
+}
+
+/**
+ * iecm_get_rxfh_indir_size - get the Rx flow hash indirection table size
+ * @netdev: network interface device structure
+ *
+ * Returns the table size.
+ */
+static u32 iecm_get_rxfh_indir_size(struct net_device *netdev)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	if (!iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_RSS)) {
+		dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n");
+		return 0;
+	}
+
+	return vport->adapter->rss_data.rss_lut_size;
+}
+
+/**
+ * iecm_find_virtual_qid - Finds the virtual RX qid from the absolute RX qid
+ * @vport: virtual port structure
+ * @qid_list: List of the RX qid's
+ * @abs_rx_qid: absolute RX qid
+ *
+ * Returns the virtual RX QID.
+ */
+static u32 iecm_find_virtual_qid(struct iecm_vport *vport, u16 *qid_list,
+				 u32 abs_rx_qid)
+{
+	u32 i;
+
+	for (i = 0; i < vport->num_rxq; i++)
+		if ((u32)qid_list[i] == abs_rx_qid)
+			break;
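+	/* i == vport->num_rxq here means the absolute qid was not found */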
+	return i;
+}
+
+/**
+ * iecm_get_rxfh - get the Rx flow hash indirection table
+ * @netdev: network interface device structure
+ * @indir: indirection table
+ * @key: hash key
+ * @hfunc: hash function in use
+ *
+ * Reads the indirection table from the driver's cached RSS configuration.
+ * Returns 0 on success, or -ENOMEM if the temporary queue id list cannot be
+ * allocated.
+ */
+static int iecm_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
+			 u8 *hfunc)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	struct iecm_adapter *adapter;
+	u16 i, *qid_list;
+	u32 abs_qid;
+
+	adapter = vport->adapter;
+
+	if (!iecm_is_cap_ena(adapter, VIRTCHNL_CAP_RSS)) {
+		dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n");
+		return 0;
+	}
+
+	if (adapter->state != __IECM_UP)
+		return 0;
+
+	if (hfunc)
+		*hfunc = ETH_RSS_HASH_TOP;
+
+	if (key)
+		memcpy(key, adapter->rss_data.rss_key,
+		       adapter->rss_data.rss_key_size);
+
+	qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL);
+	if (!qid_list)
+		return -ENOMEM;
+
+	iecm_get_rx_qid_list(vport, qid_list);
+
+	if (indir)
+		/* Each 32 bits pointed by 'indir' is stored with a lut entry */
+		for (i = 0; i < adapter->rss_data.rss_lut_size; i++) {
+			abs_qid = (u32)adapter->rss_data.rss_lut[i];
+			indir[i] = iecm_find_virtual_qid(vport, qid_list,
+							 abs_qid);
+		}
+
+	kfree(qid_list);
+
+	return 0;
+}
+
+/**
+ * iecm_set_rxfh - set the Rx flow hash indirection table
+ * @netdev: network interface device structure
+ * @indir: indirection table
+ * @key: hash key
+ * @hfunc: hash function to use
+ *
+ * Returns 0 after programming the table, or a negative error code on failure.
+ */
+static int iecm_set_rxfh(struct net_device *netdev, const u32 *indir,
+			 const u8 *key, const u8 hfunc)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	struct iecm_adapter *adapter;
+	u16 *qid_list;
+	u16 lut;
+
+	adapter = vport->adapter;
+
+	if (!iecm_is_cap_ena(adapter, VIRTCHNL_CAP_RSS)) {
+		dev_info(&adapter->pdev->dev, "RSS is not supported on this device\n");
+		return 0;
+	}
+	if (adapter->state != __IECM_UP)
+		return 0;
+
+	if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
+		return -EOPNOTSUPP;
+
+	if (key)
+		memcpy(adapter->rss_data.rss_key, key,
+		       adapter->rss_data.rss_key_size);
+
+	qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL);
+	if (!qid_list)
+		return -ENOMEM;
+
+	iecm_get_rx_qid_list(vport, qid_list);
+
+	if (indir) {
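+		/* translate user-visible queue indices to absolute qids */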
+		for (lut = 0; lut < adapter->rss_data.rss_lut_size; lut++) {
+			int index = indir[lut];
+
+			adapter->rss_data.rss_lut[lut] = qid_list[index];
+		}
+	}
+
+	kfree(qid_list);
+
+	return iecm_config_rss(vport);
+}
+
+/**
+ * iecm_get_channels: get the number of channels supported by the device
+ * @netdev: network interface device structure
+ * @ch: channel information structure
+ *
+ * Report the maximum of TX and RX. Report one extra channel to match our
+ * mailbox queue.
+ */
+static void iecm_get_channels(struct net_device *netdev,
+			      struct ethtool_channels *ch)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	unsigned int combined;
+
+	combined = min(vport->num_txq, vport->num_rxq);
+
+	/* Report maximum channels */
+	ch->max_combined = IECM_MAX_Q;
+
+	ch->max_other = IECM_MAX_NONQ;
+	ch->other_count = IECM_MAX_NONQ;
+
+	ch->combined_count = combined;
+	ch->rx_count = vport->num_rxq - combined;
+	ch->tx_count = vport->num_txq - combined;
+}
+
+/**
+ * iecm_set_channels: set the new channel count
+ * @netdev: network interface device structure
+ * @ch: channel information structure
+ *
+ * Negotiate a new number of channels with CP. Returns 0 on success, negative
+ * on failure.
+ */
+static int iecm_set_channels(struct net_device *netdev,
+			     struct ethtool_channels *ch)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	int num_req_q = ch->combined_count;
+
+	if (num_req_q == max(vport->num_txq, vport->num_rxq))
+		return 0;
+
+	vport->adapter->config_data.num_req_qs = num_req_q;
+
+	return iecm_initiate_soft_reset(vport, __IECM_SR_Q_CHANGE);
+}
+
+/**
+ * iecm_get_ringparam - Get ring parameters
+ * @netdev: network interface device structure
+ * @ring: ethtool ringparam structure
+ *
+ * Returns current ring parameters. TX and RX rings are reported separately,
+ * but the number of rings is not reported.
+ */
+static void iecm_get_ringparam(struct net_device *netdev,
+			       struct ethtool_ringparam *ring)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	ring->rx_max_pending = IECM_MAX_RXQ_DESC;
+	ring->tx_max_pending = IECM_MAX_TXQ_DESC;
+	ring->rx_pending = vport->rxq_desc_count;
+	ring->tx_pending = vport->txq_desc_count;
+}
+
+/**
+ * iecm_set_ringparam - Set ring parameters
+ * @netdev: network interface device structure
+ * @ring: ethtool ringparam structure
+ *
+ * Sets ring parameters. TX and RX rings are controlled separately, but the
+ * number of rings is not specified, so all rings get the same settings.
+ */
+static int iecm_set_ringparam(struct net_device *netdev,
+			      struct ethtool_ringparam *ring)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	u32 new_rx_count, new_tx_count;
+
+	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
+		return -EINVAL;
+
+	new_tx_count = ALIGN(ring->tx_pending, IECM_REQ_DESC_MULTIPLE);
+	new_rx_count = ALIGN(ring->rx_pending, IECM_REQ_DESC_MULTIPLE);
+
+	/* if nothing to do return success */
+	if (new_tx_count == vport->txq_desc_count &&
+	    new_rx_count == vport->rxq_desc_count)
+		return 0;
+
+	vport->adapter->config_data.num_req_txq_desc = new_tx_count;
+	vport->adapter->config_data.num_req_rxq_desc = new_rx_count;
+
+	return iecm_initiate_soft_reset(vport, __IECM_SR_Q_DESC_CHANGE);
+}
+
+/**
+ * struct iecm_stats - definition for an ethtool statistic
+ * @stat_string: statistic name to display in ethtool -S output
+ * @sizeof_stat: the sizeof() the stat, must be no greater than sizeof(u64)
+ * @stat_offset: offsetof() the stat from a base pointer
+ *
+ * This structure defines a statistic to be added to the ethtool stats buffer.
+ * It defines a statistic as offset from a common base pointer. Stats should
+ * be defined in constant arrays using the IECM_STAT macro, with every element
+ * of the array using the same _type for calculating the sizeof_stat and
+ * stat_offset.
+ *
+ * The @sizeof_stat is expected to be sizeof(u8), sizeof(u16), sizeof(u32) or
+ * sizeof(u64). Other sizes are not expected and will produce a WARN_ONCE from
+ * the iecm_add_ethtool_stat() helper function.
+ *
+ * The @stat_string is interpreted as a format string, allowing formatted
+ * values to be inserted while looping over multiple structures for a given
+ * statistics array. Thus, every statistic string in an array should have the
+ * same type and number of format specifiers, to be formatted by variadic
+ * arguments to the iecm_add_stat_string() helper function.
+ */
+struct iecm_stats {
+	char stat_string[ETH_GSTRING_LEN];
+	int sizeof_stat;
+	int stat_offset;
+};
+
+/* Helper macro to define an iecm_stat structure with proper size and type.
+ * Use this when defining constant statistics arrays. Note that @_type expects
+ * only a type name and is used multiple times.
+ */
+#define IECM_STAT(_type, _name, _stat) { \
+	.stat_string = _name, \
+	.sizeof_stat = sizeof_field(_type, _stat), \
+	.stat_offset = offsetof(_type, _stat) \
+}
+
+/* Helper macro for defining some statistics related to queues */
+#define IECM_QUEUE_STAT(_name, _stat) \
+	IECM_STAT(struct iecm_queue, _name, _stat)
+
+/* Stats associated with a Tx queue */
+static const struct iecm_stats iecm_gstrings_tx_queue_stats[] = {
+	IECM_QUEUE_STAT("packets", q_stats.tx.packets),
+	IECM_QUEUE_STAT("bytes", q_stats.tx.bytes),
+};
+
+/* Stats associated with an Rx queue */
+static const struct iecm_stats iecm_gstrings_rx_queue_stats[] = {
+	IECM_QUEUE_STAT("packets", q_stats.rx.packets),
+	IECM_QUEUE_STAT("bytes", q_stats.rx.bytes),
+	IECM_QUEUE_STAT("csum_complete", q_stats.rx.csum_complete),
+	IECM_QUEUE_STAT("csum_unnecessary", q_stats.rx.csum_unnecessary),
+	IECM_QUEUE_STAT("csum_err", q_stats.rx.csum_err),
+	IECM_QUEUE_STAT("hsplit", q_stats.rx.hsplit),
+	IECM_QUEUE_STAT("hsplit_buf_overflow", q_stats.rx.hsplit_hbo),
+};
+
+#define IECM_TX_QUEUE_STATS_LEN		ARRAY_SIZE(iecm_gstrings_tx_queue_stats)
+#define IECM_RX_QUEUE_STATS_LEN		ARRAY_SIZE(iecm_gstrings_rx_queue_stats)
+
+/**
+ * __iecm_add_stat_strings - copy stat strings into ethtool buffer
+ * @p: ethtool supplied buffer
+ * @stats: stat definitions array
+ * @size: size of the stats array
+ * @type: stat type
+ * @idx: stat index
+ *
+ * Format and copy the strings described by stats into the buffer pointed at
+ * by p.
+ */
+static void __iecm_add_stat_strings(u8 **p, const struct iecm_stats stats[],
+				    const unsigned int size, const char *type,
+				    unsigned int idx)
+{
+	unsigned int i;
+
+	for (i = 0; i < size; i++) {
+		snprintf((char *)*p, ETH_GSTRING_LEN,
+			 "%.2s-%10u.%.17s", type, idx, stats[i].stat_string);
+		*p += ETH_GSTRING_LEN;
+	}
+}
+
+/**
+ * iecm_add_stat_strings - copy stat strings into ethtool buffer
+ * @p: ethtool supplied buffer
+ * @stats: stat definitions array
+ * @type: stat type
+ * @idx: stat idx
+ *
+ * Format and copy the strings described by the const static stats value into
+ * the buffer pointed at by p.
+ *
+ * The parameter @stats is evaluated twice, so parameters with side effects
+ * should be avoided. Additionally, stats must be an array such that
+ * ARRAY_SIZE can be called on it.
+ */
+#define iecm_add_stat_strings(p, stats, type, idx) \
+	__iecm_add_stat_strings(p, stats, ARRAY_SIZE(stats), type, idx)
+
+/**
+ * iecm_get_stat_strings - Get stat strings
+ * @netdev: network interface device structure
+ * @data: buffer for string data
+ *
+ * Builds the statistics string table
+ */
+static void iecm_get_stat_strings(struct net_device __always_unused *netdev,
+				  u8 *data)
+{
+	unsigned int i;
+
+	/* It's critical that we always report a constant number of strings and
+	 * that the strings are reported in the same order regardless of how
+	 * many queues are actually in use.
+	 */
+	for (i = 0; i < IECM_MAX_Q; i++)
+		iecm_add_stat_strings(&data, iecm_gstrings_tx_queue_stats,
+				      "tx", i);
+	for (i = 0; i < IECM_MAX_Q; i++)
+		iecm_add_stat_strings(&data, iecm_gstrings_rx_queue_stats,
+				      "rx", i);
+}
+
+/**
+ * iecm_get_strings - Get string set
+ * @netdev: network interface device structure
+ * @sset: id of string set
+ * @data: buffer for string data
+ *
+ * Builds string tables for various string sets
+ */
+static void iecm_get_strings(struct net_device *netdev, u32 sset, u8 *data)
+{
+	switch (sset) {
+	case ETH_SS_STATS:
+		iecm_get_stat_strings(netdev, data);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * iecm_get_sset_count - Get length of string set
+ * @netdev: network interface device structure
+ * @sset: id of string set
+ *
+ * Reports size of various string tables.
+ */
+static int iecm_get_sset_count(struct net_device __always_unused *netdev, int sset)
+{
+	if (sset == ETH_SS_STATS)
+		/* This size reported back here *must* be constant throughout
+		 * the lifecycle of the netdevice, i.e. we must report the
+		 * maximum length even for queues that don't technically exist.
+		 * This is due to the fact that this userspace API uses three
+		 * separate ioctl calls to get stats data but has no way to
+		 * communicate back to userspace when that size has changed,
+		 * which can typically happen as a result of changing number of
+		 * queues. If the number/order of stats change in the middle of
+		 * this call chain it will lead to userspace crashing/accessing
+		 * bad data through buffer under/overflow.
+		 */
+		return (IECM_TX_QUEUE_STATS_LEN * IECM_MAX_Q) +
+			(IECM_RX_QUEUE_STATS_LEN * IECM_MAX_Q);
+	else
+		return -EINVAL;
+}
+
+/**
+ * iecm_add_one_ethtool_stat - copy the stat into the supplied buffer
+ * @data: location to store the stat value
+ * @pstat: old stat pointer to copy from
+ * @stat: the stat definition
+ *
+ * Copies the stat data defined by the pointer and stat structure pair into
+ * the memory supplied as data. Used to implement iecm_add_ethtool_stats and
+ * iecm_add_queue_stats. If the pointer is null, data will be zero'd.
+ */
+static void
+iecm_add_one_ethtool_stat(u64 *data, void *pstat,
+			  const struct iecm_stats *stat)
+{
+	char *p;
+
+	if (!pstat) {
+		/* ensure that the ethtool data buffer is zero'd for any stats
+		 * which don't have a valid pointer.
+		 */
+		*data = 0;
+		return;
+	}
+
+	p = (char *)pstat + stat->stat_offset;
+	switch (stat->sizeof_stat) {
+	case sizeof(u64):
+		*data = *((u64 *)p);
+		break;
+	case sizeof(u32):
+		*data = *((u32 *)p);
+		break;
+	case sizeof(u16):
+		*data = *((u16 *)p);
+		break;
+	case sizeof(u8):
+		*data = *((u8 *)p);
+		break;
+	default:
+		WARN_ONCE(1, "unexpected stat size for %s",
+			  stat->stat_string);
+		*data = 0;
+	}
+}
+
+/**
+ * iecm_add_queue_stats - copy queue statistics into supplied buffer
+ * @data: ethtool stats buffer
+ * @q: the queue to copy
+ *
+ * Queue statistics must be copied while protected by
+ * u64_stats_fetch_begin_irq, so we can't directly use iecm_add_ethtool_stats.
+ * Assumes that queue stats are defined in iecm_gstrings_queue_stats and that
+ * the caller passes a valid queue (missing queues are handled by
+ * iecm_add_empty_queue_stats). Safely copy the stats from the queue into the
+ * supplied buffer and update the data pointer when finished.
+ *
+ * This function expects to be called while under rcu_read_lock().
+ */
+static void
+iecm_add_queue_stats(u64 **data, struct iecm_queue *q)
+{
+	const struct iecm_stats *stats;
+	unsigned int start;
+	unsigned int size;
+	unsigned int i;
+
+	if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) {
+		size = IECM_RX_QUEUE_STATS_LEN;
+		stats = iecm_gstrings_rx_queue_stats;
+	} else {
+		size = IECM_TX_QUEUE_STATS_LEN;
+		stats = iecm_gstrings_tx_queue_stats;
+	}
+
+	/* To avoid invalid statistics values, ensure that we keep retrying
+	 * the copy until we get a consistent value according to
+	 * u64_stats_fetch_retry_irq.
+	 */
+	do {
+		start = u64_stats_fetch_begin_irq(&q->stats_sync);
+		for (i = 0; i < size; i++)
+			iecm_add_one_ethtool_stat(&(*data)[i], q, &stats[i]);
+	} while (u64_stats_fetch_retry_irq(&q->stats_sync, start));
+
+	/* Once we successfully copy the stats in, update the data pointer */
+	*data += size;
+}
+
+/**
+ * iecm_add_empty_queue_stats - Add stats for a non-existent queue
+ * @data: pointer to data buffer
+ * @qtype: type of data queue
+ *
+ * We must report a constant length of stats back to userspace regardless of
+ * how many queues are actually in use because stats collection happens over
+ * three separate ioctls and there's no way to notify userspace the size
+ * changed between those calls. This adds empty (zeroed) data to the stats
+ * buffer since we don't have a real queue to refer to for this stats slot.
+ */
+static void
+iecm_add_empty_queue_stats(u64 **data, enum virtchnl_queue_type qtype)
+{
+	unsigned int i;
+	int stats_len;
+
+	if (qtype == VIRTCHNL_QUEUE_TYPE_RX)
+		stats_len = IECM_RX_QUEUE_STATS_LEN;
+	else
+		stats_len = IECM_TX_QUEUE_STATS_LEN;
+
+	for (i = 0; i < stats_len; i++)
+		(*data)[i] = 0;
+	*data += stats_len;
+}
+
+/**
+ * iecm_get_ethtool_stats - report device statistics
+ * @netdev: network interface device structure
+ * @stats: ethtool statistics structure
+ * @data: pointer to data buffer
+ *
+ * All statistics are added to the data buffer as an array of u64.
+ */
+static void iecm_get_ethtool_stats(struct net_device *netdev,
+				   struct ethtool_stats __always_unused *stats,
+				   u64 *data)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	enum virtchnl_queue_type qtype;
+	unsigned int total = 0;
+	unsigned int i, j;
+
+	if (vport->adapter->state != __IECM_UP)
+		return;
+
+	rcu_read_lock();
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		struct iecm_txq_group *txq_grp = &vport->txq_grps[i];
+
+		qtype = VIRTCHNL_QUEUE_TYPE_TX;
+
+		for (j = 0; j < txq_grp->num_txq; j++, total++) {
+			struct iecm_queue *txq = &txq_grp->txqs[j];
+
+			if (!txq)
+				iecm_add_empty_queue_stats(&data, qtype);
+			else
+				iecm_add_queue_stats(&data, txq);
+		}
+	}
+	/* It is critical we provide a constant number of stats back to
+	 * userspace regardless of how many queues are actually in use because
+	 * there is no way to inform userspace the size has changed between
+	 * ioctl calls. This will fill in any missing stats with zero.
+	 */
+	for (; total < IECM_MAX_Q; total++)
+		iecm_add_empty_queue_stats(&data, VIRTCHNL_QUEUE_TYPE_TX);
+	total = 0;
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rxq_grp = &vport->rxq_grps[i];
+		int num_rxq;
+
+		qtype = VIRTCHNL_QUEUE_TYPE_RX;
+
+		if (iecm_is_queue_model_split(vport->rxq_model))
+			num_rxq = rxq_grp->splitq.num_rxq_sets;
+		else
+			num_rxq = rxq_grp->singleq.num_rxq;
+
+		for (j = 0; j < num_rxq; j++, total++) {
+			struct iecm_queue *rxq;
+
+			if (iecm_is_queue_model_split(vport->rxq_model))
+				rxq = &rxq_grp->splitq.rxq_sets[j].rxq;
+			else
+				rxq = &rxq_grp->singleq.rxqs[j];
+			if (!rxq)
+				iecm_add_empty_queue_stats(&data, qtype);
+			else
+				iecm_add_queue_stats(&data, rxq);
+		}
+	}
+	for (; total < IECM_MAX_Q; total++)
+		iecm_add_empty_queue_stats(&data, VIRTCHNL_QUEUE_TYPE_RX);
+	rcu_read_unlock();
+}
+
+/**
+ * iecm_find_rxq - find rxq from q index
+ * @vport: virtual port associated to queue
+ * @q_num: q index used to find queue
+ *
+ * returns pointer to Rx queue
+ */
+static struct iecm_queue *
+iecm_find_rxq(struct iecm_vport *vport, int q_num)
+{
+	struct iecm_queue *rxq;
+	int q_grp, q_idx;
+
+	if (iecm_is_queue_model_split(vport->rxq_model)) {
+		q_grp = q_num / IECM_DFLT_SPLITQ_RXQ_PER_GROUP;
+		q_idx = q_num % IECM_DFLT_SPLITQ_RXQ_PER_GROUP;
+
+		rxq = &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx].rxq;
+	} else {
+		q_grp = q_num / IECM_DFLT_SINGLEQ_RXQ_PER_GROUP;
+		q_idx = q_num % IECM_DFLT_SINGLEQ_RXQ_PER_GROUP;
+
+		rxq = &vport->rxq_grps[q_grp].singleq.rxqs[q_idx];
+	}
+
+	return rxq;
+}
+
+/**
+ * iecm_find_txq - find txq from q index
+ * @vport: virtual port associated to queue
+ * @q_num: q index used to find queue
+ *
+ * returns pointer to Tx queue
+ */
+static struct iecm_queue *
+iecm_find_txq(struct iecm_vport *vport, int q_num)
+{
+	struct iecm_queue *txq;
+
+	if (iecm_is_queue_model_split(vport->txq_model)) {
+		int q_grp = q_num / IECM_DFLT_SPLITQ_TXQ_PER_GROUP;
+
+		txq = vport->txq_grps[q_grp].complq;
+	} else {
+		txq = vport->txqs[q_num];
+	}
+
+	return txq;
+}
+
+/**
+ * __iecm_get_q_coalesce - get ITR values for specific queue
+ * @ec: ethtool structure to fill with driver's coalesce settings
+ * @q: queue of Rx or Tx
+ */
+static void
+__iecm_get_q_coalesce(struct ethtool_coalesce *ec, struct iecm_queue *q)
+{
+	u16 itr_setting;
+	bool dyn_ena;
+
+	itr_setting = IECM_ITR_SETTING(q->itr.target_itr);
+	dyn_ena = IECM_ITR_IS_DYNAMIC(q->itr.target_itr);
+	if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) {
+		ec->use_adaptive_rx_coalesce = dyn_ena;
+		ec->rx_coalesce_usecs = itr_setting;
+	} else {
+		ec->use_adaptive_tx_coalesce = dyn_ena;
+		ec->tx_coalesce_usecs = itr_setting;
+	}
+}
+
+/**
+ * iecm_get_q_coalesce - get ITR values for specific queue
+ * @netdev: pointer to the netdev associated with this query
+ * @ec: coalesce settings to program the device with
+ * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int
+iecm_get_q_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec,
+		    u32 q_num)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	if (vport->adapter->state != __IECM_UP)
+		return 0;
+
+	if (q_num >= vport->num_rxq && q_num >= vport->num_txq)
+		return -EINVAL;
+
+	if (q_num < vport->num_rxq) {
+		struct iecm_queue *rxq = iecm_find_rxq(vport, q_num);
+
+		__iecm_get_q_coalesce(ec, rxq);
+	}
+
+	if (q_num < vport->num_txq) {
+		struct iecm_queue *txq = iecm_find_txq(vport, q_num);
+
+		__iecm_get_q_coalesce(ec, txq);
+	}
+
+	return 0;
+}
+
+/**
+ * iecm_get_coalesce - get ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @ec: coalesce settings to be filled
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int
+iecm_get_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec)
+{
+	/* Return coalesce based on queue number zero */
+	return iecm_get_q_coalesce(netdev, ec, 0);
+}
+
+/**
+ * iecm_get_per_q_coalesce - get ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @q_num: queue for which the ITR values have to be retrieved
+ * @ec: coalesce settings to be filled
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int
+iecm_get_per_q_coalesce(struct net_device *netdev, u32 q_num,
+			struct ethtool_coalesce *ec)
+{
+	return iecm_get_q_coalesce(netdev, ec, q_num);
+}
+
+/**
+ * __iecm_set_q_coalesce - set ITR values for specific queue
+ * @ec: ethtool structure from user to update ITR settings
+ * @q: queue for which ITR values have to be set
+ *
+ * Returns 0 on success, negative otherwise.
+ */
+static int
+__iecm_set_q_coalesce(struct ethtool_coalesce *ec, struct iecm_queue *q)
+{
+	const char *q_type_str = (q->q_type == VIRTCHNL_QUEUE_TYPE_RX)
+				  ? "Rx" : "Tx";
+	u32 use_adaptive_coalesce, coalesce_usecs;
+	struct iecm_vport *vport;
+	u16 itr_setting;
+
+	itr_setting = IECM_ITR_SETTING(q->itr.target_itr);
+	vport = q->vport;
+	if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) {
+		use_adaptive_coalesce = ec->use_adaptive_rx_coalesce;
+		coalesce_usecs = ec->rx_coalesce_usecs;
+	} else {
+		use_adaptive_coalesce = ec->use_adaptive_tx_coalesce;
+		coalesce_usecs = ec->tx_coalesce_usecs;
+	}
+
+	if (itr_setting != coalesce_usecs && use_adaptive_coalesce) {
+		netdev_info(vport->netdev, "%s ITR cannot be changed if adaptive-%s is enabled\n",
+			    q_type_str, q_type_str);
+		return -EINVAL;
+	}
+
+	if (coalesce_usecs > IECM_ITR_MAX) {
+		netdev_info(vport->netdev,
+			    "Invalid value, %d-usecs range is 0-%d\n",
+			    coalesce_usecs, IECM_ITR_MAX);
+		return -EINVAL;
+	}
+
+	if (coalesce_usecs % 2 != 0) {
+		coalesce_usecs = coalesce_usecs & 0xFFFFFFFE;
+		netdev_info(vport->netdev, "HW only supports even ITR values, ITR rounded to %d\n",
+			    coalesce_usecs);
+	}
+
+	q->itr.target_itr = coalesce_usecs;
+	if (use_adaptive_coalesce)
+		q->itr.target_itr |= IECM_ITR_DYNAMIC;
+	/* Update of static/dynamic ITR will be taken care when interrupt is
+	 * fired
+	 */
+	return 0;
+}
+
+/**
+ * iecm_set_q_coalesce - set ITR values for specific queue
+ * @vport: vport associated with the queue that needs updating
+ * @ec: coalesce settings to program the device with
+ * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
+ * @is_rxq: is queue type Rx
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int
+iecm_set_q_coalesce(struct iecm_vport *vport, struct ethtool_coalesce *ec,
+		    int q_num, bool is_rxq)
+{
+	struct iecm_queue *q;
+
+	if (is_rxq)
+		q = iecm_find_rxq(vport, q_num);
+	else
+		q = iecm_find_txq(vport, q_num);
+
+	if (q && __iecm_set_q_coalesce(ec, q))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * iecm_set_coalesce - set ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @ec: coalesce settings to program the device with
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int
+iecm_set_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	int i, err = 0;
+
+	for (i = 0; i < vport->num_txq; i++) {
+		err = iecm_set_q_coalesce(vport, ec, i, false);
+		if (err)
+			return err;
+	}
+
+	for (i = 0; i < vport->num_rxq; i++) {
+		err = iecm_set_q_coalesce(vport, ec, i, true);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
+/**
+ * iecm_set_per_q_coalesce - set ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @q_num: queue for which the ITR values have to be set
+ * @ec: coalesce settings to program the device with
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int
+iecm_set_per_q_coalesce(struct net_device *netdev, u32 q_num,
+			struct ethtool_coalesce *ec)
+{
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	int err;
+
+	err = iecm_set_q_coalesce(vport, ec, q_num, false);
+	if (!err)
+		err = iecm_set_q_coalesce(vport, ec, q_num, true);
+
+	return err;
+}
+
+/**
+ * iecm_get_msglevel - Get debug message level
+ * @netdev: network interface device structure
+ *
+ * Returns current debug message level.
+ */
+static u32 iecm_get_msglevel(struct net_device *netdev)
+{
+	struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev);
+
+	return adapter->msg_enable;
+}
+
+/**
+ * iecm_set_msglevel - Set debug message level
+ * @netdev: network interface device structure
+ * @data: message level
+ *
+ * Set current debug message level. Higher values cause the driver to
+ * be noisier.
+ */
+static void iecm_set_msglevel(struct net_device *netdev, u32 data)
+{
+	struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev);
+
+	adapter->msg_enable = data;
+}
+
+/**
+ * iecm_get_link_ksettings - Get Link Speed and Duplex settings
+ * @netdev: network interface device structure
+ * @cmd: ethtool command
+ *
+ * Reports speed/duplex settings.
+ */
+static int iecm_get_link_ksettings(struct net_device *netdev,
+				   struct ethtool_link_ksettings *cmd)
+{
+	struct iecm_netdev_priv *np = netdev_priv(netdev);
+	struct iecm_adapter *adapter = np->vport->adapter;
+
+	ethtool_link_ksettings_zero_link_mode(cmd, supported);
+	cmd->base.autoneg = AUTONEG_DISABLE;
+	cmd->base.port = PORT_NONE;
+	/* Set speed and duplex */
+	switch (adapter->link_speed) {
+	case VIRTCHNL_LINK_SPEED_40GB:
+		cmd->base.speed = SPEED_40000;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		cmd->base.speed = SPEED_25000;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		cmd->base.speed = SPEED_20000;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		cmd->base.speed = SPEED_10000;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		cmd->base.speed = SPEED_1000;
+		break;
+	case VIRTCHNL_LINK_SPEED_100MB:
+		cmd->base.speed = SPEED_100;
+		break;
+	default:
+		break;
+	}
+	cmd->base.duplex = DUPLEX_FULL;
+
+	return 0;
+}
+
+/**
+ * iecm_get_drvinfo - Get driver info
+ * @netdev: network interface device structure
+ * @drvinfo: ethtool driver info structure
+ *
+ * Returns information about the driver and device for display to the user.
+ */
+static void iecm_get_drvinfo(struct net_device *netdev,
+			     struct ethtool_drvinfo *drvinfo)
+{
+	struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev);
+
+	strlcpy(drvinfo->driver, iecm_drv_name, 32);
+	strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 32);
+}
+
+static const struct ethtool_ops iecm_ethtool_ops = {
+	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+				     ETHTOOL_COALESCE_USE_ADAPTIVE,
+	.get_drvinfo		= iecm_get_drvinfo,
+	.get_msglevel		= iecm_get_msglevel,
+	.set_msglevel		= iecm_set_msglevel,
+	.get_coalesce		= iecm_get_coalesce,
+	.set_coalesce		= iecm_set_coalesce,
+	.get_per_queue_coalesce	= iecm_get_per_q_coalesce,
+	.set_per_queue_coalesce	= iecm_set_per_q_coalesce,
+	.get_ethtool_stats	= iecm_get_ethtool_stats,
+	.get_strings		= iecm_get_strings,
+	.get_sset_count		= iecm_get_sset_count,
+	.get_rxnfc		= iecm_get_rxnfc,
+	.get_rxfh_key_size	= iecm_get_rxfh_key_size,
+	.get_rxfh_indir_size	= iecm_get_rxfh_indir_size,
+	.get_rxfh		= iecm_get_rxfh,
+	.set_rxfh		= iecm_set_rxfh,
+	.get_channels		= iecm_get_channels,
+	.set_channels		= iecm_set_channels,
+	.get_ringparam		= iecm_get_ringparam,
+	.set_ringparam		= iecm_set_ringparam,
+	.get_link_ksettings	= iecm_get_link_ksettings,
+};
+
 /**
  * iecm_set_ethtool_ops - Initialize ethtool ops struct
  * @netdev: network interface device structure
@@ -12,5 +1044,5 @@
  */
 void iecm_set_ethtool_ops(struct net_device *netdev)
 {
-	/* stub */
+	netdev->ethtool_ops = &iecm_ethtool_ops;
 }
diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index 16f92dc4c77a..1a8968c4070c 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -840,7 +840,37 @@ static void iecm_deinit_task(struct iecm_adapter *adapter)
  */
 static int iecm_init_hard_reset(struct iecm_adapter *adapter)
 {
-	/* stub */
+	int err = 0;
+
+	/* Prepare for reset */
+	if (test_bit(__IECM_HR_FUNC_RESET, adapter->flags)) {
+		iecm_deinit_task(adapter);
+		adapter->dev_ops.reg_ops.trigger_reset(adapter,
+						       __IECM_HR_FUNC_RESET);
+		set_bit(__IECM_UP_REQUESTED, adapter->flags);
+		clear_bit(__IECM_HR_FUNC_RESET, adapter->flags);
+	} else if (test_bit(__IECM_HR_CORE_RESET, adapter->flags)) {
+		if (adapter->state == __IECM_UP)
+			set_bit(__IECM_UP_REQUESTED, adapter->flags);
+		iecm_deinit_task(adapter);
+		clear_bit(__IECM_HR_CORE_RESET, adapter->flags);
+	} else if (test_and_clear_bit(__IECM_HR_DRV_LOAD, adapter->flags)) {
+		/* Trigger reset */
+	} else {
+		dev_err(&adapter->pdev->dev, "Unhandled hard reset cause\n");
+		err = -EBADRQC;
+		goto handle_err;
+	}
+
+	/* Reset is complete and so start building the driver resources again */
+	err = iecm_init_dflt_mbx(adapter);
+	if (err) {
+		dev_err(&adapter->pdev->dev, "Failed to initialize default mailbox: %d\n",
+			err);
+	}
+handle_err:
+	mutex_unlock(&adapter->reset_lock);
+	return err;
 }
 
 /**
@@ -869,7 +899,109 @@ static void iecm_vc_event_task(struct work_struct *work)
 int iecm_initiate_soft_reset(struct iecm_vport *vport,
 			     enum iecm_flags reset_cause)
 {
-	/* stub */
+	enum iecm_state current_state = vport->adapter->state;
+	struct iecm_adapter *adapter = vport->adapter;
+	struct iecm_vport *old_vport;
+	int err = 0;
+
+	/* make sure we do not end up initiating multiple resets */
+	mutex_lock(&adapter->reset_lock);
+
+	/* If the system is low on memory, we can end up in a bad state if we
+	 * free all the memory for queue resources and try to allocate them
+	 * again. Instead, we pre-allocate the new resources before doing
+	 * anything and bail if the alloc fails.
+	 *
+	 * Here we make a clone to act as a handle to old resources, then do a
+	 * new alloc. If successful then we'll stop the clone, free the old
+	 * resources, and continue with reset on new vport resources. On error
+	 * copy clone back to vport to get back to a good state and return
+	 * error.
+	 *
+	 * We also want to be careful we don't invalidate any pre-existing
+	 * pointers to vports prior to calling this.
+	 */
+	old_vport = kzalloc(sizeof(*vport), GFP_KERNEL);
+	if (!old_vport) {
+		mutex_unlock(&adapter->reset_lock);
+		return -ENOMEM;
+	}
+	memcpy(old_vport, vport, sizeof(*vport));
+
+	/* Adjust resource parameters prior to reallocating resources */
+	switch (reset_cause) {
+	case __IECM_SR_Q_CHANGE:
+		adapter->dev_ops.vc_ops.adjust_qs(vport);
+		break;
+	case __IECM_SR_Q_DESC_CHANGE:
+		/* Update queue parameters before allocating resources */
+		iecm_vport_calc_num_q_desc(vport);
+		break;
+	case __IECM_SR_Q_SCH_CHANGE:
+	case __IECM_SR_MTU_CHANGE:
+		break;
+	default:
+		dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n");
+		err = -EINVAL;
+		goto err_reset;
+	}
+
+	/* It's important we pass in the vport pointer so that when
+	 * back-pointers are set up in queue allocs, they get the right pointer.
+	 */
+	err = iecm_vport_res_alloc(vport);
+	if (err)
+		goto err_mem_alloc;
+
+	if (!test_bit(__IECM_NO_EXTENDED_CAPS, adapter->flags)) {
+		if (current_state <= __IECM_DOWN) {
+			iecm_send_delete_queues_msg(old_vport);
+		} else {
+			set_bit(__IECM_DEL_QUEUES, adapter->flags);
+			iecm_vport_stop(old_vport);
+		}
+
+		iecm_deinit_rss(old_vport);
+		err = iecm_send_add_queues_msg(vport, vport->num_txq,
+					       vport->num_complq,
+					       vport->num_rxq,
+					       vport->num_bufq);
+		if (err)
+			goto err_reset;
+	}
+
+	/* Post resource allocation reset */
+	switch (reset_cause) {
+	case __IECM_SR_Q_CHANGE:
+		iecm_intr_rel(adapter);
+		iecm_intr_req(adapter);
+		break;
+	case __IECM_SR_Q_DESC_CHANGE:
+	case __IECM_SR_Q_SCH_CHANGE:
+	case __IECM_SR_MTU_CHANGE:
+		iecm_vport_stop(old_vport);
+		break;
+	default:
+		dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n");
+		err = -EINVAL;
+		goto err_reset;
+	}
+
+	/* free the old resources */
+	iecm_vport_res_free(old_vport);
+	kfree(old_vport);
+
+	if (current_state == __IECM_UP)
+		err = iecm_vport_open(vport);
+	mutex_unlock(&adapter->reset_lock);
+	return err;
+err_reset:
+	iecm_vport_res_free(vport);
+err_mem_alloc:
+	memcpy(vport, old_vport, sizeof(*vport));
+	kfree(old_vport);
+	mutex_unlock(&adapter->reset_lock);
+	return err;
 }
 
 /**
@@ -961,6 +1093,7 @@ int iecm_probe(struct pci_dev *pdev,
 	INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task);
 	INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task);
 
+	adapter->dev_ops.reg_ops.reset_reg_init(&adapter->reset_reg);
 	mutex_lock(&adapter->reset_lock);
 	set_bit(__IECM_HR_DRV_LOAD, adapter->flags);
 	err = iecm_init_hard_reset(adapter);
@@ -1054,7 +1187,11 @@ static int iecm_open(struct net_device *netdev)
  */
 static int iecm_change_mtu(struct net_device *netdev, int new_mtu)
 {
-	/* stub */
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	netdev->mtu = new_mtu;
+
+	return iecm_initiate_soft_reset(vport, __IECM_SR_MTU_CHANGE);
 }
 
 void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 14/15] iecm: Add iecm to the kernel build system
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (12 preceding siblings ...)
  2020-08-24 17:33 ` [net-next v5 13/15] iecm: Add ethtool Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-24 17:33 ` [net-next v5 15/15] idpf: Introduce idpf driver Tony Nguyen
  14 siblings, 0 replies; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alice Michael, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alice Michael <alice.michael@intel.com>

This introduces iecm as a module to the kernel, and adds
relevant documentation.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 .../device_drivers/ethernet/intel/iecm.rst    | 93 +++++++++++++++++++
 MAINTAINERS                                   |  1 +
 drivers/net/ethernet/intel/Kconfig            | 10 ++
 drivers/net/ethernet/intel/Makefile           |  1 +
 drivers/net/ethernet/intel/iecm/Makefile      | 18 ++++
 5 files changed, 123 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/intel/iecm.rst
 create mode 100644 drivers/net/ethernet/intel/iecm/Makefile

diff --git a/Documentation/networking/device_drivers/ethernet/intel/iecm.rst b/Documentation/networking/device_drivers/ethernet/intel/iecm.rst
new file mode 100644
index 000000000000..aefa6bd1094d
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/intel/iecm.rst
@@ -0,0 +1,93 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================
+Intel Ethernet Common Module
+============================
+
+The Intel Ethernet Common Module is meant to serve as an abstraction layer
+between device specific implementation details and common net device driver
+flows. This library provides several function hooks which allow a device driver
+to specify register addresses, control queue communications, and other device
+specific functionality.  Some functions are required to be implemented while
+others have a default implementation that is used when none is supplied by the
+device driver.  This allows a device driver to take advantage of existing code
+while retaining the flexibility to implement device specific features.
+
+The common use case for this library is for a network device driver that wants
+to specify its own device specific details but also leverage the more common
+code flows found in network device drivers.
+
+Sections in this document:
+	Entry Point
+	Exit Point
+	Register Operations API
+	Virtchnl Operations API
+
+Entry Point
+~~~~~~~~~~~
+The primary entry point to the library is the iecm_probe function.  Prior to
+calling this, device drivers must have allocated an iecm_adapter struct and
+initialized it with the required API functions.  The adapter struct, along with
+the pci_dev struct and the pci_device_id struct, is provided to iecm_probe
+which finalizes device initialization and prepares the device for open.
+
+The iecm_dev_ops struct within the iecm_adapter struct is the primary vehicle
+for passing information from device drivers to the common module.  A dependent
+module must define and assign a reg_ops_init function which will assign the
+respective function pointers to initialize register values (see iecm_reg_ops
+struct).  These are required to be provided by the dependent device driver as
+no suitable default can be assumed for register addresses.
+
+The vc_ops_init function pointer and the related iecm_virtchnl_ops struct are
+optional and should only be necessary for device drivers which use a different
+method/timing for communicating across a mailbox to the hardware.  iecm
+provides a default interface that is used when the device driver does not
+supply one.
+
+Exit Point
+~~~~~~~~~~
+When the device driver is being removed through the pci_driver remove callback,
+it must call iecm_remove with the pci_dev struct provided.  This ensures all
+resources are properly freed and returned to the operating system.
+
+Register Operations API
+~~~~~~~~~~~~~~~~~~~~~~~
+iecm_reg_ops contains three different function pointers relating to initializing
+registers for the specific net device using the library.
+
+ctlq_reg_init relates specifically to setting up registers related to control
+queue/mailbox communications.  Registers that should be defined include: head,
+tail, len, bah, bal, len_mask, len_ena_mask, and head_mask.
+
+vportq_reg_init relates to setting up queue registers.  It assigns the tail
+registers to the iecm_queue struct for each RX/TX queue.
+
+intr_reg_init relates to any registers needed to set up interrupts.  These
+include registers needed to enable the interrupt and change ITR settings.
+
+If the initialization function finds that one or more required function
+pointers were not provided, an error will be issued and the device will be
+inoperable.
+
+
+Virtchnl Operations API
+~~~~~~~~~~~~~~~~~~~~~~~
+The virtchnl is a conduit between driver and hardware that allows device
+drivers to send and receive control messages to/from hardware.  Specifying
+these operations is optional, as the library provides a default interface that
+is used otherwise.  However, if a device needs to deviate in how it
+communicates across the mailbox, this interface allows for that.
+
+If vc_ops_init is set in the dev_ops field of the iecm_adapter struct, then it
+is assumed the device driver is providing its own interface to do virtchnl
+communications.  While providing vc_ops_init is optional, if it is provided, the
+device driver must supply the function pointers in vc_ops, with the exception of
+the enable_vport, disable_vport, and destroy_vport functions, which may not be
+required for all devices.
+
+If the initialization function finds that vc_ops_init was defined but one or
+more required function pointers were not provided, an error will be issued and
+the device will be inoperable.
diff --git a/MAINTAINERS b/MAINTAINERS
index 36ec0bd50a8f..88a52a276eee 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8753,6 +8753,7 @@ F:	Documentation/networking/device_drivers/ethernet/intel/
 F:	drivers/net/ethernet/intel/
 F:	drivers/net/ethernet/intel/*/
 F:	include/linux/avf/virtchnl.h
+F:	include/linux/net/intel/
 
 INTEL FRAMEBUFFER DRIVER (excluding 810 and 815)
 M:	Maik Broemme <mbroemme@libmpq.org>
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 5aa86318ed3e..04de64ecb06d 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -343,4 +343,14 @@ config IGC
 	  To compile this driver as a module, choose M here. The module
 	  will be called igc.
 
+config IECM
+	tristate "Intel(R) Ethernet Common Module Support"
+	default n
+	depends on PCI
+	help
+	  To compile this driver as a module, choose M here. The module
+	  will be called iecm.  MSI-X interrupt support is required.
+
+	  More specific information on configuring the driver is in
+	  <file:Documentation/networking/device_drivers/ethernet/intel/iecm.rst>.
 endif # NET_VENDOR_INTEL
diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile
index 3075290063f6..c9eba9cc5087 100644
--- a/drivers/net/ethernet/intel/Makefile
+++ b/drivers/net/ethernet/intel/Makefile
@@ -16,3 +16,4 @@ obj-$(CONFIG_IXGB) += ixgb/
 obj-$(CONFIG_IAVF) += iavf/
 obj-$(CONFIG_FM10K) += fm10k/
 obj-$(CONFIG_ICE) += ice/
+obj-$(CONFIG_IECM) += iecm/
diff --git a/drivers/net/ethernet/intel/iecm/Makefile b/drivers/net/ethernet/intel/iecm/Makefile
new file mode 100644
index 000000000000..ace32cb7472c
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/Makefile
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2020 Intel Corporation
+
+#
+# Makefile for the Intel(R) Ethernet Common Module
+#
+
+obj-$(CONFIG_IECM) += iecm.o
+
+iecm-y := \
+	iecm_lib.o \
+	iecm_virtchnl.o \
+	iecm_txrx.o \
+	iecm_singleq_txrx.o \
+	iecm_ethtool.o \
+	iecm_controlq.o \
+	iecm_controlq_setup.o \
+	iecm_main.o
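
As a rough illustration of the flow iecm.rst describes above (the "foo"
driver name and its register callbacks are hypothetical; the real wiring
for idpf follows in the next patch), a dependent driver boils down to
something like:

static void foo_reg_ops_init(struct iecm_adapter *adapter)
{
	/* register offsets are device specific, so no defaults exist */
	adapter->dev_ops.reg_ops.ctlq_reg_init = foo_ctlq_reg_init;
	adapter->dev_ops.reg_ops.vportq_reg_init = foo_vportq_reg_init;
	adapter->dev_ops.reg_ops.intr_reg_init = foo_intr_reg_init;
}

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	struct iecm_adapter *adapter;
	int err;

	adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
	if (!adapter)
		return -ENOMEM;

	/* hand the device specific hooks to the common module, which
	 * then finishes initialization and prepares the device for open
	 */
	adapter->dev_ops.reg_ops_init = foo_reg_ops_init;
	err = iecm_probe(pdev, ent, adapter);
	if (err)
		kfree(adapter);

	return err;
}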
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [net-next v5 15/15] idpf: Introduce idpf driver
  2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
                   ` (13 preceding siblings ...)
  2020-08-24 17:33 ` [net-next v5 14/15] iecm: Add iecm to the kernel build system Tony Nguyen
@ 2020-08-24 17:33 ` Tony Nguyen
  2020-08-28 10:35   ` kernel test robot
  14 siblings, 1 reply; 32+ messages in thread
From: Tony Nguyen @ 2020-08-24 17:33 UTC (permalink / raw)
  To: davem
  Cc: Alan Brady, netdev, nhorman, sassmann, jeffrey.t.kirsher,
	anthony.l.nguyen, Alice Michael, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

From: Alan Brady <alan.brady@intel.com>

Utilizes the Intel Ethernet Common Module and provides
a device specific implementation for data plane devices.

Signed-off-by: Alan Brady <alan.brady@intel.com>
Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Phani Burra <phani.r.burra@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 .../device_drivers/ethernet/intel/idpf.rst    |  47 ++++++
 drivers/net/ethernet/intel/Kconfig            |  11 ++
 drivers/net/ethernet/intel/Makefile           |   1 +
 drivers/net/ethernet/intel/idpf/Makefile      |  12 ++
 drivers/net/ethernet/intel/idpf/idpf_dev.h    |  17 ++
 drivers/net/ethernet/intel/idpf/idpf_devids.h |  10 ++
 drivers/net/ethernet/intel/idpf/idpf_main.c   | 137 ++++++++++++++++
 drivers/net/ethernet/intel/idpf/idpf_reg.c    | 152 ++++++++++++++++++
 8 files changed, 387 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/intel/idpf.rst
 create mode 100644 drivers/net/ethernet/intel/idpf/Makefile
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_dev.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_devids.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_main.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_reg.c

diff --git a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
new file mode 100644
index 000000000000..973fa9613428
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
@@ -0,0 +1,47 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========================================================================
+Linux Base Driver for the Intel(R) Smart Network Adapter Family Series
+========================================================================
+
+Intel idpf Linux driver.
+Copyright(c) 2020 Intel Corporation.
+
+Contents
+========
+
+- Enabling the driver
+- Support
+
+The driver in this release supports Intel's Smart Network Adapter Family Series
+of products. For more information, visit Intel's support page at
+https://support.intel.com.
+
+Enabling the driver
+===================
+The driver is enabled via the standard kernel configuration system,
+using the make command::
+
+  make oldconfig/menuconfig/etc.
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> Network device support (NETDEVICES [=y])
+      -> Ethernet driver support
+        -> Intel devices
+          -> Intel(R) Smart Network Adapter Family Series Support
+
+Support
+=======
+For general information, go to the Intel support website at:
+
+https://www.intel.com/support/
+
+or the Intel Wired Networking project hosted by Sourceforge at:
+
+https://sourceforge.net/projects/e1000
+
+If an issue is identified with the released source code on a supported kernel
+with a supported adapter, email the specific information related to the issue
+to e1000-devel@lists.sf.net.
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 04de64ecb06d..6aee67276cf5 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -353,4 +353,15 @@ config IECM
 
 	  More specific information on configuring the driver is in
 	  <file:Documentation/networking/device_drivers/ethernet/intel/iecm.rst>.
+
+config IDPF
+	tristate "Intel(R) Data Plane Function Support"
+	default n
+	depends on PCI
+	help
+	  To compile this driver as a module, choose M here. The module
+	  will be called idpf.
+
+	  More specific information on configuring the driver is in
+	  <file:Documentation/networking/device_drivers/ethernet/intel/idpf.rst>.
 endif # NET_VENDOR_INTEL
diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile
index c9eba9cc5087..3786c2269f3d 100644
--- a/drivers/net/ethernet/intel/Makefile
+++ b/drivers/net/ethernet/intel/Makefile
@@ -17,3 +17,4 @@ obj-$(CONFIG_IAVF) += iavf/
 obj-$(CONFIG_FM10K) += fm10k/
 obj-$(CONFIG_ICE) += ice/
 obj-$(CONFIG_IECM) += iecm/
+obj-$(CONFIG_IDPF) += idpf/
diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile
new file mode 100644
index 000000000000..ac6cac6c6360
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/Makefile
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2020 Intel Corporation
+
+#
+# Makefile for the Intel(R) Data Plane Function Linux Driver
+#
+
+obj-$(CONFIG_IDPF) += idpf.o
+
+idpf-y := \
+	idpf_main.o \
+	idpf_reg.o
diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.h b/drivers/net/ethernet/intel/idpf/idpf_dev.h
new file mode 100644
index 000000000000..1da33f5120a2
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Intel Corporation */
+
+#ifndef _IDPF_DEV_H_
+#define _IDPF_DEV_H_
+
+#include <linux/net/intel/iecm.h>
+
+void idpf_intr_reg_init(struct iecm_vport *vport);
+void idpf_mb_intr_reg_init(struct iecm_adapter *adapter);
+void idpf_reset_reg_init(struct iecm_reset_reg *reset_reg);
+void idpf_trigger_reset(struct iecm_adapter *adapter,
+			enum iecm_flags trig_cause);
+void idpf_vportq_reg_init(struct iecm_vport *vport);
+void idpf_ctlq_reg_init(struct iecm_ctlq_create_info *cq);
+
+#endif /* _IDPF_DEV_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_devids.h b/drivers/net/ethernet/intel/idpf/idpf_devids.h
new file mode 100644
index 000000000000..ee373a04cb20
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_devids.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Intel Corporation */
+
+#ifndef _IDPF_DEVIDS_H_
+#define _IDPF_DEVIDS_H_
+
+/* Device IDs */
+#define IDPF_DEV_ID_PF			0x1452
+
+#endif /* _IDPF_DEVIDS_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
new file mode 100644
index 000000000000..874c9b2da96d
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
@@ -0,0 +1,137 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include "idpf_dev.h"
+#include "idpf_devids.h"
+
+#define DRV_SUMMARY	"Intel(R) Data Plane Function Linux Driver"
+static const char idpf_driver_string[] = DRV_SUMMARY;
+static const char idpf_copyright[] = "Copyright (c) 2020, Intel Corporation.";
+
+MODULE_DESCRIPTION(DRV_SUMMARY);
+MODULE_LICENSE("GPL v2");
+
+/* available messaging levels are described by IECM_AVAIL_NETIF_M */
+static int debug = -1;
+module_param(debug, int, 0644);
+#ifndef CONFIG_DYNAMIC_DEBUG
+MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXXXXX)");
+#else
+MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)");
+#endif /* !CONFIG_DYNAMIC_DEBUG */
+
+/**
+ * idpf_reg_ops_init - Initialize register API function pointers
+ * @adapter: Driver specific private structure
+ */
+static void idpf_reg_ops_init(struct iecm_adapter *adapter)
+{
+	adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init;
+	adapter->dev_ops.reg_ops.vportq_reg_init = idpf_vportq_reg_init;
+	adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init;
+	adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_mb_intr_reg_init;
+	adapter->dev_ops.reg_ops.reset_reg_init = idpf_reset_reg_init;
+	adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset;
+}
+
+/**
+ * idpf_probe - Device initialization routine
+ * @pdev: PCI device information struct
+ * @ent: entry in idpf_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_probe(struct pci_dev *pdev,
+		      const struct pci_device_id __always_unused *ent)
+{
+	struct iecm_adapter *adapter = NULL;
+	int err;
+
+	err = pcim_enable_device(pdev);
+	if (err)
+		return err;
+
+	adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
+	if (!adapter)
+		return -ENOMEM;
+
+	adapter->dev_ops.reg_ops_init = idpf_reg_ops_init;
+	adapter->debug_msk = debug;
+
+	err = iecm_probe(pdev, ent, adapter);
+	if (err)
+		kfree(adapter);
+
+	return err;
+}
+
+/**
+ * idpf_remove - Device removal routine
+ * @pdev: PCI device information struct
+ */
+static void idpf_remove(struct pci_dev *pdev)
+{
+	struct iecm_adapter *adapter = pci_get_drvdata(pdev);
+
+	iecm_remove(pdev);
+	kfree(adapter);
+}
+
+/* idpf_pci_tbl - PCI Dev idpf ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ *   Class, Class Mask, private data (not used) }
+ */
+static const struct pci_device_id idpf_pci_tbl[] = {
+	{ PCI_VDEVICE(INTEL, IDPF_DEV_ID_PF), 0 },
+	/* required last entry */
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, idpf_pci_tbl);
+
+static struct pci_driver idpf_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = idpf_pci_tbl,
+	.probe = idpf_probe,
+	.remove = idpf_remove,
+	.shutdown = iecm_shutdown,
+};
+
+/**
+ * idpf_module_init - Driver registration routine
+ *
+ * idpf_module_init is the first routine called when the driver is
+ * loaded. All it does is register with the PCI subsystem.
+ */
+static int __init idpf_module_init(void)
+{
+	int status;
+
+	pr_info("%s\n", idpf_driver_string);
+	pr_info("%s\n", idpf_copyright);
+
+	status = pci_register_driver(&idpf_driver);
+	if (status)
+		pr_err("failed to register pci driver, err %d\n", status);
+
+	return status;
+}
+module_init(idpf_module_init);
+
+/**
+ * idpf_module_exit - Driver exit cleanup routine
+ *
+ * idpf_module_exit is called just before the driver is removed
+ * from memory.
+ */
+static void __exit idpf_module_exit(void)
+{
+	pci_unregister_driver(&idpf_driver);
+	pr_info("module unloaded\n");
+}
+module_exit(idpf_module_exit);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_reg.c b/drivers/net/ethernet/intel/idpf/idpf_reg.c
new file mode 100644
index 000000000000..21d5c420e626
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_reg.c
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#include "idpf_dev.h"
+#include <linux/net/intel/iecm_lan_pf_regs.h>
+
+/**
+ * idpf_ctlq_reg_init - initialize default mailbox registers
+ * @cq: pointer to the array of create control queues
+ */
+void idpf_ctlq_reg_init(struct iecm_ctlq_create_info *cq)
+{
+	int i;
+
+#define NUM_Q 2
+	for (i = 0; i < NUM_Q; i++) {
+		struct iecm_ctlq_create_info *ccq = cq + i;
+
+		switch (ccq->type) {
+		case IECM_CTLQ_TYPE_MAILBOX_TX:
+			/* set head and tail registers in our local struct */
+			ccq->reg.head = PF_FW_ATQH;
+			ccq->reg.tail = PF_FW_ATQT;
+			ccq->reg.len = PF_FW_ATQLEN;
+			ccq->reg.bah = PF_FW_ATQBAH;
+			ccq->reg.bal = PF_FW_ATQBAL;
+			ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
+			ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
+			ccq->reg.head_mask = PF_FW_ATQH_ATQH_M;
+			break;
+		case IECM_CTLQ_TYPE_MAILBOX_RX:
+			/* set head and tail registers in our local struct */
+			ccq->reg.head = PF_FW_ARQH;
+			ccq->reg.tail = PF_FW_ARQT;
+			ccq->reg.len = PF_FW_ARQLEN;
+			ccq->reg.bah = PF_FW_ARQBAH;
+			ccq->reg.bal = PF_FW_ARQBAL;
+			ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
+			ccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
+			ccq->reg.head_mask = PF_FW_ARQH_ARQH_M;
+			break;
+		default:
+			break;
+		}
+	}
+}
+
+/**
+ * idpf_vportq_reg_init - Initialize tail registers
+ * @vport: virtual port structure
+ */
+void idpf_vportq_reg_init(struct iecm_vport *vport)
+{
+	struct iecm_hw *hw = &vport->adapter->hw;
+	struct iecm_queue *q;
+	int i, j;
+
+	for (i = 0; i < vport->num_txq_grp; i++) {
+		int num_txq = vport->txq_grps[i].num_txq;
+
+		for (j = 0; j < num_txq; j++) {
+			q = &vport->txq_grps[i].txqs[j];
+			q->tail = hw->hw_addr + PF_QTX_COMM_DBELL(q->q_id);
+		}
+	}
+
+	for (i = 0; i < vport->num_rxq_grp; i++) {
+		struct iecm_rxq_group *rxq_grp = &vport->rxq_grps[i];
+		int num_rxq;
+
+		if (iecm_is_queue_model_split(vport->rxq_model)) {
+			for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) {
+				q = &rxq_grp->splitq.bufq_sets[j].bufq;
+				q->tail = hw->hw_addr +
+					  PF_QRX_BUFFQ_TAIL(q->q_id);
+			}
+
+			num_rxq = rxq_grp->splitq.num_rxq_sets;
+		} else {
+			num_rxq = rxq_grp->singleq.num_rxq;
+		}
+
+		for (j = 0; j < num_rxq; j++) {
+			if (iecm_is_queue_model_split(vport->rxq_model))
+				q = &rxq_grp->splitq.rxq_sets[j].rxq;
+			else
+				q = &rxq_grp->singleq.rxqs[j];
+			q->tail = hw->hw_addr + PF_QRX_TAIL(q->q_id);
+		}
+	}
+}
+
+/**
+ * idpf_mb_intr_reg_init - Initialize mailbox interrupt register
+ * @adapter: adapter structure
+ */
+void idpf_mb_intr_reg_init(struct iecm_adapter *adapter)
+{
+	struct iecm_intr_reg *intr = &adapter->mb_vector.intr_reg;
+	int vidx;
+
+	vidx = adapter->mb_vector.v_idx;
+	intr->dyn_ctl = PF_GLINT_DYN_CTL(vidx);
+	intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
+	intr->dyn_ctl_itridx_m = 0x3 << PF_GLINT_DYN_CTL_ITR_INDX_S;
+}
+
+/**
+ * idpf_intr_reg_init - Initialize interrupt registers
+ * @vport: virtual port structure
+ */
+void idpf_intr_reg_init(struct iecm_vport *vport)
+{
+	int q_idx;
+
+	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
+		struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx];
+		struct iecm_intr_reg *intr = &q_vector->intr_reg;
+		u32 vidx = q_vector->v_idx;
+
+		intr->dyn_ctl = PF_GLINT_DYN_CTL(vidx);
+		intr->dyn_ctl_clrpba_m = PF_GLINT_DYN_CTL_CLEARPBA_M;
+		intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
+		intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S;
+		intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S;
+		intr->itr = PF_GLINT_ITR(VIRTCHNL_ITR_IDX_0, vidx);
+	}
+}
+
+/**
+ * idpf_reset_reg_init - Initialize reset registers
+ * @reset_reg: struct to be filled in with reset registers
+ */
+void idpf_reset_reg_init(struct iecm_reset_reg *reset_reg)
+{
+	reset_reg->rstat = PFGEN_RSTAT;
+	reset_reg->rstat_m = PFGEN_RSTAT_PFR_STATE_M;
+}
+
+/**
+ * idpf_trigger_reset - trigger reset
+ * @adapter: Driver specific private structure
+ * @trig_cause: Reason to trigger a reset
+ */
+void idpf_trigger_reset(struct iecm_adapter *adapter,
+			enum iecm_flags __always_unused trig_cause)
+{
+	u32 reset_reg;
+
+	reset_reg = rd32(&adapter->hw, PFGEN_CTRL);
+	wr32(&adapter->hw, PFGEN_CTRL, (reset_reg | PFGEN_CTRL_PFSWR));
+}
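
As a usage note, with this patch and the iecm one applied, enabling both
new modules is the usual Kconfig selection, i.e. in .config terms:

CONFIG_IECM=m
CONFIG_IDPF=m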
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [net-next v5 01/15] virtchnl: Extend AVF ops
  2020-08-24 17:32 ` [net-next v5 01/15] virtchnl: Extend AVF ops Tony Nguyen
@ 2020-08-24 19:42   ` Jakub Kicinski
  2020-08-27 17:16     ` Brady, Alan
  0 siblings, 1 reply; 32+ messages in thread
From: Jakub Kicinski @ 2020-08-24 19:42 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: davem, Alice Michael, netdev, nhorman, sassmann,
	jeffrey.t.kirsher, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

On Mon, 24 Aug 2020 10:32:52 -0700 Tony Nguyen wrote:
> +struct virtchnl_rss_hash {
> +	u64 hash;
> +	u16 vport_id;
> +};
> +
> +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);

I've added 32bit builds to my local setup since v4 was posted - 
looks like there's a number of errors here. You can't assume u64 
forces a 64bit alignment. Best to specify the padding explicitly.
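
For example, one way to make the 16-byte check hold on 32-bit as well
would be to pad the tail explicitly instead of relying on implicit
padding (just a sketch, the exact layout is up to you):

struct virtchnl_rss_hash {
	u64 hash;
	u16 vport_id;
	u8 pad[6];	/* keep sizeof() == 16 even where u64 is only
			 * 4-byte aligned (i386)
			 */
};

VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);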

FWIW these are the errors I got but there may be more:

In file included from ../drivers/net/ethernet/intel/ice/ice.h:37,
                 from ../drivers/net/ethernet/intel/ice/ice_common.h:7,
                 from ../drivers/net/ethernet/intel/ice/ice_sched.h:7,
                 from ../drivers/net/ethernet/intel/ice/ice_sched.c:4:
../include/linux/avf/virtchnl.h:175:36: warning: division by zero [-Wdiv-by-zero]
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                    ^
../include/linux/avf/virtchnl.h:809:1: note: in expansion of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  809 | VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_get_capabilities);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:809:31: error: enumerator value for ‘virtchnl_static_assert_virtchnl_get_capabilities’ is not an integer constant
  809 | VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_get_capabilities);
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:175:53: note: in definition of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                                     ^
../include/linux/avf/virtchnl.h:175:36: warning: division by zero [-Wdiv-by-zero]
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                    ^
../include/linux/avf/virtchnl.h:891:1: note: in expansion of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  891 | VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_txq_info_v2);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:891:31: error: enumerator value for ‘virtchnl_static_assert_virtchnl_txq_info_v2’ is not an integer constant
  891 | VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_txq_info_v2);
      |                               ^~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:175:53: note: in definition of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                                     ^
../include/linux/avf/virtchnl.h:175:36: warning: division by zero [-Wdiv-by-zero]
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                    ^
../include/linux/avf/virtchnl.h:907:1: note: in expansion of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  907 | VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_config_tx_queues);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:907:31: error: enumerator value for ‘virtchnl_static_assert_virtchnl_config_tx_queues’ is not an integer constant
  907 | VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_config_tx_queues);
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:175:53: note: in definition of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                                     ^
../include/linux/avf/virtchnl.h:175:36: warning: division by zero [-Wdiv-by-zero]
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                    ^
../include/linux/avf/virtchnl.h:937:1: note: in expansion of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  937 | VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_rxq_info_v2);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:937:31: error: enumerator value for ‘virtchnl_static_assert_virtchnl_rxq_info_v2’ is not an integer constant
  937 | VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_rxq_info_v2);
      |                               ^~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:175:53: note: in definition of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                                     ^
../include/linux/avf/virtchnl.h:175:36: warning: division by zero [-Wdiv-by-zero]
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                    ^
../include/linux/avf/virtchnl.h:952:1: note: in expansion of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  952 | VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_config_rx_queues);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:952:31: error: enumerator value for ‘virtchnl_static_assert_virtchnl_config_rx_queues’ is not an integer constant
  952 | VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_config_rx_queues);
      |                               ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:175:53: note: in definition of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                                     ^
../include/linux/avf/virtchnl.h:175:36: warning: division by zero [-Wdiv-by-zero]
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
      |                                    ^
../include/linux/avf/virtchnl.h:1090:1: note: in expansion of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
 1090 | VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:1090:31: error: enumerator value for ‘virtchnl_static_assert_virtchnl_rss_hash’ is not an integer constant
 1090 | VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);
      |                               ^~~~~~~~~~~~~~~~~
../include/linux/avf/virtchnl.h:175:53: note: in definition of macro ‘VIRTCHNL_CHECK_STRUCT_LEN’
  175 |  { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 06/15] iecm: Implement mailbox functionality
  2020-08-24 17:32 ` [net-next v5 06/15] iecm: Implement mailbox functionality Tony Nguyen
@ 2020-08-24 20:35   ` Brady, Alan
  2020-08-24 20:40     ` Jakub Kicinski
  0 siblings, 1 reply; 32+ messages in thread
From: Brady, Alan @ 2020-08-24 20:35 UTC (permalink / raw)
  To: Nguyen, Anthony L, davem
  Cc: Michael, Alice, netdev, nhorman, sassmann, Kirsher, Jeffrey T,
	Nguyen, Anthony L, Burra, Phani R, Hay, Joshua A, Chittim, Madhu,
	Linga, Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse,
	Samudrala, Sridhar

> -----Original Message-----
> From: Tony Nguyen <anthony.l.nguyen@intel.com>
> Sent: Monday, August 24, 2020 10:33 AM
> To: davem@davemloft.net
> Cc: Michael, Alice <alice.michael@intel.com>; netdev@vger.kernel.org;
> nhorman@redhat.com; sassmann@redhat.com; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@intel.com>; Nguyen, Anthony L
> <anthony.l.nguyen@intel.com>; Brady, Alan <alan.brady@intel.com>; Burra,
> Phani R <phani.r.burra@intel.com>; Hay, Joshua A <joshua.a.hay@intel.com>;
> Chittim, Madhu <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: [net-next v5 06/15] iecm: Implement mailbox functionality
> 
> From: Alice Michael <alice.michael@intel.com>
> 
> Implement mailbox setup, take down, and commands.
> 
> Signed-off-by: Alice Michael <alice.michael@intel.com>
> Signed-off-by: Alan Brady <alan.brady@intel.com>
> Signed-off-by: Phani Burra <phani.r.burra@intel.com>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
> Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  .../net/ethernet/intel/iecm/iecm_controlq.c   | 498 +++++++++++++++++-
>  .../ethernet/intel/iecm/iecm_controlq_setup.c | 105 +++-
>  drivers/net/ethernet/intel/iecm/iecm_lib.c    |  94 +++-
>  .../net/ethernet/intel/iecm/iecm_virtchnl.c   | 414 ++++++++++++++-
>  4 files changed, 1082 insertions(+), 29 deletions(-)
> 
>  /**
> @@ -28,7 +36,32 @@ static enum iecm_status iecm_ctlq_init_regs(struct
> iecm_hw *hw,
>  					    struct iecm_ctlq_info *cq,

I believe we have made a grave error and sent the wrong version.  iecm_status enum should be completely removed.  Sincere apologies for the thrash.  Will be sending another version up soon.

-Alan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 06/15] iecm: Implement mailbox functionality
  2020-08-24 20:35   ` Brady, Alan
@ 2020-08-24 20:40     ` Jakub Kicinski
  0 siblings, 0 replies; 32+ messages in thread
From: Jakub Kicinski @ 2020-08-24 20:40 UTC (permalink / raw)
  To: Brady, Alan
  Cc: Nguyen, Anthony L, davem, Michael, Alice, netdev, nhorman,
	sassmann, Kirsher, Jeffrey T, Burra, Phani R, Hay, Joshua A,
	Chittim, Madhu, Linga, Pavan Kumar, Skidmore, Donald C,
	Brandeburg, Jesse, Samudrala, Sridhar

On Mon, 24 Aug 2020 20:35:51 +0000 Brady, Alan wrote:
> > @@ -28,7 +36,32 @@ static enum iecm_status iecm_ctlq_init_regs(struct
> > iecm_hw *hw,
> >  					    struct iecm_ctlq_info *cq,  
> 
> I believe we have made a grave error and sent the wrong version.
> iecm_status enum should be completely removed.  Sincere apologies for
> the thrash.  Will be sending another version up soon.

Ugh, I was half way through review.

Let me send my comments and you can see how many of them apply.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 11/15] iecm: Add splitq TX/RX
  2020-08-24 17:33 ` [net-next v5 11/15] iecm: Add splitq TX/RX Tony Nguyen
@ 2020-08-24 20:40   ` Jakub Kicinski
  2020-08-27 17:17     ` Brady, Alan
  0 siblings, 1 reply; 32+ messages in thread
From: Jakub Kicinski @ 2020-08-24 20:40 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: davem, Alice Michael, netdev, nhorman, sassmann,
	jeffrey.t.kirsher, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

On Mon, 24 Aug 2020 10:33:02 -0700 Tony Nguyen wrote:
>  void iecm_get_stats64(struct net_device *netdev,
>  		      struct rtnl_link_stats64 *stats)
>  {
> -	/* stub */
> +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> +
> +	iecm_send_get_stats_msg(vport);

Doesn't this call sleep? This .ndo callback can't sleep.
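
One common way around that (sketch only, assuming a sleepable stats
worker keeps a cached copy; the 'netstats' field name here is made up)
is to only copy cached counters in the .ndo callback:

void iecm_get_stats64(struct net_device *netdev,
		      struct rtnl_link_stats64 *stats)
{
	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);

	/* no mailbox traffic here; report whatever the periodic stats
	 * worker cached last
	 */
	*stats = vport->netstats;
}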

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 04/15] iecm: Common module introduction and function stubs
  2020-08-24 17:32 ` [net-next v5 04/15] iecm: Common module introduction and function stubs Tony Nguyen
@ 2020-08-24 20:41   ` Jakub Kicinski
  2020-08-27 17:18     ` Brady, Alan
  0 siblings, 1 reply; 32+ messages in thread
From: Jakub Kicinski @ 2020-08-24 20:41 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: davem, Alice Michael, netdev, nhorman, sassmann,
	jeffrey.t.kirsher, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

On Mon, 24 Aug 2020 10:32:55 -0700 Tony Nguyen wrote:
> +static inline bool
> +iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget)
> +{
> +	/* stub */
> +}

Still a lot of static inlines throughout. Are they making any
difference? The compiler will inline static functions, anyway.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 08/15] iecm: Implement vector allocation
  2020-08-24 17:32 ` [net-next v5 08/15] iecm: Implement vector allocation Tony Nguyen
@ 2020-08-24 20:41   ` Jakub Kicinski
  2020-08-27 17:28     ` Brady, Alan
  0 siblings, 1 reply; 32+ messages in thread
From: Jakub Kicinski @ 2020-08-24 20:41 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: davem, Alice Michael, netdev, nhorman, sassmann,
	jeffrey.t.kirsher, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

On Mon, 24 Aug 2020 10:32:59 -0700 Tony Nguyen wrote:
>  static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)
>  {
> -	/* stub */
> +	int irq_num;
> +
> +	irq_num = adapter->msix_entries[0].vector;
> +	synchronize_irq(irq_num);

I don't think you need to sync irq before freeing it.

> +	free_irq(irq_num, adapter);


>  static int iecm_mb_intr_init(struct iecm_adapter *adapter)
>  {
> -	/* stub */
> +	int err = 0;
> +
> +	iecm_get_mb_vec_id(adapter);
> +	adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter);
> +	adapter->irq_mb_handler = iecm_mb_intr_clean;
> +	err = iecm_mb_intr_req_irq(adapter);
> +	return err;

return iecm_mb_intr_req_irq(adapter);

>  static void iecm_vport_intr_rel_irq(struct iecm_vport *vport)
>  {
> -	/* stub */
> +	struct iecm_adapter *adapter = vport->adapter;
> +	int vector;
> +
> +	for (vector = 0; vector < vport->num_q_vectors; vector++) {
> +		struct iecm_q_vector *q_vector = &vport->q_vectors[vector];
> +		int irq_num, vidx;
> +
> +		/* free only the IRQs that were actually requested */
> +		if (!q_vector)
> +			continue;
> +
> +		vidx = vector + vport->q_vector_base;
> +		irq_num = adapter->msix_entries[vidx].vector;
> +
> +		/* clear the affinity_mask in the IRQ descriptor */
> +		irq_set_affinity_hint(irq_num, NULL);
> +		synchronize_irq(irq_num);

here as well

> +		free_irq(irq_num, q_vector);
> +	}
>  }

>  void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport)
>  {
> -	/* stub */
> +	struct iecm_q_vector *q_vector = vport->q_vectors;
> +	struct iecm_hw *hw = &vport->adapter->hw;
> +	int q_idx;
> +
> +	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++)
> +		writel_relaxed(0, hw->hw_addr +
> +				  q_vector[q_idx].intr_reg.dyn_ctl);

Why the use of _releaxed() here? is this performance-sensitive?
There is no barrier after, which makes this code fragile.

> @@ -1052,12 +1122,42 @@ void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport)
>  static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector,
>  					const int type, u16 itr)
>  {
> -	/* stub */
> +	u32 itr_val;
> +
> +	itr &= IECM_ITR_MASK;
> +	/* Don't clear PBA because that can cause lost interrupts that
> +	 * came in while we were cleaning/polling
> +	 */
> +	itr_val = q_vector->intr_reg.dyn_ctl_intena_m |
> +		  (type << q_vector->intr_reg.dyn_ctl_itridx_s) |
> +		  (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1));
> +
> +	return itr_val;
>  }

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 13/15] iecm: Add ethtool
  2020-08-24 17:33 ` [net-next v5 13/15] iecm: Add ethtool Tony Nguyen
@ 2020-08-24 21:44   ` Michal Kubecek
  2020-08-27 17:29     ` Brady, Alan
  0 siblings, 1 reply; 32+ messages in thread
From: Michal Kubecek @ 2020-08-24 21:44 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: davem, Alice Michael, netdev, nhorman, sassmann,
	jeffrey.t.kirsher, Alan Brady, Phani Burra, Joshua Hay,
	Madhu Chittim, Pavan Kumar Linga, Donald Skidmore,
	Jesse Brandeburg, Sridhar Samudrala

On Mon, Aug 24, 2020 at 10:33:04AM -0700, Tony Nguyen wrote:
> From: Alice Michael <alice.michael@intel.com>
> 
> Implement ethtool interface for the common module.
> 
> Signed-off-by: Alice Michael <alice.michael@intel.com>
> Signed-off-by: Alan Brady <alan.brady@intel.com>
> Signed-off-by: Phani Burra <phani.r.burra@intel.com>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
> Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
[...]
> +static void iecm_get_channels(struct net_device *netdev,
> +			      struct ethtool_channels *ch)
> +{
> +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> +	unsigned int combined;
> +
> +	combined = min(vport->num_txq, vport->num_rxq);
> +
> +	/* Report maximum channels */
> +	ch->max_combined = IECM_MAX_Q;
> +
> +	ch->max_other = IECM_MAX_NONQ;
> +	ch->other_count = IECM_MAX_NONQ;
> +
> +	ch->combined_count = combined;
> +	ch->rx_count = vport->num_rxq - combined;
> +	ch->tx_count = vport->num_txq - combined;
> +}

You don't set max_rx and max_tx, so they will always be reported
as 0. If vport->num_rxq != vport->num_txq, one of rx_count, tx_count
will be higher than the corresponding maximum, so any "set channels"
request not touching that value will fail the sanity check in
ethnl_set_channels() or ethtool_set_channels().
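
Something along these lines would keep the core checks happy (sketch
only, and assuming IECM_MAX_Q is also the per-type limit, which would
need confirming):

	ch->max_combined = IECM_MAX_Q;
	ch->max_rx = IECM_MAX_Q;
	ch->max_tx = IECM_MAX_Q;

	ch->max_other = IECM_MAX_NONQ;
	ch->other_count = IECM_MAX_NONQ;

	ch->combined_count = combined;
	ch->rx_count = vport->num_rxq - combined;
	ch->tx_count = vport->num_txq - combined;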

> +
> +/**
> + * iecm_set_channels: set the new channel count
> + * @netdev: network interface device structure
> + * @ch: channel information structure
> + *
> + * Negotiate a new number of channels with CP. Returns 0 on success, negative
> + * on failure.
> + */
> +static int iecm_set_channels(struct net_device *netdev,
> +			     struct ethtool_channels *ch)
> +{
> +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> +	int num_req_q = ch->combined_count;
> +
> +	if (num_req_q == max(vport->num_txq, vport->num_rxq))
> +		return 0;
> +
> +	vport->adapter->config_data.num_req_qs = num_req_q;
> +
> +	return iecm_initiate_soft_reset(vport, __IECM_SR_Q_CHANGE);
> +}

In iecm_get_channels() you set combined_count to the minimum of num_rxq
and num_txq, but here you expect it to be the maximum. You also
completely ignore everything other than combined_count. Can this ever
work correctly if num_rxq != num_txq?
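
If dedicated RX/TX queues are meant to be supported, the request would
have to consider all three counts, e.g. something like (sketch only):

	u16 num_req_txq = ch->combined_count + ch->tx_count;
	u16 num_req_rxq = ch->combined_count + ch->rx_count;

	if (num_req_txq == vport->num_txq &&
	    num_req_rxq == vport->num_rxq)
		return 0;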

> +static int iecm_set_ringparam(struct net_device *netdev,
> +			      struct ethtool_ringparam *ring)
> +{
> +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> +	u32 new_rx_count, new_tx_count;
> +
> +	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> +		return -EINVAL;

This will be caught by the generic sanity check in ethnl_set_rings() or
ethtool_set_ringparam().

Michal

> +
> +	new_tx_count = ALIGN(ring->tx_pending, IECM_REQ_DESC_MULTIPLE);
> +	new_rx_count = ALIGN(ring->rx_pending, IECM_REQ_DESC_MULTIPLE);
> +
> +	/* if nothing to do return success */
> +	if (new_tx_count == vport->txq_desc_count &&
> +	    new_rx_count == vport->rxq_desc_count)
> +		return 0;
> +
> +	vport->adapter->config_data.num_req_txq_desc = new_tx_count;
> +	vport->adapter->config_data.num_req_rxq_desc = new_rx_count;
> +
> +	return iecm_initiate_soft_reset(vport, __IECM_SR_Q_DESC_CHANGE);
> +}

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 01/15] virtchnl: Extend AVF ops
  2020-08-24 19:42   ` Jakub Kicinski
@ 2020-08-27 17:16     ` Brady, Alan
  2020-09-10 21:06       ` Brady, Alan
  0 siblings, 1 reply; 32+ messages in thread
From: Brady, Alan @ 2020-08-27 17:16 UTC (permalink / raw)
  To: Jakub Kicinski, Nguyen, Anthony L
  Cc: davem, Michael, Alice, netdev, nhorman, sassmann, Kirsher,
	Jeffrey T, Burra, Phani R, Hay, Joshua A, Chittim, Madhu, Linga,
	Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse, Samudrala,
	Sridhar

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Monday, August 24, 2020 12:43 PM
> To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> Cc: davem@davemloft.net; Michael, Alice <alice.michael@intel.com>;
> netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Brady, Alan
> <alan.brady@intel.com>; Burra, Phani R <phani.r.burra@intel.com>; Hay,
> Joshua A <joshua.a.hay@intel.com>; Chittim, Madhu
> <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: Re: [net-next v5 01/15] virtchnl: Extend AVF ops
> 
> On Mon, 24 Aug 2020 10:32:52 -0700 Tony Nguyen wrote:
> > +struct virtchnl_rss_hash {
> > +	u64 hash;
> > +	u16 vport_id;
> > +};
> > +
> > +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);
> 
> I've added 32bit builds to my local setup since v4 was posted - looks like there's
> a number of errors here. You can't assume u64 forces a 64bit alignment. Best to
> specify the padding explicitly.
> 
> FWIW these are the errors I got but there may be more:
> 

It seems like these are triggering on old messages too; curious why this wasn't caught sooner.  Will fix, thanks.

-Alan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 11/15] iecm: Add splitq TX/RX
  2020-08-24 20:40   ` Jakub Kicinski
@ 2020-08-27 17:17     ` Brady, Alan
  0 siblings, 0 replies; 32+ messages in thread
From: Brady, Alan @ 2020-08-27 17:17 UTC (permalink / raw)
  To: Jakub Kicinski, Nguyen, Anthony L
  Cc: davem, Michael, Alice, netdev, nhorman, sassmann, Kirsher,
	Jeffrey T, Burra, Phani R, Hay, Joshua A, Chittim, Madhu, Linga,
	Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse, Samudrala,
	Sridhar

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Monday, August 24, 2020 1:41 PM
> To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> Cc: davem@davemloft.net; Michael, Alice <alice.michael@intel.com>;
> netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Brady, Alan
> <alan.brady@intel.com>; Burra, Phani R <phani.r.burra@intel.com>; Hay,
> Joshua A <joshua.a.hay@intel.com>; Chittim, Madhu
> <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: Re: [net-next v5 11/15] iecm: Add splitq TX/RX
> 
> On Mon, 24 Aug 2020 10:33:02 -0700 Tony Nguyen wrote:
> >  void iecm_get_stats64(struct net_device *netdev,
> >  		      struct rtnl_link_stats64 *stats)  {
> > -	/* stub */
> > +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> > +
> > +	iecm_send_get_stats_msg(vport);
> 
> Doesn't this call sleep? This .ndo callback can't sleep.

Will fix

-Alan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 04/15] iecm: Common module introduction and function stubs
  2020-08-24 20:41   ` Jakub Kicinski
@ 2020-08-27 17:18     ` Brady, Alan
  0 siblings, 0 replies; 32+ messages in thread
From: Brady, Alan @ 2020-08-27 17:18 UTC (permalink / raw)
  To: Jakub Kicinski, Nguyen, Anthony L
  Cc: davem, Michael, Alice, netdev, nhorman, sassmann, Kirsher,
	Jeffrey T, Burra, Phani R, Hay, Joshua A, Chittim, Madhu, Linga,
	Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse, Samudrala,
	Sridhar

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Monday, August 24, 2020 1:41 PM
> To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> Cc: davem@davemloft.net; Michael, Alice <alice.michael@intel.com>;
> netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Brady, Alan
> <alan.brady@intel.com>; Burra, Phani R <phani.r.burra@intel.com>; Hay,
> Joshua A <joshua.a.hay@intel.com>; Chittim, Madhu
> <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: Re: [net-next v5 04/15] iecm: Common module introduction and
> function stubs
> 
> On Mon, 24 Aug 2020 10:32:55 -0700 Tony Nguyen wrote:
> > +static inline bool
> > +iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget) {
> > +	/* stub */
> > +}
> 
> Still a lot of static inlines throughout. Are they making any difference? The
> compiler will inline static functions, anyway.

We haven't profiled the inlines to verify they're doing anything, perhaps just some misguided preemptive performance attempt.  We can remove them.  Will fix.

-alan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 08/15] iecm: Implement vector allocation
  2020-08-24 20:41   ` Jakub Kicinski
@ 2020-08-27 17:28     ` Brady, Alan
  2020-08-27 18:19       ` Jakub Kicinski
  0 siblings, 1 reply; 32+ messages in thread
From: Brady, Alan @ 2020-08-27 17:28 UTC (permalink / raw)
  To: Jakub Kicinski, Nguyen, Anthony L
  Cc: davem, Michael, Alice, netdev, nhorman, sassmann, Kirsher,
	Jeffrey T, Burra, Phani R, Hay, Joshua A, Chittim, Madhu, Linga,
	Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse, Samudrala,
	Sridhar

> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Monday, August 24, 2020 1:42 PM
> To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> Cc: davem@davemloft.net; Michael, Alice <alice.michael@intel.com>;
> netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Brady, Alan
> <alan.brady@intel.com>; Burra, Phani R <phani.r.burra@intel.com>; Hay,
> Joshua A <joshua.a.hay@intel.com>; Chittim, Madhu
> <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: Re: [net-next v5 08/15] iecm: Implement vector allocation
> 
> On Mon, 24 Aug 2020 10:32:59 -0700 Tony Nguyen wrote:
> >  static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)  {
> > -	/* stub */
> > +	int irq_num;
> > +
> > +	irq_num = adapter->msix_entries[0].vector;
> > +	synchronize_irq(irq_num);
> 
> I don't think you need to sync irq before freeing it.
> 

I see other non-Intel drivers syncing before disable/free, and Intel drivers have historically done it.  Not that that's necessarily correct, but are you certain?

> > +	free_irq(irq_num, adapter);
> 
> 
> >  static int iecm_mb_intr_init(struct iecm_adapter *adapter)  {
> > -	/* stub */
> > +	int err = 0;
> > +
> > +	iecm_get_mb_vec_id(adapter);
> > +	adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter);
> > +	adapter->irq_mb_handler = iecm_mb_intr_clean;
> > +	err = iecm_mb_intr_req_irq(adapter);
> > +	return err;
> 
> return iecm_mb_intr_req_irq(adapter);
> 
> >  static void iecm_vport_intr_rel_irq(struct iecm_vport *vport)  {
> > -	/* stub */
> > +	struct iecm_adapter *adapter = vport->adapter;
> > +	int vector;
> > +
> > +	for (vector = 0; vector < vport->num_q_vectors; vector++) {
> > +		struct iecm_q_vector *q_vector = &vport->q_vectors[vector];
> > +		int irq_num, vidx;
> > +
> > +		/* free only the IRQs that were actually requested */
> > +		if (!q_vector)
> > +			continue;
> > +
> > +		vidx = vector + vport->q_vector_base;
> > +		irq_num = adapter->msix_entries[vidx].vector;
> > +
> > +		/* clear the affinity_mask in the IRQ descriptor */
> > +		irq_set_affinity_hint(irq_num, NULL);
> > +		synchronize_irq(irq_num);
> 
> here as well
> 
> > +		free_irq(irq_num, q_vector);
> > +	}
> >  }
> 
> >  void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport)  {
> > -	/* stub */
> > +	struct iecm_q_vector *q_vector = vport->q_vectors;
> > +	struct iecm_hw *hw = &vport->adapter->hw;
> > +	int q_idx;
> > +
> > +	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++)
> > +		writel_relaxed(0, hw->hw_addr +
> > +				  q_vector[q_idx].intr_reg.dyn_ctl);
> 
> Why the use of _releaxed() here? is this performance-sensitive?
> There is no barrier after, which makes this code fragile.
> 

_relaxed not required here, will fix.
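
Planning on something like this (plain writel() so the write keeps its
implicit barrier):

	for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++)
		writel(0, hw->hw_addr + q_vector[q_idx].intr_reg.dyn_ctl);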

-Alan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 13/15] iecm: Add ethtool
  2020-08-24 21:44   ` Michal Kubecek
@ 2020-08-27 17:29     ` Brady, Alan
  0 siblings, 0 replies; 32+ messages in thread
From: Brady, Alan @ 2020-08-27 17:29 UTC (permalink / raw)
  To: Michal Kubecek, Nguyen, Anthony L
  Cc: davem, Michael, Alice, netdev, nhorman, sassmann, Kirsher,
	Jeffrey T, Burra, Phani R, Hay, Joshua A, Chittim, Madhu, Linga,
	Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse, Samudrala,
	Sridhar

> -----Original Message-----
> From: Michal Kubecek <mkubecek@suse.cz>
> Sent: Monday, August 24, 2020 2:45 PM
> To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> Cc: davem@davemloft.net; Michael, Alice <alice.michael@intel.com>;
> netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Brady, Alan
> <alan.brady@intel.com>; Burra, Phani R <phani.r.burra@intel.com>; Hay,
> Joshua A <joshua.a.hay@intel.com>; Chittim, Madhu
> <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: Re: [net-next v5 13/15] iecm: Add ethtool
> 
> On Mon, Aug 24, 2020 at 10:33:04AM -0700, Tony Nguyen wrote:
> > From: Alice Michael <alice.michael@intel.com>
> >
> > Implement ethtool interface for the common module.
> >
> > Signed-off-by: Alice Michael <alice.michael@intel.com>
> > Signed-off-by: Alan Brady <alan.brady@intel.com>
> > Signed-off-by: Phani Burra <phani.r.burra@intel.com>
> > Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> > Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
> > Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
> > Reviewed-by: Donald Skidmore <donald.c.skidmore@intel.com>
> > Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> > Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> > Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > ---
> [...]
> > +static void iecm_get_channels(struct net_device *netdev,
> > +			      struct ethtool_channels *ch)
> > +{
> > +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> > +	unsigned int combined;
> > +
> > +	combined = min(vport->num_txq, vport->num_rxq);
> > +
> > +	/* Report maximum channels */
> > +	ch->max_combined = IECM_MAX_Q;
> > +
> > +	ch->max_other = IECM_MAX_NONQ;
> > +	ch->other_count = IECM_MAX_NONQ;
> > +
> > +	ch->combined_count = combined;
> > +	ch->rx_count = vport->num_rxq - combined;
> > +	ch->tx_count = vport->num_txq - combined;
> > +}
> 
> You don't set max_rx and max_tx so that they will be always reported as 0. If
> vport->num_rxq != vport->num_txq, one of rx_count, tx_count will be higher
> than corresponding maximum so that any "set channels"
> request not touching that value will fail the sanity check in
> ethnl_set_channels() or ethtool_set_channels().

Will fix.
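
A rough sketch of what the fix could look like (assuming IECM_MAX_Q is also the correct per-direction maximum; the real limits may differ):

	/* Report maximums so the core sanity checks in
	 * ethnl_set_channels()/ethtool_set_channels() can pass.
	 */
	ch->max_combined = IECM_MAX_Q;
	ch->max_rx = IECM_MAX_Q;
	ch->max_tx = IECM_MAX_Q;
	ch->max_other = IECM_MAX_NONQ;
	ch->other_count = IECM_MAX_NONQ;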

> > +
> > +/**
> > + * iecm_set_channels: set the new channel count
> > + * @netdev: network interface device structure
> > + * @ch: channel information structure
> > + *
> > + * Negotiate a new number of channels with CP. Returns 0 on success,
> > +negative
> > + * on failure.
> > + */
> > +static int iecm_set_channels(struct net_device *netdev,
> > +			     struct ethtool_channels *ch)
> > +{
> > +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> > +	int num_req_q = ch->combined_count;
> > +
> > +	if (num_req_q == max(vport->num_txq, vport->num_rxq))
> > +		return 0;
> > +
> > +	vport->adapter->config_data.num_req_qs = num_req_q;
> > +
> > +	return iecm_initiate_soft_reset(vport, __IECM_SR_Q_CHANGE);
> > +}
> 
> In iecm_get_channels() you set combined_count to minimum of num_rxq and
> num_txq but here you expect it to be the maximum. And you also completely
> ignore everything else than combined_count. Can this ever work correctly if
> num_rxq != num_txq?
> 

We will refactor this to make more sense.
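
One possible direction for making the two paths consistent (a sketch only; the actual refactor may well look different):

	/* In iecm_set_channels(), compare the request against the same
	 * quantities iecm_get_channels() reports, rather than max().
	 */
	combined = min(vport->num_txq, vport->num_rxq);
	if (ch->combined_count == combined &&
	    ch->rx_count == vport->num_rxq - combined &&
	    ch->tx_count == vport->num_txq - combined)
		return 0;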

> > +static int iecm_set_ringparam(struct net_device *netdev,
> > +			      struct ethtool_ringparam *ring)
> > +{
> > +	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
> > +	u32 new_rx_count, new_tx_count;
> > +
> > +	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> > +		return -EINVAL;
> 
> This will be caught by the generic sanity check in ethnl_set_rings() or
> ethtool_set_ringparam().
> 
> Michal
> 

Will fix, thanks

-Alan



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 08/15] iecm: Implement vector allocation
  2020-08-27 17:28     ` Brady, Alan
@ 2020-08-27 18:19       ` Jakub Kicinski
  0 siblings, 0 replies; 32+ messages in thread
From: Jakub Kicinski @ 2020-08-27 18:19 UTC (permalink / raw)
  To: Brady, Alan
  Cc: Nguyen, Anthony L, davem, Michael, Alice, netdev, nhorman,
	sassmann, Kirsher, Jeffrey T, Burra, Phani R, Hay, Joshua A,
	Chittim, Madhu, Linga, Pavan Kumar, Skidmore, Donald C,
	Brandeburg, Jesse, Samudrala, Sridhar

On Thu, 27 Aug 2020 17:28:29 +0000 Brady, Alan wrote:
> > On Mon, 24 Aug 2020 10:32:59 -0700 Tony Nguyen wrote:  
> > >  static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)
> > >  {
> > > -	/* stub */
> > > +	int irq_num;
> > > +
> > > +	irq_num = adapter->msix_entries[0].vector;
> > > +	synchronize_irq(irq_num);  
> > 
> > I don't think you need to sync irq before freeing it.
> 
> I see other non-Intel drivers syncing before disable/free, and Intel
> drivers have historically done it as well.  That doesn't necessarily
> make it correct, but are you certain the sync isn't needed?

/**
 *	free_irq - free an interrupt allocated with request_irq
 *	@irq: Interrupt line to free
 *	@dev_id: Device identity to free
 *
 *	Remove an interrupt handler. The handler is removed and if the
 *	interrupt line is no longer in use by any driver it is disabled.
 *	On a shared IRQ the caller must ensure the interrupt is disabled
 *	on the card it drives before calling this function. The function
 *	does not return until any executing interrupts for this IRQ
 *	have completed.
 *
 *	This function must not be called from interrupt context.
 *
 *	Returns the devname argument passed to request_irq.
 */
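
In other words, free_irq() already waits for any executing handler, so the mailbox release path can presumably be reduced to something like this (a sketch based on the code quoted above):

	static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter)
	{
		/* free_irq() does not return until in-flight handlers finish */
		free_irq(adapter->msix_entries[0].vector, adapter);
	}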

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 15/15] idpf: Introduce idpf driver
  2020-08-24 17:33 ` [net-next v5 15/15] idpf: Introduce idpf driver Tony Nguyen
@ 2020-08-28 10:35   ` kernel test robot
  0 siblings, 0 replies; 32+ messages in thread
From: kernel test robot @ 2020-08-28 10:35 UTC (permalink / raw)
  To: kbuild-all

Hi Tony,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Tony-Nguyen/100GbE-Intel-Wired-LAN-Driver-Updates-2020-08-24/20200825-013928
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 7611cbb900b4b9a6fe5eca12fb12645c0576a015
config: ia64-randconfig-r036-20200828 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=ia64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   ia64-linux-ld: arch/ia64/kernel/elfcore.o: in function `elf_core_write_extra_phdrs':
   arch/ia64/kernel/elfcore.c:38: undefined reference to `dump_emit'
   ia64-linux-ld: arch/ia64/kernel/elfcore.o: in function `elf_core_write_extra_data':
   arch/ia64/kernel/elfcore.c:55: undefined reference to `dump_emit'
   ia64-linux-ld: drivers/net/ethernet/intel/idpf/idpf_main.o: in function `idpf_remove':
>> drivers/net/ethernet/intel/idpf/idpf_main.c:78: undefined reference to `iecm_remove'
   ia64-linux-ld: drivers/net/ethernet/intel/idpf/idpf_main.o: in function `idpf_probe':
>> drivers/net/ethernet/intel/idpf/idpf_main.c:63: undefined reference to `iecm_probe'
>> ia64-linux-ld: drivers/net/ethernet/intel/idpf/idpf_main.o:(.data.rel+0x40): undefined reference to `iecm_shutdown'

# https://github.com/0day-ci/linux/commit/c9fd45bb89dd13d9cdfb1e034178f4e9646d43e7
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Tony-Nguyen/100GbE-Intel-Wired-LAN-Driver-Updates-2020-08-24/20200825-013928
git checkout c9fd45bb89dd13d9cdfb1e034178f4e9646d43e7
vim +78 drivers/net/ethernet/intel/idpf/idpf_main.c

    38	
    39	/**
    40	 * idpf_probe - Device initialization routine
    41	 * @pdev: PCI device information struct
    42	 * @ent: entry in idpf_pci_tbl
    43	 *
    44	 * Returns 0 on success, negative on failure
    45	 */
    46	static int idpf_probe(struct pci_dev *pdev,
    47			      const struct pci_device_id __always_unused *ent)
    48	{
    49		struct iecm_adapter *adapter = NULL;
    50		int err;
    51	
    52		err = pcim_enable_device(pdev);
    53		if (err)
    54			return err;
    55	
    56		adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
    57		if (!adapter)
    58			return -ENOMEM;
    59	
    60		adapter->dev_ops.reg_ops_init = idpf_reg_ops_init;
    61		adapter->debug_msk = debug;
    62	
  > 63		err = iecm_probe(pdev, ent, adapter);
    64		if (err)
    65			kfree(adapter);
    66	
    67		return err;
    68	}
    69	
    70	/**
    71	 * idpf_remove - Device removal routine
    72	 * @pdev: PCI device information struct
    73	 */
    74	static void idpf_remove(struct pci_dev *pdev)
    75	{
    76		struct iecm_adapter *adapter = pci_get_drvdata(pdev);
    77	
  > 78		iecm_remove(pdev);
    79		kfree(adapter);
    80	}
    81	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [net-next v5 01/15] virtchnl: Extend AVF ops
  2020-08-27 17:16     ` Brady, Alan
@ 2020-09-10 21:06       ` Brady, Alan
  2020-09-11 15:08         ` Jakub Kicinski
  0 siblings, 1 reply; 32+ messages in thread
From: Brady, Alan @ 2020-09-10 21:06 UTC (permalink / raw)
  To: Jakub Kicinski, Nguyen, Anthony L
  Cc: davem, Michael, Alice, netdev, nhorman, sassmann, Kirsher,
	Jeffrey T, Burra, Phani R, Hay, Joshua A, Chittim, Madhu, Linga,
	Pavan Kumar, Skidmore, Donald C, Brandeburg, Jesse, Samudrala,
	Sridhar

> -----Original Message-----
> From: Brady, Alan
> Sent: Thursday, August 27, 2020 10:16 AM
> To: Jakub Kicinski <kuba@kernel.org>; Nguyen, Anthony L
> <anthony.l.nguyen@intel.com>
> Cc: davem@davemloft.net; Michael, Alice <Alice.Michael@intel.com>;
> netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Burra, Phani R
> <phani.r.burra@intel.com>; Hay, Joshua A <joshua.a.hay@intel.com>; Chittim,
> Madhu <madhu.chittim@intel.com>; Linga, Pavan Kumar
> <Pavan.Kumar.Linga@intel.com>; Skidmore, Donald C
> <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Subject: RE: [net-next v5 01/15] virtchnl: Extend AVF ops
> 
> > -----Original Message-----
> > From: Jakub Kicinski <kuba@kernel.org>
> > Sent: Monday, August 24, 2020 12:43 PM
> > To: Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> > Cc: davem@davemloft.net; Michael, Alice <alice.michael@intel.com>;
> > netdev@vger.kernel.org; nhorman@redhat.com; sassmann@redhat.com;
> > Kirsher, Jeffrey T <jeffrey.t.kirsher@intel.com>; Brady, Alan
> > <alan.brady@intel.com>; Burra, Phani R <phani.r.burra@intel.com>; Hay,
> > Joshua A <joshua.a.hay@intel.com>; Chittim, Madhu
> > <madhu.chittim@intel.com>; Linga, Pavan Kumar
> > <pavan.kumar.linga@intel.com>; Skidmore, Donald C
> > <donald.c.skidmore@intel.com>; Brandeburg, Jesse
> > <jesse.brandeburg@intel.com>; Samudrala, Sridhar
> > <sridhar.samudrala@intel.com>
> > Subject: Re: [net-next v5 01/15] virtchnl: Extend AVF ops
> >
> > On Mon, 24 Aug 2020 10:32:52 -0700 Tony Nguyen wrote:
> > > +struct virtchnl_rss_hash {
> > > +	u64 hash;
> > > +	u16 vport_id;
> > > +};
> > > +
> > > +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);
> >
> > I've added 32bit builds to my local setup since v4 was posted - looks
> > like there's a number of errors here. You can't assume u64 forces a
> > 64bit alignment. Best to specify the padding explicitly.
> >
> > FWIW these are the errors I got but there may be more:
> >
> 
> It seems like these are triggering on old messages too, curious why this wasn't
> caught sooner.  Will fix, thanks.
> 
> -Alan

I managed to get a 32-bit build environment set up and confirmed that we do indeed have alignment issues on 32-bit systems for some of the new ops added with this series.  However, I think I'm still missing something: you have errors triggering on much more than I found, so I suspect there's a compile option I'm missing, or perhaps my GCC version is older than yours.

For example, I found issues in virtchnl_txq_info_v2, virtchnl_rxq_info_v2, virtchnl_config_rx_queues, and virtchnl_rss_hash.  You appear to also have failures in virtchnl_get_capabilities (among others), which did not trigger on my build even though manual inspection indicates it _should_ fail, i.e. your setup is more correct than mine.  My guess is that some extra padding is being inserted somewhere on my build, masking the other alignment issues.

Are there any hints you can provide that might help me reproduce this more accurately?
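
For virtchnl_rss_hash specifically, the explicit padding Jakub suggested would presumably look something like this (a sketch; the reserved-field name here is made up):

	struct virtchnl_rss_hash {
		u64 hash;
		u16 vport_id;
		u8 pad[6];	/* pad to 16 bytes on both 32- and 64-bit */
	};

	VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);

That keeps the structure at 16 bytes whether u64 gets 4- or 8-byte alignment.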

-Alan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [net-next v5 01/15] virtchnl: Extend AVF ops
  2020-09-10 21:06       ` Brady, Alan
@ 2020-09-11 15:08         ` Jakub Kicinski
  0 siblings, 0 replies; 32+ messages in thread
From: Jakub Kicinski @ 2020-09-11 15:08 UTC (permalink / raw)
  To: Brady, Alan
  Cc: Nguyen, Anthony L, davem, Michael, Alice, netdev, nhorman,
	sassmann, Kirsher, Jeffrey T, Burra, Phani R, Hay, Joshua A,
	Chittim, Madhu, Linga, Pavan Kumar, Skidmore, Donald C,
	Brandeburg, Jesse, Samudrala, Sridhar

On Thu, 10 Sep 2020 21:06:05 +0000 Brady, Alan wrote:
> > It seems like these are triggering on old messages too, curious why this wasn't
> > caught sooner.  Will fix, thanks.
> > 
> I managed to get a 32-bit build environment setup and found that we
> do indeed have alignment issues there on 32 bit systems for some of
> the new ops we added with the series.  However, I think I'm still
> missing something as it looks like you have errors triggering on much
> more than I found and I'm suspecting there might be a compile option
> I'm missing or perhaps my GCC version is older than yours.  E.g., I
> found issues in virtchnl_txq_info_v2, virtchnl_rxq_info_v2,
> virtchnl_config_rx_queues, and virtchnl_rss_hash.  It appears you
> have compile issues in virtchnl_get_capabilities (among others)
> however which did not trigger on mine.  Manual inspection indicates
> that it _should_ be triggering a failure and that your setup is more
> correct than mine.  I'm guessing some extra padding is getting
> included in some places and causing a false positive on the other
> alignment issues.  Are there any hints you can provide me that might
> help me more accurately reproduce this?

Hm. I build like this:

make CC="ccache gcc" O=build32/ ARCH=i386 allmodconfig
make CC="ccache gcc" O=build32/ ARCH=i386 -j 64 W=1 C=1

My GCC is:
Target: x86_64-pc-linux-gnu
Configured with: ../gcc-10.1.0/configure --enable-languages=c,c++
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 10.1.0 (GCC) 

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2020-09-11 15:15 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-24 17:32 [net-next v5 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-08-24 Tony Nguyen
2020-08-24 17:32 ` [net-next v5 01/15] virtchnl: Extend AVF ops Tony Nguyen
2020-08-24 19:42   ` Jakub Kicinski
2020-08-27 17:16     ` Brady, Alan
2020-09-10 21:06       ` Brady, Alan
2020-09-11 15:08         ` Jakub Kicinski
2020-08-24 17:32 ` [net-next v5 02/15] iecm: Add framework set of header files Tony Nguyen
2020-08-24 17:32 ` [net-next v5 03/15] iecm: Add TX/RX " Tony Nguyen
2020-08-24 17:32 ` [net-next v5 04/15] iecm: Common module introduction and function stubs Tony Nguyen
2020-08-24 20:41   ` Jakub Kicinski
2020-08-27 17:18     ` Brady, Alan
2020-08-24 17:32 ` [net-next v5 05/15] iecm: Add basic netdevice functionality Tony Nguyen
2020-08-24 17:32 ` [net-next v5 06/15] iecm: Implement mailbox functionality Tony Nguyen
2020-08-24 20:35   ` Brady, Alan
2020-08-24 20:40     ` Jakub Kicinski
2020-08-24 17:32 ` [net-next v5 07/15] iecm: Implement virtchnl commands Tony Nguyen
2020-08-24 17:32 ` [net-next v5 08/15] iecm: Implement vector allocation Tony Nguyen
2020-08-24 20:41   ` Jakub Kicinski
2020-08-27 17:28     ` Brady, Alan
2020-08-27 18:19       ` Jakub Kicinski
2020-08-24 17:33 ` [net-next v5 09/15] iecm: Init and allocate vport Tony Nguyen
2020-08-24 17:33 ` [net-next v5 10/15] iecm: Deinit vport Tony Nguyen
2020-08-24 17:33 ` [net-next v5 11/15] iecm: Add splitq TX/RX Tony Nguyen
2020-08-24 20:40   ` Jakub Kicinski
2020-08-27 17:17     ` Brady, Alan
2020-08-24 17:33 ` [net-next v5 12/15] iecm: Add singleq TX/RX Tony Nguyen
2020-08-24 17:33 ` [net-next v5 13/15] iecm: Add ethtool Tony Nguyen
2020-08-24 21:44   ` Michal Kubecek
2020-08-27 17:29     ` Brady, Alan
2020-08-24 17:33 ` [net-next v5 14/15] iecm: Add iecm to the kernel build system Tony Nguyen
2020-08-24 17:33 ` [net-next v5 15/15] idpf: Introduce idpf driver Tony Nguyen
2020-08-28 10:35   ` kernel test robot
