[PATCH V2 net-next 0/8] Hisilicon Network Subsystem 3 VF Ethernet Driver
From: Salil Mehta @ 2017-12-08 21:16 UTC
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

This patch set adds support for the HNS3 (Hisilicon Network Subsystem 3)
Virtual Function Ethernet driver for the hip08 family of SoCs. The Physical
Function driver is already part of the Linux mainline.

The VF driver has its own Hardware Compatibility Layer and shares common,
unified ENET-layer/client/ethtool code with the PF driver. It also supports
a mailbox for communicating with the HNS3 PF driver. The basic architecture
of the VF driver is derived from the PF driver. Like the PF driver, this
driver is also PCI Express based.

This driver is ongoing development work; the HNS3 VF Ethernet driver will be
enhanced incrementally with new features.
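
For illustration, below is a minimal sketch (not part of this series) of how
an upper-layer entity in the VF could query link status from the PF over the
mailbox, using the hclgevf_send_mbx_msg() helper and HCLGE_MBX_GET_LINK_STATUS
opcode introduced in patch 2/8; the function name is hypothetical and error
handling is trimmed:

    /* hypothetical sketch: VF asks PF for the current link status */
    static int example_vf_get_link_status(struct hclgevf_dev *hdev)
    {
            u8 resp_data = 0;
            int ret;

            /* synchronous send: blocks until the PF writes a response back */
            ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_STATUS, 0,
                                       NULL, 0, true, &resp_data,
                                       sizeof(resp_data));
            return ret ? ret : !!resp_data;
    }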

High Level Architecture:

                     [ Ethtool ]
	                 | 
                 [ Ethernet Client ] ... [ RoCE Client ] 
                         |                     |           
                   [ HNAE Device ]             |________       
                         |                     |       |
    ---------------------------------------------      |
                                                       |
     [ HNAE3 Framework (Register/unregister) ]         |
                                                       |
    ---------------------------------------------      |
                         |                             |
                 [ VF HCLGE Layer ]                    |
                  |             |                      |
                  |             |                      |
                  |             |                      |
                  |     [ VF Mailbox (To PF via IMP) ] |                 
                  |             |                      |   
             [ IMP command Interface ]  [ IMP command Interface ]
                        |                              |
                        |                              |
           (A B O V E  R U N S  O N  G U E S T  S Y S T E M)
    -------------------------------------------------------------
              Q E M U / V F I O / K V M (on Host System)
    -------------------------------------------------------------
            HIP08  H A R D W A R E (limited to VF by SMMU)

    [ IMP/Mgmt Processor (hardware common to system/cmd based) ]


                Fig 1.   HNS3 Virtual Function Driver


                    

    	[ dcbnl ]  [ Ethtool ]
            |          |
   	[  Ethernet Client  ]  [ ODP/UIO Client ] . . .[ RoCE Client ]     
              |_____________________|                 |
                         |                   _________|        
                   [ HNAE Device ]           |        |
                         |                   |        |
    ---------------------------------------------     |
                                                      |
     [ HNAE3 Framework (Register/unregister) ]        |
                                                      |
    ---------------------------------------------     |
                         |                            |
                  [ HCLGE Layer ]                     |
         ________________|_________________           |
        |                |                 |          |
     [ DCB ]             |                 |          |
        |                |                 |          |
  [ Scheduler/Shaper ] [ MDIO ]      [ PF Mailbox ]   |
        |                |                 |          |
        |________________|_________________|          |     
                         |                            |
             [ IMP command Interface ]     [ IMP command Interface ] 
    ----------------------------------------------------------------
              HIP08  H A R D W A R E                  

    [ IMP/Mgmt Processor (hardware common to system/cmd based) ]


               Fig 2.    Existing HNS3 PF Driver (added with mailbox)

Change Log Summary:
Patch V2: 1. Addressed some comments by David Miller.
	  2. Addressed some internal comments on various patches
Patch V1: Initial Submit

NOTE: This patch series depends upon https://lkml.org/lkml/2017/11/30/1079

Salil Mehta (8):
  net: hns3: Add HNS3 VF IMP(Integrated Management Proc) cmd interface
  net: hns3: Add mailbox support to VF driver
  net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support
  net: hns3: Add HNS3 VF driver to kernel build framework
  net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC
  net: hns3: Add mailbox support to PF driver
  net: hns3: Change PF to add ring-vect binding & resetQ to mailbox
  net: hns3: Add mailbox interrupt handling to PF driver

 drivers/net/ethernet/hisilicon/Kconfig             |   28 +-
 drivers/net/ethernet/hisilicon/hns3/Makefile       |    7 +-
 drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h    |   94 ++
 drivers/net/ethernet/hisilicon/hns3/hnae3.c        |   14 +-
 drivers/net/ethernet/hisilicon/hns3/hnae3.h        |    7 +-
 .../hisilicon/hns3/{hns3pf => }/hns3_dcbnl.c       |    2 +-
 .../hisilicon/hns3/{hns3pf => }/hns3_enet.c        |    2 +
 .../hisilicon/hns3/{hns3pf => }/hns3_enet.h        |    0
 .../hisilicon/hns3/{hns3pf => }/hns3_ethtool.c     |   22 +-
 .../net/ethernet/hisilicon/hns3/hns3pf/Makefile    |    7 +-
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    |  207 +--
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h    |   17 +-
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c |  416 ++++++
 .../net/ethernet/hisilicon/hns3/hns3vf/Makefile    |    8 +
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c   |  348 +++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h   |  262 ++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c  | 1495 ++++++++++++++++++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h  |  170 +++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c   |  188 +++
 19 files changed, 3171 insertions(+), 123 deletions(-)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_dcbnl.c (97%)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_enet.c (99%)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_enet.h (100%)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_ethtool.c (97%)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c

-- 
2.7.4

[PATCH V2 net-next 1/8] net: hns3: Add HNS3 VF IMP(Integrated Management Proc) cmd interface
From: Salil Mehta @ 2017-12-08 21:16 UTC
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

This patch adds support for the command interface used to communicate
with the IMP (Integrated Management Processor) in the HNS3 Virtual
Function driver.

Each VF supports a CQP (Command Queue Pair) ring interface, where each
CQP consists of a send queue (CSQ) and a receive queue (CRQ). A VF may
support various commands, such as querying the firmware version, TQP
management, statistics, interrupt handling, mailbox, etc.

This patch also contains code to initialize the command queue, manage
the command queue descriptors, and handle the Rx/Tx protocol with the
command processor in the form of various commands/results and
acknowledgements.
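
For illustration, a minimal sketch (not part of the patch) of how a caller
uses the interface added here; the function name is hypothetical, all other
identifiers are introduced by this patch:

    /* hypothetical sketch: query the Tx status of a TQP over the IMP
     * command queue
     */
    static int example_query_tx_status(struct hclgevf_hw *hw, u16 tqp_id)
    {
            struct hclgevf_cfg_com_tqp_queue_cmd *req;
            struct hclgevf_desc desc;
            int ret;

            /* build a one-descriptor read command */
            hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_TX_STATUS,
                                         true);
            req = (struct hclgevf_cfg_com_tqp_queue_cmd *)desc.data;
            req->tqp_id = cpu_to_le16(tqp_id & HCLGEVF_RING_ID_MASK);

            /* synchronous send: polls the CSQ head until firmware
             * writes back
             */
            ret = hclgevf_cmd_send(hw, &desc, 1);
            if (ret)
                    return ret;

            /* on success, desc.data[] now holds the result written
             * back by the IMP
             */
            return 0;
    }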

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
Patch V2: Reworked as per comments from David Miller (except one comment
          on the udelay() while holding locks; needs further discussion)
  Link: https://lkml.org/lkml/2017/12/5/639
Patch V1: Initial Submit
---
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c   | 348 +++++++++++++++++++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h   | 262 ++++++++++++++++
 2 files changed, 610 insertions(+)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
new file mode 100644
index 0000000..d786ab8
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
@@ -0,0 +1,348 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include "hclgevf_cmd.h"
+#include "hclgevf_main.h"
+#include "hnae3.h"
+
+#define hclgevf_is_csq(ring) ((ring)->flag & HCLGEVF_TYPE_CSQ)
+#define hclgevf_ring_to_dma_dir(ring) (hclgevf_is_csq(ring) ? \
+					DMA_TO_DEVICE : DMA_FROM_DEVICE)
+#define cmq_ring_to_dev(ring)   (&(ring)->dev->pdev->dev)
+
+static int hclgevf_ring_space(struct hclgevf_cmq_ring *ring)
+{
+	int ntc = ring->next_to_clean;
+	int ntu = ring->next_to_use;
+	int used;
+
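+	/* in-flight descriptors, with wrap-around; one slot is kept unused
+	 * so that a full ring can be told apart from an empty one
+	 * (e.g. desc_num = 1024, ntu = 10, ntc = 8 -> used = 2, space = 1021)
+	 */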
+	used = (ntu - ntc + ring->desc_num) % ring->desc_num;
+
+	return ring->desc_num - used - 1;
+}
+
+static int hclgevf_cmd_csq_clean(struct hclgevf_hw *hw)
+{
+	struct hclgevf_cmq_ring *csq = &hw->cmq.csq;
+	u16 ntc = csq->next_to_clean;
+	struct hclgevf_desc *desc;
+	int clean = 0;
+	u32 head;
+
+	desc = &csq->desc[ntc];
+	head = hclgevf_read_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG);
+	while (head != ntc) {
+		memset(desc, 0, sizeof(*desc));
+		ntc++;
+		if (ntc == csq->desc_num)
+			ntc = 0;
+		desc = &csq->desc[ntc];
+		clean++;
+	}
+	csq->next_to_clean = ntc;
+
+	return clean;
+}
+
+static bool hclgevf_cmd_csq_done(struct hclgevf_hw *hw)
+{
+	u32 head;
+
+	head = hclgevf_read_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG);
+
+	return head == hw->cmq.csq.next_to_use;
+}
+
+static bool hclgevf_is_special_opcode(u16 opcode)
+{
+	u16 spec_opcode[] = {0x30, 0x31, 0x32};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(spec_opcode); i++) {
+		if (spec_opcode[i] == opcode)
+			return true;
+	}
+
+	return false;
+}
+
+static int hclgevf_alloc_cmd_desc(struct hclgevf_cmq_ring *ring)
+{
+	int size = ring->desc_num * sizeof(struct hclgevf_desc);
+
+	ring->desc = kzalloc(size, GFP_KERNEL);
+	if (!ring->desc)
+		return -ENOMEM;
+
+	ring->desc_dma_addr = dma_map_single(cmq_ring_to_dev(ring), ring->desc,
+					     size, DMA_BIDIRECTIONAL);
+
+	if (dma_mapping_error(cmq_ring_to_dev(ring), ring->desc_dma_addr)) {
+		ring->desc_dma_addr = 0;
+		kfree(ring->desc);
+		ring->desc = NULL;
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void hclgevf_free_cmd_desc(struct hclgevf_cmq_ring *ring)
+{
+	dma_unmap_single(cmq_ring_to_dev(ring), ring->desc_dma_addr,
+			 ring->desc_num * sizeof(ring->desc[0]),
+			 hclgevf_ring_to_dma_dir(ring));
+
+	ring->desc_dma_addr = 0;
+	kfree(ring->desc);
+	ring->desc = NULL;
+}
+
+static int hclgevf_init_cmd_queue(struct hclgevf_dev *hdev,
+				  struct hclgevf_cmq_ring *ring)
+{
+	struct hclgevf_hw *hw = &hdev->hw;
+	int ring_type = ring->flag;
+	u32 reg_val;
+	int ret;
+
+	ring->desc_num = HCLGEVF_NIC_CMQ_DESC_NUM;
+	spin_lock_init(&ring->lock);
+	ring->next_to_clean = 0;
+	ring->next_to_use = 0;
+	ring->dev = hdev;
+
+	/* allocate CSQ/CRQ descriptor */
+	ret = hclgevf_alloc_cmd_desc(ring);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "failed(%d) to alloc %s desc\n", ret,
+			(ring_type == HCLGEVF_TYPE_CSQ) ? "CSQ" : "CRQ");
+		return ret;
+	}
+
+	/* initialize the hardware registers with csq/crq dma-address,
+	 * descriptor number, head & tail pointers
+	 */
+	switch (ring_type) {
+	case HCLGEVF_TYPE_CSQ:
+		reg_val = (u32)ring->desc_dma_addr;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_L_REG, reg_val);
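+		/* high 32 bits; '>> 31 >> 1' avoids an undefined full 32-bit
+		 * shift when dma_addr_t is only 32 bits wide
+		 */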
+		reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_H_REG, reg_val);
+
+		reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
+		reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_DEPTH_REG, reg_val);
+
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG, 0);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG, 0);
+		break;
+	case HCLGEVF_TYPE_CRQ:
+		reg_val = (u32)ring->desc_dma_addr;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_L_REG, reg_val);
+		reg_val = (u32)((ring->desc_dma_addr >> 31) >> 1);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_H_REG, reg_val);
+
+		reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
+		reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_DEPTH_REG, reg_val);
+
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_TAIL_REG, 0);
+		hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_HEAD_REG, 0);
+		break;
+	}
+
+	return 0;
+}
+
+void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
+				  enum hclgevf_opcode_type opcode, bool is_read)
+{
+	memset(desc, 0, sizeof(struct hclgevf_desc));
+	desc->opcode = cpu_to_le16(opcode);
+	desc->flag = cpu_to_le16(HCLGEVF_CMD_FLAG_NO_INTR |
+				 HCLGEVF_CMD_FLAG_IN);
+	if (is_read)
+		desc->flag |= cpu_to_le16(HCLGEVF_CMD_FLAG_WR);
+	else
+		desc->flag &= cpu_to_le16(~HCLGEVF_CMD_FLAG_WR);
+}
+
+/* hclgevf_cmd_send - send command to command queue
+ * @hw: pointer to the hw struct
+ * @desc: prefilled descriptor for describing the command
+ * @num : the number of descriptors to be sent
+ *
+ * This is the main send routine for the command queue; it posts the
+ * descriptors to the queue and cleans the queue afterwards
+ */
+int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
+{
+	struct hclgevf_dev *hdev = (struct hclgevf_dev *)hw->hdev;
+	struct hclgevf_desc *desc_to_use;
+	bool complete = false;
+	u32 timeout = 0;
+	int handle = 0;
+	int status = 0;
+	u16 retval;
+	u16 opcode;
+	int ntc;
+
+	spin_lock_bh(&hw->cmq.csq.lock);
+
+	if (num > hclgevf_ring_space(&hw->cmq.csq)) {
+		spin_unlock_bh(&hw->cmq.csq.lock);
+		return -EBUSY;
+	}
+
+	/* Record the location of desc in the ring for this time
+	 * which will be used for hardware to write back
+	 */
+	ntc = hw->cmq.csq.next_to_use;
+	opcode = le16_to_cpu(desc[0].opcode);
+	while (handle < num) {
+		desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use];
+		*desc_to_use = desc[handle];
+		(hw->cmq.csq.next_to_use)++;
+		if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num)
+			hw->cmq.csq.next_to_use = 0;
+		handle++;
+	}
+
+	/* Write to hardware */
+	hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG,
+			  hw->cmq.csq.next_to_use);
+
+	/* If the command is sync, wait for the firmware to write back;
+	 * if multiple descriptors are to be sent, use the first one to check
+	 */
+	if (HCLGEVF_SEND_SYNC(le16_to_cpu(desc->flag))) {
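+		/* poll in 1 usec steps, up to tx_timeout
+		 * (HCLGEVF_CMDQ_TX_TIMEOUT = 200) microseconds, for the
+		 * firmware to advance the CSQ head past our descriptors
+		 */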
+		do {
+			if (hclgevf_cmd_csq_done(hw))
+				break;
+			udelay(1);
+			timeout++;
+		} while (timeout < hw->cmq.tx_timeout);
+	}
+
+	if (hclgevf_cmd_csq_done(hw)) {
+		complete = true;
+		handle = 0;
+
+		while (handle < num) {
+			/* Get the result of hardware write back */
+			desc_to_use = &hw->cmq.csq.desc[ntc];
+			desc[handle] = *desc_to_use;
+
+			if (likely(!hclgevf_is_special_opcode(opcode)))
+				retval = le16_to_cpu(desc[handle].retval);
+			else
+				retval = le16_to_cpu(desc[0].retval);
+
+			if ((enum hclgevf_cmd_return_status)retval ==
+			    HCLGEVF_CMD_EXEC_SUCCESS)
+				status = 0;
+			else
+				status = -EIO;
+			hw->cmq.last_status = (enum hclgevf_cmd_status)retval;
+			ntc++;
+			handle++;
+			if (ntc == hw->cmq.csq.desc_num)
+				ntc = 0;
+		}
+	}
+
+	if (!complete)
+		status = -EAGAIN;
+
+	/* Clean the command send queue */
+	handle = hclgevf_cmd_csq_clean(hw);
+	if (handle != num) {
+		dev_warn(&hdev->pdev->dev,
+			 "cleaned %d, need to clean %d\n", handle, num);
+	}
+
+	spin_unlock_bh(&hw->cmq.csq.lock);
+
+	return status;
+}
+
+static int hclgevf_cmd_query_firmware_version(struct hclgevf_hw *hw,
+					      u32 *version)
+{
+	struct hclgevf_query_version_cmd *resp;
+	struct hclgevf_desc desc;
+	int status;
+
+	resp = (struct hclgevf_query_version_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_FW_VER, true);
+	status = hclgevf_cmd_send(hw, &desc, 1);
+	if (!status)
+		*version = le32_to_cpu(resp->firmware);
+
+	return status;
+}
+
+int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+{
+	u32 version;
+	int ret;
+
+	/* setup Tx write back timeout */
+	hdev->hw.cmq.tx_timeout = HCLGEVF_CMDQ_TX_TIMEOUT;
+
+	/* setup queue CSQ/CRQ rings */
+	hdev->hw.cmq.csq.flag = HCLGEVF_TYPE_CSQ;
+	ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.csq);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize CSQ ring\n", ret);
+		return ret;
+	}
+
+	hdev->hw.cmq.crq.flag = HCLGEVF_TYPE_CRQ;
+	ret = hclgevf_init_cmd_queue(hdev, &hdev->hw.cmq.crq);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize CRQ ring\n", ret);
+		goto err_csq;
+	}
+
+	/* get firmware version */
+	ret = hclgevf_cmd_query_firmware_version(&hdev->hw, &version);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to query firmware version\n", ret);
+		goto err_crq;
+	}
+	hdev->fw_version = version;
+
+	dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
+
+	return 0;
+err_crq:
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
+err_csq:
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
+
+	return ret;
+}
+
+void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
+{
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
+	hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
+}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
new file mode 100644
index 0000000..df72927
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
@@ -0,0 +1,262 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __HCLGEVF_CMD_H
+#define __HCLGEVF_CMD_H
+#include <linux/io.h>
+#include <linux/types.h>
+#include "hnae3.h"
+
+#define HCLGEVF_CMDQ_TX_TIMEOUT		200
+#define HCLGEVF_CMDQ_RX_INVLD_B		0
+#define HCLGEVF_CMDQ_RX_OUTVLD_B	1
+
+struct hclgevf_hw;
+struct hclgevf_dev;
+
+struct hclgevf_desc {
+	__le16 opcode;
+	__le16 flag;
+	__le16 retval;
+	__le16 rsv;
+	__le32 data[6];
+};
+
+struct hclgevf_desc_cb {
+	dma_addr_t dma;
+	void *va;
+	u32 length;
+};
+
+struct hclgevf_cmq_ring {
+	dma_addr_t desc_dma_addr;
+	struct hclgevf_desc *desc;
+	struct hclgevf_desc_cb *desc_cb;
+	struct hclgevf_dev  *dev;
+	u32 head;
+	u32 tail;
+
+	u16 buf_size;
+	u16 desc_num;
+	int next_to_use;
+	int next_to_clean;
+	u8 flag;
+	spinlock_t lock; /* Command queue lock */
+};
+
+enum hclgevf_cmd_return_status {
+	HCLGEVF_CMD_EXEC_SUCCESS	= 0,
+	HCLGEVF_CMD_NO_AUTH	= 1,
+	HCLGEVF_CMD_NOT_EXEC	= 2,
+	HCLGEVF_CMD_QUEUE_FULL	= 3,
+};
+
+enum hclgevf_cmd_status {
+	HCLGEVF_STATUS_SUCCESS	= 0,
+	HCLGEVF_ERR_CSQ_FULL	= -1,
+	HCLGEVF_ERR_CSQ_TIMEOUT	= -2,
+	HCLGEVF_ERR_CSQ_ERROR	= -3
+};
+
+struct hclgevf_cmq {
+	struct hclgevf_cmq_ring csq;
+	struct hclgevf_cmq_ring crq;
+	u16 tx_timeout; /* Tx timeout */
+	enum hclgevf_cmd_status last_status;
+};
+
+#define HCLGEVF_CMD_FLAG_IN_VALID_SHIFT		0
+#define HCLGEVF_CMD_FLAG_OUT_VALID_SHIFT	1
+#define HCLGEVF_CMD_FLAG_NEXT_SHIFT		2
+#define HCLGEVF_CMD_FLAG_WR_OR_RD_SHIFT		3
+#define HCLGEVF_CMD_FLAG_NO_INTR_SHIFT		4
+#define HCLGEVF_CMD_FLAG_ERR_INTR_SHIFT		5
+
+#define HCLGEVF_CMD_FLAG_IN		BIT(HCLGEVF_CMD_FLAG_IN_VALID_SHIFT)
+#define HCLGEVF_CMD_FLAG_OUT		BIT(HCLGEVF_CMD_FLAG_OUT_VALID_SHIFT)
+#define HCLGEVF_CMD_FLAG_NEXT		BIT(HCLGEVF_CMD_FLAG_NEXT_SHIFT)
+#define HCLGEVF_CMD_FLAG_WR		BIT(HCLGEVF_CMD_FLAG_WR_OR_RD_SHIFT)
+#define HCLGEVF_CMD_FLAG_NO_INTR	BIT(HCLGEVF_CMD_FLAG_NO_INTR_SHIFT)
+#define HCLGEVF_CMD_FLAG_ERR_INTR	BIT(HCLGEVF_CMD_FLAG_ERR_INTR_SHIFT)
+
+enum hclgevf_opcode_type {
+	/* Generic command */
+	HCLGEVF_OPC_QUERY_FW_VER	= 0x0001,
+	/* TQP command */
+	HCLGEVF_OPC_QUERY_TX_STATUS	= 0x0B03,
+	HCLGEVF_OPC_QUERY_RX_STATUS	= 0x0B13,
+	HCLGEVF_OPC_CFG_COM_TQP_QUEUE	= 0x0B20,
+	/* TSO cmd */
+	HCLGEVF_OPC_TSO_GENERIC_CONFIG	= 0x0C01,
+	/* RSS cmd */
+	HCLGEVF_OPC_RSS_GENERIC_CONFIG	= 0x0D01,
+	HCLGEVF_OPC_RSS_INDIR_TABLE	= 0x0D07,
+	HCLGEVF_OPC_RSS_TC_MODE		= 0x0D08,
+	/* Mailbox cmd */
+	HCLGEVF_OPC_MBX_VF_TO_PF	= 0x2001,
+};
+
+#define HCLGEVF_TQP_REG_OFFSET		0x80000
+#define HCLGEVF_TQP_REG_SIZE		0x200
+
+struct hclgevf_tqp_map {
+	__le16 tqp_id;	/* Absolute tqp id in this pf */
+	u8 tqp_vf; /* VF id */
+#define HCLGEVF_TQP_MAP_TYPE_PF		0
+#define HCLGEVF_TQP_MAP_TYPE_VF		1
+#define HCLGEVF_TQP_MAP_TYPE_B		0
+#define HCLGEVF_TQP_MAP_EN_B		1
+	u8 tqp_flag;	/* Indicate it's pf or vf tqp */
+	__le16 tqp_vid; /* Virtual id in this pf/vf */
+	u8 rsv[18];
+};
+
+#define HCLGEVF_VECTOR_ELEMENTS_PER_CMD	10
+
+enum hclgevf_int_type {
+	HCLGEVF_INT_TX = 0,
+	HCLGEVF_INT_RX,
+	HCLGEVF_INT_EVENT,
+};
+
+struct hclgevf_ctrl_vector_chain {
+	u8 int_vector_id;
+	u8 int_cause_num;
+#define HCLGEVF_INT_TYPE_S	0
+#define HCLGEVF_INT_TYPE_M	0x3
+#define HCLGEVF_TQP_ID_S	2
+#define HCLGEVF_TQP_ID_M	(0x3fff << HCLGEVF_TQP_ID_S)
+	__le16 tqp_type_and_id[HCLGEVF_VECTOR_ELEMENTS_PER_CMD];
+	u8 vfid;
+	u8 resv;
+};
+
+struct hclgevf_query_version_cmd {
+	__le32 firmware;
+	__le32 firmware_rsv[5];
+};
+
+#define HCLGEVF_RSS_HASH_KEY_OFFSET	4
+#define HCLGEVF_RSS_HASH_KEY_NUM	16
+struct hclgevf_rss_config_cmd {
+	u8 hash_config;
+	u8 rsv[7];
+	u8 hash_key[HCLGEVF_RSS_HASH_KEY_NUM];
+};
+
+struct hclgevf_rss_input_tuple_cmd {
+	u8 ipv4_tcp_en;
+	u8 ipv4_udp_en;
+	u8 ipv4_stcp_en;
+	u8 ipv4_fragment_en;
+	u8 ipv6_tcp_en;
+	u8 ipv6_udp_en;
+	u8 ipv6_stcp_en;
+	u8 ipv6_fragment_en;
+	u8 rsv[16];
+};
+
+#define HCLGEVF_RSS_CFG_TBL_SIZE	16
+
+struct hclgevf_rss_indirection_table_cmd {
+	u16 start_table_index;
+	u16 rss_set_bitmap;
+	u8 rsv[4];
+	u8 rss_result[HCLGEVF_RSS_CFG_TBL_SIZE];
+};
+
+#define HCLGEVF_RSS_TC_OFFSET_S		0
+#define HCLGEVF_RSS_TC_OFFSET_M		(0x3ff << HCLGEVF_RSS_TC_OFFSET_S)
+#define HCLGEVF_RSS_TC_SIZE_S		12
+#define HCLGEVF_RSS_TC_SIZE_M		(0x7 << HCLGEVF_RSS_TC_SIZE_S)
+#define HCLGEVF_RSS_TC_VALID_B		15
+#define HCLGEVF_MAX_TC_NUM		8
+struct hclgevf_rss_tc_mode_cmd {
+	u16 rss_tc_mode[HCLGEVF_MAX_TC_NUM];
+	u8 rsv[8];
+};
+
+#define HCLGEVF_LINK_STS_B	0
+#define HCLGEVF_LINK_STATUS	BIT(HCLGEVF_LINK_STS_B)
+struct hclgevf_link_status_cmd {
+	u8 status;
+	u8 rsv[23];
+};
+
+#define HCLGEVF_RING_ID_MASK	0x3ff
+#define HCLGEVF_TQP_ENABLE_B	0
+
+struct hclgevf_cfg_com_tqp_queue_cmd {
+	__le16 tqp_id;
+	__le16 stream_id;
+	u8 enable;
+	u8 rsv[19];
+};
+
+struct hclgevf_cfg_tx_queue_pointer_cmd {
+	__le16 tqp_id;
+	__le16 tx_tail;
+	__le16 tx_head;
+	__le16 fbd_num;
+	__le16 ring_offset;
+	u8 rsv[14];
+};
+
+#define HCLGEVF_TSO_ENABLE_B	0
+struct hclgevf_cfg_tso_status_cmd {
+	u8 tso_enable;
+	u8 rsv[23];
+};
+
+#define HCLGEVF_TYPE_CRQ		0
+#define HCLGEVF_TYPE_CSQ		1
+#define HCLGEVF_NIC_CSQ_BASEADDR_L_REG	0x27000
+#define HCLGEVF_NIC_CSQ_BASEADDR_H_REG	0x27004
+#define HCLGEVF_NIC_CSQ_DEPTH_REG	0x27008
+#define HCLGEVF_NIC_CSQ_TAIL_REG	0x27010
+#define HCLGEVF_NIC_CSQ_HEAD_REG	0x27014
+#define HCLGEVF_NIC_CRQ_BASEADDR_L_REG	0x27018
+#define HCLGEVF_NIC_CRQ_BASEADDR_H_REG	0x2701c
+#define HCLGEVF_NIC_CRQ_DEPTH_REG	0x27020
+#define HCLGEVF_NIC_CRQ_TAIL_REG	0x27024
+#define HCLGEVF_NIC_CRQ_HEAD_REG	0x27028
+#define HCLGEVF_NIC_CMQ_EN_B		16
+#define HCLGEVF_NIC_CMQ_ENABLE		BIT(HCLGEVF_NIC_CMQ_EN_B)
+#define HCLGEVF_NIC_CMQ_DESC_NUM	1024
+#define HCLGEVF_NIC_CMQ_DESC_NUM_S	3
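+/* the DEPTH registers appear to take the queue depth in units of 8
+ * descriptors (1024 >> 3 = 128); an assumption, see hclgevf_init_cmd_queue()
+ */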
+#define HCLGEVF_NIC_CMDQ_INT_SRC_REG	0x27100
+
+static inline void hclgevf_write_reg(void __iomem *base, u32 reg, u32 value)
+{
+	writel(value, base + reg);
+}
+
+static inline u32 hclgevf_read_reg(u8 __iomem *base, u32 reg)
+{
+	u8 __iomem *reg_addr = READ_ONCE(base);
+
+	return readl(reg_addr + reg);
+}
+
+#define hclgevf_write_dev(a, reg, value) \
+	hclgevf_write_reg((a)->io_base, (reg), (value))
+#define hclgevf_read_dev(a, reg) \
+	hclgevf_read_reg((a)->io_base, (reg))
+
+#define HCLGEVF_SEND_SYNC(flag) \
+	((flag) & HCLGEVF_CMD_FLAG_NO_INTR)
+
+int hclgevf_cmd_init(struct hclgevf_dev *hdev);
+void hclgevf_cmd_uninit(struct hclgevf_dev *hdev);
+
+int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num);
+void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
+				  enum hclgevf_opcode_type opcode,
+				  bool is_read);
+#endif
-- 
2.7.4

[PATCH V2 net-next 2/8] net: hns3: Add mailbox support to VF driver
From: Salil Mehta @ 2017-12-08 21:16 UTC
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

This patch adds mailbox support to the VF driver. The mailbox is used
as an interface to communicate with the PF driver for various purposes,
such as {set|get} MAC-related operations, reset, link status, etc. The
mailbox supports both synchronous and asynchronous command sends to the
PF driver.
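
For illustration, a minimal sketch (not part of the patch) of the
asynchronous flavour; the function name is hypothetical, the helper and
opcode are introduced by this patch:

    /* hypothetical sketch: asynchronously request the PF to reset a
     * queue; with need_resp = false the call returns as soon as the
     * command is posted to the CSQ, without waiting for a PF response
     */
    static int example_vf_reset_queue(struct hclgevf_dev *hdev, u16 queue_id)
    {
            u8 msg_data[2];

            memcpy(msg_data, &queue_id, sizeof(queue_id));
            return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_QUEUE_RESET, 0,
                                        msg_data, sizeof(msg_data), false,
                                        NULL, 0);
    }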

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
Patch V2: Addressed some internal comments
Patch V1: Initial Submit
---
 drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h    |  94 +++++++++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c   | 188 +++++++++++++++++++++
 2 files changed, 282 insertions(+)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c

diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
new file mode 100644
index 0000000..5a6ebfca
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
@@ -0,0 +1,94 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __HCLGE_MBX_H
+#define __HCLGE_MBX_H
+#include <linux/init.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+#define HCLGE_MBX_VF_MSG_DATA_NUM	16
+
+enum HCLGE_MBX_OPCODE {
+	HCLGE_MBX_RESET = 0x01,		/* (VF -> PF) assert reset */
+	HCLGE_MBX_SET_UNICAST,		/* (VF -> PF) set UC addr */
+	HCLGE_MBX_SET_MULTICAST,	/* (VF -> PF) set MC addr */
+	HCLGE_MBX_SET_VLAN,		/* (VF -> PF) set VLAN */
+	HCLGE_MBX_MAP_RING_TO_VECTOR,	/* (VF -> PF) map ring-to-vector */
+	HCLGE_MBX_UNMAP_RING_TO_VECTOR,	/* (VF -> PF) unmap ring-to-vector */
+	HCLGE_MBX_SET_PROMISC_MODE,	/* (VF -> PF) set promiscuous mode */
+	HCLGE_MBX_SET_MACVLAN,		/* (VF -> PF) set unicast filter */
+	HCLGE_MBX_API_NEGOTIATE,	/* (VF -> PF) negotiate API version */
+	HCLGE_MBX_GET_QINFO,		/* (VF -> PF) get queue config */
+	HCLGE_MBX_GET_TCINFO,		/* (VF -> PF) get TC config */
+	HCLGE_MBX_GET_RETA,		/* (VF -> PF) get RETA */
+	HCLGE_MBX_GET_RSS_KEY,		/* (VF -> PF) get RSS key */
+	HCLGE_MBX_GET_MAC_ADDR,		/* (VF -> PF) get MAC addr */
+	HCLGE_MBX_PF_VF_RESP,		/* (PF -> VF) generate response to VF */
+	HCLGE_MBX_GET_BDNUM,		/* (VF -> PF) get BD num */
+	HCLGE_MBX_GET_BUFSIZE,		/* (VF -> PF) get buffer size */
+	HCLGE_MBX_GET_STREAMID,		/* (VF -> PF) get stream id */
+	HCLGE_MBX_SET_AESTART,		/* (VF -> PF) start ae */
+	HCLGE_MBX_SET_TSOSTATS,		/* (VF -> PF) get tso stats */
+	HCLGE_MBX_LINK_STAT_CHANGE,	/* (PF -> VF) link status has changed */
+	HCLGE_MBX_GET_BASE_CONFIG,	/* (VF -> PF) get config */
+	HCLGE_MBX_BIND_FUNC_QUEUE,	/* (VF -> PF) bind function and queue */
+	HCLGE_MBX_GET_LINK_STATUS,	/* (VF -> PF) get link status */
+	HCLGE_MBX_QUEUE_RESET,		/* (VF -> PF) reset queue */
+};
+
+/* below are per-VF mac-vlan subcodes */
+enum hclge_mbx_mac_vlan_subcode {
+	HCLGE_MBX_MAC_VLAN_UC_MODIFY = 0,	/* modify UC mac addr */
+	HCLGE_MBX_MAC_VLAN_UC_ADD,		/* add a new UC mac addr */
+	HCLGE_MBX_MAC_VLAN_UC_REMOVE,		/* remove a UC mac addr */
+	HCLGE_MBX_MAC_VLAN_MC_MODIFY,		/* modify MC mac addr */
+	HCLGE_MBX_MAC_VLAN_MC_ADD,		/* add new MC mac addr */
+	HCLGE_MBX_MAC_VLAN_MC_REMOVE,		/* remove MC mac addr */
+	HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE,	/* config func MTA enable */
+};
+
+/* below are per-VF vlan cfg subcodes */
+enum hclge_mbx_vlan_cfg_subcode {
+	HCLGE_MBX_VLAN_FILTER = 0,	/* set vlan filter */
+	HCLGE_MBX_VLAN_TX_OFF_CFG,	/* set tx side vlan offload */
+	HCLGE_MBX_VLAN_RX_OFF_CFG,	/* set rx side vlan offload */
+};
+
+#define HCLGE_MBX_MAX_MSG_SIZE	16
+#define HCLGE_MBX_MAX_RESP_DATA_SIZE	8
+
+struct hclgevf_mbx_resp_status {
+	struct mutex mbx_mutex; /* protects against contending sync cmd resp */
+	u32 origin_mbx_msg;
+	bool received_resp;
+	int resp_status;
+	u8 additional_info[HCLGE_MBX_MAX_RESP_DATA_SIZE];
+};
+
+struct hclge_mbx_vf_to_pf_cmd {
+	u8 rsv;
+	u8 mbx_src_vfid; /* Auto filled by IMP */
+	u8 rsv1[2];
+	u8 msg_len;
+	u8 rsv2[3];
+	u8 msg[HCLGE_MBX_MAX_MSG_SIZE];
+};
+
+struct hclge_mbx_pf_to_vf_cmd {
+	u8 dest_vfid;
+	u8 rsv[3];
+	u8 msg_len;
+	u8 rsv1[3];
+	u16 msg[8];
+};
+
+#define hclge_mbx_ring_ptr_move_crq(crq) \
+	(crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num)
+#endif
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
new file mode 100644
index 0000000..fb3380b
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
@@ -0,0 +1,188 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include "hclge_mbx.h"
+#include "hclgevf_main.h"
+#include "hnae3.h"
+
+static void hclgevf_reset_mbx_resp_status(struct hclgevf_dev *hdev)
+{
+	/* this function should be called with mbx_resp.mbx_mutex held
+	 * to protect received_resp from a race condition
+	 */
+	hdev->mbx_resp.received_resp  = false;
+	hdev->mbx_resp.origin_mbx_msg = 0;
+	hdev->mbx_resp.resp_status    = 0;
+	memset(hdev->mbx_resp.additional_info, 0, HCLGE_MBX_MAX_RESP_DATA_SIZE);
+}
+
+/* hclgevf_get_mbx_resp: used to get a response from PF after VF sends a mailbox
+ * message to PF.
+ * @hdev: pointer to struct hclgevf_dev
+ * @code0: opcode of the mailbox message originally sent to PF
+ * @code1: subcode of the mailbox message originally sent to PF
+ * @resp_data: buffer to copy the response data from PF into, may be NULL
+ * @resp_len: length of resp_data; must not exceed HCLGE_MBX_MAX_RESP_DATA_SIZE
+ */
+static int hclgevf_get_mbx_resp(struct hclgevf_dev *hdev, u16 code0, u16 code1,
+				u8 *resp_data, u16 resp_len)
+{
+#define HCLGEVF_MAX_TRY_TIMES	500
+#define HCLGEVF_SLEEP_USECOND	1000
+	struct hclgevf_mbx_resp_status *mbx_resp;
+	u16 r_code0, r_code1;
+	int r_status;
+	int i = 0;
+
+	if (resp_len > HCLGE_MBX_MAX_RESP_DATA_SIZE) {
+		dev_err(&hdev->pdev->dev,
+			"VF mbx response len(=%d) exceeds maximum(=%d)\n",
+			resp_len,
+			HCLGE_MBX_MAX_RESP_DATA_SIZE);
+		return -EINVAL;
+	}
+
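+	/* busy-wait until the mailbox service task (scheduled from the
+	 * CMDQ RX interrupt) flags that the PF response has arrived
+	 */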
+	while ((!hdev->mbx_resp.received_resp) && (i < HCLGEVF_MAX_TRY_TIMES)) {
+		udelay(HCLGEVF_SLEEP_USECOND);
+		i++;
+	}
+
+	if (i >= HCLGEVF_MAX_TRY_TIMES) {
+		dev_err(&hdev->pdev->dev,
+			"VF could not get mbx resp(=%d) from PF in %d tries\n",
+			hdev->mbx_resp.received_resp, i);
+		return -EIO;
+	}
+
+	mbx_resp = &hdev->mbx_resp;
+	r_code0 = (u16)(mbx_resp->origin_mbx_msg >> 16);
+	r_code1 = (u16)(mbx_resp->origin_mbx_msg & 0xff);
+	r_status = mbx_resp->resp_status;
+	if (resp_data)
+		memcpy(resp_data, &mbx_resp->additional_info[0], resp_len);
+
+	/* the reset below clears resp_status and origin_mbx_msg, so the
+	 * values to be checked were copied out above
+	 */
+	hclgevf_reset_mbx_resp_status(hdev);
+
+	if (!(r_code0 == code0 && r_code1 == code1 && !r_status)) {
+		dev_err(&hdev->pdev->dev,
+			"VF could not match resp code(code0=%d,code1=%d), %d\n",
+			code0, code1, r_status);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
+			 const u8 *msg_data, u8 msg_len, bool need_resp,
+			 u8 *resp_data, u16 resp_len)
+{
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+
+	req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
+
+	/* first two bytes are reserved for code & subcode */
+	if (msg_len > (HCLGE_MBX_MAX_MSG_SIZE - 2)) {
+		dev_err(&hdev->pdev->dev,
+			"VF send mbx msg fail, msg len %d exceeds max payload len %d\n",
+			msg_len, HCLGE_MBX_MAX_MSG_SIZE - 2);
+		return -EINVAL;
+	}
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
+	req->msg[0] = code;
+	req->msg[1] = subcode;
+	memcpy(&req->msg[2], msg_data, msg_len);
+
+	/* synchronous send */
+	if (need_resp) {
+		mutex_lock(&hdev->mbx_resp.mbx_mutex);
+		hclgevf_reset_mbx_resp_status(hdev);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"VF failed(=%d) to send mbx message to PF\n",
+				status);
+			mutex_unlock(&hdev->mbx_resp.mbx_mutex);
+			return status;
+		}
+
+		status = hclgevf_get_mbx_resp(hdev, code, subcode, resp_data,
+					      resp_len);
+		mutex_unlock(&hdev->mbx_resp.mbx_mutex);
+	} else {
+		/* asynchronous send */
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"VF failed(=%d) to send mbx message to PF\n",
+				status);
+			return status;
+		}
+	}
+
+	return status;
+}
+
+void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_mbx_resp_status *resp;
+	struct hclge_mbx_pf_to_vf_cmd *req;
+	struct hclgevf_cmq_ring *crq;
+	struct hclgevf_desc *desc;
+	u16 link_status, flag;
+	u8 *temp;
+	int i;
+
+	resp = &hdev->mbx_resp;
+	crq = &hdev->hw.cmq.crq;
+
+	flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
+	while (hnae_get_bit(flag, HCLGEVF_CMDQ_RX_OUTVLD_B)) {
+		desc = &crq->desc[crq->next_to_use];
+		req = (struct hclge_mbx_pf_to_vf_cmd *)desc->data;
+
+		switch (req->msg[0]) {
+		case HCLGE_MBX_PF_VF_RESP:
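+			/* PF response layout: msg[1]/msg[2] echo the code/
+			 * subcode of the original request, msg[3] carries
+			 * the status and msg[4..] the response data
+			 */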
+			if (resp->received_resp)
+				dev_warn(&hdev->pdev->dev,
+					 "VF mbx resp flag not clear(%d)\n",
+					 req->msg[1]);
+
+			resp->origin_mbx_msg = (req->msg[1] << 16);
+			resp->origin_mbx_msg |= req->msg[2];
+			resp->received_resp = true;
+
+			resp->resp_status = req->msg[3];
+
+			temp = (u8 *)&req->msg[4];
+			for (i = 0; i < HCLGE_MBX_MAX_RESP_DATA_SIZE; i++) {
+				resp->additional_info[i] = *temp;
+				temp++;
+			}
+			break;
+		case HCLGE_MBX_LINK_STAT_CHANGE:
+			link_status = le16_to_cpu(req->msg[1]);
+
+			/* update upper layer with new link status */
+			hclgevf_update_link_status(hdev, link_status);
+
+			break;
+		default:
+			dev_err(&hdev->pdev->dev,
+				"VF received unsupported(%d) mbx msg from PF\n",
+				req->msg[0]);
+			break;
+		}
+		hclge_mbx_ring_ptr_move_crq(crq);
+		flag = le16_to_cpu(crq->desc[crq->next_to_use].flag);
+	}
+
+	/* Write back CMDQ_RQ header pointer, M7 needs this pointer */
+	hclgevf_write_dev(&hdev->hw, HCLGEVF_NIC_CRQ_HEAD_REG,
+			  crq->next_to_use);
+}
-- 
2.7.4


--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 net-next 3/8] net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support
  2017-12-08 21:16 ` Salil Mehta
  (?)
@ 2017-12-08 21:16     ` Salil Mehta
  -1 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:16 UTC (permalink / raw)
  To: davem-fT/PcQaiUtIeIZ0/mPfg9Q
  Cc: salil.mehta-hv44wF8Li93QT0dZR+AlfA,
	yisen.zhuang-hv44wF8Li93QT0dZR+AlfA,
	lipeng321-hv44wF8Li93QT0dZR+AlfA,
	mehta.salil.lnk-Re5JQEeQqe8AvxtiuMwx3w,
	netdev-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linuxarm-hv44wF8Li93QT0dZR+AlfA

This patch adds support for the hardware compatibility layer to the
HNS3 VF driver. This layer implements various {set|get} operations
on the MAC address of a virtual port, handles RSS related
configuration, fetches the link status info from the PF, does
various VLAN related configuration on the virtual port, and queries
statistics from the hardware.

This layer can interact with the hardware directly through the
IMP (Integrated Management Processor) interface, or can use the
mailbox to interact with the PF driver.
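
A minimal illustrative sketch (editor's note, not part of this patch)
of the mailbox path: an HCL operation that must go through the PF
reduces to a synchronous call of the hclgevf_send_mbx_msg() helper
introduced in the previous patch. The wrapper name
hclgevf_query_pf_mac() below is hypothetical.

	/* Hypothetical wrapper: synchronously ask the PF for this VF's MAC.
	 * need_resp = true makes hclgevf_send_mbx_msg() poll until the PF
	 * answers with HCLGE_MBX_PF_VF_RESP or the retry limit is hit.
	 */
	static int hclgevf_query_pf_mac(struct hclgevf_dev *hdev, u8 *mac)
	{
		return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_MAC_ADDR, 0,
					    NULL, 0, true, mac, ETH_ALEN);
	}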

Signed-off-by: Salil Mehta <salil.mehta-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Signed-off-by: lipeng <lipeng321-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
Patch V2: Addressed some internal comments
Patch V1: Initial Submit
---
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c  | 1494 ++++++++++++++++++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h  |  170 +++
 2 files changed, 1664 insertions(+)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
new file mode 100644
index 0000000..a878c63
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ -0,0 +1,1494 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/etherdevice.h>
+#include "hclgevf_cmd.h"
+#include "hclgevf_main.h"
+#include "hclge_mbx.h"
+#include "hnae3.h"
+
+#define HCLGEVF_NAME	"hclgevf"
+
+static struct hnae3_ae_algo ae_algovf;
+
+static const struct pci_device_id ae_algovf_pci_tbl[] = {
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_VF), 0},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_DCB_PFC_VF), 0},
+	/* required last entry */
+	{0, }
+};
+
+static inline struct hclgevf_dev *hclgevf_ae_get_hdev(
+	struct hnae3_handle *handle)
+{
+	return container_of(handle, struct hclgevf_dev, nic);
+}
+
+static int hclgevf_tqps_update_stats(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_queue *queue;
+	struct hclgevf_desc desc;
+	struct hclgevf_tqp *tqp;
+	int status;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		queue = handle->kinfo.tqp[i];
+		tqp = container_of(queue, struct hclgevf_tqp, q);
+		hclgevf_cmd_setup_basic_desc(&desc,
+					     HCLGEVF_OPC_QUERY_RX_STATUS,
+					     true);
+
+		desc.data[0] = cpu_to_le32(tqp->index & 0x1ff);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Query tqp stat fail, status = %d, queue = %d\n",
+				status, i);
+			return status;
+		}
+		tqp->tqp_stats.rcb_rx_ring_pktnum_rcd +=
+			le32_to_cpu(desc.data[4]);
+
+		hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_TX_STATUS,
+					     true);
+
+		desc.data[0] = cpu_to_le32(tqp->index & 0x1ff);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Query tqp stat fail, status = %d, queue = %d\n",
+				status, i);
+			return status;
+		}
+		tqp->tqp_stats.rcb_tx_ring_pktnum_rcd +=
+			le32_to_cpu(desc.data[4]);
+	}
+
+	return 0;
+}
+
+static u64 *hclgevf_tqps_get_stats(struct hnae3_handle *handle, u64 *data)
+{
+	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_tqp *tqp;
+	u64 *buff = data;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		tqp = container_of(handle->kinfo.tqp[i], struct hclgevf_tqp, q);
+		*buff++ = tqp->tqp_stats.rcb_tx_ring_pktnum_rcd;
+	}
+	for (i = 0; i < kinfo->num_tqps; i++) {
+		tqp = container_of(handle->kinfo.tqp[i], struct hclgevf_tqp, q);
+		*buff++ = tqp->tqp_stats.rcb_rx_ring_pktnum_rcd;
+	}
+
+	return buff;
+}
+
+static int hclgevf_tqps_get_sset_count(struct hnae3_handle *handle, int strset)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hdev->num_tqps * 2;
+}
+
+static u8 *hclgevf_tqps_get_strings(struct hnae3_handle *handle, u8 *data)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 *buff = data;
+	int i = 0;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		struct hclgevf_tqp *tqp = container_of(handle->kinfo.tqp[i],
+			struct hclgevf_tqp, q);
+		snprintf(buff, ETH_GSTRING_LEN, "rcb_q%d_tx_pktnum_rcd",
+			 tqp->index);
+		buff += ETH_GSTRING_LEN;
+	}
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		struct hclgevf_tqp *tqp = container_of(handle->kinfo.tqp[i],
+			struct hclgevf_tqp, q);
+		snprintf(buff, ETH_GSTRING_LEN, "rcb_q%d_rx_pktnum_rcd",
+			 tqp->index);
+		buff += ETH_GSTRING_LEN;
+	}
+
+	return buff;
+}
+
+static void hclgevf_update_stats(struct hnae3_handle *handle,
+				 struct net_device_stats *net_stats)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int status;
+
+	status = hclgevf_tqps_update_stats(handle);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF update of TQPS stats fail, status = %d.\n",
+			status);
+}
+
+static int hclgevf_get_sset_count(struct hnae3_handle *handle, int strset)
+{
+	if (strset == ETH_SS_TEST)
+		return -EOPNOTSUPP;
+	else if (strset == ETH_SS_STATS)
+		return hclgevf_tqps_get_sset_count(handle, strset);
+
+	return 0;
+}
+
+static void hclgevf_get_strings(struct hnae3_handle *handle, u32 strset,
+				u8 *data)
+{
+	u8 *p = data;
+
+	if (strset == ETH_SS_STATS)
+		p = hclgevf_tqps_get_strings(handle, p);
+}
+
+static void hclgevf_get_stats(struct hnae3_handle *handle, u64 *data)
+{
+	hclgevf_tqps_get_stats(handle, data);
+}
+
+static int hclgevf_get_tc_info(struct hclgevf_dev *hdev)
+{
+	u8 resp_msg;
+	int status;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_TCINFO, 0, NULL, 0,
+				      true, &resp_msg, sizeof(u8));
+	if (status) {
+		dev_err(&hdev->pdev->dev,
+			"VF request to get TC info from PF failed %d\n",
+			status);
+		return status;
+	}
+
+	hdev->hw_tc_map = resp_msg;
+
+	return 0;
+}
+
+static int hclge_get_queue_info(struct hclgevf_dev *hdev)
+{
+#define HCLGEVF_TQPS_RSS_INFO_LEN	8
+	u8 resp_msg[HCLGEVF_TQPS_RSS_INFO_LEN];
+	int status;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_QINFO, 0, NULL, 0,
+				      true, resp_msg,
+				      HCLGEVF_TQPS_RSS_INFO_LEN);
+	if (status) {
+		dev_err(&hdev->pdev->dev,
+			"VF request to get tqp info from PF failed %d\n",
+			status);
+		return status;
+	}
+
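+	/* the 8-byte response packs four u16 fields: num_tqps,
+	 * rss_size_max, num_desc and rx_buf_len
+	 */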
+	memcpy(&hdev->num_tqps, &resp_msg[0], sizeof(u16));
+	memcpy(&hdev->rss_size_max, &resp_msg[2], sizeof(u16));
+	memcpy(&hdev->num_desc, &resp_msg[4], sizeof(u16));
+	memcpy(&hdev->rx_buf_len, &resp_msg[6], sizeof(u16));
+
+	return 0;
+}
+
+static int hclgevf_enable_tso(struct hclgevf_dev *hdev, int enable)
+{
+	struct hclgevf_cfg_tso_status_cmd *req;
+	struct hclgevf_desc desc;
+
+	req = (struct hclgevf_cfg_tso_status_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_TSO_GENERIC_CONFIG,
+				     false);
+	hnae_set_bit(req->tso_enable, HCLGEVF_TSO_ENABLE_B, enable);
+
+	return hclgevf_cmd_send(&hdev->hw, &desc, 1);
+}
+
+static int hclgevf_alloc_tqps(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_tqp *tqp;
+	int i;
+
+	hdev->htqp = devm_kcalloc(&hdev->pdev->dev, hdev->num_tqps,
+				  sizeof(struct hclgevf_tqp), GFP_KERNEL);
+	if (!hdev->htqp)
+		return -ENOMEM;
+
+	tqp = hdev->htqp;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		tqp->dev = &hdev->pdev->dev;
+		tqp->index = i;
+
+		tqp->q.ae_algo = &ae_algovf;
+		tqp->q.buf_size = hdev->rx_buf_len;
+		tqp->q.desc_num = hdev->num_desc;
+		tqp->q.io_base = hdev->hw.io_base + HCLGEVF_TQP_REG_OFFSET +
+			i * HCLGEVF_TQP_REG_SIZE;
+
+		tqp++;
+	}
+
+	return 0;
+}
+
+static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *nic = &hdev->nic;
+	struct hnae3_knic_private_info *kinfo;
+	u16 new_tqps = hdev->num_tqps;
+	int i;
+
+	kinfo = &nic->kinfo;
+	kinfo->num_tc = 0;
+	kinfo->num_desc = hdev->num_desc;
+	kinfo->rx_buf_len = hdev->rx_buf_len;
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++)
+		if (hdev->hw_tc_map & BIT(i))
+			kinfo->num_tc++;
+
+	kinfo->rss_size = min_t(u16, hdev->rss_size_max,
+				new_tqps / kinfo->num_tc);
+	new_tqps = kinfo->rss_size * kinfo->num_tc;
+	kinfo->num_tqps = min(new_tqps, hdev->num_tqps);
+
+	kinfo->tqp = devm_kcalloc(&hdev->pdev->dev, kinfo->num_tqps,
+				  sizeof(struct hnae3_queue *), GFP_KERNEL);
+	if (!kinfo->tqp)
+		return -ENOMEM;
+
+	for (i = 0; i < kinfo->num_tqps; i++) {
+		hdev->htqp[i].q.handle = &hdev->nic;
+		hdev->htqp[i].q.tqp_index = i;
+		kinfo->tqp[i] = &hdev->htqp[i].q;
+	}
+
+	return 0;
+}
+
+static void hclgevf_request_link_info(struct hclgevf_dev *hdev)
+{
+	int status;
+	u8 resp_msg;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_STATUS, 0, NULL,
+				      0, false, &resp_msg, sizeof(u8));
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF failed to fetch link status(%d) from PF\n", status);
+}
+
+void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+{
+	struct hnae3_handle *handle = &hdev->nic;
+	struct hnae3_client *client;
+
+	client = handle->client;
+
+	if (link_state != hdev->hw.mac.link) {
+		client->ops->link_status_change(handle, !!link_state);
+		hdev->hw.mac.link = link_state;
+	}
+}
+
+static int hclgevf_set_handle_info(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *nic = &hdev->nic;
+	int ret;
+
+	nic->ae_algo = &ae_algovf;
+	nic->pdev = hdev->pdev;
+	nic->numa_node_mask = hdev->numa_node_mask;
+
+	if (hdev->ae_dev->dev_type != HNAE3_DEV_KNIC) {
+		dev_err(&hdev->pdev->dev, "unsupported device type %d\n",
+			hdev->ae_dev->dev_type);
+		return -EINVAL;
+	}
+
+	ret = hclgevf_knic_setup(hdev);
+	if (ret)
+		dev_err(&hdev->pdev->dev, "VF knic setup failed %d\n",
+			ret);
+	return ret;
+}
+
+static void hclgevf_free_vector(struct hclgevf_dev *hdev, int vector_id)
+{
+	hdev->vector_status[vector_id] = HCLGEVF_INVALID_VPORT;
+	hdev->num_msi_left += 1;
+	hdev->num_msi_used -= 1;
+}
+
+static int hclgevf_get_vector(struct hnae3_handle *handle, u16 vector_num,
+			      struct hnae3_vector_info *vector_info)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_vector_info *vector = vector_info;
+	int alloc = 0;
+	int i, j;
+
+	vector_num = min(hdev->num_msi_left, vector_num);
+
+	for (j = 0; j < vector_num; j++) {
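+		/* vector 0 is reserved for the misc (mailbox) interrupt,
+		 * so scan from the entry after it
+		 */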
+		for (i = HCLGEVF_MISC_VECTOR_NUM + 1; i < hdev->num_msi; i++) {
+			if (hdev->vector_status[i] == HCLGEVF_INVALID_VPORT) {
+				vector->vector = pci_irq_vector(hdev->pdev, i);
+				vector->io_addr = hdev->hw.io_base +
+					HCLGEVF_VECTOR_REG_BASE +
+					(i - 1) * HCLGEVF_VECTOR_REG_OFFSET;
+				hdev->vector_status[i] = 0;
+				hdev->vector_irq[i] = vector->vector;
+
+				vector++;
+				alloc++;
+
+				break;
+			}
+		}
+	}
+	hdev->num_msi_left -= alloc;
+	hdev->num_msi_used += alloc;
+
+	return alloc;
+}
+
+static int hclgevf_get_vector_index(struct hclgevf_dev *hdev, int vector)
+{
+	int i;
+
+	for (i = 0; i < hdev->num_msi; i++)
+		if (vector == hdev->vector_irq[i])
+			return i;
+
+	return -EINVAL;
+}
+
+static u32 hclgevf_get_rss_key_size(struct hnae3_handle *handle)
+{
+	return HCLGEVF_RSS_KEY_SIZE;
+}
+
+static u32 hclgevf_get_rss_indir_size(struct hnae3_handle *handle)
+{
+	return HCLGEVF_RSS_IND_TBL_SIZE;
+}
+
+static int hclgevf_set_rss_indir_table(struct hclgevf_dev *hdev)
+{
+	const u8 *indir = hdev->rss_cfg.rss_indirection_tbl;
+	struct hclgevf_rss_indirection_table_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+	int i, j;
+
+	req = (struct hclgevf_rss_indirection_table_cmd *)desc.data;
+
+	for (i = 0; i < HCLGEVF_RSS_CFG_TBL_NUM; i++) {
+		hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_INDIR_TABLE,
+					     false);
+		req->start_table_index = i * HCLGEVF_RSS_CFG_TBL_SIZE;
+		req->rss_set_bitmap = HCLGEVF_RSS_SET_BITMAP_MSK;
+		for (j = 0; j < HCLGEVF_RSS_CFG_TBL_SIZE; j++)
+			req->rss_result[j] =
+				indir[i * HCLGEVF_RSS_CFG_TBL_SIZE + j];
+
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"VF failed(=%d) to set RSS indirection table\n",
+				status);
+			return status;
+		}
+	}
+
+	return 0;
+}
+
+static int hclgevf_set_rss_tc_mode(struct hclgevf_dev *hdev,  u16 rss_size)
+{
+	struct hclgevf_rss_tc_mode_cmd *req;
+	u16 tc_offset[HCLGEVF_MAX_TC_NUM];
+	u16 tc_valid[HCLGEVF_MAX_TC_NUM];
+	u16 tc_size[HCLGEVF_MAX_TC_NUM];
+	struct hclgevf_desc desc;
+	u16 roundup_size;
+	int status;
+	int i;
+
+	req = (struct hclgevf_rss_tc_mode_cmd *)desc.data;
+
+	roundup_size = roundup_pow_of_two(rss_size);
+	roundup_size = ilog2(roundup_size);
+
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) {
+		tc_valid[i] = !!(hdev->hw_tc_map & BIT(i));
+		tc_size[i] = roundup_size;
+		tc_offset[i] = rss_size * i;
+	}
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_TC_MODE, false);
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) {
+		hnae_set_bit(req->rss_tc_mode[i], HCLGEVF_RSS_TC_VALID_B,
+			     (tc_valid[i] & 0x1));
+		hnae_set_field(req->rss_tc_mode[i], HCLGEVF_RSS_TC_SIZE_M,
+			       HCLGEVF_RSS_TC_SIZE_S, tc_size[i]);
+		hnae_set_field(req->rss_tc_mode[i], HCLGEVF_RSS_TC_OFFSET_M,
+			       HCLGEVF_RSS_TC_OFFSET_S, tc_offset[i]);
+	}
+	status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF failed(=%d) to set rss tc mode\n", status);
+
+	return status;
+}
+
+static int hclgevf_get_rss_hw_cfg(struct hnae3_handle *handle, u8 *hash,
+				  u8 *key)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_config_cmd *req;
+	int lkup_times = key ? 3 : 1;
+	struct hclgevf_desc desc;
+	int key_offset;
+	int key_size;
+	int status;
+
+	req = (struct hclgevf_rss_config_cmd *)desc.data;
+	lkup_times = (lkup_times == 3) ? 3 : ((hash) ? 1 : 0);
+
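+	/* the hash key is read back in chunks of HCLGEVF_RSS_HASH_KEY_NUM
+	 * bytes; the last chunk holds whatever remains of the key
+	 */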
+	for (key_offset = 0; key_offset < lkup_times; key_offset++) {
+		hclgevf_cmd_setup_basic_desc(&desc,
+					     HCLGEVF_OPC_RSS_GENERIC_CONFIG,
+					     true);
+		req->hash_config |= (key_offset << HCLGEVF_RSS_HASH_KEY_OFFSET);
+
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"failed to get hardware RSS cfg, status = %d\n",
+				status);
+			return status;
+		}
+
+		if (key_offset == 2)
+			key_size =
+			HCLGEVF_RSS_KEY_SIZE - HCLGEVF_RSS_HASH_KEY_NUM * 2;
+		else
+			key_size = HCLGEVF_RSS_HASH_KEY_NUM;
+
+		if (key)
+			memcpy(key + key_offset * HCLGEVF_RSS_HASH_KEY_NUM,
+			       req->hash_key,
+			       key_size);
+	}
+
+	if (hash) {
+		if ((req->hash_config & 0xf) == HCLGEVF_RSS_HASH_ALGO_TOEPLITZ)
+			*hash = ETH_RSS_HASH_TOP;
+		else
+			*hash = ETH_RSS_HASH_UNKNOWN;
+	}
+
+	return 0;
+}
+
+static int hclgevf_get_rss(struct hnae3_handle *handle, u32 *indir, u8 *key,
+			   u8 *hfunc)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+	int i;
+
+	if (indir)
+		for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
+			indir[i] = rss_cfg->rss_indirection_tbl[i];
+
+	return hclgevf_get_rss_hw_cfg(handle, hfunc, key);
+}
+
+static int hclgevf_set_rss(struct hnae3_handle *handle, const u32 *indir,
+			   const  u8 *key, const  u8 hfunc)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+	int i;
+
+	/* update the shadow RSS table with user specified qids */
+	for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
+		rss_cfg->rss_indirection_tbl[i] = indir[i];
+
+	/* update the hardware */
+	return hclgevf_set_rss_indir_table(hdev);
+}
+
+static int hclgevf_get_tc_size(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+
+	return rss_cfg->rss_size;
+}
+
+static int hclgevf_bind_ring_to_vector(struct hnae3_handle *handle, bool en,
+				       int vector,
+				       struct hnae3_ring_chain_node *ring_chain)
+{
+#define HCLGEVF_RING_NODE_VARIABLE_NUM		3
+#define HCLGEVF_RING_MAP_MBX_BASIC_MSG_NUM	3
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_ring_chain_node *node;
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclgevf_desc desc;
+	int i, vector_id;
+	int status;
+	u8 type;
+
+	req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
+	vector_id = hclgevf_get_vector_index(hdev, vector);
+	if (vector_id < 0) {
+		dev_err(&handle->pdev->dev,
+			"Get vector index fail. ret = %d\n", vector_id);
+		return vector_id;
+	}
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
+	type = en ?
+		HCLGE_MBX_MAP_RING_TO_VECTOR : HCLGE_MBX_UNMAP_RING_TO_VECTOR;
+	req->msg[0] = type;
+	req->msg[1] = vector_id; /* vector_id should be id in VF */
+
+	i = 0;
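+	/* pack per-ring (type, tqp index) info into the mailbox message
+	 * and flush it to the PF whenever the message fills up
+	 */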
+	for (node = ring_chain; node; node = node->next) {
+		i++;
+		/* msg[2] carries the number of rings packed in this message */
+		req->msg[HCLGEVF_RING_NODE_VARIABLE_NUM * i] =
+				hnae_get_bit(node->flag, HNAE3_RING_TYPE_B);
+		req->msg[HCLGEVF_RING_NODE_VARIABLE_NUM * i + 1] =
+				node->tqp_index;
+		if (i == (HCLGE_MBX_VF_MSG_DATA_NUM -
+		    HCLGEVF_RING_MAP_MBX_BASIC_MSG_NUM) /
+		    HCLGEVF_RING_NODE_VARIABLE_NUM) {
+			req->msg[2] = i;
+
+			status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+			if (status) {
+				dev_err(&hdev->pdev->dev,
+					"Map TQP fail, status is %d.\n",
+					status);
+				return status;
+			}
+			i = 0;
+			hclgevf_cmd_setup_basic_desc(&desc,
+						     HCLGEVF_OPC_MBX_VF_TO_PF,
+						     false);
+			req->msg[0] = type;
+			req->msg[1] = vector_id;
+		}
+	}
+
+	if (i > 0) {
+		req->msg[2] = i;
+
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Map TQP fail, status is %d.\n", status);
+			return status;
+		}
+	}
+
+	return 0;
+}
+
+static int hclgevf_map_ring_to_vector(struct hnae3_handle *handle, int vector,
+				      struct hnae3_ring_chain_node *ring_chain)
+{
+	return hclgevf_bind_ring_to_vector(handle, true, vector, ring_chain);
+}
+
+static int hclgevf_unmap_ring_from_vector(
+				struct hnae3_handle *handle,
+				int vector,
+				struct hnae3_ring_chain_node *ring_chain)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int ret, vector_id;
+
+	vector_id = hclgevf_get_vector_index(hdev, vector);
+	if (vector_id < 0) {
+		dev_err(&handle->pdev->dev,
+			"Get vector index fail. ret = %d\n", vector_id);
+		return vector_id;
+	}
+
+	ret = hclgevf_bind_ring_to_vector(handle, false, vector, ring_chain);
+	if (ret) {
+		dev_err(&handle->pdev->dev,
+			"Unmap ring from vector fail. vector=%d, ret = %d\n",
+			vector_id, ret);
+		return ret;
+	}
+
+	hclgevf_free_vector(hdev, vector);
+
+	return 0;
+}
+
+static int hclgevf_cmd_set_promisc_mode(struct hclgevf_dev *hdev, u32 en)
+{
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+
+	req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
+	req->msg[0] = HCLGE_MBX_SET_PROMISC_MODE;
+	req->msg[1] = en;
+
+	status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"Set promisc mode fail, status is %d.\n", status);
+
+	return status;
+}
+
+static void hclgevf_set_promisc_mode(struct hnae3_handle *handle, u32 en)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	hclgevf_cmd_set_promisc_mode(hdev, en);
+}
+
+static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, int tqp_id,
+			      int stream_id, bool enable)
+{
+	struct hclgevf_cfg_com_tqp_queue_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+
+	req = (struct hclgevf_cfg_com_tqp_queue_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_CFG_COM_TQP_QUEUE,
+				     false);
+	req->tqp_id = cpu_to_le16(tqp_id & HCLGEVF_RING_ID_MASK);
+	req->stream_id = cpu_to_le16(stream_id);
+	req->enable |= enable << HCLGEVF_TQP_ENABLE_B;
+
+	status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"TQP enable fail, status = %d.\n", status);
+
+	return status;
+}
+
+static int hclgevf_get_queue_id(struct hnae3_queue *queue)
+{
+	struct hclgevf_tqp *tqp = container_of(queue, struct hclgevf_tqp, q);
+
+	return tqp->index;
+}
+
+static void hclgevf_reset_tqp_stats(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_queue *queue;
+	struct hclgevf_tqp *tqp;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		queue = handle->kinfo.tqp[i];
+		tqp = container_of(queue, struct hclgevf_tqp, q);
+		memset(&tqp->tqp_stats, 0, sizeof(tqp->tqp_stats));
+	}
+}
+
+static int hclgevf_cfg_func_mta_filter(struct hnae3_handle *handle, bool en)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 msg[2] = {0};
+
+	msg[0] = en;
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST,
+				    HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE,
+				    msg, 1, false, NULL, 0);
+}
+
+static void hclgevf_get_mac_addr(struct hnae3_handle *handle, u8 *p)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	ether_addr_copy(p, hdev->hw.mac.mac_addr);
+}
+
+static int hclgevf_set_mac_addr(struct hnae3_handle *handle, void *p)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 *old_mac_addr = (u8 *)hdev->hw.mac.mac_addr;
+	u8 *new_mac_addr = (u8 *)p;
+	u8 msg_data[ETH_ALEN * 2];
+	int status;
+
+	ether_addr_copy(msg_data, new_mac_addr);
+	ether_addr_copy(&msg_data[ETH_ALEN], old_mac_addr);
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
+				      HCLGE_MBX_MAC_VLAN_UC_MODIFY,
+				      msg_data, ETH_ALEN * 2,
+				      false, NULL, 0);
+	if (!status)
+		ether_addr_copy(hdev->hw.mac.mac_addr, new_mac_addr);
+
+	return status;
+}
+
+static int hclgevf_add_uc_addr(struct hnae3_handle *handle,
+			       const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
+				    HCLGE_MBX_MAC_VLAN_UC_ADD,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_rm_uc_addr(struct hnae3_handle *handle,
+			      const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
+				    HCLGE_MBX_MAC_VLAN_UC_REMOVE,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_add_mc_addr(struct hnae3_handle *handle,
+			       const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST,
+				    HCLGE_MBX_MAC_VLAN_MC_ADD,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_rm_mc_addr(struct hnae3_handle *handle,
+			      const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST,
+				    HCLGE_MBX_MAC_VLAN_MC_REMOVE,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_set_vlan_filter(struct hnae3_handle *handle,
+				   __be16 proto, u16 vlan_id,
+				   bool is_kill)
+{
+#define HCLGEVF_VLAN_MBX_MSG_LEN 5
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 msg_data[HCLGEVF_VLAN_MBX_MSG_LEN];
+
+	if (vlan_id > 4095)
+		return -EINVAL;
+
+	if (proto != htons(ETH_P_8021Q))
+		return -EPROTONOSUPPORT;
+
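+	/* mailbox payload: byte 0 is the kill flag, bytes 1-2 the VLAN id
+	 * and bytes 3-4 the protocol
+	 */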
+	msg_data[0] = is_kill;
+	memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id));
+	memcpy(&msg_data[3], &proto, sizeof(proto));
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_VLAN,
+				    HCLGE_MBX_VLAN_FILTER, msg_data,
+				    HCLGEVF_VLAN_MBX_MSG_LEN, false, NULL, 0);
+}
+
+static void hclgevf_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 msg_data[2];
+
+	memcpy(&msg_data[0], &queue_id, sizeof(queue_id));
+
+	hclgevf_send_mbx_msg(hdev, HCLGE_MBX_QUEUE_RESET, 0, msg_data, 2, false,
+			     NULL, 0);
+}
+
+static u32 hclgevf_get_fw_version(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hdev->fw_version;
+}
+
+static void hclgevf_get_misc_vector(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_misc_vector *vector = &hdev->misc_vector;
+
+	vector->vector_irq = pci_irq_vector(hdev->pdev,
+					    HCLGEVF_MISC_VECTOR_NUM);
+	vector->addr = hdev->hw.io_base + HCLGEVF_MISC_VECTOR_REG_BASE;
+	/* vector status always valid for Vector 0 */
+	hdev->vector_status[HCLGEVF_MISC_VECTOR_NUM] = 0;
+	hdev->vector_irq[HCLGEVF_MISC_VECTOR_NUM] = vector->vector_irq;
+
+	hdev->num_msi_left -= 1;
+	hdev->num_msi_used += 1;
+}
+
+static void hclgevf_mbx_task_schedule(struct hclgevf_dev *hdev)
+{
+	if (!test_and_set_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state))
+		schedule_work(&hdev->mbx_service_task);
+}
+
+static void hclgevf_task_schedule(struct hclgevf_dev *hdev)
+{
+	if (!test_bit(HCLGEVF_STATE_DOWN, &hdev->state)  &&
+	    !test_and_set_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state))
+		schedule_work(&hdev->service_task);
+}
+
+static void hclgevf_service_timer(struct timer_list *t)
+{
+	struct hclgevf_dev *hdev = from_timer(hdev, t, service_timer);
+
+	mod_timer(&hdev->service_timer, jiffies + 5 * HZ);
+
+	hclgevf_task_schedule(hdev);
+}
+
+static void hclgevf_mailbox_service_task(struct work_struct *work)
+{
+	struct hclgevf_dev *hdev;
+
+	hdev = container_of(work, struct hclgevf_dev, mbx_service_task);
+
+	if (test_and_set_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state))
+		return;
+
+	clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state);
+
+	hclgevf_mbx_handler(hdev);
+
+	clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state);
+}
+
+static void hclgevf_service_task(struct work_struct *work)
+{
+	struct hclgevf_dev *hdev;
+
+	hdev = container_of(work, struct hclgevf_dev, service_task);
+
+	/* request the link status from the PF. PF would be able to tell VF
+	 * about such updates in the future so we might remove this later
+	 */
+	hclgevf_request_link_info(hdev);
+
+	clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
+}
+
+static void hclgevf_clear_event_cause(struct hclgevf_dev *hdev, u32 regclr)
+{
+	hclgevf_write_dev(&hdev->hw, HCLGEVF_VECTOR0_CMDQ_SRC_REG, regclr);
+}
+
+static bool hclgevf_check_event_cause(struct hclgevf_dev *hdev, u32 *clearval)
+{
+	u32 cmdq_src_reg;
+
+	/* fetch the events from their corresponding regs */
+	cmdq_src_reg = hclgevf_read_dev(&hdev->hw,
+					HCLGEVF_VECTOR0_CMDQ_SRC_REG);
+
+	/* check for vector0 mailbox(=CMDQ RX) event source */
+	if (BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) {
+		cmdq_src_reg &= ~BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B);
+		*clearval = cmdq_src_reg;
+		return true;
+	}
+
+	dev_dbg(&hdev->pdev->dev, "vector 0 interrupt from unknown source\n");
+
+	return false;
+}
+
+static void hclgevf_enable_vector(struct hclgevf_misc_vector *vector, bool en)
+{
+	writel(en ? 1 : 0, vector->addr);
+}
+
+static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+{
+	struct hclgevf_dev *hdev = data;
+	u32 clearval;
+
+	hclgevf_enable_vector(&hdev->misc_vector, false);
+	if (!hclgevf_check_event_cause(hdev, &clearval))
+		goto skip_sched;
+
+	/* schedule the VF mailbox service task, if not already scheduled */
+	hclgevf_mbx_task_schedule(hdev);
+
+	hclgevf_clear_event_cause(hdev, clearval);
+
+skip_sched:
+	hclgevf_enable_vector(&hdev->misc_vector, true);
+
+	return IRQ_HANDLED;
+}
+
+static int hclgevf_configure(struct hclgevf_dev *hdev)
+{
+	int ret;
+
+	/* get queue configuration from PF */
+	ret = hclge_get_queue_info(hdev);
+	if (ret)
+		return ret;
+	/* get tc configuration from PF */
+	return hclgevf_get_tc_info(hdev);
+}
+
+static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *roce = &hdev->roce;
+	struct hnae3_handle *nic = &hdev->nic;
+
+	roce->rinfo.num_vectors = HCLGEVF_ROCEE_VECTOR_NUM;
+
+	if (hdev->num_msi_left < roce->rinfo.num_vectors ||
+	    hdev->num_msi_left == 0)
+		return -EINVAL;
+
+	roce->rinfo.base_vector =
+		hdev->vector_status[hdev->num_msi_used];
+
+	roce->rinfo.netdev = nic->kinfo.netdev;
+	roce->rinfo.roce_io_base = hdev->hw.io_base;
+
+	roce->pdev = nic->pdev;
+	roce->ae_algo = nic->ae_algo;
+	roce->numa_node_mask = nic->numa_node_mask;
+
+	return 0;
+}
+
+static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+	int i, ret;
+
+	rss_cfg->rss_size = hdev->rss_size_max;
+
+	/* Initialize RSS indirect table for each vport */
+	for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
+		rss_cfg->rss_indirection_tbl[i] = i % hdev->rss_size_max;
+
+	ret = hclgevf_set_rss_indir_table(hdev);
+	if (ret)
+		return ret;
+
+	return hclgevf_set_rss_tc_mode(hdev, hdev->rss_size_max);
+}
+
+static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
+{
+	/* other vlan config(like, VLAN TX/RX offload) would also be added
+	 * here later
+	 */
+	return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0,
+				       false);
+}
+
+static int hclgevf_ae_start(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int i, queue_id;
+
+	for (i = 0; i < handle->kinfo.num_tqps; i++) {
+		/* ring enable */
+		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+		if (queue_id < 0) {
+			dev_warn(&hdev->pdev->dev,
+				 "Get invalid queue id, ignore it\n");
+			continue;
+		}
+
+		hclgevf_tqp_enable(hdev, queue_id, 0, true);
+	}
+
+	/* reset tqp stats */
+	hclgevf_reset_tqp_stats(handle);
+
+	hclgevf_request_link_info(hdev);
+
+	clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+	mod_timer(&hdev->service_timer, jiffies + HZ);
+
+	return 0;
+}
+
+static void hclgevf_ae_stop(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int i, queue_id;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		/* Ring disable */
+		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+		if (queue_id < 0) {
+			dev_warn(&hdev->pdev->dev,
+				 "Get invalid queue id, ignore it\n");
+			continue;
+		}
+
+		hclgevf_tqp_enable(hdev, queue_id, 0, false);
+	}
+
+	/* reset tqp stats */
+	hclgevf_reset_tqp_stats(handle);
+}
+
+static void hclgevf_state_init(struct hclgevf_dev *hdev)
+{
+	/* setup tasks for the MBX */
+	INIT_WORK(&hdev->mbx_service_task, hclgevf_mailbox_service_task);
+	clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state);
+	clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state);
+
+	/* setup tasks for service timer */
+	timer_setup(&hdev->service_timer, hclgevf_service_timer, 0);
+
+	INIT_WORK(&hdev->service_task, hclgevf_service_task);
+	clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
+
+	mutex_init(&hdev->mbx_resp.mbx_mutex);
+
+	/* bring the device down */
+	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+}
+
+static void hclgevf_state_uninit(struct hclgevf_dev *hdev)
+{
+	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+
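+	/* these are only non-NULL once hclgevf_state_init() has set them up */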
+	if (hdev->service_timer.function)
+		del_timer_sync(&hdev->service_timer);
+	if (hdev->service_task.func)
+		cancel_work_sync(&hdev->service_task);
+	if (hdev->mbx_service_task.func)
+		cancel_work_sync(&hdev->mbx_service_task);
+
+	mutex_destroy(&hdev->mbx_resp.mbx_mutex);
+}
+
+static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+	int vectors;
+	int i;
+
+	hdev->num_msi = HCLGEVF_MAX_VF_VECTOR_NUM;
+
+	vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi,
+					PCI_IRQ_MSI | PCI_IRQ_MSIX);
+	if (vectors < 0) {
+		dev_err(&pdev->dev,
+			"failed(%d) to allocate MSI/MSI-X vectors\n",
+			vectors);
+		return vectors;
+	}
+	if (vectors < hdev->num_msi)
+		dev_warn(&hdev->pdev->dev,
+			 "requested %d MSI/MSI-X, but allocated %d MSI/MSI-X\n",
+			 hdev->num_msi, vectors);
+
+	hdev->num_msi = vectors;
+	hdev->num_msi_left = vectors;
+	hdev->base_msi_vector = pdev->irq;
+
+	hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi,
+					   sizeof(u16), GFP_KERNEL);
+	if (!hdev->vector_status) {
+		pci_free_irq_vectors(pdev);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < hdev->num_msi; i++)
+		hdev->vector_status[i] = HCLGEVF_INVALID_VPORT;
+
+	hdev->vector_irq = devm_kcalloc(&pdev->dev, hdev->num_msi,
+					sizeof(int), GFP_KERNEL);
+	if (!hdev->vector_irq) {
+		pci_free_irq_vectors(pdev);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void hclgevf_uninit_msi(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+
+	pci_free_irq_vectors(pdev);
+}
+
+static int hclgevf_misc_irq_init(struct hclgevf_dev *hdev)
+{
+	int ret = 0;
+
+	hclgevf_get_misc_vector(hdev);
+
+	ret = request_irq(hdev->misc_vector.vector_irq, hclgevf_misc_irq_handle,
+			  0, "hclgevf_cmd", hdev);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "VF failed to request misc irq(%d)\n",
+			hdev->misc_vector.vector_irq);
+		return ret;
+	}
+
+	/* enable misc. vector(vector 0) */
+	hclgevf_enable_vector(&hdev->misc_vector, true);
+
+	return ret;
+}
+
+static void hclgevf_misc_irq_uninit(struct hclgevf_dev *hdev)
+{
+	/* disable misc vector(vector 0) */
+	hclgevf_enable_vector(&hdev->misc_vector, false);
+	free_irq(hdev->misc_vector.vector_irq, hdev);
+	hclgevf_free_vector(hdev, 0);
+}
+
+static int hclgevf_init_instance(struct hclgevf_dev *hdev,
+				 struct hnae3_client *client)
+{
+	int ret;
+
+	switch (client->type) {
+	case HNAE3_CLIENT_KNIC:
+		hdev->nic_client = client;
+		hdev->nic.client = client;
+
+		ret = client->ops->init_instance(&hdev->nic);
+		if (ret)
+			return ret;
+
+		if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
+			struct hnae3_client *rc = hdev->roce_client;
+
+			ret = hclgevf_init_roce_base_info(hdev);
+			if (ret)
+				return ret;
+			ret = rc->ops->init_instance(&hdev->roce);
+			if (ret)
+				return ret;
+		}
+		break;
+	case HNAE3_CLIENT_UNIC:
+		hdev->nic_client = client;
+		hdev->nic.client = client;
+
+		ret = client->ops->init_instance(&hdev->nic);
+		if (ret)
+			return ret;
+		break;
+	case HNAE3_CLIENT_ROCE:
+		hdev->roce_client = client;
+		hdev->roce.client = client;
+
+		if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
+			ret = hclgevf_init_roce_base_info(hdev);
+			if (ret)
+				return ret;
+
+			ret = client->ops->init_instance(&hdev->roce);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void hclgevf_uninit_instance(struct hclgevf_dev *hdev,
+				    struct hnae3_client *client)
+{
+	/* un-init roce, if it exists */
+	if (hdev->roce_client)
+		hdev->roce_client->ops->uninit_instance(&hdev->roce, 0);
+
+	/* un-init nic/unic, if this was not called by roce client */
+	if ((client->ops->uninit_instance) &&
+	    (client->type != HNAE3_CLIENT_ROCE))
+		client->ops->uninit_instance(&hdev->nic, 0);
+}
+
+static int hclgevf_register_client(struct hnae3_client *client,
+				   struct hnae3_ae_dev *ae_dev)
+{
+	struct hclgevf_dev *hdev = ae_dev->priv;
+
+	return hclgevf_init_instance(hdev, client);
+}
+
+static void hclgevf_unregister_client(struct hnae3_client *client,
+				      struct hnae3_ae_dev *ae_dev)
+{
+	struct hclgevf_dev *hdev = ae_dev->priv;
+
+	hclgevf_uninit_instance(hdev, client);
+}
+
+static int hclgevf_pci_init(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+	struct hclgevf_hw *hw;
+	int ret;
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to enable PCI device\n");
+		goto err_no_drvdata;
+	}
+
+	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(&pdev->dev, "can't set consistent PCI DMA, exiting");
+		goto err_disable_device;
+	}
+
+	ret = pci_request_regions(pdev, HCLGEVF_DRIVER_NAME);
+	if (ret) {
+		dev_err(&pdev->dev, "PCI request regions failed %d\n", ret);
+		goto err_disable_device;
+	}
+
+	pci_set_master(pdev);
+	hw = &hdev->hw;
+	hw->hdev = hdev;
+	hw->io_base = pcim_iomap(pdev, 2, 0);
+	if (!hw->io_base) {
+		dev_err(&pdev->dev, "can't map configuration register space\n");
+		ret = -ENOMEM;
+		goto err_clr_master;
+	}
+
+	return 0;
+
+err_clr_master:
+	pci_clear_master(pdev);
+	pci_release_regions(pdev);
+err_disable_device:
+	pci_disable_device(pdev);
+err_no_drvdata:
+	pci_set_drvdata(pdev, NULL);
+	return ret;
+}
+
+static void hclgevf_pci_uninit(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+
+	pci_clear_master(pdev);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+}
+
+static int hclgevf_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+{
+	struct pci_dev *pdev = ae_dev->pdev;
+	struct hclgevf_dev *hdev;
+	int ret;
+
+	hdev = devm_kzalloc(&pdev->dev, sizeof(*hdev), GFP_KERNEL);
+	if (!hdev)
+		return -ENOMEM;
+
+	hdev->pdev = pdev;
+	hdev->ae_dev = ae_dev;
+	ae_dev->priv = hdev;
+
+	ret = hclgevf_pci_init(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "PCI initialization failed\n");
+		return ret;
+	}
+
+	ret = hclgevf_init_msi(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to init MSI/MSI-X\n", ret);
+		goto err_irq_init;
+	}
+
+	hclgevf_state_init(hdev);
+
+	ret = hclgevf_misc_irq_init(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to init Misc IRQ(vector0)\n",
+			ret);
+		goto err_misc_irq_init;
+	}
+
+	ret = hclgevf_cmd_init(hdev);
+	if (ret)
+		goto err_cmd_init;
+
+	ret = hclgevf_configure(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to fetch configuration\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_alloc_tqps(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to allocate TQPs\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_set_handle_info(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to set handle info\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_enable_tso(hdev, true);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to enable tso\n", ret);
+		goto err_config;
+	}
+
+	/* Initialize VF's MTA */
+	hdev->accept_mta_mc = true;
+	ret = hclgevf_cfg_func_mta_filter(&hdev->nic, hdev->accept_mta_mc);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to set mta filter mode\n", ret);
+		goto err_config;
+	}
+
+	/* Initialize RSS for this VF */
+	ret = hclgevf_rss_init_hw(hdev);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize RSS\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_init_vlan_config(hdev);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize VLAN config\n", ret);
+		goto err_config;
+	}
+
+	pr_info("finished initializing %s driver\n", HCLGEVF_DRIVER_NAME);
+
+	return 0;
+
+err_config:
+	hclgevf_cmd_uninit(hdev);
+err_cmd_init:
+	hclgevf_misc_irq_uninit(hdev);
+err_misc_irq_init:
+	hclgevf_state_uninit(hdev);
+	hclgevf_uninit_msi(hdev);
+err_irq_init:
+	hclgevf_pci_uninit(hdev);
+	return ret;
+}
+
+static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+{
+	struct hclgevf_dev *hdev = ae_dev->priv;
+
+	hclgevf_cmd_uninit(hdev);
+	hclgevf_misc_irq_uninit(hdev);
+	hclgevf_state_uninit(hdev);
+	hclgevf_uninit_msi(hdev);
+	hclgevf_pci_uninit(hdev);
+	ae_dev->priv = NULL;
+}
+
+static const struct hnae3_ae_ops hclgevf_ops = {
+	.init_ae_dev = hclgevf_init_ae_dev,
+	.uninit_ae_dev = hclgevf_uninit_ae_dev,
+	.init_client_instance = hclgevf_register_client,
+	.uninit_client_instance = hclgevf_unregister_client,
+	.start = hclgevf_ae_start,
+	.stop = hclgevf_ae_stop,
+	.map_ring_to_vector = hclgevf_map_ring_to_vector,
+	.unmap_ring_from_vector = hclgevf_unmap_ring_from_vector,
+	.get_vector = hclgevf_get_vector,
+	.reset_queue = hclgevf_reset_tqp,
+	.set_promisc_mode = hclgevf_set_promisc_mode,
+	.get_mac_addr = hclgevf_get_mac_addr,
+	.set_mac_addr = hclgevf_set_mac_addr,
+	.add_uc_addr = hclgevf_add_uc_addr,
+	.rm_uc_addr = hclgevf_rm_uc_addr,
+	.add_mc_addr = hclgevf_add_mc_addr,
+	.rm_mc_addr = hclgevf_rm_mc_addr,
+	.get_stats = hclgevf_get_stats,
+	.update_stats = hclgevf_update_stats,
+	.get_strings = hclgevf_get_strings,
+	.get_sset_count = hclgevf_get_sset_count,
+	.get_rss_key_size = hclgevf_get_rss_key_size,
+	.get_rss_indir_size = hclgevf_get_rss_indir_size,
+	.get_rss = hclgevf_get_rss,
+	.set_rss = hclgevf_set_rss,
+	.get_tc_size = hclgevf_get_tc_size,
+	.get_fw_version = hclgevf_get_fw_version,
+	.set_vlan_filter = hclgevf_set_vlan_filter,
+};
+
+static struct hnae3_ae_algo ae_algovf = {
+	.ops = &hclgevf_ops,
+	.name = HCLGEVF_NAME,
+	.pdev_id_table = ae_algovf_pci_tbl,
+};
+
+static int hclgevf_init(void)
+{
+	pr_info("%s is initializing\n", HCLGEVF_NAME);
+
+	return hnae3_register_ae_algo(&ae_algovf);
+}
+
+static void hclgevf_exit(void)
+{
+	hnae3_unregister_ae_algo(&ae_algovf);
+}
+module_init(hclgevf_init);
+module_exit(hclgevf_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+MODULE_DESCRIPTION("HCLGEVF Driver");
+MODULE_VERSION(HCLGEVF_MOD_VERSION);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
new file mode 100644
index 0000000..4461433
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
@@ -0,0 +1,170 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __HCLGEVF_MAIN_H
+#define __HCLGEVF_MAIN_H
+#include <linux/fs.h>
+#include <linux/types.h>
+#include "hclge_mbx.h"
+#include "hclgevf_cmd.h"
+#include "hnae3.h"
+
+#define HCLGEVF_MOD_VERSION "v1.0"
+#define HCLGEVF_DRIVER_NAME "hclgevf"
+
+#define HCLGEVF_ROCEE_VECTOR_NUM	0
+#define HCLGEVF_MISC_VECTOR_NUM		0
+
+#define HCLGEVF_INVALID_VPORT		0xffff
+
+/* The actual number depends upon the total number of VFs
+ * created by the physical function, but the maximum number of
+ * possible vectors per VF is {VFn(1-32), VECTn(32 + 1)}.
+ */
+#define HCLGEVF_MAX_VF_VECTOR_NUM	(32 + 1)
+
+#define HCLGEVF_VECTOR_REG_BASE		0x20000
+#define HCLGEVF_MISC_VECTOR_REG_BASE	0x20400
+#define HCLGEVF_VECTOR_REG_OFFSET	0x4
+#define HCLGEVF_VECTOR_VF_OFFSET		0x100000
+
+/* Vector0 interrupt CMDQ event source register(RW) */
+#define HCLGEVF_VECTOR0_CMDQ_SRC_REG	0x27100
+/* CMDQ register bits for RX event(=MBX event) */
+#define HCLGEVF_VECTOR0_RX_CMDQ_INT_B	1
+
+#define HCLGEVF_TQP_RESET_TRY_TIMES	10
+
+#define HCLGEVF_RSS_IND_TBL_SIZE		512
+#define HCLGEVF_RSS_SET_BITMAP_MSK	0xffff
+#define HCLGEVF_RSS_KEY_SIZE		40
+#define HCLGEVF_RSS_HASH_ALGO_TOEPLITZ	0
+#define HCLGEVF_RSS_HASH_ALGO_SIMPLE	1
+#define HCLGEVF_RSS_HASH_ALGO_SYMMETRIC	2
+#define HCLGEVF_RSS_HASH_ALGO_MASK	0xf
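+/* the 512 entry indirection table is programmed to hardware in chunks
+ * of HCLGEVF_RSS_CFG_TBL_SIZE entries per command descriptor
+ */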
+#define HCLGEVF_RSS_CFG_TBL_NUM \
+	(HCLGEVF_RSS_IND_TBL_SIZE / HCLGEVF_RSS_CFG_TBL_SIZE)
+
+/* states of hclgevf device & tasks */
+enum hclgevf_states {
+	/* device states */
+	HCLGEVF_STATE_DOWN,
+	HCLGEVF_STATE_DISABLED,
+	/* task states */
+	HCLGEVF_STATE_SERVICE_SCHED,
+	HCLGEVF_STATE_MBX_SERVICE_SCHED,
+	HCLGEVF_STATE_MBX_HANDLING,
+};
+
+#define HCLGEVF_MPF_ENABLE 1
+
+struct hclgevf_mac {
+	u8 mac_addr[ETH_ALEN];
+	int link;
+};
+
+struct hclgevf_hw {
+	void __iomem *io_base;
+	int num_vec;
+	struct hclgevf_cmq cmq;
+	struct hclgevf_mac mac;
+	void *hdev; /* hclgevf device it is part of */
+};
+
+/* TQP stats */
+struct hlcgevf_tqp_stats {
+	/* query_tqp_tx_queue_statistics, opcode id: 0x0B03 */
+	u64 rcb_tx_ring_pktnum_rcd; /* 32bit */
+	/* query_tqp_rx_queue_statistics, opcode id: 0x0B13 */
+	u64 rcb_rx_ring_pktnum_rcd; /* 32bit */
+};
+
+struct hclgevf_tqp {
+	struct device *dev;	/* device for DMA mapping */
+	struct hnae3_queue q;
+	struct hlcgevf_tqp_stats tqp_stats;
+	u16 index;		/* global index in a NIC controller */
+
+	bool alloced;
+};
+
+struct hclgevf_cfg {
+	u8 vmdq_vport_num;
+	u8 tc_num;
+	u16 tqp_desc_num;
+	u16 rx_buf_len;
+	u8 phy_addr;
+	u8 media_type;
+	u8 mac_addr[ETH_ALEN];
+	u32 numa_node_map;
+};
+
+struct hclgevf_rss_cfg {
+	u8  rss_hash_key[HCLGEVF_RSS_KEY_SIZE]; /* user configured hash keys */
+	u32 hash_algo;
+	u32 rss_size;
+	u8 hw_tc_map;
+	u8  rss_indirection_tbl[HCLGEVF_RSS_IND_TBL_SIZE]; /* shadow table */
+};
+
+struct hclgevf_misc_vector {
+	u8 __iomem *addr;
+	int vector_irq;
+};
+
+struct hclgevf_dev {
+	struct pci_dev *pdev;
+	struct hnae3_ae_dev *ae_dev;
+	struct hclgevf_hw hw;
+	struct hclgevf_misc_vector misc_vector;
+	struct hclgevf_rss_cfg rss_cfg;
+	unsigned long state;
+
+	u32 fw_version;
+	u16 num_tqps;		/* num task queue pairs of this VF */
+
+	u16 alloc_rss_size;	/* allocated RSS task queue */
+	u16 rss_size_max;	/* HW defined max RSS task queue */
+
+	u16 num_alloc_vport;	/* num vports this driver supports */
+	u32 numa_node_mask;
+	u16 rx_buf_len;
+	u16 num_desc;
+	u8 hw_tc_map;
+
+	u16 num_msi;
+	u16 num_msi_left;
+	u16 num_msi_used;
+	u32 base_msi_vector;
+	u16 *vector_status;
+	int *vector_irq;
+
+	bool accept_mta_mc; /* whether to accept MTA-filtered multicast */
+	struct hclgevf_mbx_resp_status mbx_resp; /* mailbox response */
+
+	struct timer_list service_timer;
+	struct work_struct service_task;
+	struct work_struct mbx_service_task;
+
+	struct hclgevf_tqp *htqp;
+
+	struct hnae3_handle nic;
+	struct hnae3_handle roce;
+
+	struct hnae3_client *nic_client;
+	struct hnae3_client *roce_client;
+	u32 flag;
+};
+
+int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
+			 const u8 *msg_data, u8 msg_len, bool need_resp,
+			 u8 *resp_data, u16 resp_len);
+void hclgevf_mbx_handler(struct hclgevf_dev *hdev);
+void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state);
+#endif
-- 
2.7.4




* [PATCH V2 net-next 3/8] net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support
@ 2017-12-08 21:16     ` Salil Mehta
  0 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:16 UTC (permalink / raw)
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

This patch adds support of the hardware compatibility layer to the
HNS3 VF driver. This layer implements various {set|get} operations
on the MAC address of a virtual port, handles RSS related
configuration, fetches the link status information from the PF,
performs various VLAN related configurations on the virtual port,
queries statistics from the hardware, etc.

This layer can interact with the hardware directly through the
IMP (Integrated Management Processor) interface, or can use the
mailbox to interact with the PF driver.
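
A minimal illustrative sketch of how a synchronous VF-to-PF query
flows through this layer's mailbox path (not part of the patch;
example_get_link_status() is a hypothetical helper added here only
to make the flow concrete, built on the hclgevf_send_mbx_msg()
prototype declared in hclgevf_main.h):

static int example_get_link_status(struct hclgevf_dev *hdev)
{
	u8 resp;
	int ret;

	/* send code HCLGE_MBX_GET_LINK_STATUS with no subcode and no
	 * payload; need_resp=true waits for the PF to write one byte
	 * of response data into 'resp'
	 */
	ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_STATUS, 0,
				   NULL, 0, true, &resp, sizeof(resp));
	if (ret)
		return ret;

	/* propagate the PF reported link state to the ENET client */
	hclgevf_update_link_status(hdev, resp);
	return 0;
}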

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
Patch V2: Addressed some internal comments
Patch V1: Initial Submit
---
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c  | 1494 ++++++++++++++++++++
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h  |  170 +++
 2 files changed, 1664 insertions(+)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
new file mode 100644
index 0000000..a878c63
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ -0,0 +1,1494 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/etherdevice.h>
+#include "hclgevf_cmd.h"
+#include "hclgevf_main.h"
+#include "hclge_mbx.h"
+#include "hnae3.h"
+
+#define HCLGEVF_NAME	"hclgevf"
+
+static struct hnae3_ae_algo ae_algovf;
+
+static const struct pci_device_id ae_algovf_pci_tbl[] = {
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_VF), 0},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_DCB_PFC_VF), 0},
+	/* required last entry */
+	{0, }
+};
+
+static struct hclgevf_dev *hclgevf_ae_get_hdev(struct hnae3_handle *handle)
+{
+	return container_of(handle, struct hclgevf_dev, nic);
+}
+
+static int hclgevf_tqps_update_stats(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_queue *queue;
+	struct hclgevf_desc desc;
+	struct hclgevf_tqp *tqp;
+	int status;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		queue = handle->kinfo.tqp[i];
+		tqp = container_of(queue, struct hclgevf_tqp, q);
+		hclgevf_cmd_setup_basic_desc(&desc,
+					     HCLGEVF_OPC_QUERY_RX_STATUS,
+					     true);
+
+		desc.data[0] = cpu_to_le32(tqp->index & 0x1ff);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Query tqp stat fail, status = %d,queue = %d\n",
+				status,	i);
+			return status;
+		}
+		tqp->tqp_stats.rcb_rx_ring_pktnum_rcd +=
+			le32_to_cpu(desc.data[4]);
+
+		hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_QUERY_TX_STATUS,
+					     true);
+
+		desc.data[0] = cpu_to_le32(tqp->index & 0x1ff);
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Query tqp stat fail, status = %d,queue = %d\n",
+				status, i);
+			return status;
+		}
+		tqp->tqp_stats.rcb_tx_ring_pktnum_rcd +=
+			le32_to_cpu(desc.data[4]);
+	}
+
+	return 0;
+}
+
+static u64 *hclgevf_tqps_get_stats(struct hnae3_handle *handle, u64 *data)
+{
+	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_tqp *tqp;
+	u64 *buff = data;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		tqp = container_of(handle->kinfo.tqp[i], struct hclgevf_tqp, q);
+		*buff++ = tqp->tqp_stats.rcb_tx_ring_pktnum_rcd;
+	}
+	for (i = 0; i < kinfo->num_tqps; i++) {
+		tqp = container_of(handle->kinfo.tqp[i], struct hclgevf_tqp, q);
+		*buff++ = tqp->tqp_stats.rcb_rx_ring_pktnum_rcd;
+	}
+
+	return buff;
+}
+
+static int hclgevf_tqps_get_sset_count(struct hnae3_handle *handle, int strset)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hdev->num_tqps * 2;
+}
+
+static u8 *hclgevf_tqps_get_strings(struct hnae3_handle *handle, u8 *data)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 *buff = data;
+	int i = 0;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		struct hclgevf_tqp *tqp = container_of(handle->kinfo.tqp[i],
+			struct hclgevf_tqp, q);
+		snprintf(buff, ETH_GSTRING_LEN, "rcb_q%d_tx_pktnum_rcd",
+			 tqp->index);
+		buff += ETH_GSTRING_LEN;
+	}
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		struct hclgevf_tqp *tqp = container_of(handle->kinfo.tqp[i],
+			struct hclgevf_tqp, q);
+		snprintf(buff, ETH_GSTRING_LEN, "rcb_q%d_rx_pktnum_rcd",
+			 tqp->index);
+		buff += ETH_GSTRING_LEN;
+	}
+
+	return buff;
+}
+
+static void hclgevf_update_stats(struct hnae3_handle *handle,
+				 struct net_device_stats *net_stats)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int status;
+
+	status = hclgevf_tqps_update_stats(handle);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF update of TQPS stats fail, status = %d.\n",
+			status);
+}
+
+static int hclgevf_get_sset_count(struct hnae3_handle *handle, int strset)
+{
+	if (strset == ETH_SS_TEST)
+		return -EOPNOTSUPP;
+	else if (strset == ETH_SS_STATS)
+		return hclgevf_tqps_get_sset_count(handle, strset);
+
+	return 0;
+}
+
+static void hclgevf_get_strings(struct hnae3_handle *handle, u32 strset,
+				u8 *data)
+{
+	u8 *p = data;
+
+	if (strset == ETH_SS_STATS)
+		p = hclgevf_tqps_get_strings(handle, p);
+}
+
+static void hclgevf_get_stats(struct hnae3_handle *handle, u64 *data)
+{
+	hclgevf_tqps_get_stats(handle, data);
+}
+
+static int hclgevf_get_tc_info(struct hclgevf_dev *hdev)
+{
+	u8 resp_msg;
+	int status;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_TCINFO, 0, NULL, 0,
+				      true, &resp_msg, sizeof(u8));
+	if (status) {
+		dev_err(&hdev->pdev->dev,
+			"VF request to get TC info from PF failed %d",
+			status);
+		return status;
+	}
+
+	hdev->hw_tc_map = resp_msg;
+
+	return 0;
+}
+
+static int hclge_get_queue_info(struct hclgevf_dev *hdev)
+{
+#define HCLGEVF_TQPS_RSS_INFO_LEN	8
+	u8 resp_msg[HCLGEVF_TQPS_RSS_INFO_LEN];
+	int status;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_QINFO, 0, NULL, 0,
+				      true, resp_msg,
+				      HCLGEVF_TQPS_RSS_INFO_LEN);
+	if (status) {
+		dev_err(&hdev->pdev->dev,
+			"VF request to get tqp info from PF failed %d",
+			status);
+		return status;
+	}
+
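+	/* the PF replies with an 8 byte blob laid out as four u16 values:
+	 * num_tqps@0, rss_size_max@2, num_desc@4 and rx_buf_len@6
+	 */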
+	memcpy(&hdev->num_tqps, &resp_msg[0], sizeof(u16));
+	memcpy(&hdev->rss_size_max, &resp_msg[2], sizeof(u16));
+	memcpy(&hdev->num_desc, &resp_msg[4], sizeof(u16));
+	memcpy(&hdev->rx_buf_len, &resp_msg[6], sizeof(u16));
+
+	return 0;
+}
+
+static int hclgevf_enable_tso(struct hclgevf_dev *hdev, int enable)
+{
+	struct hclgevf_cfg_tso_status_cmd *req;
+	struct hclgevf_desc desc;
+
+	req = (struct hclgevf_cfg_tso_status_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_TSO_GENERIC_CONFIG,
+				     false);
+	hnae_set_bit(req->tso_enable, HCLGEVF_TSO_ENABLE_B, enable);
+
+	return hclgevf_cmd_send(&hdev->hw, &desc, 1);
+}
+
+static int hclgevf_alloc_tqps(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_tqp *tqp;
+	int i;
+
+	hdev->htqp = devm_kcalloc(&hdev->pdev->dev, hdev->num_tqps,
+				  sizeof(struct hclgevf_tqp), GFP_KERNEL);
+	if (!hdev->htqp)
+		return -ENOMEM;
+
+	tqp = hdev->htqp;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		tqp->dev = &hdev->pdev->dev;
+		tqp->index = i;
+
+		tqp->q.ae_algo = &ae_algovf;
+		tqp->q.buf_size = hdev->rx_buf_len;
+		tqp->q.desc_num = hdev->num_desc;
+		tqp->q.io_base = hdev->hw.io_base + HCLGEVF_TQP_REG_OFFSET +
+			i * HCLGEVF_TQP_REG_SIZE;
+
+		tqp++;
+	}
+
+	return 0;
+}
+
+static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *nic = &hdev->nic;
+	struct hnae3_knic_private_info *kinfo;
+	u16 new_tqps = hdev->num_tqps;
+	int i;
+
+	kinfo = &nic->kinfo;
+	kinfo->num_tc = 0;
+	kinfo->num_desc = hdev->num_desc;
+	kinfo->rx_buf_len = hdev->rx_buf_len;
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++)
+		if (hdev->hw_tc_map & BIT(i))
+			kinfo->num_tc++;
+
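+	/* the PF is expected to report at least one TC in hw_tc_map,
+	 * keeping num_tc non-zero for the division below
+	 */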
+	kinfo->rss_size = min_t(u16, hdev->rss_size_max,
+				new_tqps / kinfo->num_tc);
+	new_tqps = kinfo->rss_size * kinfo->num_tc;
+	kinfo->num_tqps = min(new_tqps, hdev->num_tqps);
+
+	kinfo->tqp = devm_kcalloc(&hdev->pdev->dev, kinfo->num_tqps,
+				  sizeof(struct hnae3_queue *), GFP_KERNEL);
+	if (!kinfo->tqp)
+		return -ENOMEM;
+
+	for (i = 0; i < kinfo->num_tqps; i++) {
+		hdev->htqp[i].q.handle = &hdev->nic;
+		hdev->htqp[i].q.tqp_index = i;
+		kinfo->tqp[i] = &hdev->htqp[i].q;
+	}
+
+	return 0;
+}
+
+static void hclgevf_request_link_info(struct hclgevf_dev *hdev)
+{
+	int status;
+	u8 resp_msg;
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_STATUS, 0, NULL,
+				      0, false, &resp_msg, sizeof(u8));
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF failed to fetch link status(%d) from PF", status);
+}
+
+void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+{
+	struct hnae3_handle *handle = &hdev->nic;
+	struct hnae3_client *client;
+
+	client = handle->client;
+
+	if (link_state != hdev->hw.mac.link) {
+		client->ops->link_status_change(handle, !!link_state);
+		hdev->hw.mac.link = link_state;
+	}
+}
+
+static int hclgevf_set_handle_info(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *nic = &hdev->nic;
+	int ret;
+
+	nic->ae_algo = &ae_algovf;
+	nic->pdev = hdev->pdev;
+	nic->numa_node_mask = hdev->numa_node_mask;
+
+	if (hdev->ae_dev->dev_type != HNAE3_DEV_KNIC) {
+		dev_err(&hdev->pdev->dev, "unsupported device type %d\n",
+			hdev->ae_dev->dev_type);
+		return -EINVAL;
+	}
+
+	ret = hclgevf_knic_setup(hdev);
+	if (ret)
+		dev_err(&hdev->pdev->dev, "VF knic setup failed %d\n",
+			ret);
+	return ret;
+}
+
+static void hclgevf_free_vector(struct hclgevf_dev *hdev, int vector_id)
+{
+	hdev->vector_status[vector_id] = HCLGEVF_INVALID_VPORT;
+	hdev->num_msi_left += 1;
+	hdev->num_msi_used -= 1;
+}
+
+static int hclgevf_get_vector(struct hnae3_handle *handle, u16 vector_num,
+			      struct hnae3_vector_info *vector_info)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_vector_info *vector = vector_info;
+	int alloc = 0;
+	int i, j;
+
+	vector_num = min(hdev->num_msi_left, vector_num);
+
+	for (j = 0; j < vector_num; j++) {
+		for (i = HCLGEVF_MISC_VECTOR_NUM + 1; i < hdev->num_msi; i++) {
+			if (hdev->vector_status[i] == HCLGEVF_INVALID_VPORT) {
+				vector->vector = pci_irq_vector(hdev->pdev, i);
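+				/* vector 0 is the misc vector, so data
+				 * vector i uses register slot i - 1
+				 */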
+				vector->io_addr = hdev->hw.io_base +
+					HCLGEVF_VECTOR_REG_BASE +
+					(i - 1) * HCLGEVF_VECTOR_REG_OFFSET;
+				hdev->vector_status[i] = 0;
+				hdev->vector_irq[i] = vector->vector;
+
+				vector++;
+				alloc++;
+
+				break;
+			}
+		}
+	}
+	hdev->num_msi_left -= alloc;
+	hdev->num_msi_used += alloc;
+
+	return alloc;
+}
+
+static int hclgevf_get_vector_index(struct hclgevf_dev *hdev, int vector)
+{
+	int i;
+
+	for (i = 0; i < hdev->num_msi; i++)
+		if (vector == hdev->vector_irq[i])
+			return i;
+
+	return -EINVAL;
+}
+
+static u32 hclgevf_get_rss_key_size(struct hnae3_handle *handle)
+{
+	return HCLGEVF_RSS_KEY_SIZE;
+}
+
+static u32 hclgevf_get_rss_indir_size(struct hnae3_handle *handle)
+{
+	return HCLGEVF_RSS_IND_TBL_SIZE;
+}
+
+static int hclgevf_set_rss_indir_table(struct hclgevf_dev *hdev)
+{
+	const u8 *indir = hdev->rss_cfg.rss_indirection_tbl;
+	struct hclgevf_rss_indirection_table_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+	int i, j;
+
+	req = (struct hclgevf_rss_indirection_table_cmd *)desc.data;
+
+	for (i = 0; i < HCLGEVF_RSS_CFG_TBL_NUM; i++) {
+		hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_INDIR_TABLE,
+					     false);
+		req->start_table_index = i * HCLGEVF_RSS_CFG_TBL_SIZE;
+		req->rss_set_bitmap = HCLGEVF_RSS_SET_BITMAP_MSK;
+		for (j = 0; j < HCLGEVF_RSS_CFG_TBL_SIZE; j++)
+			req->rss_result[j] =
+				indir[i * HCLGEVF_RSS_CFG_TBL_SIZE + j];
+
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"VF failed(=%d) to set RSS indirection table\n",
+				status);
+			return status;
+		}
+	}
+
+	return 0;
+}
+
+static int hclgevf_set_rss_tc_mode(struct hclgevf_dev *hdev,  u16 rss_size)
+{
+	struct hclgevf_rss_tc_mode_cmd *req;
+	u16 tc_offset[HCLGEVF_MAX_TC_NUM];
+	u16 tc_valid[HCLGEVF_MAX_TC_NUM];
+	u16 tc_size[HCLGEVF_MAX_TC_NUM];
+	struct hclgevf_desc desc;
+	u16 roundup_size;
+	int status;
+	int i;
+
+	req = (struct hclgevf_rss_tc_mode_cmd *)desc.data;
+
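+	/* hardware takes the per TC size as log2 of the
+	 * rounded up RSS queue count
+	 */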
+	roundup_size = roundup_pow_of_two(rss_size);
+	roundup_size = ilog2(roundup_size);
+
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) {
+		tc_valid[i] = !!(hdev->hw_tc_map & BIT(i));
+		tc_size[i] = roundup_size;
+		tc_offset[i] = rss_size * i;
+	}
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_TC_MODE, false);
+	for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++) {
+		hnae_set_bit(req->rss_tc_mode[i], HCLGEVF_RSS_TC_VALID_B,
+			     (tc_valid[i] & 0x1));
+		hnae_set_field(req->rss_tc_mode[i], HCLGEVF_RSS_TC_SIZE_M,
+			       HCLGEVF_RSS_TC_SIZE_S, tc_size[i]);
+		hnae_set_field(req->rss_tc_mode[i], HCLGEVF_RSS_TC_OFFSET_M,
+			       HCLGEVF_RSS_TC_OFFSET_S, tc_offset[i]);
+	}
+	status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"VF failed(=%d) to set rss tc mode\n", status);
+
+	return status;
+}
+
+static int hclgevf_get_rss_hw_cfg(struct hnae3_handle *handle, u8 *hash,
+				  u8 *key)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_config_cmd *req;
+	int lkup_times = key ? 3 : 1;
+	struct hclgevf_desc desc;
+	int key_offset;
+	int key_size;
+	int status;
+
+	req = (struct hclgevf_rss_config_cmd *)desc.data;
+	lkup_times = (lkup_times == 3) ? 3 : ((hash) ? 1 : 0);
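+	/* reading the 40 byte key spans all three descriptors; a hash-only
+	 * query reads just the first one, and neither means no reads
+	 */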
+
+	for (key_offset = 0; key_offset < lkup_times; key_offset++) {
+		hclgevf_cmd_setup_basic_desc(&desc,
+					     HCLGEVF_OPC_RSS_GENERIC_CONFIG,
+					     true);
+		req->hash_config |= (key_offset << HCLGEVF_RSS_HASH_KEY_OFFSET);
+
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"failed to get hardware RSS cfg, status = %d\n",
+				status);
+			return status;
+		}
+
+		if (key_offset == 2)
+			key_size = HCLGEVF_RSS_KEY_SIZE -
+				   HCLGEVF_RSS_HASH_KEY_NUM * 2;
+		else
+			key_size = HCLGEVF_RSS_HASH_KEY_NUM;
+
+		if (key)
+			memcpy(key + key_offset * HCLGEVF_RSS_HASH_KEY_NUM,
+			       req->hash_key,
+			       key_size);
+	}
+
+	if (hash) {
+		if ((req->hash_config & 0xf) == HCLGEVF_RSS_HASH_ALGO_TOEPLITZ)
+			*hash = ETH_RSS_HASH_TOP;
+		else
+			*hash = ETH_RSS_HASH_UNKNOWN;
+	}
+
+	return 0;
+}
+
+static int hclgevf_get_rss(struct hnae3_handle *handle, u32 *indir, u8 *key,
+			   u8 *hfunc)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+	int i;
+
+	if (indir)
+		for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
+			indir[i] = rss_cfg->rss_indirection_tbl[i];
+
+	return hclgevf_get_rss_hw_cfg(handle, hfunc, key);
+}
+
+static int hclgevf_set_rss(struct hnae3_handle *handle, const u32 *indir,
+			   const  u8 *key, const  u8 hfunc)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+	int i;
+
+	/* update the shadow RSS table with user specified qids */
+	for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
+		rss_cfg->rss_indirection_tbl[i] = indir[i];
+
+	/* update the hardware */
+	return hclgevf_set_rss_indir_table(hdev);
+}
+
+static int hclgevf_get_tc_size(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+
+	return rss_cfg->rss_size;
+}
+
+static int hclgevf_bind_ring_to_vector(struct hnae3_handle *handle, bool en,
+				       int vector,
+				       struct hnae3_ring_chain_node *ring_chain)
+{
+#define HCLGEVF_RING_NODE_VARIABLE_NUM		3
+#define HCLGEVF_RING_MAP_MBX_BASIC_MSG_NUM	3
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_ring_chain_node *node;
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclgevf_desc desc;
+	int i, vector_id;
+	int status;
+	u8 type;
+
+	req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
+	vector_id = hclgevf_get_vector_index(hdev, vector);
+	if (vector_id < 0) {
+		dev_err(&handle->pdev->dev,
+			"Get vector index fail. ret =%d\n", vector_id);
+		return vector_id;
+	}
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
+	type = en ?
+		HCLGE_MBX_MAP_RING_TO_VECTOR : HCLGE_MBX_UNMAP_RING_TO_VECTOR;
+	req->msg[0] = type;
+	req->msg[1] = vector_id; /* vector_id should be id in VF */
+
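+	/* msg[0..2] carry the type, vector id and ring count; each ring then
+	 * takes a HCLGEVF_RING_NODE_VARIABLE_NUM wide slot and the descriptor
+	 * is flushed whenever the mailbox payload fills up
+	 */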
+	i = 0;
+	for (node = ring_chain; node; node = node->next) {
+		i++;
+		/* msg[2] is cause num */
+		req->msg[HCLGEVF_RING_NODE_VARIABLE_NUM * i] =
+				hnae_get_bit(node->flag, HNAE3_RING_TYPE_B);
+		req->msg[HCLGEVF_RING_NODE_VARIABLE_NUM * i + 1] =
+				node->tqp_index;
+		if (i == (HCLGE_MBX_VF_MSG_DATA_NUM -
+		    HCLGEVF_RING_MAP_MBX_BASIC_MSG_NUM) /
+		    HCLGEVF_RING_NODE_VARIABLE_NUM) {
+			req->msg[2] = i;
+
+			status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+			if (status) {
+				dev_err(&hdev->pdev->dev,
+					"Map TQP fail, status is %d.\n",
+					status);
+				return status;
+			}
+			i = 0;
+			hclgevf_cmd_setup_basic_desc(&desc,
+						     HCLGEVF_OPC_MBX_VF_TO_PF,
+						     false);
+			req->msg[0] = type;
+			req->msg[1] = vector_id;
+		}
+	}
+
+	if (i > 0) {
+		req->msg[2] = i;
+
+		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
+			dev_err(&hdev->pdev->dev,
+				"Map TQP fail, status is %d.\n", status);
+			return status;
+		}
+	}
+
+	return 0;
+}
+
+static int hclgevf_map_ring_to_vector(struct hnae3_handle *handle, int vector,
+				      struct hnae3_ring_chain_node *ring_chain)
+{
+	return hclgevf_bind_ring_to_vector(handle, true, vector, ring_chain);
+}
+
+static int hclgevf_unmap_ring_from_vector(struct hnae3_handle *handle,
+					  int vector,
+					  struct hnae3_ring_chain_node *ring_chain)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int ret, vector_id;
+
+	vector_id = hclgevf_get_vector_index(hdev, vector);
+	if (vector_id < 0) {
+		dev_err(&handle->pdev->dev,
+			"Get vector index fail. ret =%d\n", vector_id);
+		return vector_id;
+	}
+
+	ret = hclgevf_bind_ring_to_vector(handle, false, vector, ring_chain);
+	if (ret) {
+		dev_err(&handle->pdev->dev,
+			"Unmap ring from vector fail. vector=%d, ret =%d\n",
+			vector_id,
+			ret);
+		return ret;
+	}
+
+	hclgevf_free_vector(hdev, vector);
+
+	return 0;
+}
+
+static int hclgevf_cmd_set_promisc_mode(struct hclgevf_dev *hdev, u32 en)
+{
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+
+	req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_VF_TO_PF, false);
+	req->msg[0] = HCLGE_MBX_SET_PROMISC_MODE;
+	req->msg[1] = en;
+
+	status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"Set promisc mode fail, status is %d.\n", status);
+
+	return status;
+}
+
+static void hclgevf_set_promisc_mode(struct hnae3_handle *handle, u32 en)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	hclgevf_cmd_set_promisc_mode(hdev, en);
+}
+
+static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, int tqp_id,
+			      int stream_id, bool enable)
+{
+	struct hclgevf_cfg_com_tqp_queue_cmd *req;
+	struct hclgevf_desc desc;
+	int status;
+
+	req = (struct hclgevf_cfg_com_tqp_queue_cmd *)desc.data;
+
+	hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_CFG_COM_TQP_QUEUE,
+				     false);
+	req->tqp_id = cpu_to_le16(tqp_id & HCLGEVF_RING_ID_MASK);
+	req->stream_id = cpu_to_le16(stream_id);
+	req->enable |= enable << HCLGEVF_TQP_ENABLE_B;
+
+	status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"TQP enable fail, status =%d.\n", status);
+
+	return status;
+}
+
+static int hclgevf_get_queue_id(struct hnae3_queue *queue)
+{
+	struct hclgevf_tqp *tqp = container_of(queue, struct hclgevf_tqp, q);
+
+	return tqp->index;
+}
+
+static void hclgevf_reset_tqp_stats(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_queue *queue;
+	struct hclgevf_tqp *tqp;
+	int i;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		queue = handle->kinfo.tqp[i];
+		tqp = container_of(queue, struct hclgevf_tqp, q);
+		memset(&tqp->tqp_stats, 0, sizeof(tqp->tqp_stats));
+	}
+}
+
+static int hclgevf_cfg_func_mta_filter(struct hnae3_handle *handle, bool en)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 msg[2] = {0};
+
+	msg[0] = en;
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST,
+				    HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE,
+				    msg, 1, false, NULL, 0);
+}
+
+static void hclgevf_get_mac_addr(struct hnae3_handle *handle, u8 *p)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	ether_addr_copy(p, hdev->hw.mac.mac_addr);
+}
+
+static int hclgevf_set_mac_addr(struct hnae3_handle *handle, void *p)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 *old_mac_addr = (u8 *)hdev->hw.mac.mac_addr;
+	u8 *new_mac_addr = (u8 *)p;
+	u8 msg_data[ETH_ALEN * 2];
+	int status;
+
+	ether_addr_copy(msg_data, new_mac_addr);
+	ether_addr_copy(&msg_data[ETH_ALEN], old_mac_addr);
+
+	status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
+				      HCLGE_MBX_MAC_VLAN_UC_MODIFY,
+				      msg_data, ETH_ALEN * 2,
+				      false, NULL, 0);
+	if (!status)
+		ether_addr_copy(hdev->hw.mac.mac_addr, new_mac_addr);
+
+	return status;
+}
+
+static int hclgevf_add_uc_addr(struct hnae3_handle *handle,
+			       const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
+				    HCLGE_MBX_MAC_VLAN_UC_ADD,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_rm_uc_addr(struct hnae3_handle *handle,
+			      const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
+				    HCLGE_MBX_MAC_VLAN_UC_REMOVE,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_add_mc_addr(struct hnae3_handle *handle,
+			       const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST,
+				    HCLGE_MBX_MAC_VLAN_MC_ADD,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_rm_mc_addr(struct hnae3_handle *handle,
+			      const unsigned char *addr)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_MULTICAST,
+				    HCLGE_MBX_MAC_VLAN_MC_REMOVE,
+				    addr, ETH_ALEN, false, NULL, 0);
+}
+
+static int hclgevf_set_vlan_filter(struct hnae3_handle *handle,
+				   __be16 proto, u16 vlan_id,
+				   bool is_kill)
+{
+#define HCLGEVF_VLAN_MBX_MSG_LEN 5
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 msg_data[HCLGEVF_VLAN_MBX_MSG_LEN];
+
+	if (vlan_id > 4095)
+		return -EINVAL;
+
+	if (proto != htons(ETH_P_8021Q))
+		return -EPROTONOSUPPORT;
+
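+	/* mailbox payload layout: is_kill (1 byte), vlan_id (2 bytes),
+	 * proto (2 bytes)
+	 */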
+	msg_data[0] = is_kill;
+	memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id));
+	memcpy(&msg_data[3], &proto, sizeof(proto));
+	return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_VLAN,
+				    HCLGE_MBX_VLAN_FILTER, msg_data,
+				    HCLGEVF_VLAN_MBX_MSG_LEN, false, NULL, 0);
+}
+
+static void hclgevf_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	u8 msg_data[2];
+
+	memcpy(&msg_data[0], &queue_id, sizeof(queue_id));
+
+	hclgevf_send_mbx_msg(hdev, HCLGE_MBX_QUEUE_RESET, 0, msg_data, 2, false,
+			     NULL, 0);
+}
+
+static u32 hclgevf_get_fw_version(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+
+	return hdev->fw_version;
+}
+
+static void hclgevf_get_misc_vector(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_misc_vector *vector = &hdev->misc_vector;
+
+	vector->vector_irq = pci_irq_vector(hdev->pdev,
+					    HCLGEVF_MISC_VECTOR_NUM);
+	vector->addr = hdev->hw.io_base + HCLGEVF_MISC_VECTOR_REG_BASE;
+	/* vector status always valid for Vector 0 */
+	hdev->vector_status[HCLGEVF_MISC_VECTOR_NUM] = 0;
+	hdev->vector_irq[HCLGEVF_MISC_VECTOR_NUM] = vector->vector_irq;
+
+	hdev->num_msi_left -= 1;
+	hdev->num_msi_used += 1;
+}
+
+static void hclgevf_mbx_task_schedule(struct hclgevf_dev *hdev)
+{
+	if (!test_and_set_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state))
+		schedule_work(&hdev->mbx_service_task);
+}
+
+static void hclgevf_task_schedule(struct hclgevf_dev *hdev)
+{
+	if (!test_bit(HCLGEVF_STATE_DOWN, &hdev->state)  &&
+	    !test_and_set_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state))
+		schedule_work(&hdev->service_task);
+}
+
+static void hclgevf_service_timer(struct timer_list *t)
+{
+	struct hclgevf_dev *hdev = from_timer(hdev, t, service_timer);
+
+	mod_timer(&hdev->service_timer, jiffies + 5 * HZ);
+
+	hclgevf_task_schedule(hdev);
+}
+
+static void hclgevf_mailbox_service_task(struct work_struct *work)
+{
+	struct hclgevf_dev *hdev;
+
+	hdev = container_of(work, struct hclgevf_dev, mbx_service_task);
+
+	if (test_and_set_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state))
+		return;
+
+	clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state);
+
+	hclgevf_mbx_handler(hdev);
+
+	clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state);
+}
+
+static void hclgevf_service_task(struct work_struct *work)
+{
+	struct hclgevf_dev *hdev;
+
+	hdev = container_of(work, struct hclgevf_dev, service_task);
+
+	/* request the link status from the PF. PF would be able to tell VF
+	 * about such updates in future so we might remove this later
+	 */
+	hclgevf_request_link_info(hdev);
+
+	clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
+}
+
+static void hclgevf_clear_event_cause(struct hclgevf_dev *hdev, u32 regclr)
+{
+	hclgevf_write_dev(&hdev->hw, HCLGEVF_VECTOR0_CMDQ_SRC_REG, regclr);
+}
+
+static bool hclgevf_check_event_cause(struct hclgevf_dev *hdev, u32 *clearval)
+{
+	u32 cmdq_src_reg;
+
+	/* fetch the events from their corresponding regs */
+	cmdq_src_reg = hclgevf_read_dev(&hdev->hw,
+					HCLGEVF_VECTOR0_CMDQ_SRC_REG);
+
+	/* check for vector0 mailbox(=CMDQ RX) event source */
+	if (BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) {
+		cmdq_src_reg &= ~BIT(HCLGEVF_VECTOR0_RX_CMDQ_INT_B);
+		*clearval = cmdq_src_reg;
+		return true;
+	}
+
+	dev_dbg(&hdev->pdev->dev, "vector 0 interrupt from unknown source\n");
+
+	return false;
+}
+
+static void hclgevf_enable_vector(struct hclgevf_misc_vector *vector, bool en)
+{
+	writel(en ? 1 : 0, vector->addr);
+}
+
+static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+{
+	struct hclgevf_dev *hdev = data;
+	u32 clearval;
+
+	hclgevf_enable_vector(&hdev->misc_vector, false);
+	if (!hclgevf_check_event_cause(hdev, &clearval))
+		goto skip_sched;
+
+	/* schedule the VF mailbox service task, if not already scheduled */
+	hclgevf_mbx_task_schedule(hdev);
+
+	hclgevf_clear_event_cause(hdev, clearval);
+
+skip_sched:
+	hclgevf_enable_vector(&hdev->misc_vector, true);
+
+	return IRQ_HANDLED;
+}
+
+static int hclgevf_configure(struct hclgevf_dev *hdev)
+{
+	int ret;
+
+	/* get queue configuration from PF */
+	ret = hclge_get_queue_info(hdev);
+	if (ret)
+		return ret;
+	/* get tc configuration from PF */
+	return hclgevf_get_tc_info(hdev);
+}
+
+static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev)
+{
+	struct hnae3_handle *roce = &hdev->roce;
+	struct hnae3_handle *nic = &hdev->nic;
+
+	roce->rinfo.num_vectors = HCLGEVF_ROCEE_VECTOR_NUM;
+
+	if (hdev->num_msi_left < roce->rinfo.num_vectors ||
+	    hdev->num_msi_left == 0)
+		return -EINVAL;
+
+	roce->rinfo.base_vector =
+		hdev->vector_status[hdev->num_msi_used];
+
+	roce->rinfo.netdev = nic->kinfo.netdev;
+	roce->rinfo.roce_io_base = hdev->hw.io_base;
+
+	roce->pdev = nic->pdev;
+	roce->ae_algo = nic->ae_algo;
+	roce->numa_node_mask = nic->numa_node_mask;
+
+	return 0;
+}
+
+static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
+{
+	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+	int i, ret;
+
+	rss_cfg->rss_size = hdev->rss_size_max;
+
+	/* Initialize RSS indirect table for each vport */
+	for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
+		rss_cfg->rss_indirection_tbl[i] = i % hdev->rss_size_max;
+
+	ret = hclgevf_set_rss_indir_table(hdev);
+	if (ret)
+		return ret;
+
+	return hclgevf_set_rss_tc_mode(hdev, hdev->rss_size_max);
+}
+
+static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
+{
+	/* other vlan config(like, VLAN TX/RX offload) would also be added
+	 * here later
+	 */
+	return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0,
+				       false);
+}
+
+static int hclgevf_ae_start(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int i, queue_id;
+
+	for (i = 0; i < handle->kinfo.num_tqps; i++) {
+		/* ring enable */
+		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+		if (queue_id < 0) {
+			dev_warn(&hdev->pdev->dev,
+				 "Get invalid queue id, ignore it\n");
+			continue;
+		}
+
+		hclgevf_tqp_enable(hdev, queue_id, 0, true);
+	}
+
+	/* reset tqp stats */
+	hclgevf_reset_tqp_stats(handle);
+
+	hclgevf_request_link_info(hdev);
+
+	clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+	mod_timer(&hdev->service_timer, jiffies + HZ);
+
+	return 0;
+}
+
+static void hclgevf_ae_stop(struct hnae3_handle *handle)
+{
+	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	int i, queue_id;
+
+	for (i = 0; i < hdev->num_tqps; i++) {
+		/* Ring disable */
+		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+		if (queue_id < 0) {
+			dev_warn(&hdev->pdev->dev,
+				 "Get invalid queue id, ignore it\n");
+			continue;
+		}
+
+		hclgevf_tqp_enable(hdev, queue_id, 0, false);
+	}
+
+	/* reset tqp stats */
+	hclgevf_reset_tqp_stats(handle);
+}
+
+static void hclgevf_state_init(struct hclgevf_dev *hdev)
+{
+	/* setup tasks for the MBX */
+	INIT_WORK(&hdev->mbx_service_task, hclgevf_mailbox_service_task);
+	clear_bit(HCLGEVF_STATE_MBX_SERVICE_SCHED, &hdev->state);
+	clear_bit(HCLGEVF_STATE_MBX_HANDLING, &hdev->state);
+
+	/* setup tasks for service timer */
+	timer_setup(&hdev->service_timer, hclgevf_service_timer, 0);
+
+	INIT_WORK(&hdev->service_task, hclgevf_service_task);
+	clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
+
+	mutex_init(&hdev->mbx_resp.mbx_mutex);
+
+	/* bring the device down */
+	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+}
+
+static void hclgevf_state_uninit(struct hclgevf_dev *hdev)
+{
+	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+
+	if (hdev->service_timer.function)
+		del_timer_sync(&hdev->service_timer);
+	if (hdev->service_task.func)
+		cancel_work_sync(&hdev->service_task);
+	if (hdev->mbx_service_task.func)
+		cancel_work_sync(&hdev->mbx_service_task);
+
+	mutex_destroy(&hdev->mbx_resp.mbx_mutex);
+}
+
+static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+	int vectors;
+	int i;
+
+	hdev->num_msi = HCLGEVF_MAX_VF_VECTOR_NUM;
+
+	vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi,
+					PCI_IRQ_MSI | PCI_IRQ_MSIX);
+	if (vectors < 0) {
+		dev_err(&pdev->dev,
+			"failed(%d) to allocate MSI/MSI-X vectors\n",
+			vectors);
+		return vectors;
+	}
+	if (vectors < hdev->num_msi)
+		dev_warn(&hdev->pdev->dev,
+			 "requested %d MSI/MSI-X, but allocated %d MSI/MSI-X\n",
+			 hdev->num_msi, vectors);
+
+	hdev->num_msi = vectors;
+	hdev->num_msi_left = vectors;
+	hdev->base_msi_vector = pdev->irq;
+
+	hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi,
+					   sizeof(u16), GFP_KERNEL);
+	if (!hdev->vector_status) {
+		pci_free_irq_vectors(pdev);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < hdev->num_msi; i++)
+		hdev->vector_status[i] = HCLGEVF_INVALID_VPORT;
+
+	hdev->vector_irq = devm_kcalloc(&pdev->dev, hdev->num_msi,
+					sizeof(int), GFP_KERNEL);
+	if (!hdev->vector_irq) {
+		pci_free_irq_vectors(pdev);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void hclgevf_uninit_msi(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+
+	pci_free_irq_vectors(pdev);
+}
+
+static int hclgevf_misc_irq_init(struct hclgevf_dev *hdev)
+{
+	int ret = 0;
+
+	hclgevf_get_misc_vector(hdev);
+
+	ret = request_irq(hdev->misc_vector.vector_irq, hclgevf_misc_irq_handle,
+			  0, "hclgevf_cmd", hdev);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "VF failed to request misc irq(%d)\n",
+			hdev->misc_vector.vector_irq);
+		return ret;
+	}
+
+	/* enable misc. vector(vector 0) */
+	hclgevf_enable_vector(&hdev->misc_vector, true);
+
+	return ret;
+}
+
+static void hclgevf_misc_irq_uninit(struct hclgevf_dev *hdev)
+{
+	/* disable misc vector(vector 0) */
+	hclgevf_enable_vector(&hdev->misc_vector, false);
+	free_irq(hdev->misc_vector.vector_irq, hdev);
+	hclgevf_free_vector(hdev, 0);
+}
+
+static int hclgevf_init_instance(struct hclgevf_dev *hdev,
+				 struct hnae3_client *client)
+{
+	int ret;
+
+	switch (client->type) {
+	case HNAE3_CLIENT_KNIC:
+		hdev->nic_client = client;
+		hdev->nic.client = client;
+
+		ret = client->ops->init_instance(&hdev->nic);
+		if (ret)
+			return ret;
+
+		if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
+			struct hnae3_client *rc = hdev->roce_client;
+
+			ret = hclgevf_init_roce_base_info(hdev);
+			if (ret)
+				return ret;
+			ret = rc->ops->init_instance(&hdev->roce);
+			if (ret)
+				return ret;
+		}
+		break;
+	case HNAE3_CLIENT_UNIC:
+		hdev->nic_client = client;
+		hdev->nic.client = client;
+
+		ret = client->ops->init_instance(&hdev->nic);
+		if (ret)
+			return ret;
+		break;
+	case HNAE3_CLIENT_ROCE:
+		hdev->roce_client = client;
+		hdev->roce.client = client;
+
+		if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
+			ret = hclgevf_init_roce_base_info(hdev);
+			if (ret)
+				return ret;
+
+			ret = client->ops->init_instance(&hdev->roce);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void hclgevf_uninit_instance(struct hclgevf_dev *hdev,
+				    struct hnae3_client *client)
+{
+	/* un-init roce, if it exists */
+	if (hdev->roce_client)
+		hdev->roce_client->ops->uninit_instance(&hdev->roce, 0);
+
+	/* un-init nic/unic, if this was not called by roce client */
+	if ((client->ops->uninit_instance) &&
+	    (client->type != HNAE3_CLIENT_ROCE))
+		client->ops->uninit_instance(&hdev->nic, 0);
+}
+
+static int hclgevf_register_client(struct hnae3_client *client,
+				   struct hnae3_ae_dev *ae_dev)
+{
+	struct hclgevf_dev *hdev = ae_dev->priv;
+
+	return hclgevf_init_instance(hdev, client);
+}
+
+static void hclgevf_unregister_client(struct hnae3_client *client,
+				      struct hnae3_ae_dev *ae_dev)
+{
+	struct hclgevf_dev *hdev = ae_dev->priv;
+
+	hclgevf_uninit_instance(hdev, client);
+}
+
+static int hclgevf_pci_init(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+	struct hclgevf_hw *hw;
+	int ret;
+
+	ret = pci_enable_device(pdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to enable PCI device\n");
+		goto err_no_drvdata;
+	}
+
+	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(&pdev->dev, "can't set consistent PCI DMA, exiting");
+		goto err_disable_device;
+	}
+
+	ret = pci_request_regions(pdev, HCLGEVF_DRIVER_NAME);
+	if (ret) {
+		dev_err(&pdev->dev, "PCI request regions failed %d\n", ret);
+		goto err_disable_device;
+	}
+
+	pci_set_master(pdev);
+	hw = &hdev->hw;
+	hw->hdev = hdev;
+	hw->io_base = pcim_iomap(pdev, 2, 0);
+	if (!hw->io_base) {
+		dev_err(&pdev->dev, "can't map configuration register space\n");
+		ret = -ENOMEM;
+		goto err_clr_master;
+	}
+
+	return 0;
+
+err_clr_master:
+	pci_clear_master(pdev);
+	pci_release_regions(pdev);
+err_disable_device:
+	pci_disable_device(pdev);
+err_no_drvdata:
+	pci_set_drvdata(pdev, NULL);
+	return ret;
+}
+
+static void hclgevf_pci_uninit(struct hclgevf_dev *hdev)
+{
+	struct pci_dev *pdev = hdev->pdev;
+
+	pci_clear_master(pdev);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+}
+
+static int hclgevf_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+{
+	struct pci_dev *pdev = ae_dev->pdev;
+	struct hclgevf_dev *hdev;
+	int ret;
+
+	hdev = devm_kzalloc(&pdev->dev, sizeof(*hdev), GFP_KERNEL);
+	if (!hdev)
+		return -ENOMEM;
+
+	hdev->pdev = pdev;
+	hdev->ae_dev = ae_dev;
+	ae_dev->priv = hdev;
+
+	ret = hclgevf_pci_init(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "PCI initialization failed\n");
+		return ret;
+	}
+
+	ret = hclgevf_init_msi(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to init MSI/MSI-X\n", ret);
+		goto err_irq_init;
+	}
+
+	hclgevf_state_init(hdev);
+
+	ret = hclgevf_misc_irq_init(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to init Misc IRQ(vector0)\n",
+			ret);
+		goto err_misc_irq_init;
+	}
+
+	ret = hclgevf_cmd_init(hdev);
+	if (ret)
+		goto err_cmd_init;
+
+	ret = hclgevf_configure(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to fetch configuration\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_alloc_tqps(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to allocate TQPs\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_set_handle_info(hdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to set handle info\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_enable_tso(hdev, true);
+	if (ret) {
+		dev_err(&pdev->dev, "failed(%d) to enable tso\n", ret);
+		goto err_config;
+	}
+
+	/* Initialize VF's MTA */
+	hdev->accept_mta_mc = true;
+	ret = hclgevf_cfg_func_mta_filter(&hdev->nic, hdev->accept_mta_mc);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to set mta filter mode\n", ret);
+		goto err_config;
+	}
+
+	/* Initialize RSS for this VF */
+	ret = hclgevf_rss_init_hw(hdev);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize RSS\n", ret);
+		goto err_config;
+	}
+
+	ret = hclgevf_init_vlan_config(hdev);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed(%d) to initialize VLAN config\n", ret);
+		goto err_config;
+	}
+
+	pr_info("finished initializing %s driver\n", HCLGEVF_DRIVER_NAME);
+
+	return 0;
+
+err_config:
+	hclgevf_cmd_uninit(hdev);
+err_cmd_init:
+	hclgevf_misc_irq_uninit(hdev);
+err_misc_irq_init:
+	hclgevf_state_uninit(hdev);
+	hclgevf_uninit_msi(hdev);
+err_irq_init:
+	hclgevf_pci_uninit(hdev);
+	return ret;
+}
+
+static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+{
+	struct hclgevf_dev *hdev = ae_dev->priv;
+
+	hclgevf_cmd_uninit(hdev);
+	hclgevf_misc_irq_uninit(hdev);
+	hclgevf_state_uninit(hdev);
+	hclgevf_uninit_msi(hdev);
+	hclgevf_pci_uninit(hdev);
+	ae_dev->priv = NULL;
+}
+
+static const struct hnae3_ae_ops hclgevf_ops = {
+	.init_ae_dev = hclgevf_init_ae_dev,
+	.uninit_ae_dev = hclgevf_uninit_ae_dev,
+	.init_client_instance = hclgevf_register_client,
+	.uninit_client_instance = hclgevf_unregister_client,
+	.start = hclgevf_ae_start,
+	.stop = hclgevf_ae_stop,
+	.map_ring_to_vector = hclgevf_map_ring_to_vector,
+	.unmap_ring_from_vector = hclgevf_unmap_ring_from_vector,
+	.get_vector = hclgevf_get_vector,
+	.reset_queue = hclgevf_reset_tqp,
+	.set_promisc_mode = hclgevf_set_promisc_mode,
+	.get_mac_addr = hclgevf_get_mac_addr,
+	.set_mac_addr = hclgevf_set_mac_addr,
+	.add_uc_addr = hclgevf_add_uc_addr,
+	.rm_uc_addr = hclgevf_rm_uc_addr,
+	.add_mc_addr = hclgevf_add_mc_addr,
+	.rm_mc_addr = hclgevf_rm_mc_addr,
+	.get_stats = hclgevf_get_stats,
+	.update_stats = hclgevf_update_stats,
+	.get_strings = hclgevf_get_strings,
+	.get_sset_count = hclgevf_get_sset_count,
+	.get_rss_key_size = hclgevf_get_rss_key_size,
+	.get_rss_indir_size = hclgevf_get_rss_indir_size,
+	.get_rss = hclgevf_get_rss,
+	.set_rss = hclgevf_set_rss,
+	.get_tc_size = hclgevf_get_tc_size,
+	.get_fw_version = hclgevf_get_fw_version,
+	.set_vlan_filter = hclgevf_set_vlan_filter,
+};
+
+static struct hnae3_ae_algo ae_algovf = {
+	.ops = &hclgevf_ops,
+	.name = HCLGEVF_NAME,
+	.pdev_id_table = ae_algovf_pci_tbl,
+};
+
+static int hclgevf_init(void)
+{
+	pr_info("%s is initializing\n", HCLGEVF_NAME);
+
+	return hnae3_register_ae_algo(&ae_algovf);
+}
+
+static void hclgevf_exit(void)
+{
+	hnae3_unregister_ae_algo(&ae_algovf);
+}
+module_init(hclgevf_init);
+module_exit(hclgevf_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+MODULE_DESCRIPTION("HCLGEVF Driver");
+MODULE_VERSION(HCLGEVF_MOD_VERSION);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
new file mode 100644
index 0000000..4461433
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
@@ -0,0 +1,170 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __HCLGEVF_MAIN_H
+#define __HCLGEVF_MAIN_H
+#include <linux/fs.h>
+#include <linux/types.h>
+#include "hclge_mbx.h"
+#include "hclgevf_cmd.h"
+#include "hnae3.h"
+
+#define HCLGEVF_MOD_VERSION "v1.0"
+#define HCLGEVF_DRIVER_NAME "hclgevf"
+
+#define HCLGEVF_ROCEE_VECTOR_NUM	0
+#define HCLGEVF_MISC_VECTOR_NUM		0
+
+#define HCLGEVF_INVALID_VPORT		0xffff
+
+/* The actual number of available vectors depends upon the total number
+ * of VFs created by the physical function. But the maximum number of
+ * possible vectors per VF is {VFn(1-32), VECTn(32 + 1)}.
+ */
+#define HCLGEVF_MAX_VF_VECTOR_NUM	(32 + 1)
+
+#define HCLGEVF_VECTOR_REG_BASE		0x20000
+#define HCLGEVF_MISC_VECTOR_REG_BASE	0x20400
+#define HCLGEVF_VECTOR_REG_OFFSET	0x4
+#define HCLGEVF_VECTOR_VF_OFFSET		0x100000
+
+/* Vector0 interrupt CMDQ event source register (RW) */
+#define HCLGEVF_VECTOR0_CMDQ_SRC_REG	0x27100
+/* CMDQ register bits for RX event (= MBX event) */
+#define HCLGEVF_VECTOR0_RX_CMDQ_INT_B	1
+
+#define HCLGEVF_TQP_RESET_TRY_TIMES	10
+
+#define HCLGEVF_RSS_IND_TBL_SIZE		512
+#define HCLGEVF_RSS_SET_BITMAP_MSK	0xffff
+#define HCLGEVF_RSS_KEY_SIZE		40
+#define HCLGEVF_RSS_HASH_ALGO_TOEPLITZ	0
+#define HCLGEVF_RSS_HASH_ALGO_SIMPLE	1
+#define HCLGEVF_RSS_HASH_ALGO_SYMMETRIC	2
+#define HCLGEVF_RSS_HASH_ALGO_MASK	0xf
+#define HCLGEVF_RSS_CFG_TBL_NUM \
+	(HCLGEVF_RSS_IND_TBL_SIZE / HCLGEVF_RSS_CFG_TBL_SIZE)
+
+/* states of hclgevf device & tasks */
+enum hclgevf_states {
+	/* device states */
+	HCLGEVF_STATE_DOWN,
+	HCLGEVF_STATE_DISABLED,
+	/* task states */
+	HCLGEVF_STATE_SERVICE_SCHED,
+	HCLGEVF_STATE_MBX_SERVICE_SCHED,
+	HCLGEVF_STATE_MBX_HANDLING,
+};
+
+#define HCLGEVF_MPF_ENABLE 1
+
+struct hclgevf_mac {
+	u8 mac_addr[ETH_ALEN];
+	int link;
+};
+
+struct hclgevf_hw {
+	void __iomem *io_base;
+	int num_vec;
+	struct hclgevf_cmq cmq;
+	struct hclgevf_mac mac;
+	void *hdev; /* hclgevf device it is part of */
+};
+
+/* TQP stats */
+struct hclgevf_tqp_stats {
+	/* query_tqp_tx_queue_statistics, opcode id: 0x0B03 */
+	u64 rcb_tx_ring_pktnum_rcd; /* 32bit */
+	/* query_tqp_rx_queue_statistics, opcode id: 0x0B13 */
+	u64 rcb_rx_ring_pktnum_rcd; /* 32bit */
+};
+
+struct hclgevf_tqp {
+	struct device *dev;	/* device for DMA mapping */
+	struct hnae3_queue q;
+	struct hclgevf_tqp_stats tqp_stats;
+	u16 index;		/* global index in a NIC controller */
+
+	bool alloced;
+};
+
+struct hclgevf_cfg {
+	u8 vmdq_vport_num;
+	u8 tc_num;
+	u16 tqp_desc_num;
+	u16 rx_buf_len;
+	u8 phy_addr;
+	u8 media_type;
+	u8 mac_addr[ETH_ALEN];
+	u32 numa_node_map;
+};
+
+struct hclgevf_rss_cfg {
+	u8  rss_hash_key[HCLGEVF_RSS_KEY_SIZE]; /* user configured hash keys */
+	u32 hash_algo;
+	u32 rss_size;
+	u8 hw_tc_map;
+	u8  rss_indirection_tbl[HCLGEVF_RSS_IND_TBL_SIZE]; /* shadow table */
+};
+
+struct hclgevf_misc_vector {
+	u8 __iomem *addr;
+	int vector_irq;
+};
+
+struct hclgevf_dev {
+	struct pci_dev *pdev;
+	struct hnae3_ae_dev *ae_dev;
+	struct hclgevf_hw hw;
+	struct hclgevf_misc_vector misc_vector;
+	struct hclgevf_rss_cfg rss_cfg;
+	unsigned long state;
+
+	u32 fw_version;
+	u16 num_tqps;		/* num task queue pairs of this VF */
+
+	u16 alloc_rss_size;	/* allocated RSS task queues */
+	u16 rss_size_max;	/* HW defined max RSS task queues */
+
+	u16 num_alloc_vport;	/* num vports this driver supports */
+	u32 numa_node_mask;
+	u16 rx_buf_len;
+	u16 num_desc;
+	u8 hw_tc_map;
+
+	u16 num_msi;
+	u16 num_msi_left;
+	u16 num_msi_used;
+	u32 base_msi_vector;
+	u16 *vector_status;
+	int *vector_irq;
+
+	bool accept_mta_mc; /* whether to accept multicast via the MTA filter */
+	struct hclgevf_mbx_resp_status mbx_resp; /* mailbox response */
+
+	struct timer_list service_timer;
+	struct work_struct service_task;
+	struct work_struct mbx_service_task;
+
+	struct hclgevf_tqp *htqp;
+
+	struct hnae3_handle nic;
+	struct hnae3_handle roce;
+
+	struct hnae3_client *nic_client;
+	struct hnae3_client *roce_client;
+	u32 flag;
+};
+
+int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
+			 const u8 *msg_data, u8 msg_len, bool need_resp,
+			 u8 *resp_data, u16 resp_len);
+void hclgevf_mbx_handler(struct hclgevf_dev *hdev);
+void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state);
+#endif
-- 
2.7.4
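
For reference, hclgevf_send_mbx_msg() declared in hclgevf_main.h above is
the VF's only control-plane path to the PF. A minimal sketch of a
synchronous query -- the opcode comes from hclge_mbx.h in this series, the
one-byte reply layout follows the driver's own TC-info request, and the
wrapper function name is hypothetical -- might look like:

	/* Sketch: blocking VF->PF mailbox query. need_resp=true makes
	 * hclgevf_send_mbx_msg() wait for the PF's reply (or time out)
	 * and copy up to resp_len bytes into resp_data.
	 */
	static int example_query_tc_map(struct hclgevf_dev *hdev, u8 *tc_map)
	{
		u8 resp;
		int ret;

		ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_TCINFO, 0,
					   NULL, 0, true, &resp, sizeof(resp));
		if (ret)
			return ret;

		*tc_map = resp;
		return 0;
	}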

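Similarly, the rss_indirection_tbl[] shadow table in hclgevf_main.h is the
driver's source of truth for RSS spreading: ethtool writes land there first
and are then flushed to hardware. A sketch of the default spread, mirroring
hclgevf_rss_init_hw() from this patch (the wrapper function is illustrative
only; hclgevf_set_rss_indir_table() is the driver's internal helper):

	/* Sketch: round-robin all HCLGEVF_RSS_IND_TBL_SIZE (512) entries
	 * over the available RSS queues, then push the shadow table to
	 * hardware.
	 */
	static int example_rss_defaults(struct hclgevf_dev *hdev)
	{
		struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
		int i;

		rss_cfg->rss_size = hdev->rss_size_max;
		for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
			rss_cfg->rss_indirection_tbl[i] = i % hdev->rss_size_max;

		return hclgevf_set_rss_indir_table(hdev);
	}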

* [PATCH V2 net-next 4/8] net: hns3: Add HNS3 VF driver to kernel build framework
@ 2017-12-08 21:16     ` Salil Mehta
  0 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:16 UTC (permalink / raw)
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

This patch introduces the new Makefiles, and updates the existing
Makefiles, required to build the HNS3 Virtual Function driver. It
also updates the Kconfig to introduce the new menuconfig entries
related to the VF driver.

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
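
As a usage note for the Kconfig hunk below: a minimal .config fragment that
builds the whole HNS3 VF stack as modules might look like the following
sketch (it assumes a 64BIT/PCI kernel with PCI_MSI=y already set, per the
symbols' dependencies, and that HNS3 and HNS3_HCLGE are tristate):

	CONFIG_NET_VENDOR_HISILICON=y
	CONFIG_HNS3=m
	CONFIG_HNS3_HCLGE=m
	CONFIG_HNS3_HCLGEVF=m
	CONFIG_HNS3_ENET=m
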
 drivers/net/ethernet/hisilicon/Kconfig             | 28 +++++++++++++++-------
 drivers/net/ethernet/hisilicon/hns3/Makefile       |  1 +
 .../net/ethernet/hisilicon/hns3/hns3vf/Makefile    |  8 +++++++
 3 files changed, 28 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile

diff --git a/drivers/net/ethernet/hisilicon/Kconfig b/drivers/net/ethernet/hisilicon/Kconfig
index 30000b6..8bcf470 100644
--- a/drivers/net/ethernet/hisilicon/Kconfig
+++ b/drivers/net/ethernet/hisilicon/Kconfig
@@ -94,15 +94,6 @@ config HNS3_HCLGE
 	  compatibility layer. The engine would be used in Hisilicon hip08 family of
 	  SoCs and further upcoming SoCs.
 
-config HNS3_ENET
-	tristate "Hisilicon HNS3 Ethernet Device Support"
-	depends on 64BIT && PCI
-	depends on HNS3 && HNS3_HCLGE
-	---help---
-	  This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08
-	  family of SoCs. This module depends upon HNAE3 driver to access the HNAE3
-	  devices and their associated operations.
-
 config HNS3_DCB
 	bool "Hisilicon HNS3 Data Center Bridge Support"
 	default n
@@ -112,4 +103,23 @@ config HNS3_DCB
 
 	  If unsure, say N.
 
+config HNS3_HCLGEVF
+	tristate "Hisilicon HNS3VF Acceleration Engine & Compatibility Layer Support"
+	depends on PCI_MSI
+	depends on HNS3
+	depends on HNS3_HCLGE
+	---help---
+	  This selects the HNS3 VF driver's network acceleration engine & its hardware
+	  compatibility layer. The engine would be used in the Hisilicon hip08 family of
+	  SoCs and further upcoming SoCs.
+
+config HNS3_ENET
+	tristate "Hisilicon HNS3 Ethernet Device Support"
+	depends on 64BIT && PCI
+	depends on HNS3
+	---help---
+	  This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for hip08
+	  family of SoCs. This module depends upon HNAE3 driver to access the HNAE3
+	  devices and their associated operations.
+
 endif # NET_VENDOR_HISILICON
diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile
index a9349e1..244664b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/Makefile
+++ b/drivers/net/ethernet/hisilicon/hns3/Makefile
@@ -3,5 +3,6 @@
 #
 
 obj-$(CONFIG_HNS3) += hns3pf/
+obj-$(CONFIG_HNS3) += hns3vf/
 
 obj-$(CONFIG_HNS3) += hnae3.o
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
new file mode 100644
index 0000000..a513d00
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
@@ -0,0 +1,8 @@
+#
+# Makefile for the HISILICON network device drivers.
+#
+
+ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
+
+obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
+hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
\ No newline at end of file
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 net-next 5/8] net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC
  2017-12-08 21:16 ` Salil Mehta
@ 2017-12-08 21:16   ` Salil Mehta
  -1 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:16 UTC (permalink / raw)
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

Most of the NAPI handling, skb buffer management, RX/TX
descriptor management, ethtool interface etc. is code which is
common to the VF and PF drivers.

This patch makes the existing PF HNS3 ENET driver the common
ENET driver for both the Virtual and Physical Functions. This
reduces redundancy and makes the code easier to manage.
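
The common code distinguishes the two function types at runtime
through the new HNAE3_SUPPORT_VF handle flag; for example, the
ethtool ops selection in this patch becomes:

	if (h->flags & HNAE3_SUPPORT_VF)
		netdev->ethtool_ops = &hns3vf_ethtool_ops;
	else
		netdev->ethtool_ops = &hns3_ethtool_ops;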

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
 drivers/net/ethernet/hisilicon/hns3/Makefile       |  6 +++++-
 drivers/net/ethernet/hisilicon/hns3/hnae3.c        | 14 ++++++++++++--
 drivers/net/ethernet/hisilicon/hns3/hnae3.h        |  7 ++++---
 .../hisilicon/hns3/{hns3pf => }/hns3_dcbnl.c       |  2 +-
 .../hisilicon/hns3/{hns3pf => }/hns3_enet.c        |  2 ++
 .../hisilicon/hns3/{hns3pf => }/hns3_enet.h        |  0
 .../hisilicon/hns3/{hns3pf => }/hns3_ethtool.c     | 22 +++++++++++++++++++++-
 .../net/ethernet/hisilicon/hns3/hns3pf/Makefile    |  5 -----
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c  |  1 +
 9 files changed, 46 insertions(+), 13 deletions(-)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_dcbnl.c (97%)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_enet.c (99%)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_enet.h (100%)
 rename drivers/net/ethernet/hisilicon/hns3/{hns3pf => }/hns3_ethtool.c (97%)

diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile
index 244664b..48af9b1 100644
--- a/drivers/net/ethernet/hisilicon/hns3/Makefile
+++ b/drivers/net/ethernet/hisilicon/hns3/Makefile
@@ -1,8 +1,12 @@
 #
 # Makefile for the HISILICON network device drivers.
 #
-
 obj-$(CONFIG_HNS3) += hns3pf/
 obj-$(CONFIG_HNS3) += hns3vf/
 
 obj-$(CONFIG_HNS3) += hnae3.o
+
+obj-$(CONFIG_HNS3_ENET) += hns3.o
+hns3-objs = hns3_enet.o hns3_ethtool.o
+
+hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
index 5bcb223..02145f2 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
@@ -196,9 +196,18 @@ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev)
 	const struct pci_device_id *id;
 	struct hnae3_ae_algo *ae_algo;
 	struct hnae3_client *client;
-	int ret = 0;
+	int ret = 0, lock_acquired;
+
+	/* We can get deadlocked if SR-IOV is being enabled in the probe
+	 * context and probe then gets called again in the same context.
+	 * This can happen when pci_enable_sriov() is called to create VFs
+	 * from the PF's probe context. Therefore, for simplicity, we
+	 * uniformly defer further probing whenever we detect contention.
+	 */
+	lock_acquired = mutex_trylock(&hnae3_common_lock);
+	if (!lock_acquired)
+		return -EPROBE_DEFER;
 
-	mutex_lock(&hnae3_common_lock);
 	list_add_tail(&ae_dev->node, &hnae3_ae_dev_list);
 
 	/* Check if there are matched ae_algo */
@@ -211,6 +220,7 @@ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev)
 
 		if (!ae_dev->ops) {
 			dev_err(&ae_dev->pdev->dev, "ae_dev ops are null\n");
+			ret = -EOPNOTSUPP;
 			goto out_err;
 		}
 
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index 67c59e1..a9e2b32 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -452,9 +452,10 @@ struct hnae3_unic_private_info {
 	struct hnae3_queue **tqp;  /* array base of all TQPs of this instance */
 };
 
-#define HNAE3_SUPPORT_MAC_LOOPBACK    1
-#define HNAE3_SUPPORT_PHY_LOOPBACK    2
-#define HNAE3_SUPPORT_SERDES_LOOPBACK 4
+#define HNAE3_SUPPORT_MAC_LOOPBACK    BIT(0)
+#define HNAE3_SUPPORT_PHY_LOOPBACK    BIT(1)
+#define HNAE3_SUPPORT_SERDES_LOOPBACK BIT(2)
+#define HNAE3_SUPPORT_VF	      BIT(3)
 
 struct hnae3_handle {
 	struct hnae3_client *client;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_dcbnl.c b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
similarity index 97%
rename from drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_dcbnl.c
rename to drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
index 925619a..eb82700 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_dcbnl.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
@@ -93,7 +93,7 @@ void hns3_dcbnl_setup(struct hnae3_handle *handle)
 {
 	struct net_device *dev = handle->kinfo.netdev;
 
-	if (!handle->kinfo.dcb_ops)
+	if ((!handle->kinfo.dcb_ops) || (handle->flags & HNAE3_SUPPORT_VF))
 		return;
 
 	dev->dcbnl_ops = &hns3_dcbnl_ops;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
similarity index 99%
rename from drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
rename to drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 5941509..c2c1323 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -52,6 +52,8 @@ static const struct pci_device_id hns3_pci_tbl[] = {
 	 HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
 	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC),
 	 HNAE3_DEV_SUPPORT_ROCE_DCB_BITS},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_VF), 0},
+	{PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_DCB_PFC_VF), 0},
 	/* required last entry */
 	{0, }
 };
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
similarity index 100%
rename from drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.h
rename to drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
similarity index 97%
rename from drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c
rename to drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index a21470c..65a69b4 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -849,6 +849,21 @@ static int hns3_nway_reset(struct net_device *netdev)
 	return genphy_restart_aneg(phy);
 }
 
+static const struct ethtool_ops hns3vf_ethtool_ops = {
+	.get_drvinfo = hns3_get_drvinfo,
+	.get_ringparam = hns3_get_ringparam,
+	.set_ringparam = hns3_set_ringparam,
+	.get_strings = hns3_get_strings,
+	.get_ethtool_stats = hns3_get_stats,
+	.get_sset_count = hns3_get_sset_count,
+	.get_rxnfc = hns3_get_rxnfc,
+	.get_rxfh_key_size = hns3_get_rss_key_size,
+	.get_rxfh_indir_size = hns3_get_rss_indir_size,
+	.get_rxfh = hns3_get_rss,
+	.set_rxfh = hns3_set_rss,
+	.get_link_ksettings = hns3_get_link_ksettings,
+};
+
 static const struct ethtool_ops hns3_ethtool_ops = {
 	.self_test = hns3_self_test,
 	.get_drvinfo = hns3_get_drvinfo,
@@ -872,5 +887,10 @@ static const struct ethtool_ops hns3_ethtool_ops = {
 
 void hns3_ethtool_set_ops(struct net_device *netdev)
 {
-	netdev->ethtool_ops = &hns3_ethtool_ops;
+	struct hnae3_handle *h = hns3_get_handle(netdev);
+
+	if (h->flags & HNAE3_SUPPORT_VF)
+		netdev->ethtool_ops = &hns3vf_ethtool_ops;
+	else
+		netdev->ethtool_ops = &hns3_ethtool_ops;
 }
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
index d2b20d0..d077fa0 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
@@ -8,8 +8,3 @@ obj-$(CONFIG_HNS3_HCLGE) += hclge.o
 hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o
 
 hclge-$(CONFIG_HNS3_DCB) += hclge_dcb.o
-
-obj-$(CONFIG_HNS3_ENET) += hns3.o
-hns3-objs = hns3_enet.o hns3_ethtool.o
-
-hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
index a878c63..fdfc8a7 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ -316,6 +316,7 @@ static int hclgevf_set_handle_info(struct hclgevf_dev *hdev)
 	nic->ae_algo = &ae_algovf;
 	nic->pdev = hdev->pdev;
 	nic->numa_node_mask = hdev->numa_node_mask;
+	nic->flags |= HNAE3_SUPPORT_VF;
 
 	if (hdev->ae_dev->dev_type != HNAE3_DEV_KNIC) {
 		dev_err(&hdev->pdev->dev, "unsupported device type %d\n",
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 net-next 6/8] net: hns3: Add mailbox support to PF driver
  2017-12-08 21:16 ` Salil Mehta
@ 2017-12-08 21:17   ` Salil Mehta
  -1 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:17 UTC (permalink / raw)
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

The command queue provides a mailbox command which can be used
for communication between the PF and the VFs. The PF handles
messages from the various VFs requesting information such as
queue, VLAN and link status. It also handles requests from the
VFs to perform certain privileged operations.

This patch adds a message handler to the PF driver for servicing
such command requests from the VFs.
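
For reference, the synchronous response generated by
hclge_gen_resp_to_vf() below lays out the reply mailbox
message as:

	msg[0]:  HCLGE_MBX_PF_VF_RESP
	msg[1]:  opcode of the original VF request
	msg[2]:  subcode of the original VF request
	msg[3]:  0 if the request succeeded, 1 otherwise
	msg[4]~: optional response data, at most
		 HCLGE_MBX_MAX_RESP_DATA_SIZE bytes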

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
 .../net/ethernet/hisilicon/hns3/hns3pf/Makefile    |   2 +-
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    |   1 +
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h    |   2 +
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 310 +++++++++++++++++++++
 4 files changed, 314 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
index d077fa0..393a1b3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
@@ -5,6 +5,6 @@
 ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
 
 obj-$(CONFIG_HNS3_HCLGE) += hclge.o
-hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o
+hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o
 
 hclge-$(CONFIG_HNS3_DCB) += hclge_dcb.o
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index d07c700..980fcdf 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -21,6 +21,7 @@
 #include "hclge_cmd.h"
 #include "hclge_dcb.h"
 #include "hclge_main.h"
+#include "hclge_mbx.h"
 #include "hclge_mdio.h"
 #include "hclge_tm.h"
 #include "hnae3.h"
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index aacec43..028817c 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -554,4 +554,6 @@ int hclge_set_vf_vlan_common(struct hclge_dev *vport, int vfid,
 
 int hclge_buffer_alloc(struct hclge_dev *hdev);
 int hclge_rss_init_hw(struct hclge_dev *hdev);
+
+void hclge_mbx_handler(struct hclge_dev *hdev);
 #endif
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
new file mode 100644
index 0000000..a993735
--- /dev/null
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -0,0 +1,310 @@
+/*
+ * Copyright (c) 2016-2017 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include "hclge_main.h"
+#include "hclge_mbx.h"
+#include "hnae3.h"
+
+/* hclge_gen_resp_to_vf: used to generate a synchronous response to the VF
+ * when the PF receives a mailbox message from the VF.
+ * @vport: pointer to struct hclge_vport
+ * @vf_to_pf_req: pointer to the hclge_mbx_vf_to_pf_cmd of the original mailbox
+ *		  message
+ * @resp_status: indicates to the VF whether its request succeeded (0) or failed
+ */
+static int hclge_gen_resp_to_vf(struct hclge_vport *vport,
+				struct hclge_mbx_vf_to_pf_cmd *vf_to_pf_req,
+				int resp_status,
+				u8 *resp_data, u16 resp_data_len)
+{
+	struct hclge_mbx_pf_to_vf_cmd *resp_pf_to_vf;
+	struct hclge_dev *hdev = vport->back;
+	enum hclge_cmd_status status;
+	struct hclge_desc desc;
+
+	resp_pf_to_vf = (struct hclge_mbx_pf_to_vf_cmd *)desc.data;
+
+	if (resp_data_len > HCLGE_MBX_MAX_RESP_DATA_SIZE) {
+		dev_err(&hdev->pdev->dev,
+			"PF fail to gen resp to VF len %d exceeds max len %d\n",
+			resp_data_len,
+			HCLGE_MBX_MAX_RESP_DATA_SIZE);
+	}
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false);
+
+	resp_pf_to_vf->dest_vfid = vf_to_pf_req->mbx_src_vfid;
+	resp_pf_to_vf->msg_len = vf_to_pf_req->msg_len;
+
+	resp_pf_to_vf->msg[0] = HCLGE_MBX_PF_VF_RESP;
+	resp_pf_to_vf->msg[1] = vf_to_pf_req->msg[0];
+	resp_pf_to_vf->msg[2] = vf_to_pf_req->msg[1];
+	resp_pf_to_vf->msg[3] = (resp_status == 0) ? 0 : 1;
+
+	if (resp_data && resp_data_len > 0)
+		memcpy(&resp_pf_to_vf->msg[4], resp_data, resp_data_len);
+
+	status = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"PF failed(=%d) to send response to VF\n", status);
+
+	return status;
+}
+
+static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len,
+			      u16 mbx_opcode, u8 dest_vfid)
+{
+	struct hclge_mbx_pf_to_vf_cmd *resp_pf_to_vf;
+	struct hclge_dev *hdev = vport->back;
+	enum hclge_cmd_status status;
+	struct hclge_desc desc;
+
+	resp_pf_to_vf = (struct hclge_mbx_pf_to_vf_cmd *)desc.data;
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false);
+
+	resp_pf_to_vf->dest_vfid = dest_vfid;
+	resp_pf_to_vf->msg_len = msg_len;
+	resp_pf_to_vf->msg[0] = mbx_opcode;
+
+	memcpy(&resp_pf_to_vf->msg[1], msg, msg_len);
+
+	status = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (status)
+		dev_err(&hdev->pdev->dev,
+			"PF failed(=%d) to send mailbox message to VF\n",
+			status);
+
+	return status;
+}
+
+static int hclge_set_vf_promisc_mode(struct hclge_vport *vport,
+				     struct hclge_mbx_vf_to_pf_cmd *req)
+{
+	bool en = req->msg[1] ? true : false;
+	struct hclge_promisc_param param;
+
+	/* always enable broadcast promisc bit */
+	hclge_promisc_param_init(&param, en, en, true, vport->vport_id);
+	return hclge_cmd_set_promisc_mode(vport->back, &param);
+}
+
+static int hclge_set_vf_uc_mac_addr(struct hclge_vport *vport,
+				    struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				    bool gen_resp)
+{
+	const u8 *mac_addr = (const u8 *)(&mbx_req->msg[2]);
+	struct hclge_dev *hdev = vport->back;
+	int status;
+
+	if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_UC_MODIFY) {
+		const u8 *old_addr = (const u8 *)(&mbx_req->msg[8]);
+
+		hclge_rm_uc_addr_common(vport, old_addr);
+		status = hclge_add_uc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_UC_ADD) {
+		status = hclge_add_uc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_UC_REMOVE) {
+		status = hclge_rm_uc_addr_common(vport, mac_addr);
+	} else {
+		dev_err(&hdev->pdev->dev,
+			"failed to set unicast mac addr, unknown subcode %d\n",
+			mbx_req->msg[1]);
+		return -EIO;
+	}
+
+	if (gen_resp)
+		hclge_gen_resp_to_vf(vport, mbx_req, status, NULL, 0);
+
+	return 0;
+}
+
+static int hclge_set_vf_mc_mac_addr(struct hclge_vport *vport,
+				    struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				    bool gen_resp)
+{
+	const u8 *mac_addr = (const u8 *)(&mbx_req->msg[2]);
+	struct hclge_dev *hdev = vport->back;
+	int status;
+
+	if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_MC_ADD) {
+		status = hclge_add_mc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_MC_REMOVE) {
+		status = hclge_rm_mc_addr_common(vport, mac_addr);
+	} else if (mbx_req->msg[1] == HCLGE_MBX_MAC_VLAN_MC_FUNC_MTA_ENABLE) {
+		u8 func_id = vport->vport_id;
+		bool enable = mbx_req->msg[2];
+
+		status = hclge_cfg_func_mta_filter(hdev, func_id, enable);
+	} else {
+		dev_err(&hdev->pdev->dev,
+			"failed to set mcast mac addr, unknown subcode %d\n",
+			mbx_req->msg[1]);
+		return -EIO;
+	}
+
+	if (gen_resp)
+		hclge_gen_resp_to_vf(vport, mbx_req, status, NULL, 0);
+
+	return 0;
+}
+
+static int hclge_set_vf_vlan_cfg(struct hclge_vport *vport,
+				 struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				 bool gen_resp)
+{
+	struct hclge_dev *hdev = vport->back;
+	int status = 0;
+
+	if (mbx_req->msg[1] == HCLGE_MBX_VLAN_FILTER) {
+		u16 vlan, proto;
+		bool is_kill;
+
+		is_kill = !!mbx_req->msg[2];
+		memcpy(&vlan, &mbx_req->msg[3], sizeof(vlan));
+		memcpy(&proto, &mbx_req->msg[5], sizeof(proto));
+		status = hclge_set_vf_vlan_common(hdev, vport->vport_id,
+						  is_kill, vlan, 0,
+						  cpu_to_be16(proto));
+	}
+
+	if (gen_resp)
+		status = hclge_gen_resp_to_vf(vport, mbx_req, status, NULL, 0);
+
+	return status;
+}
+
+static int hclge_get_vf_tcinfo(struct hclge_vport *vport,
+			       struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+			       bool gen_resp)
+{
+	struct hclge_dev *hdev = vport->back;
+	int ret;
+
+	ret = hclge_gen_resp_to_vf(vport, mbx_req, 0, &hdev->hw_tc_map,
+				   sizeof(u8));
+
+	return ret;
+}
+
+static int hclge_get_vf_queue_info(struct hclge_vport *vport,
+				   struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+				   bool gen_resp)
+{
+#define HCLGE_TQPS_RSS_INFO_LEN		8
+	u8 resp_data[HCLGE_TQPS_RSS_INFO_LEN];
+	struct hclge_dev *hdev = vport->back;
+
+	/* get the queue related info */
+	memcpy(&resp_data[0], &vport->alloc_tqps, sizeof(u16));
+	memcpy(&resp_data[2], &hdev->rss_size_max, sizeof(u16));
+	memcpy(&resp_data[4], &hdev->num_desc, sizeof(u16));
+	memcpy(&resp_data[6], &hdev->rx_buf_len, sizeof(u16));
+
+	return hclge_gen_resp_to_vf(vport, mbx_req, 0, resp_data,
+				    HCLGE_TQPS_RSS_INFO_LEN);
+}
+
+static int hclge_get_link_info(struct hclge_vport *vport,
+			       struct hclge_mbx_vf_to_pf_cmd *mbx_req)
+{
+	struct hclge_dev *hdev = vport->back;
+	u16 link_status;
+	u8 msg_data[2];
+	u8 dest_vfid;
+
+	/* mac.link can only be 0 or 1 */
+	link_status = (u16)hdev->hw.mac.link;
+	memcpy(&msg_data[0], &link_status, sizeof(u16));
+	dest_vfid = mbx_req->mbx_src_vfid;
+
+	/* send this requested info to VF */
+	return hclge_send_mbx_msg(vport, msg_data, sizeof(u8),
+				  HCLGE_MBX_LINK_STAT_CHANGE, dest_vfid);
+}
+
+void hclge_mbx_handler(struct hclge_dev *hdev)
+{
+	struct hclge_cmq_ring *crq = &hdev->hw.cmq.crq;
+	struct hclge_mbx_vf_to_pf_cmd *req;
+	struct hclge_vport *vport;
+	struct hclge_desc *desc;
+	int ret;
+
+	/* handle all the mailbox requests in the queue */
+	while (hnae_get_bit(crq->desc[crq->next_to_use].flag,
+			    HCLGE_CMDQ_RX_OUTVLD_B)) {
+		desc = &crq->desc[crq->next_to_use];
+		req = (struct hclge_mbx_vf_to_pf_cmd *)desc->data;
+
+		vport = &hdev->vport[req->mbx_src_vfid];
+
+		switch (req->msg[0]) {
+		case HCLGE_MBX_SET_PROMISC_MODE:
+			ret = hclge_set_vf_promisc_mode(vport, req);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to set VF promisc mode\n",
+					ret);
+			break;
+		case HCLGE_MBX_SET_UNICAST:
+			ret = hclge_set_vf_uc_mac_addr(vport, req, false);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to set VF UC MAC Addr\n",
+					ret);
+			break;
+		case HCLGE_MBX_SET_MULTICAST:
+			ret = hclge_set_vf_mc_mac_addr(vport, req, false);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to set VF MC MAC Addr\n",
+					ret);
+			break;
+		case HCLGE_MBX_SET_VLAN:
+			ret = hclge_set_vf_vlan_cfg(vport, req, false);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF failed(%d) to config VF's VLAN\n",
+					ret);
+			break;
+		case HCLGE_MBX_GET_QINFO:
+			ret = hclge_get_vf_queue_info(vport, req, true);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF failed(%d) to get Q info for VF\n",
+					ret);
+			break;
+		case HCLGE_MBX_GET_TCINFO:
+			ret = hclge_get_vf_tcinfo(vport, req, true);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF failed(%d) to get TC info for VF\n",
+					ret);
+			break;
+		case HCLGE_MBX_GET_LINK_STATUS:
+			ret = hclge_get_link_info(vport, req);
+			if (ret)
+				dev_err(&hdev->pdev->dev,
+					"PF fail(%d) to get link stat for VF\n",
+					ret);
+			break;
+		default:
+			dev_err(&hdev->pdev->dev,
+				"un-supported mailbox message, code = %d\n",
+				req->msg[0]);
+			break;
+		}
+		hclge_mbx_ring_ptr_move_crq(crq);
+	}
+
+	/* Write back CMDQ_RQ header pointer, M7 needs this pointer */
+	hclge_write_dev(&hdev->hw, HCLGE_NIC_CRQ_HEAD_REG, crq->next_to_use);
+}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 net-next 7/8] net: hns3: Change PF to add ring-vect binding & resetQ to mailbox
  2017-12-08 21:16 ` Salil Mehta
@ 2017-12-08 21:17   ` Salil Mehta
  -1 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:17 UTC (permalink / raw)
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

This patch adds support in the PF driver for the ring-to-vector
binding and TQP reset requests made by the VF driver. The mailbox
handler is extended with the corresponding VF commands/messages
to handle these requests.
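
The VF encodes a ring-to-vector (un)binding request in the
mailbox message as parsed by hclge_get_ring_chain_from_mbx()
below:

	msg[0]: opcode
	msg[1]: vector id
	msg[2]: number of rings in the chain
	msg[3]: first ring type (TX|RX)
	msg[4]: first tqp id
	msg[5] ~ msg[14]: further ring type and tqp id entries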

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
Patch V2: Addressed some internal comments
Patch V1: Initial Submit
---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    | 138 +++++++--------------
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h    |   7 +-
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 106 ++++++++++++++++
 3 files changed, 159 insertions(+), 92 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 980fcdf..3b1fc49 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3256,49 +3256,48 @@ int hclge_rss_init_hw(struct hclge_dev *hdev)
 	return ret;
 }
 
-int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector_id,
-				   struct hnae3_ring_chain_node *ring_chain)
+int hclge_bind_ring_with_vector(struct hclge_vport *vport,
+				int vector_id, bool en,
+				struct hnae3_ring_chain_node *ring_chain)
 {
 	struct hclge_dev *hdev = vport->back;
-	struct hclge_ctrl_vector_chain_cmd *req;
 	struct hnae3_ring_chain_node *node;
 	struct hclge_desc desc;
-	int ret;
+	struct hclge_ctrl_vector_chain_cmd *req
+		= (struct hclge_ctrl_vector_chain_cmd *)desc.data;
+	enum hclge_cmd_status status;
+	enum hclge_opcode_type op;
+	u16 tqp_type_and_id;
 	int i;
 
-	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_ADD_RING_TO_VECTOR, false);
-
-	req = (struct hclge_ctrl_vector_chain_cmd *)desc.data;
+	op = en ? HCLGE_OPC_ADD_RING_TO_VECTOR : HCLGE_OPC_DEL_RING_TO_VECTOR;
+	hclge_cmd_setup_basic_desc(&desc, op, false);
 	req->int_vector_id = vector_id;
 
 	i = 0;
 	for (node = ring_chain; node; node = node->next) {
-		u16 type_and_id = 0;
-
-		hnae_set_field(type_and_id, HCLGE_INT_TYPE_M, HCLGE_INT_TYPE_S,
+		tqp_type_and_id = le16_to_cpu(req->tqp_type_and_id[i]);
+		hnae_set_field(tqp_type_and_id,  HCLGE_INT_TYPE_M,
+			       HCLGE_INT_TYPE_S,
 			       hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
-		hnae_set_field(type_and_id, HCLGE_TQP_ID_M, HCLGE_TQP_ID_S,
-			       node->tqp_index);
-		hnae_set_field(type_and_id, HCLGE_INT_GL_IDX_M,
-			       HCLGE_INT_GL_IDX_S,
-			       hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
-		req->tqp_type_and_id[i] = cpu_to_le16(type_and_id);
-		req->vfid = vport->vport_id;
-
+		hnae_set_field(tqp_type_and_id, HCLGE_TQP_ID_M,
+			       HCLGE_TQP_ID_S, node->tqp_index);
+		req->tqp_type_and_id[i] = cpu_to_le16(tqp_type_and_id);
 		if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) {
 			req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD;
+			req->vfid = vport->vport_id;
 
-			ret = hclge_cmd_send(&hdev->hw, &desc, 1);
-			if (ret) {
+			status = hclge_cmd_send(&hdev->hw, &desc, 1);
+			if (status) {
 				dev_err(&hdev->pdev->dev,
 					"Map TQP fail, status is %d.\n",
-					ret);
-				return ret;
+					status);
+				return -EIO;
 			}
 			i = 0;
 
 			hclge_cmd_setup_basic_desc(&desc,
-						   HCLGE_OPC_ADD_RING_TO_VECTOR,
+						   op,
 						   false);
 			req->int_vector_id = vector_id;
 		}
@@ -3306,21 +3305,21 @@ int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector_id,
 
 	if (i > 0) {
 		req->int_cause_num = i;
-
-		ret = hclge_cmd_send(&hdev->hw, &desc, 1);
-		if (ret) {
+		req->vfid = vport->vport_id;
+		status = hclge_cmd_send(&hdev->hw, &desc, 1);
+		if (status) {
 			dev_err(&hdev->pdev->dev,
-				"Map TQP fail, status is %d.\n", ret);
-			return ret;
+				"Map TQP fail, status is %d.\n", status);
+			return -EIO;
 		}
 	}
 
 	return 0;
 }
 
-static int hclge_map_handle_ring_to_vector(
-		struct hnae3_handle *handle, int vector,
-		struct hnae3_ring_chain_node *ring_chain)
+static int hclge_map_ring_to_vector(struct hnae3_handle *handle,
+				    int vector,
+				    struct hnae3_ring_chain_node *ring_chain)
 {
 	struct hclge_vport *vport = hclge_get_vport(handle);
 	struct hclge_dev *hdev = vport->back;
@@ -3329,24 +3328,20 @@ static int hclge_map_handle_ring_to_vector(
 	vector_id = hclge_get_vector_index(hdev, vector);
 	if (vector_id < 0) {
 		dev_err(&hdev->pdev->dev,
-			"Get vector index fail. ret =%d\n", vector_id);
+			"Get vector index fail. vector_id =%d\n", vector_id);
 		return vector_id;
 	}
 
-	return hclge_map_vport_ring_to_vector(vport, vector_id, ring_chain);
+	return hclge_bind_ring_with_vector(vport, vector_id, true, ring_chain);
 }
 
-static int hclge_unmap_ring_from_vector(
-	struct hnae3_handle *handle, int vector,
-	struct hnae3_ring_chain_node *ring_chain)
+static int hclge_unmap_ring_frm_vector(struct hnae3_handle *handle,
+				       int vector,
+				       struct hnae3_ring_chain_node *ring_chain)
 {
 	struct hclge_vport *vport = hclge_get_vport(handle);
 	struct hclge_dev *hdev = vport->back;
-	struct hclge_ctrl_vector_chain_cmd *req;
-	struct hnae3_ring_chain_node *node;
-	struct hclge_desc desc;
-	int i, vector_id;
-	int ret;
+	int vector_id, ret;
 
 	vector_id = hclge_get_vector_index(hdev, vector);
 	if (vector_id < 0) {
@@ -3355,54 +3350,17 @@ static int hclge_unmap_ring_from_vector(
 		return vector_id;
 	}
 
-	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_DEL_RING_TO_VECTOR, false);
-
-	req = (struct hclge_ctrl_vector_chain_cmd *)desc.data;
-	req->int_vector_id = vector_id;
-
-	i = 0;
-	for (node = ring_chain; node; node = node->next) {
-		u16 type_and_id = 0;
-
-		hnae_set_field(type_and_id, HCLGE_INT_TYPE_M, HCLGE_INT_TYPE_S,
-			       hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
-		hnae_set_field(type_and_id, HCLGE_TQP_ID_M, HCLGE_TQP_ID_S,
-			       node->tqp_index);
-		hnae_set_field(type_and_id, HCLGE_INT_GL_IDX_M,
-			       HCLGE_INT_GL_IDX_S,
-			       hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
-
-		req->tqp_type_and_id[i] = cpu_to_le16(type_and_id);
-		req->vfid = vport->vport_id;
-
-		if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) {
-			req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD;
-
-			ret = hclge_cmd_send(&hdev->hw, &desc, 1);
-			if (ret) {
-				dev_err(&hdev->pdev->dev,
-					"Unmap TQP fail, status is %d.\n",
-					ret);
-				return ret;
-			}
-			i = 0;
-			hclge_cmd_setup_basic_desc(&desc,
-						   HCLGE_OPC_DEL_RING_TO_VECTOR,
-						   false);
-			req->int_vector_id = vector_id;
-		}
+	ret = hclge_bind_ring_with_vector(vport, vector_id, false, ring_chain);
+	if (ret) {
+		dev_err(&handle->pdev->dev,
+			"Unmap ring from vector fail. vectorid=%d, ret =%d\n",
+			vector_id,
+			ret);
+		return ret;
 	}
 
-	if (i > 0) {
-		req->int_cause_num = i;
-
-		ret = hclge_cmd_send(&hdev->hw, &desc, 1);
-		if (ret) {
-			dev_err(&hdev->pdev->dev,
-				"Unmap TQP fail, status is %d.\n", ret);
-			return ret;
-		}
-	}
+	/* Free this MSIX or MSI vector */
+	hclge_free_vector(hdev, vector_id);
 
 	return 0;
 }
@@ -4423,7 +4381,7 @@ static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id)
 	return hnae_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B);
 }
 
-static void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
+void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
 {
 	struct hclge_vport *vport = hclge_get_vport(handle);
 	struct hclge_dev *hdev = vport->back;
@@ -4995,8 +4953,8 @@ static const struct hnae3_ae_ops hclge_ops = {
 	.uninit_ae_dev = hclge_uninit_ae_dev,
 	.init_client_instance = hclge_init_client_instance,
 	.uninit_client_instance = hclge_uninit_client_instance,
-	.map_ring_to_vector = hclge_map_handle_ring_to_vector,
-	.unmap_ring_from_vector = hclge_unmap_ring_from_vector,
+	.map_ring_to_vector = hclge_map_ring_to_vector,
+	.unmap_ring_from_vector = hclge_unmap_ring_frm_vector,
 	.get_vector = hclge_get_vector,
 	.set_promisc_mode = hclge_set_promisc_mode,
 	.set_loopback = hclge_set_loopback,
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 028817c..c7c9a28 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -539,8 +539,10 @@ int hclge_cfg_func_mta_filter(struct hclge_dev *hdev,
 			      u8 func_id,
 			      bool enable);
 struct hclge_vport *hclge_get_vport(struct hnae3_handle *handle);
-int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector,
-				   struct hnae3_ring_chain_node *ring_chain);
+int hclge_bind_ring_with_vector(struct hclge_vport *vport,
+				int vector_id, bool en,
+				struct hnae3_ring_chain_node *ring_chain);
+
 static inline int hclge_get_queue_id(struct hnae3_queue *queue)
 {
 	struct hclge_tqp *tqp = container_of(queue, struct hclge_tqp, q);
@@ -556,4 +558,5 @@ int hclge_buffer_alloc(struct hclge_dev *hdev);
 int hclge_rss_init_hw(struct hclge_dev *hdev);
 
 void hclge_mbx_handler(struct hclge_dev *hdev);
+void hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id);
 #endif
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
index a993735..7a82444 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -85,6 +85,91 @@ static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len,
 	return status;
 }
 
+static void hclge_free_vector_ring_chain(struct hnae3_ring_chain_node *head)
+{
+	struct hnae3_ring_chain_node *chain_tmp, *chain;
+
+	chain = head->next;
+
+	while (chain) {
+		chain_tmp = chain->next;
+		kzfree(chain);
+		chain = chain_tmp;
+	}
+}
+
+/* hclge_get_ring_chain_from_mbx: get ring type & tqp id from mailbox message
+ * msg[0]: opcode
+ * msg[1]: <not relevant to this function>
+ * msg[2]: ring_num
+ * msg[3]: first ring type (TX|RX)
+ * msg[4]: first tqp id
+ * msg[5] ~ msg[14]: other ring type and tqp id
+ */
+static int hclge_get_ring_chain_from_mbx(
+			struct hclge_mbx_vf_to_pf_cmd *req,
+			struct hnae3_ring_chain_node *ring_chain,
+			struct hclge_vport *vport)
+{
+#define HCLGE_RING_NODE_VARIABLE_NUM		3
+#define HCLGE_RING_MAP_MBX_BASIC_MSG_NUM	3
+	struct hnae3_ring_chain_node *cur_chain, *new_chain;
+	int ring_num;
+	int i;
+
+	ring_num = req->msg[2];
+
+	hnae_set_bit(ring_chain->flag, HNAE3_RING_TYPE_B, req->msg[3]);
+	ring_chain->tqp_index =
+			hclge_get_queue_id(vport->nic.kinfo.tqp[req->msg[4]]);
+
+	cur_chain = ring_chain;
+
+	for (i = 1; i < ring_num; i++) {
+		new_chain = kzalloc(sizeof(*new_chain), GFP_KERNEL);
+		if (!new_chain)
+			goto err;
+
+		hnae_set_bit(new_chain->flag, HNAE3_RING_TYPE_B,
+			     req->msg[HCLGE_RING_NODE_VARIABLE_NUM * i +
+			     HCLGE_RING_MAP_MBX_BASIC_MSG_NUM]);
+
+		new_chain->tqp_index =
+		hclge_get_queue_id(vport->nic.kinfo.tqp
+			[req->msg[HCLGE_RING_NODE_VARIABLE_NUM * i +
+			HCLGE_RING_MAP_MBX_BASIC_MSG_NUM + 1]]);
+
+		cur_chain->next = new_chain;
+		cur_chain = new_chain;
+	}
+
+	return 0;
+err:
+	hclge_free_vector_ring_chain(ring_chain);
+	return -ENOMEM;
+}
+
+static int hclge_map_unmap_ring_to_vf_vector(struct hclge_vport *vport, bool en,
+					     struct hclge_mbx_vf_to_pf_cmd *req)
+{
+	struct hnae3_ring_chain_node ring_chain;
+	int vector_id = req->msg[1];
+	int ret;
+
+	memset(&ring_chain, 0, sizeof(ring_chain));
+	ret = hclge_get_ring_chain_from_mbx(req, &ring_chain, vport);
+	if (ret)
+		return ret;
+
+	ret = hclge_bind_ring_with_vector(vport, vector_id, en, &ring_chain);
+
+	/* Free the chain on success and on failure alike; a failed bind
+	 * must not leak the nodes allocated above.
+	 */
+	hclge_free_vector_ring_chain(&ring_chain);
+	return ret;
+}
+
 static int hclge_set_vf_promisc_mode(struct hclge_vport *vport,
 				     struct hclge_mbx_vf_to_pf_cmd *req)
 {
@@ -230,6 +315,16 @@ static int hclge_get_link_info(struct hclge_vport *vport,
 				  HCLGE_MBX_LINK_STAT_CHANGE, dest_vfid);
 }
 
+static void hclge_reset_vf_queue(struct hclge_vport *vport,
+				 struct hclge_mbx_vf_to_pf_cmd *mbx_req)
+{
+	u16 queue_id;
+
+	memcpy(&queue_id, &mbx_req->msg[2], sizeof(queue_id));
+
+	hclge_reset_tqp(&vport->nic, queue_id);
+}
+
 void hclge_mbx_handler(struct hclge_dev *hdev)
 {
 	struct hclge_cmq_ring *crq = &hdev->hw.cmq.crq;
@@ -247,6 +342,14 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
 		vport = &hdev->vport[req->mbx_src_vfid];
 
 		switch (req->msg[0]) {
+		case HCLGE_MBX_MAP_RING_TO_VECTOR:
+			ret = hclge_map_unmap_ring_to_vf_vector(vport, true,
+								req);
+			break;
+		case HCLGE_MBX_UNMAP_RING_TO_VECTOR:
+			ret = hclge_map_unmap_ring_to_vf_vector(vport, false,
+								req);
+			break;
 		case HCLGE_MBX_SET_PROMISC_MODE:
 			ret = hclge_set_vf_promisc_mode(vport, req);
 			if (ret)
@@ -296,6 +399,9 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
 					"PF fail(%d) to get link stat for VF\n",
 					ret);
 			break;
+		case HCLGE_MBX_QUEUE_RESET:
+			hclge_reset_vf_queue(vport, req);
+			break;
 		default:
 			dev_err(&hdev->pdev->dev,
 				"un-supported mailbox message, code = %d\n",
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread
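
The mailbox layout documented in hclge_get_ring_chain_from_mbx() above
(opcode, vector id, ring count, then per-ring fields at a stride of
HCLGE_RING_NODE_VARIABLE_NUM) implies a simple packing scheme on the VF
side. The following is only a minimal sketch of such a sender, not code
from this series: the helper name and the ring_desc type are invented
here, and the third per-ring byte is left at zero since the parser above
never reads it.

	#include <linux/types.h>

	/* Hypothetical per-ring descriptor, for illustration only */
	struct ring_desc {
		u8 type;	/* TX or RX: the HNAE3_RING_TYPE_B bit */
		u8 tqp_id;	/* queue index within the VF */
	};

	/* Fill msg[] per the layout above: msg[0] opcode, msg[1] vector id,
	 * msg[2] ring count, then ring i's type at msg[3 * i + 3] and its
	 * tqp id at msg[3 * i + 4].
	 */
	static void pack_ring_cfg_msg(u8 *msg, u8 opcode, u8 vector_id,
				      const struct ring_desc *rings, u8 ring_num)
	{
		int i;

		msg[0] = opcode;	/* e.g. HCLGE_MBX_MAP_RING_TO_VECTOR */
		msg[1] = vector_id;
		msg[2] = ring_num;

		for (i = 0; i < ring_num; i++) {
			msg[3 * i + 3] = rings[i].type;
			msg[3 * i + 4] = rings[i].tqp_id;
		}
	}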

* [PATCH V2 net-next 8/8] net: hns3: Add mailbox interrupt handling to PF driver
  2017-12-08 21:16 ` Salil Mehta
@ 2017-12-08 21:17   ` Salil Mehta
  -1 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-08 21:17 UTC (permalink / raw)
  To: davem
  Cc: salil.mehta, yisen.zhuang, lipeng321, mehta.salil.lnk, netdev,
	linux-kernel, linux-rdma, linuxarm

All PF mailbox events are conveyed through a common interrupt
(vector 0). This interrupt vector is shared by reset and mailbox.

This patch adds handling of the mailbox interrupt event and defers
its processing to a separate mailbox task.

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: lipeng <lipeng321@huawei.com>
---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    | 68 +++++++++++++++++++---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h    |  8 ++-
 2 files changed, 68 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 3b1fc49..e97fd66 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -2227,6 +2227,12 @@ static int hclge_mac_init(struct hclge_dev *hdev)
 	return hclge_cfg_func_mta_filter(hdev, 0, hdev->accept_mta_mc);
 }
 
+static void hclge_mbx_task_schedule(struct hclge_dev *hdev)
+{
+	if (!test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state))
+		schedule_work(&hdev->mbx_service_task);
+}
+
 static void hclge_reset_task_schedule(struct hclge_dev *hdev)
 {
 	if (!test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state))
@@ -2372,9 +2378,18 @@ static void hclge_service_complete(struct hclge_dev *hdev)
 static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
 {
 	u32 rst_src_reg;
+	u32 cmdq_src_reg;
 
 	/* fetch the events from their corresponding regs */
 	rst_src_reg = hclge_read_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG);
+	cmdq_src_reg = hclge_read_dev(&hdev->hw, HCLGE_VECTOR0_CMDQ_SRC_REG);
+
+	/* Assumption: if reset and mailbox events are reported together,
+	 * only the reset event is processed in this pass and the mailbox
+	 * events are deferred. Since the RX CMDQ event is not cleared this
+	 * time, the hardware will raise another interrupt just for the
+	 * mailbox.
+	 */
 
 	/* check for vector0 reset event sources */
 	if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & rst_src_reg) {
@@ -2395,7 +2410,12 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
 		return HCLGE_VECTOR0_EVENT_RST;
 	}
 
-	/* mailbox event sharing vector 0 interrupt would be placed here */
+	/* check for vector0 mailbox(=CMDQ RX) event source */
+	if (BIT(HCLGE_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_reg) {
+		cmdq_src_reg &= ~BIT(HCLGE_VECTOR0_RX_CMDQ_INT_B);
+		*clearval = cmdq_src_reg;
+		return HCLGE_VECTOR0_EVENT_MBX;
+	}
 
 	return HCLGE_VECTOR0_EVENT_OTHER;
 }
@@ -2403,10 +2423,14 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
 static void hclge_clear_event_cause(struct hclge_dev *hdev, u32 event_type,
 				    u32 regclr)
 {
-	if (event_type == HCLGE_VECTOR0_EVENT_RST)
+	switch (event_type) {
+	case HCLGE_VECTOR0_EVENT_RST:
 		hclge_write_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG, regclr);
-
-	/* mailbox event sharing vector 0 interrupt would be placed here */
+		break;
+	case HCLGE_VECTOR0_EVENT_MBX:
+		hclge_write_dev(&hdev->hw, HCLGE_VECTOR0_CMDQ_SRC_REG, regclr);
+		break;
+	}
 }
 
 static void hclge_enable_vector(struct hclge_misc_vector *vector, bool enable)
@@ -2423,13 +2447,23 @@ static irqreturn_t hclge_misc_irq_handle(int irq, void *data)
 	hclge_enable_vector(&hdev->misc_vector, false);
 	event_cause = hclge_check_event_cause(hdev, &clearval);
 
-	/* vector 0 interrupt is shared with reset and mailbox source events.
-	 * For now, we are not handling mailbox events.
-	 */
+	/* vector 0 interrupt is shared with reset and mailbox source events. */
 	switch (event_cause) {
 	case HCLGE_VECTOR0_EVENT_RST:
 		hclge_reset_task_schedule(hdev);
 		break;
+	case HCLGE_VECTOR0_EVENT_MBX:
+		/* If we are here then either:
+		 * 1. no mbx task is currently being handled and none is
+		 *    scheduled, or
+		 * 2. an mbx task is being handled but nothing further is
+		 *    scheduled.
+		 * In both cases the mbx task must be scheduled again, since
+		 * this interrupt indicates that more mbx messages are
+		 * pending.
+		 */
+		hclge_mbx_task_schedule(hdev);
+		break;
 	default:
 		dev_dbg(&hdev->pdev->dev,
 			"received unknown or unhandled event of vector0\n");
@@ -2708,6 +2742,21 @@ static void hclge_reset_service_task(struct work_struct *work)
 	clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
 }
 
+static void hclge_mailbox_service_task(struct work_struct *work)
+{
+	struct hclge_dev *hdev =
+		container_of(work, struct hclge_dev, mbx_service_task);
+
+	if (test_and_set_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state))
+		return;
+
+	clear_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state);
+
+	hclge_mbx_handler(hdev);
+
+	clear_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state);
+}
+
 static void hclge_service_task(struct work_struct *work)
 {
 	struct hclge_dev *hdev =
@@ -4815,6 +4864,7 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
 	timer_setup(&hdev->service_timer, hclge_service_timer, 0);
 	INIT_WORK(&hdev->service_task, hclge_service_task);
 	INIT_WORK(&hdev->rst_service_task, hclge_reset_service_task);
+	INIT_WORK(&hdev->mbx_service_task, hclge_mailbox_service_task);
 
 	/* Enable MISC vector(vector0) */
 	hclge_enable_vector(&hdev->misc_vector, true);
@@ -4823,6 +4873,8 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
 	set_bit(HCLGE_STATE_DOWN, &hdev->state);
 	clear_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state);
 	clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
+	clear_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state);
+	clear_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state);
 
 	pr_info("%s driver initialization finished.\n", HCLGE_DRIVER_NAME);
 	return 0;
@@ -4936,6 +4988,8 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
 		cancel_work_sync(&hdev->service_task);
 	if (hdev->rst_service_task.func)
 		cancel_work_sync(&hdev->rst_service_task);
+	if (hdev->mbx_service_task.func)
+		cancel_work_sync(&hdev->mbx_service_task);
 
 	if (mac->phydev)
 		mdiobus_unregister(mac->mdio_bus);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index c7c9a28..fb043b5 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -92,6 +92,11 @@
 #define HCLGE_VECTOR0_CORERESET_INT_B	6
 #define HCLGE_VECTOR0_IMPRESET_INT_B	7
 
+/* Vector0 interrupt CMDQ event source register(RW) */
+#define HCLGE_VECTOR0_CMDQ_SRC_REG	0x27100
+/* CMDQ register bits for RX event(=MBX event) */
+#define HCLGE_VECTOR0_RX_CMDQ_INT_B	1
+
 enum HCLGE_DEV_STATE {
 	HCLGE_STATE_REINITING,
 	HCLGE_STATE_DOWN,
@@ -101,8 +106,8 @@ enum HCLGE_DEV_STATE {
 	HCLGE_STATE_SERVICE_SCHED,
 	HCLGE_STATE_RST_SERVICE_SCHED,
 	HCLGE_STATE_RST_HANDLING,
+	HCLGE_STATE_MBX_SERVICE_SCHED,
 	HCLGE_STATE_MBX_HANDLING,
-	HCLGE_STATE_MBX_IRQ,
 	HCLGE_STATE_MAX
 };
 
@@ -479,6 +484,7 @@ struct hclge_dev {
 	struct timer_list service_timer;
 	struct work_struct service_task;
 	struct work_struct rst_service_task;
+	struct work_struct mbx_service_task;
 
 	bool cur_promisc;
 	int num_alloc_vfs;	/* Actual number of VFs allocated */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread
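
The deferral scheme in this patch reduces to a reusable schedule-once
pattern: a SCHED bit keeps redundant schedule_work() calls from piling
up, and a HANDLING bit keeps the work body single-entry. A minimal
sketch of the same pattern with illustrative names (not code from this
series) follows.

	#include <linux/kernel.h>
	#include <linux/bitops.h>
	#include <linux/workqueue.h>

	enum { MY_STATE_SCHED, MY_STATE_HANDLING };

	struct my_dev {
		unsigned long state;
		struct work_struct task;
	};

	/* Queue the task only if it is not already queued */
	static void my_task_schedule(struct my_dev *d)
	{
		if (!test_and_set_bit(MY_STATE_SCHED, &d->state))
			schedule_work(&d->task);
	}

	static void my_task_fn(struct work_struct *work)
	{
		struct my_dev *d = container_of(work, struct my_dev, task);

		/* Bail out if another instance is already running */
		if (test_and_set_bit(MY_STATE_HANDLING, &d->state))
			return;

		/* Clear SCHED first so a new event can re-queue the work
		 * while the messages below are being drained.
		 */
		clear_bit(MY_STATE_SCHED, &d->state);

		/* ... drain pending mailbox messages here ... */

		clear_bit(MY_STATE_HANDLING, &d->state);
	}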


* Re: [PATCH V2 net-next 2/8] net: hns3: Add mailbox support to VF driver
@ 2017-12-09  0:16         ` Philippe Ombredanne
  0 siblings, 0 replies; 24+ messages in thread
From: Philippe Ombredanne @ 2017-12-09  0:16 UTC (permalink / raw)
  To: Salil Mehta
  Cc: David S. Miller, yisen.zhuang, lipeng321, mehta.salil.lnk,
	netdev, LKML, linux-rdma, linuxarm

Sali,

On Fri, Dec 8, 2017 at 10:16 PM, Salil Mehta <salil.mehta@huawei.com> wrote:
> This patch adds the support of the mailbox to the VF driver. The
> mailbox shall be used as an interface to communicate with the
> PF driver for various purposes like {set|get} MAC related
> operations, reset, link status etc. The mailbox supports both
> synchronous and asynchronous command send to PF driver.
>
> Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> Signed-off-by: lipeng <lipeng321@huawei.com>
[...]
> --- /dev/null
> +++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
> @@ -0,0 +1,94 @@
> +/*
> + * Copyright (c) 2016-2017 Hisilicon Limited.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + */

Why not use the new SPDX ids?
e.g.
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/* Copyright (c) 2016-2017 Hisilicon Limited. */

See Linus posts and Thomas doc patches for details.

-- 
Cordially
Philippe Ombredanne

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH V2 net-next 2/8] net: hns3: Add mailbox support to VF driver
  2017-12-09  0:16         ` Philippe Ombredanne
  (?)
@ 2017-12-11 16:04         ` Salil Mehta
  -1 siblings, 0 replies; 24+ messages in thread
From: Salil Mehta @ 2017-12-11 16:04 UTC (permalink / raw)
  To: Philippe Ombredanne
  Cc: David S. Miller, Zhuangyuzeng (Yisen), lipeng (Y),
	mehta.salil.lnk, netdev, LKML, linux-rdma, Linuxarm

Hi Philippe,

> -----Original Message-----
> From: Philippe Ombredanne [mailto:pombredanne@nexb.com]
> Sent: Saturday, December 09, 2017 12:17 AM
> To: Salil Mehta <salil.mehta@huawei.com>
> Cc: David S. Miller <davem@davemloft.net>; Zhuangyuzeng (Yisen)
> <yisen.zhuang@huawei.com>; lipeng (Y) <lipeng321@huawei.com>;
> mehta.salil.lnk@gmail.com; netdev@vger.kernel.org; LKML <linux-
> kernel@vger.kernel.org>; linux-rdma@vger.kernel.org; Linuxarm
> <linuxarm@huawei.com>
> Subject: Re: [PATCH V2 net-next 2/8] net: hns3: Add mailbox support to
> VF driver
> 
> Sali,

Sali ---> Salil

Thanks
> 
> On Fri, Dec 8, 2017 at 10:16 PM, Salil Mehta <salil.mehta@huawei.com>
> wrote:
> > This patch adds the support of the mailbox to the VF driver. The
> > mailbox shall be used as an interface to communicate with the
> > PF driver for various purposes like {set|get} MAC related
> > operations, reset, link status etc. The mailbox supports both
> > synchronous and asynchronous command send to PF driver.
> >
> > Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> > Signed-off-by: lipeng <lipeng321@huawei.com>
> [...]
> > --- /dev/null
> > +++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
> > @@ -0,0 +1,94 @@
> > +/*
> > + * Copyright (c) 2016-2017 Hisilicon Limited.
> > + *
> > + * This program is free software; you can redistribute it and/or
> modify
> > + * it under the terms of the GNU General Public License as published
> by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + */
> 
> Why not use the new SPDX ids?
We can. I will change the headers of the [.c .h Makefile] files that are
part of the HNS3 VF driver change in the next V3 patch submission.

> e.g.
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/* Copyright (c) 2016-2017 Hisilicon Limited. */
> 
> See Linus posts and Thomas doc patches for details.
Sure.

Thanks!
> 
> --
> Cordially
> Philippe Ombredanne

^ permalink raw reply	[flat|nested] 24+ messages in thread
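
For reference, the license-rules documentation referred to above
prescribes a comment style per file type, so, assuming the driver keeps
its current GPL-2.0+ terms, the reworked headers would look roughly
like this:

	/* In .c source files (C++ style on the first line): */
	// SPDX-License-Identifier: GPL-2.0+
	// Copyright (c) 2016-2017 Hisilicon Limited.

	/* In .h headers (C style, so the tag stays valid wherever the
	 * header is included):
	 */
	/* SPDX-License-Identifier: GPL-2.0+ */

	# In Makefiles and Kconfig (shell comment style):
	# SPDX-License-Identifier: GPL-2.0+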

end of thread, other threads:[~2017-12-11 16:04 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-08 21:16 [PATCH V2 net-next 0/8] Hisilicon Network Subsystem 3 VF Ethernet Driver Salil Mehta
2017-12-08 21:16 ` Salil Mehta
2017-12-08 21:16 ` [PATCH V2 net-next 1/8] net: hns3: Add HNS3 VF IMP(Integrated Management Proc) cmd interface Salil Mehta
2017-12-08 21:16   ` Salil Mehta
     [not found] ` <20171208211702.20104-1-salil.mehta-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-12-08 21:16   ` [PATCH V2 net-next 2/8] net: hns3: Add mailbox support to VF driver Salil Mehta
2017-12-08 21:16     ` Salil Mehta
2017-12-08 21:16     ` Salil Mehta
     [not found]     ` <20171208211702.20104-3-salil.mehta-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2017-12-09  0:16       ` Philippe Ombredanne
2017-12-09  0:16         ` Philippe Ombredanne
2017-12-11 16:04         ` Salil Mehta
2017-12-08 21:16   ` [PATCH V2 net-next 3/8] net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support Salil Mehta
2017-12-08 21:16     ` Salil Mehta
2017-12-08 21:16     ` Salil Mehta
2017-12-08 21:16   ` [PATCH V2 net-next 4/8] net: hns3: Add HNS3 VF driver to kernel build framework Salil Mehta
2017-12-08 21:16     ` Salil Mehta
2017-12-08 21:16     ` Salil Mehta
2017-12-08 21:16 ` [PATCH V2 net-next 5/8] net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC Salil Mehta
2017-12-08 21:16   ` Salil Mehta
2017-12-08 21:17 ` [PATCH V2 net-next 6/8] net: hns3: Add mailbox support to PF driver Salil Mehta
2017-12-08 21:17   ` Salil Mehta
2017-12-08 21:17 ` [PATCH V2 net-next 7/8] net: hns3: Change PF to add ring-vect binding & resetQ to mailbox Salil Mehta
2017-12-08 21:17   ` Salil Mehta
2017-12-08 21:17 ` [PATCH V2 net-next 8/8] net: hns3: Add mailbox interrupt handling to PF driver Salil Mehta
2017-12-08 21:17   ` Salil Mehta
