dev.dpdk.org archive mirror
* [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver
@ 2019-08-23 13:46 Wei Hu (Xavier)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 01/22] net/hns3: add hardware registers definition Wei Hu (Xavier)
                   ` (22 more replies)
  0 siblings, 23 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

The Hisilicon Network Subsystem is a long-term evolution IP intended
for use in Hisilicon ICT SoCs such as Kunpeng 920.

This series adds a DPDK rte_ethdev poll mode driver for the hns3
(Hisilicon Network Subsystem 3) network engine.

Wei Hu (Xavier) (22):
  net/hns3: add hardware registers definition
  net/hns3: add some definitions for data structure and macro
  net/hns3: register hns3 PMD driver
  net/hns3: add support for cmd of hns3 PMD driver
  net/hns3: add the initialization of hns3 PMD driver
  net/hns3: add support for MAC address related operations
  net/hns3: add support for some misc operations
  net/hns3: add support for link update operation
  net/hns3: add support for flow directory of hns3 PMD driver
  net/hns3: add support for RSS of hns3 PMD driver
  net/hns3: add support for flow control of hns3 PMD driver
  net/hns3: add support for VLAN of hns3 PMD driver
  net/hns3: add support for mailbox of hns3 PMD driver
  net/hns3: add support for hns3 VF PMD driver
  net/hns3: add package and queue related operation
  net/hns3: add start stop configure promiscuous ops
  net/hns3: add dump register ops for hns3 PMD driver
  net/hns3: add abnormal interrupt process for hns3 PMD driver
  net/hns3: add stats related ops for hns3 PMD driver
  net/hns3: add reset related process for hns3 PMD driver
  net/hns3: add multiple process support for hns3 PMD driver
  net/hns3: add hns3 build files

 MAINTAINERS                                  |    7 +
 config/common_armv8a_linux                   |    5 +
 config/common_base                           |    5 +
 config/defconfig_arm64-armv8a-linuxapp-clang |    2 +
 doc/guides/nics/features/hns3.ini            |   38 +
 doc/guides/nics/hns3.rst                     |   55 +
 drivers/net/Makefile                         |    1 +
 drivers/net/hns3/Makefile                    |   43 +
 drivers/net/hns3/hns3_cmd.c                  |  559 +++
 drivers/net/hns3/hns3_cmd.h                  |  752 ++++
 drivers/net/hns3/hns3_dcb.c                  | 1650 +++++++++
 drivers/net/hns3/hns3_dcb.h                  |  166 +
 drivers/net/hns3/hns3_ethdev.c               | 4961 ++++++++++++++++++++++++++
 drivers/net/hns3/hns3_ethdev.h               |  637 ++++
 drivers/net/hns3/hns3_ethdev_vf.c            | 1739 +++++++++
 drivers/net/hns3/hns3_fdir.c                 | 1065 ++++++
 drivers/net/hns3/hns3_fdir.h                 |  203 ++
 drivers/net/hns3/hns3_flow.c                 | 1878 ++++++++++
 drivers/net/hns3/hns3_intr.c                 | 1168 ++++++
 drivers/net/hns3/hns3_intr.h                 |   79 +
 drivers/net/hns3/hns3_logs.h                 |   34 +
 drivers/net/hns3/hns3_mbx.c                  |  363 ++
 drivers/net/hns3/hns3_mbx.h                  |  136 +
 drivers/net/hns3/hns3_mp.c                   |  219 ++
 drivers/net/hns3/hns3_mp.h                   |   14 +
 drivers/net/hns3/hns3_regs.c                 |  378 ++
 drivers/net/hns3/hns3_regs.h                 |   99 +
 drivers/net/hns3/hns3_rss.c                  | 1388 +++++++
 drivers/net/hns3/hns3_rss.h                  |  139 +
 drivers/net/hns3/hns3_rxtx.c                 | 1344 +++++++
 drivers/net/hns3/hns3_rxtx.h                 |  287 ++
 drivers/net/hns3/hns3_stats.c                |  847 +++++
 drivers/net/hns3/hns3_stats.h                |  146 +
 drivers/net/hns3/meson.build                 |   19 +
 drivers/net/hns3/rte_pmd_hns3_version.map    |    3 +
 drivers/net/meson.build                      |    1 +
 mk/rte.app.mk                                |    1 +
 37 files changed, 20431 insertions(+)
 create mode 100644 doc/guides/nics/features/hns3.ini
 create mode 100644 doc/guides/nics/hns3.rst
 create mode 100644 drivers/net/hns3/Makefile
 create mode 100644 drivers/net/hns3/hns3_cmd.c
 create mode 100644 drivers/net/hns3/hns3_cmd.h
 create mode 100644 drivers/net/hns3/hns3_dcb.c
 create mode 100644 drivers/net/hns3/hns3_dcb.h
 create mode 100644 drivers/net/hns3/hns3_ethdev.c
 create mode 100644 drivers/net/hns3/hns3_ethdev.h
 create mode 100644 drivers/net/hns3/hns3_ethdev_vf.c
 create mode 100644 drivers/net/hns3/hns3_fdir.c
 create mode 100644 drivers/net/hns3/hns3_fdir.h
 create mode 100644 drivers/net/hns3/hns3_flow.c
 create mode 100644 drivers/net/hns3/hns3_intr.c
 create mode 100644 drivers/net/hns3/hns3_intr.h
 create mode 100644 drivers/net/hns3/hns3_logs.h
 create mode 100644 drivers/net/hns3/hns3_mbx.c
 create mode 100644 drivers/net/hns3/hns3_mbx.h
 create mode 100644 drivers/net/hns3/hns3_mp.c
 create mode 100644 drivers/net/hns3/hns3_mp.h
 create mode 100644 drivers/net/hns3/hns3_regs.c
 create mode 100644 drivers/net/hns3/hns3_regs.h
 create mode 100644 drivers/net/hns3/hns3_rss.c
 create mode 100644 drivers/net/hns3/hns3_rss.h
 create mode 100644 drivers/net/hns3/hns3_rxtx.c
 create mode 100644 drivers/net/hns3/hns3_rxtx.h
 create mode 100644 drivers/net/hns3/hns3_stats.c
 create mode 100644 drivers/net/hns3/hns3_stats.h
 create mode 100644 drivers/net/hns3/meson.build
 create mode 100644 drivers/net/hns3/rte_pmd_hns3_version.map

-- 
2.7.4


^ permalink raw reply	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 01/22] net/hns3: add hardware registers definition
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data structure and macro Wei Hu (Xavier)
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

The Hisilicon Network Subsystem is a long-term evolution IP intended
for use in Hisilicon ICT SoCs such as Kunpeng 920.

This patch adds the hardware register definition header file for the
hns3 (Hisilicon Network Subsystem 3) PMD driver.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_regs.h | 98 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_regs.h

diff --git a/drivers/net/hns3/hns3_regs.h b/drivers/net/hns3/hns3_regs.h
new file mode 100644
index 0000000..5a4f315
--- /dev/null
+++ b/drivers/net/hns3/hns3_regs.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_REGS_H_
+#define _HNS3_REGS_H_
+
+/* bar registers for cmdq */
+#define HNS3_CMDQ_TX_ADDR_L_REG		0x27000
+#define HNS3_CMDQ_TX_ADDR_H_REG		0x27004
+#define HNS3_CMDQ_TX_DEPTH_REG		0x27008
+#define HNS3_CMDQ_TX_TAIL_REG		0x27010
+#define HNS3_CMDQ_TX_HEAD_REG		0x27014
+#define HNS3_CMDQ_RX_ADDR_L_REG		0x27018
+#define HNS3_CMDQ_RX_ADDR_H_REG		0x2701c
+#define HNS3_CMDQ_RX_DEPTH_REG		0x27020
+#define HNS3_CMDQ_RX_TAIL_REG		0x27024
+#define HNS3_CMDQ_RX_HEAD_REG		0x27028
+#define HNS3_CMDQ_INTR_STS_REG		0x27104
+#define HNS3_CMDQ_INTR_EN_REG		0x27108
+#define HNS3_CMDQ_INTR_GEN_REG		0x2710C
+
+/* Vector0 interrupt CMDQ event source register(RW) */
+#define HNS3_VECTOR0_CMDQ_SRC_REG	0x27100
+/* Vector0 interrupt CMDQ event status register(RO) */
+#define HNS3_VECTOR0_CMDQ_STAT_REG	0x27104
+
+#define HNS3_VECTOR0_OTHER_INT_STS_REG	0x20800
+
+#define HNS3_MISC_VECTOR_REG_BASE	0x20400
+#define HNS3_VECTOR0_OTER_EN_REG	0x20600
+#define HNS3_MISC_RESET_STS_REG		0x20700
+#define HNS3_GLOBAL_RESET_REG		0x20A00
+#define HNS3_FUN_RST_ING		0x20C00
+#define HNS3_GRO_EN_REG			0x28000
+
+/* Vector0 register bits for reset */
+#define HNS3_VECTOR0_FUNCRESET_INT_B	0
+#define HNS3_VECTOR0_GLOBALRESET_INT_B	5
+#define HNS3_VECTOR0_CORERESET_INT_B	6
+#define HNS3_VECTOR0_IMPRESET_INT_B	7
+
+/* CMDQ register bits for RX event(=MBX event) */
+#define HNS3_VECTOR0_RX_CMDQ_INT_B	1
+#define HNS3_VECTOR0_REG_MSIX_MASK	0x1FF00
+/* RST register bits for RESET event */
+#define HNS3_VECTOR0_RST_INT_B	2
+
+#define HNS3_VF_RST_ING			0x07008
+#define HNS3_VF_RST_ING_BIT		BIT(16)
+
+/* bar registers for rcb */
+#define HNS3_RING_RX_BASEADDR_L_REG		0x00000
+#define HNS3_RING_RX_BASEADDR_H_REG		0x00004
+#define HNS3_RING_RX_BD_NUM_REG			0x00008
+#define HNS3_RING_RX_BD_LEN_REG			0x0000C
+#define HNS3_RING_RX_MERGE_EN_REG		0x00014
+#define HNS3_RING_RX_TAIL_REG			0x00018
+#define HNS3_RING_RX_HEAD_REG			0x0001C
+#define HNS3_RING_RX_FBDNUM_REG			0x00020
+#define HNS3_RING_RX_OFFSET_REG			0x00024
+#define HNS3_RING_RX_FBD_OFFSET_REG		0x00028
+#define HNS3_RING_RX_PKTNUM_RECORD_REG		0x0002C
+#define HNS3_RING_RX_STASH_REG			0x00030
+#define HNS3_RING_RX_BD_ERR_REG			0x00034
+
+#define HNS3_RING_TX_BASEADDR_L_REG		0x00040
+#define HNS3_RING_TX_BASEADDR_H_REG		0x00044
+#define HNS3_RING_TX_BD_NUM_REG			0x00048
+#define HNS3_RING_TX_PRIORITY_REG		0x0004C
+#define HNS3_RING_TX_TC_REG			0x00050
+#define HNS3_RING_TX_MERGE_EN_REG		0x00054
+#define HNS3_RING_TX_TAIL_REG			0x00058
+#define HNS3_RING_TX_HEAD_REG			0x0005C
+#define HNS3_RING_TX_FBDNUM_REG			0x00060
+#define HNS3_RING_TX_OFFSET_REG			0x00064
+#define HNS3_RING_TX_EBD_NUM_REG		0x00068
+#define HNS3_RING_TX_PKTNUM_RECORD_REG		0x0006C
+#define HNS3_RING_TX_EBD_OFFSET_REG		0x00070
+#define HNS3_RING_TX_BD_ERR_REG			0x00074
+
+#define HNS3_RING_EN_REG			0x00090
+
+#define HNS3_RING_EN_B				0
+
+#define HNS3_TQP_REG_OFFSET			0x80000
+#define HNS3_TQP_REG_SIZE			0x200
+
+/* bar registers for tqp interrupt */
+#define HNS3_TQP_INTR_CTRL_REG			0x20000
+#define HNS3_TQP_INTR_GL0_REG			0x20100
+#define HNS3_TQP_INTR_GL1_REG			0x20200
+#define HNS3_TQP_INTR_GL2_REG			0x20300
+#define HNS3_TQP_INTR_RL_REG			0x20900
+
+#define HNS3_TQP_INTR_REG_SIZE			4
+
+#endif /* _HNS3_REGS_H_ */
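
As an illustration of how these offsets compose (a sketch, not part of
the patch): each TQP owns an HNS3_TQP_REG_SIZE (0x200) byte register
block starting at HNS3_TQP_REG_OFFSET (0x80000), and the HNS3_RING_*
defines above are offsets within that block. A hypothetical helper to
locate a queue's ring registers, using the hns3_write_dev() accessor
added later in this series:

	/* Hypothetical helper: offset of a ring register of queue_id. */
	static inline uint32_t
	hns3_tqp_reg(uint16_t queue_id, uint32_t ring_reg)
	{
		return HNS3_TQP_REG_OFFSET +
		       (uint32_t)queue_id * HNS3_TQP_REG_SIZE + ring_reg;
	}

	/* e.g. advance the RX tail pointer of queue 3 */
	hns3_write_dev(hw, hns3_tqp_reg(3, HNS3_RING_RX_TAIL_REG), new_tail);
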
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data structure and macro
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 01/22] net/hns3: add hardware registers definition Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30  8:25   ` Gavin Hu (Arm Technology China)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 03/22] net/hns3: register hns3 PMD driver Wei Hu (Xavier)
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds data structure definitions, macro definitions and
inline functions for the hns3 PMD driver.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.h | 609 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 609 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_ethdev.h

diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
new file mode 100644
index 0000000..bfb54f2
--- /dev/null
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -0,0 +1,609 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_ETHDEV_H_
+#define _HNS3_ETHDEV_H_
+
+#include <sys/time.h>
+#include <rte_alarm.h>
+
+/* Vendor ID */
+#define PCI_VENDOR_ID_HUAWEI			0x19e5
+
+/* Device IDs */
+#define HNS3_DEV_ID_GE				0xA220
+#define HNS3_DEV_ID_25GE			0xA221
+#define HNS3_DEV_ID_25GE_RDMA			0xA222
+#define HNS3_DEV_ID_50GE_RDMA			0xA224
+#define HNS3_DEV_ID_100G_RDMA_MACSEC		0xA226
+#define HNS3_DEV_ID_100G_VF			0xA22E
+#define HNS3_DEV_ID_100G_RDMA_PFC_VF		0xA22F
+
+#define HNS3_UC_MACADDR_NUM		96
+#define HNS3_MC_MACADDR_NUM		128
+
+#define HNS3_MAX_BD_SIZE		65535
+#define HNS3_MAX_TX_BD_PER_PKT		8
+#define HNS3_MAX_FRAME_LEN		9728
+#define HNS3_MIN_FRAME_LEN		64
+#define HNS3_VLAN_TAG_SIZE		4
+#define HNS3_DEFAULT_RX_BUF_LEN		2048
+
+#define HNS3_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HNS3_VLAN_TAG_SIZE * 2)
+#define HNS3_PKTLEN_TO_MTU(pktlen)	((pktlen) - HNS3_ETH_OVERHEAD)
+#define HNS3_MAX_MTU	(HNS3_MAX_FRAME_LEN - HNS3_ETH_OVERHEAD)
+#define HNS3_DEFAULT_MTU		1500UL
+#define HNS3_DEFAULT_FRAME_LEN		(HNS3_DEFAULT_MTU + HNS3_ETH_OVERHEAD)
+
+#define HNS3_4_TCS			4
+#define HNS3_8_TCS			8
+#define HNS3_MAX_TC_NUM			8
+
+#define HNS3_MAX_PF_NUM			8
+#define HNS3_UMV_TBL_SIZE		3072
+#define HNS3_DEFAULT_UMV_SPACE_PER_PF \
+	(HNS3_UMV_TBL_SIZE / HNS3_MAX_PF_NUM)
+
+#define HNS3_PF_CFG_BLOCK_SIZE		32
+#define HNS3_PF_CFG_DESC_NUM \
+	(HNS3_PF_CFG_BLOCK_SIZE / HNS3_CFG_RD_LEN_BYTES)
+
+#define HNS3_DEFAULT_ENABLE_PFC_NUM	0
+
+#define HNS3_INTR_UNREG_FAIL_RETRY_CNT	5
+#define HNS3_INTR_UNREG_FAIL_DELAY_MS	500
+
+#define HNS3_QUIT_RESET_CNT		10
+#define HNS3_QUIT_RESET_DELAY_MS	100
+
+#define HNS3_POLL_RESPONE_MS		1
+
+#define HNS3_MAX_USER_PRIO		8
+#define HNS3_PG_NUM			4
+enum hns3_fc_mode {
+	HNS3_FC_NONE,
+	HNS3_FC_RX_PAUSE,
+	HNS3_FC_TX_PAUSE,
+	HNS3_FC_FULL,
+	HNS3_FC_DEFAULT
+};
+
+#define HNS3_SCH_MODE_SP	0
+#define HNS3_SCH_MODE_DWRR	1
+struct hns3_pg_info {
+	uint8_t pg_id;
+	uint8_t pg_sch_mode;  /* 0: sp; 1: dwrr */
+	uint8_t tc_bit_map;
+	uint32_t bw_limit;
+	uint8_t tc_dwrr[HNS3_MAX_TC_NUM];
+};
+
+struct hns3_tc_info {
+	uint8_t tc_id;
+	uint8_t tc_sch_mode;  /* 0: sp; 1: dwrr */
+	uint8_t pgid;
+	uint32_t bw_limit;
+	uint8_t up_to_tc_map; /* user priority mapping on the TC */
+};
+
+struct hns3_dcb_info {
+	uint8_t num_tc;
+	uint8_t num_pg;     /* It must be 1 if vNET-Base schd */
+	uint8_t pg_dwrr[HNS3_PG_NUM];
+	uint8_t prio_tc[HNS3_MAX_USER_PRIO];
+	struct hns3_pg_info pg_info[HNS3_PG_NUM];
+	struct hns3_tc_info tc_info[HNS3_MAX_TC_NUM];
+	uint8_t hw_pfc_map; /* Whether packet drop is allowed on each TC */
+	uint8_t pfc_en; /* Whether PFC is enabled for each user priority */
+};
+
+enum hns3_fc_status {
+	HNS3_FC_STATUS_NONE,
+	HNS3_FC_STATUS_MAC_PAUSE,
+	HNS3_FC_STATUS_PFC,
+};
+
+struct hns3_tc_queue_info {
+	uint8_t	tqp_offset;     /* TQP offset from base TQP */
+	uint8_t	tqp_count;      /* Total TQPs */
+	uint8_t	tc;             /* TC index */
+	bool enable;            /* Whether this TC is enabled */
+};
+
+struct hns3_cfg {
+	uint8_t vmdq_vport_num;
+	uint8_t tc_num;
+	uint16_t tqp_desc_num;
+	uint16_t rx_buf_len;
+	uint16_t rss_size_max;
+	uint8_t phy_addr;
+	uint8_t media_type;
+	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t default_speed;
+	uint32_t numa_node_map;
+	uint8_t speed_ability;
+	uint16_t umv_space;
+};
+
+/* mac media type */
+enum hns3_media_type {
+	HNS3_MEDIA_TYPE_UNKNOWN,
+	HNS3_MEDIA_TYPE_FIBER,
+	HNS3_MEDIA_TYPE_COPPER,
+	HNS3_MEDIA_TYPE_BACKPLANE,
+	HNS3_MEDIA_TYPE_NONE,
+};
+
+struct hns3_mac {
+	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+	bool default_addr_setted; /* whether default addr (mac_addr) is set */
+	uint8_t media_type;
+	uint8_t phy_addr;
+	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+};
+
+/* Primary process maintains driver state in main thread.
+ *
+ * +---------------+
+ * | UNINITIALIZED |<-----------+
+ * +---------------+		|
+ *	|.eth_dev_init		|.eth_dev_uninit
+ *	V			|
+ * +---------------+------------+
+ * |  INITIALIZED  |
+ * +---------------+<-----------<---------------+
+ *	|.dev_configure		|		|
+ *	V			|failed		|
+ * +---------------+------------+		|
+ * |  CONFIGURING  |				|
+ * +---------------+----+			|
+ *	|success	|			|
+ *	|		|		+---------------+
+ *	|		|		|    CLOSING    |
+ *	|		|		+---------------+
+ *	|		|			^
+ *	V		|.dev_configure		|
+ * +---------------+----+			|.dev_close
+ * |  CONFIGURED   |----------------------------+
+ * +---------------+<-----------+
+ *	|.dev_start		|
+ *	V			|
+ * +---------------+		|
+ * |   STARTING    |------------^
+ * +---------------+ failed	|
+ *	|success		|
+ *	|		+---------------+
+ *	|		|   STOPPING    |
+ *	|		+---------------+
+ *	|			^
+ *	V			|.dev_stop
+ * +---------------+------------+
+ * |    STARTED    |
+ * +---------------+
+ */
+enum hns3_adapter_state {
+	HNS3_NIC_UNINITIALIZED = 0,
+	HNS3_NIC_INITIALIZED,
+	HNS3_NIC_CONFIGURING,
+	HNS3_NIC_CONFIGURED,
+	HNS3_NIC_STARTING,
+	HNS3_NIC_STARTED,
+	HNS3_NIC_STOPPING,
+	HNS3_NIC_CLOSING,
+	HNS3_NIC_CLOSED,
+	HNS3_NIC_REMOVED,
+	HNS3_NIC_NSTATES
+};
+
+/* Reset various stages, execute in order */
+enum hns3_reset_stage {
+	/* Stop query services, stop transceiver, disable MAC */
+	RESET_STAGE_DOWN,
+	/* Clear reset completion flags, disable send command */
+	RESET_STAGE_PREWAIT,
+	/* Inform IMP to start resetting */
+	RESET_STAGE_REQ_HW_RESET,
+	/* Waiting for hardware reset to complete */
+	RESET_STAGE_WAIT,
+	/* Reinitialize hardware */
+	RESET_STAGE_DEV_INIT,
+	/* Restore user settings and enable MAC */
+	RESET_STAGE_RESTORE,
+	/* Restart query services, start transceiver */
+	RESET_STAGE_DONE,
+	/* Not in reset state */
+	RESET_STAGE_NONE,
+};
+
+enum hns3_reset_level {
+	HNS3_NONE_RESET,
+	HNS3_VF_FUNC_RESET, /* A VF function reset */
+	/*
+	 * All VFs under a PF perform a function reset.
+	 * The kernel PF driver uses the mailbox to tell the DPDK VF to
+	 * reset; the value of this reset level must match the one defined
+	 * in the kernel driver.
+	 */
+	HNS3_VF_PF_FUNC_RESET = 2,
+	/*
+	 * All VFs under a PF perform an FLR reset.
+	 * The kernel PF driver uses the mailbox to tell the DPDK VF to
+	 * reset; the value of this reset level must match the one defined
+	 * in the kernel driver.
+	 */
+	HNS3_VF_FULL_RESET = 3,
+	HNS3_FLR_RESET,     /* A VF perform FLR reset */
+	/* All VFs under the rootport perform a global or IMP reset */
+	HNS3_VF_RESET,
+	HNS3_FUNC_RESET,    /* A PF function reset */
+	/* All PFs under the rootport perform a global reset */
+	HNS3_GLOBAL_RESET,
+	HNS3_IMP_RESET,     /* All PFs under the rootport perform an IMP reset */
+	HNS3_MAX_RESET
+};
+
+enum hns3_wait_result {
+	HNS3_WAIT_UNKNOWN,
+	HNS3_WAIT_REQUEST,
+	HNS3_WAIT_SUCCESS,
+	HNS3_WAIT_TIMEOUT
+};
+
+#define HNS3_RESET_SYNC_US 100000
+
+struct hns3_reset_stats {
+	uint64_t request_cnt; /* Total number of reset requests */
+	uint64_t global_cnt;  /* Total number of GLOBAL resets */
+	uint64_t imp_cnt;     /* Total number of IMP resets */
+	uint64_t exec_cnt;    /* Total number of resets executed */
+	uint64_t success_cnt; /* Total number of successful resets */
+	uint64_t fail_cnt;    /* Total number of failed resets */
+	uint64_t merge_cnt;   /* Resets merged into a higher-level reset */
+};
+
+typedef bool (*check_completion_func)(struct hns3_hw *hw);
+
+struct hns3_wait_data {
+	void *hns;
+	uint64_t end_ms;
+	uint64_t interval;
+	int16_t count;
+	enum hns3_wait_result result;
+	check_completion_func check_completion;
+};
+
+struct hns3_reset_ops {
+	void (*reset_service)(void *arg);
+	int (*stop_service)(struct hns3_adapter *hns);
+	int (*prepare_reset)(struct hns3_adapter *hns);
+	int (*wait_hardware_ready)(struct hns3_adapter *hns);
+	int (*reinit_dev)(struct hns3_adapter *hns);
+	int (*restore_conf)(struct hns3_adapter *hns);
+	int (*start_service)(struct hns3_adapter *hns);
+};
+
+enum hns3_schedule {
+	SCHEDULE_NONE,
+	SCHEDULE_PENDING,
+	SCHEDULE_REQUESTED,
+	SCHEDULE_DEFERRED,
+};
+
+struct hns3_reset_data {
+	enum hns3_reset_stage stage;
+	rte_atomic16_t schedule;
+	/* Reset flag, covering the entire reset process */
+	rte_atomic16_t resetting;
+	/* Used to disable sending cmds during reset */
+	rte_atomic16_t disable_cmd;
+	/* The reset level being processed */
+	enum hns3_reset_level level;
+	/* Reset level set, each bit represents a reset level */
+	uint64_t pending;
+	/* Request reset level set, from interrupt or mailbox */
+	uint64_t request;
+	int attempts; /* Reset failure retry */
+	int retries;  /* Timeout failure retry in reset_post */
+	/*
+	 * During a global or IMP reset, commands cannot be sent to stop
+	 * the tx/rx queues. The tx/rx queues may still access mbufs during
+	 * the reset process, so the mbufs must be released after the reset
+	 * completes. The mbuf_deferred_free flag marks whether the mbufs
+	 * need to be released.
+	 */
+	bool mbuf_deferred_free;
+	struct timeval start_time;
+	struct hns3_reset_stats stats;
+	const struct hns3_reset_ops *ops;
+	struct hns3_wait_data *wait_data;
+};
+
+struct hns3_hw {
+	struct rte_eth_dev_data *data;
+	void *io_base;
+	struct hns3_mac mac;
+	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
+	uint32_t fw_version;
+
+	uint16_t num_msi;
+	uint16_t total_tqps_num;    /* total task queue pairs of this PF */
+	uint16_t tqps_num;          /* num task queue pairs of this function */
+	uint16_t rss_size_max;      /* HW defined max RSS task queue */
+	uint16_t rx_buf_len;
+	uint16_t num_tx_desc;       /* desc num per tx queue */
+	uint16_t num_rx_desc;       /* desc num per rx queue */
+
+	struct rte_ether_addr mc_addrs[HNS3_MC_MACADDR_NUM];
+	int mc_addrs_num; /* Number of multicast MAC addresses */
+
+	uint8_t num_tc;             /* Total number of enabled TCs */
+	uint8_t hw_tc_map;
+	enum hns3_fc_mode current_mode;
+	enum hns3_fc_mode requested_mode;
+	struct hns3_dcb_info dcb_info;
+	enum hns3_fc_status current_fc_status; /* current flow control status */
+	struct hns3_tc_queue_info tc_queue[HNS3_MAX_TC_NUM];
+	uint16_t alloc_tqps;
+	uint16_t alloc_rss_size;    /* Queue number per TC */
+
+	uint32_t flag;
+	/*
+	 * PMD setup and configuration are not thread safe. Since they are
+	 * not performance sensitive, it is better to guarantee thread
+	 * safety with a device-level lock. Adapter control operations that
+	 * change its state should acquire the lock.
+	 */
+	rte_spinlock_t lock;
+	enum hns3_adapter_state adapter_state;
+	struct hns3_reset_data reset;
+};
+
+#define HNS3_FLAG_TC_BASE_SCH_MODE		1
+#define HNS3_FLAG_VNET_BASE_SCH_MODE		2
+
+struct hns3_err_msix_intr_stats {
+	uint64_t mac_afifo_tnl_intr_cnt;
+	uint64_t ppu_mpf_abnormal_intr_st2_cnt;
+	uint64_t ssu_port_based_pf_intr_cnt;
+	uint64_t ppp_pf_abnormal_intr_cnt;
+	uint64_t ppu_pf_abnormal_intr_cnt;
+};
+
+/* vlan entry information. */
+struct hns3_user_vlan_table {
+	LIST_ENTRY(hns3_user_vlan_table) next;
+	bool hd_tbl_status;
+	uint16_t vlan_id;
+};
+
+struct hns3_port_base_vlan_config {
+	uint16_t state;
+	uint16_t pvid;
+};
+
+/* Vlan tag configuration for RX direction */
+struct hns3_rx_vtag_cfg {
+	uint8_t rx_vlan_offload_en; /* Whether rx vlan offload is enabled */
+	uint8_t strip_tag1_en;      /* Whether to strip the inner vlan tag */
+	uint8_t strip_tag2_en;      /* Whether to strip the outer vlan tag */
+	uint8_t vlan1_vlan_prionly; /* Inner VLAN tag up to descriptor enable */
+	uint8_t vlan2_vlan_prionly; /* Outer VLAN tag up to descriptor enable */
+};
+
+/* Vlan tag configuration for TX direction */
+struct hns3_tx_vtag_cfg {
+	bool accept_tag1;           /* Whether accept tag1 packet from host */
+	bool accept_untag1;         /* Whether accept untag1 packet from host */
+	bool accept_tag2;
+	bool accept_untag2;
+	bool insert_tag1_en;        /* Whether insert inner vlan tag */
+	bool insert_tag2_en;        /* Whether insert outer vlan tag */
+	uint16_t default_tag1;      /* The default inner vlan tag to insert */
+	uint16_t default_tag2;      /* The default outer vlan tag to insert */
+};
+
+struct hns3_vtag_cfg {
+	struct hns3_rx_vtag_cfg rx_vcfg;
+	struct hns3_tx_vtag_cfg tx_vcfg;
+};
+
+/* Request types for IPC. */
+enum hns3_mp_req_type {
+	HNS3_MP_REQ_START_RXTX = 1,
+	HNS3_MP_REQ_STOP_RXTX,
+	HNS3_MP_REQ_MAX
+};
+
+/* Parameters for IPC. */
+struct hns3_mp_param {
+	enum hns3_mp_req_type type;
+	int port_id;
+	int result;
+};
+
+/* Request timeout for IPC. */
+#define HNS3_MP_REQ_TIMEOUT_SEC 5
+
+/* Key string for IPC. */
+#define HNS3_MP_NAME "net_hns3_mp"
+
+struct hns3_pf {
+	struct hns3_adapter *adapter;
+	bool is_main_pf;
+
+	uint32_t pkt_buf_size; /* Total pf buf size for tx/rx */
+	uint32_t tx_buf_size; /* Tx buffer size for each TC */
+	uint32_t dv_buf_size; /* Dv buffer size for each TC */
+
+	uint16_t mps; /* Max packet size */
+
+	uint8_t tx_sch_mode;
+	uint8_t tc_max; /* max number of TCs the driver supports */
+	uint8_t local_max_tc; /* max number of local tc */
+	uint8_t pfc_max;
+	uint8_t prio_tc[HNS3_MAX_USER_PRIO]; /* TC indexed by prio */
+	uint16_t pause_time;
+	bool support_fc_autoneg;       /* supports FC autonegotiation */
+
+	uint16_t wanted_umv_size;
+	uint16_t max_umv_size;
+	uint16_t used_umv_size;
+
+	/* Statistics information for abnormal interrupt */
+	struct hns3_err_msix_intr_stats abn_int_stats;
+
+	bool support_sfp_query;
+
+	struct hns3_vtag_cfg vtag_config;
+	struct hns3_port_base_vlan_config port_base_vlan_cfg;
+	LIST_HEAD(vlan_tbl, hns3_user_vlan_table) vlan_list;
+};
+
+struct hns3_vf {
+	struct hns3_adapter *adapter;
+};
+
+struct hns3_adapter {
+	struct hns3_hw hw;
+
+	/* Specific for PF or VF */
+	bool is_vf; /* false - PF, true - VF */
+	union {
+		struct hns3_pf pf;
+		struct hns3_vf vf;
+	};
+};
+
+#define HNS3_DEV_SUPPORT_DCB_B			0x0
+
+#define hns3_dev_dcb_supported(hw) \
+	hns3_get_bit((hw)->flag, HNS3_DEV_SUPPORT_DCB_B)
+
+#define HNS3_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct hns3_adapter *)adapter)->hw)
+#define HNS3_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct hns3_adapter *)adapter)
+#define HNS3_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct hns3_adapter *)adapter)->pf)
+#define HNS3VF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct hns3_adapter *)adapter)->vf)
+#define HNS3_DEV_HW_TO_ADAPTER(hw) \
+	container_of(hw, struct hns3_adapter, hw)
+
+#define hns3_set_field(origin, mask, shift, val) \
+	do { \
+		(origin) &= (~(mask)); \
+		(origin) |= ((val) << (shift)) & (mask); \
+	} while (0)
+#define hns3_get_field(origin, mask, shift) \
+	(((origin) & (mask)) >> (shift))
+#define hns3_set_bit(origin, shift, val) \
+	hns3_set_field((origin), (0x1UL << (shift)), (shift), (val))
+#define hns3_get_bit(origin, shift) \
+	hns3_get_field((origin), (0x1UL << (shift)), (shift))
+
+/*
+ * upper_32_bits - return bits 32-63 of a number
+ * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress
+ * the "right shift count >= width of type" warning when that quantity is
+ * 32-bits.
+ */
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+
+/* lower_32_bits - return bits 0-31 of a number */
+#define lower_32_bits(n) ((uint32_t)(n))
+
+#define BIT(nr) (1UL << (nr))
+
+#define BITS_PER_LONG	(__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+
+#define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))
+#define rounddown(x, y) ((x) - ((x) % (y)))
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+
+#define max_t(type, x, y) ({                    \
+	type __max1 = (x);                      \
+	type __max2 = (y);                      \
+	__max1 > __max2 ? __max1 : __max2; })
+
+static inline void hns3_write_reg(void *base, uint32_t reg, uint32_t value)
+{
+	rte_write32(value, (volatile void *)((char *)base + reg));
+}
+
+static inline uint32_t hns3_read_reg(void *base, uint32_t reg)
+{
+	return rte_read32((volatile void *)((char *)base + reg));
+}
+
+#define hns3_write_dev(a, reg, value) \
+	hns3_write_reg((a)->io_base, (reg), (value))
+
+#define hns3_read_dev(a, reg) \
+	hns3_read_reg((a)->io_base, (reg))
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+#define NEXT_ITEM_OF_ACTION(act, actions, index)                        \
+	do {								\
+		act = (actions) + (index);				\
+		while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {	\
+			(index)++;					\
+			act = actions + index;				\
+		}							\
+	} while (0)
+
+#define MSEC_PER_SEC              1000L
+#define USEC_PER_MSEC             1000L
+
+static inline uint64_t
+get_timeofday_ms(void)
+{
+	struct timeval tv;
+
+	(void)gettimeofday(&tv, NULL);
+
+	return (uint64_t)tv.tv_sec * MSEC_PER_SEC + tv.tv_usec / USEC_PER_MSEC;
+}
+
+static inline uint64_t
+hns3_atomic_test_bit(unsigned int nr, volatile uint64_t *addr)
+{
+	uint64_t res;
+
+	rte_mb();
+	res = ((*addr) & (1UL << nr)) != 0;
+	rte_mb();
+	return res;
+}
+
+static inline void
+hns3_atomic_set_bit(unsigned int nr, volatile uint64_t *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void
+hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr)
+{
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline int64_t
+hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
+{
+	uint64_t mask = (1UL << nr);
+
+	return __sync_fetch_and_and(addr, ~mask) & mask;
+}
+
+#endif /* _HNS3_ETHDEV_H_ */
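
Two quick illustrations of the helpers above (standalone sketches with
assumed values, not part of the patch). hns3_set_field() clears the
masked bits before OR-ing in the shifted value, so rewriting a field
never accumulates stale bits:

	uint32_t word = 0;

	hns3_set_field(word, GENMASK(7, 4), 4, 0x3); /* word == 0x30 */
	hns3_set_field(word, GENMASK(7, 4), 4, 0x5); /* word == 0x50 */
	uint32_t f = hns3_get_field(word, GENMASK(7, 4), 4); /* f == 0x5 */
	hns3_set_bit(word, 0, 1);                    /* word == 0x51 */

The atomic bit helpers are meant for bitmask words shared across
threads, such as the reset.pending level set, where each bit number is
a hns3_reset_level value:

	hns3_atomic_set_bit(HNS3_FUNC_RESET, &hw->reset.pending);
	if (hns3_test_and_clear_bit(HNS3_FUNC_RESET, &hw->reset.pending)) {
		/* the bit was set; this caller now owns handling it */
	}
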
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 03/22] net/hns3: register hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 01/22] net/hns3: add hardware registers definition Wei Hu (Xavier)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data structure and macro Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:01   ` Ferruh Yigit
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 04/22] net/hns3: add support for cmd of " Wei Hu (Xavier)
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch registers the hns3 PMD driver and adds definitions for the
log interfaces.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 141 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_logs.h   |  34 ++++++++++
 2 files changed, 175 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_ethdev.c
 create mode 100644 drivers/net/hns3/hns3_logs.h

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
new file mode 100644
index 0000000..0587a9c
--- /dev/null
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <errno.h>
+#include <stdarg.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <arpa/inet.h>
+#include <rte_alarm.h>
+#include <rte_atomic.h>
+#include <rte_bus_pci.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+#include <rte_interrupts.h>
+#include <rte_io.h>
+#include <rte_log.h>
+#include <rte_pci.h>
+
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+#include "hns3_regs.h"
+
+int hns3_logtype_init;
+int hns3_logtype_driver;
+
+static void
+hns3_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	hw->adapter_state = HNS3_NIC_CLOSED;
+}
+
+static const struct eth_dev_ops hns3_eth_dev_ops = {
+	.dev_close          = hns3_dev_close,
+};
+
+static int
+hns3_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct rte_device *dev = eth_dev->device;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev);
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t device_id = pci_dev->id.device_id;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	eth_dev->dev_ops = &hns3_eth_dev_ops;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hns->is_vf = false;
+	hw->data = eth_dev->data;
+	hw->adapter_state = HNS3_NIC_INITIALIZED;
+
+	return 0;
+}
+
+static int
+hns3_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
+	if (hw->adapter_state < HNS3_NIC_CLOSING)
+		hns3_dev_close(eth_dev);
+
+	hw->adapter_state = HNS3_NIC_REMOVED;
+	return 0;
+}
+
+static int
+eth_hns3_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		   struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct hns3_adapter),
+					     hns3_dev_init);
+}
+
+static int
+eth_hns3_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, hns3_dev_uninit);
+}
+
+static const struct rte_pci_id pci_id_hns3_map[] = {
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_GE) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_25GE) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_25GE_RDMA) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_50GE_RDMA) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_RDMA_MACSEC) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct rte_pci_driver rte_hns3_pmd = {
+	.id_table = pci_id_hns3_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = eth_hns3_pci_probe,
+	.remove = eth_hns3_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_hns3, rte_hns3_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_hns3, pci_id_hns3_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_hns3, "* igb_uio | vfio-pci");
+
+RTE_INIT(hns3_init_log)
+{
+	hns3_logtype_init = rte_log_register("pmd.net.hns3.init");
+	if (hns3_logtype_init >= 0)
+		rte_log_set_level(hns3_logtype_init, RTE_LOG_NOTICE);
+	hns3_logtype_driver = rte_log_register("pmd.net.hns3.driver");
+	if (hns3_logtype_driver >= 0)
+		rte_log_set_level(hns3_logtype_driver, RTE_LOG_NOTICE);
+}
diff --git a/drivers/net/hns3/hns3_logs.h b/drivers/net/hns3/hns3_logs.h
new file mode 100644
index 0000000..8d069a2
--- /dev/null
+++ b/drivers/net/hns3/hns3_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_LOGS_H_
+#define _HNS3_LOGS_H_
+
+extern int hns3_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, hns3_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int hns3_logtype_driver;
+#define PMD_DRV_LOG_RAW(hw, level, fmt, args...) \
+	rte_log(level, hns3_logtype_driver, "%s %s(): " fmt, \
+		(hw)->data->name, __func__, ## args)
+
+#define hns3_err(hw, fmt, args...) \
+	PMD_DRV_LOG_RAW(hw, RTE_LOG_ERR, fmt "\n", ## args)
+
+#define hns3_warn(hw, fmt, args...) \
+	 PMD_DRV_LOG_RAW(hw, RTE_LOG_WARNING, fmt "\n", ## args)
+
+#define hns3_notice(hw, fmt, args...) \
+	PMD_DRV_LOG_RAW(hw, RTE_LOG_NOTICE, fmt "\n", ## args)
+
+#define hns3_info(hw, fmt, args...) \
+	 PMD_DRV_LOG_RAW(hw, RTE_LOG_INFO, fmt "\n", ## args)
+
+#define hns3_dbg(hw, fmt, args...) \
+	 PMD_DRV_LOG_RAW(hw, RTE_LOG_DEBUG, fmt "\n", ## args)
+
+#endif /* _HNS3_LOGS_H_ */
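
A usage sketch for the macros above (the caller is hypothetical, not
part of the patch): the hw-scoped helpers prefix each message with the
port name and calling function, while PMD_INIT_LOG() goes to the
separate init logtype. Both logtypes default to NOTICE and can be
raised at runtime with the EAL --log-level option (e.g. something like
--log-level='pmd.net.hns3.*,8' for debug output):

	static int
	hns3_example_check(struct hns3_hw *hw)
	{
		PMD_INIT_FUNC_TRACE();

		if (hw->adapter_state != HNS3_NIC_INITIALIZED) {
			hns3_err(hw, "unexpected state %d", hw->adapter_state);
			return -EINVAL;
		}
		hns3_info(hw, "firmware version is 0x%08x", hw->fw_version);
		return 0;
	}
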
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 04/22] net/hns3: add support for cmd of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (2 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 03/22] net/hns3: register hns3 PMD driver Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:02   ` Ferruh Yigit
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 05/22] net/hns3: add the initialization " Wei Hu (Xavier)
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds command queue (cmd) support to the hns3 PMD driver;
the driver interacts with the firmware through commands to complete
hardware configuration.

Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c    | 524 ++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_cmd.h    | 752 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_ethdev.c |  70 ++++
 drivers/net/hns3/hns3_ethdev.h |   2 +-
 4 files changed, 1347 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/hns3/hns3_cmd.c
 create mode 100644 drivers/net/hns3/hns3_cmd.h

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
new file mode 100644
index 0000000..f272374
--- /dev/null
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <errno.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_alarm.h>
+#include <rte_bus_pci.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_io.h>
+
+#include "hns3_cmd.h"
+#include "hns3_ethdev.h"
+#include "hns3_regs.h"
+#include "hns3_logs.h"
+
+#define hns3_is_csq(ring) ((ring)->ring_type == HNS3_TYPE_CSQ)
+
+static int
+hns3_ring_space(struct hns3_cmq_ring *ring)
+{
+	int ntu = ring->next_to_use;
+	int ntc = ring->next_to_clean;
+	int used = (ntu - ntc + ring->desc_num) % ring->desc_num;
+
+	return ring->desc_num - used - 1;
+}
+
+static bool
+is_valid_csq_clean_head(struct hns3_cmq_ring *ring, int head)
+{
+	int ntu = ring->next_to_use;
+	int ntc = ring->next_to_clean;
+
+	if (ntu > ntc)
+		return head >= ntc && head <= ntu;
+
+	return head >= ntc || head <= ntu;
+}
+
+/*
+ * hns3_allocate_dma_mem - DMA memory allocation for the command queue.
+ * Reserve a memzone, which is a contiguous portion of physical memory
+ * identified by a name.
+ * @ring: pointer to the ring structure
+ * @size: size of memory requested
+ * @alignment: what to align the allocation to
+ */
+static int
+hns3_allocate_dma_mem(struct hns3_hw *hw, struct hns3_cmq_ring *ring,
+		      uint64_t size, uint32_t alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	snprintf(z_name, sizeof(z_name), "hns3_dma_%" PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY,
+					 RTE_MEMZONE_IOVA_CONTIG, alignment,
+					 RTE_PGSIZE_2M);
+	if (mz == NULL)
+		return -ENOMEM;
+
+	ring->buf_size = size;
+	ring->desc = mz->addr;
+	ring->desc_dma_addr = mz->iova;
+	ring->zone = (const void *)mz;
+	hns3_dbg(hw, "memzone %s allocated with physical address: %" PRIu64,
+		 mz->name, ring->desc_dma_addr);
+
+	return 0;
+}
+
+static void
+hns3_free_dma_mem(struct hns3_hw *hw, struct hns3_cmq_ring *ring)
+{
+	hns3_dbg(hw, "memzone %s to be freed with physical address: %" PRIu64,
+		 ((const struct rte_memzone *)ring->zone)->name,
+		 ring->desc_dma_addr);
+	rte_memzone_free((const struct rte_memzone *)ring->zone);
+	ring->buf_size = 0;
+	ring->desc = NULL;
+	ring->desc_dma_addr = 0;
+	ring->zone = NULL;
+}
+
+static int
+hns3_alloc_cmd_desc(struct hns3_hw *hw, struct hns3_cmq_ring *ring)
+{
+	int size  = ring->desc_num * sizeof(struct hns3_cmd_desc);
+
+	if (hns3_allocate_dma_mem(hw, ring, size, HNS3_CMD_DESC_ALIGNMENT)) {
+		hns3_err(hw, "allocate dma mem failed");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+hns3_free_cmd_desc(struct hns3_hw *hw, struct hns3_cmq_ring *ring)
+{
+	if (ring->desc)
+		hns3_free_dma_mem(hw, ring);
+}
+
+static int
+hns3_alloc_cmd_queue(struct hns3_hw *hw, int ring_type)
+{
+	struct hns3_cmq_ring *ring =
+		(ring_type == HNS3_TYPE_CSQ) ? &hw->cmq.csq : &hw->cmq.crq;
+	int ret;
+
+	ring->ring_type = ring_type;
+	ring->hw = hw;
+
+	ret = hns3_alloc_cmd_desc(hw, ring);
+	if (ret)
+		hns3_err(hw, "descriptor %s alloc error %d",
+			    (ring_type == HNS3_TYPE_CSQ) ? "CSQ" : "CRQ", ret);
+
+	return ret;
+}
+
+void
+hns3_cmd_reuse_desc(struct hns3_cmd_desc *desc, bool is_read)
+{
+	desc->flag = rte_cpu_to_le_16(HNS3_CMD_FLAG_NO_INTR | HNS3_CMD_FLAG_IN);
+	if (is_read)
+		desc->flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_WR);
+	else
+		desc->flag &= rte_cpu_to_le_16(~HNS3_CMD_FLAG_WR);
+}
+
+void
+hns3_cmd_setup_basic_desc(struct hns3_cmd_desc *desc,
+			  enum hns3_opcode_type opcode, bool is_read)
+{
+	memset((void *)desc, 0, sizeof(struct hns3_cmd_desc));
+	desc->opcode = rte_cpu_to_le_16(opcode);
+	desc->flag = rte_cpu_to_le_16(HNS3_CMD_FLAG_NO_INTR | HNS3_CMD_FLAG_IN);
+
+	if (is_read)
+		desc->flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_WR);
+}
+
+static void
+hns3_cmd_clear_regs(struct hns3_hw *hw)
+{
+	hns3_write_dev(hw, HNS3_CMDQ_TX_ADDR_L_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_TX_ADDR_H_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_TX_DEPTH_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_TX_HEAD_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_TX_TAIL_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_RX_ADDR_L_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_RX_ADDR_H_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_RX_DEPTH_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, 0);
+	hns3_write_dev(hw, HNS3_CMDQ_RX_TAIL_REG, 0);
+}
+
+static void
+hns3_cmd_config_regs(struct hns3_cmq_ring *ring)
+{
+	uint64_t dma = ring->desc_dma_addr;
+
+	if (ring->ring_type == HNS3_TYPE_CSQ) {
+		hns3_write_dev(ring->hw, HNS3_CMDQ_TX_ADDR_L_REG,
+			       lower_32_bits(dma));
+		hns3_write_dev(ring->hw, HNS3_CMDQ_TX_ADDR_H_REG,
+			       upper_32_bits(dma));
+		hns3_write_dev(ring->hw, HNS3_CMDQ_TX_DEPTH_REG,
+			       ring->desc_num >> HNS3_NIC_CMQ_DESC_NUM_S |
+			       HNS3_NIC_SW_RST_RDY);
+		hns3_write_dev(ring->hw, HNS3_CMDQ_TX_HEAD_REG, 0);
+		hns3_write_dev(ring->hw, HNS3_CMDQ_TX_TAIL_REG, 0);
+	} else {
+		hns3_write_dev(ring->hw, HNS3_CMDQ_RX_ADDR_L_REG,
+			       lower_32_bits(dma));
+		hns3_write_dev(ring->hw, HNS3_CMDQ_RX_ADDR_H_REG,
+			       upper_32_bits(dma));
+		hns3_write_dev(ring->hw, HNS3_CMDQ_RX_DEPTH_REG,
+			       ring->desc_num >> HNS3_NIC_CMQ_DESC_NUM_S);
+		hns3_write_dev(ring->hw, HNS3_CMDQ_RX_HEAD_REG, 0);
+		hns3_write_dev(ring->hw, HNS3_CMDQ_RX_TAIL_REG, 0);
+	}
+}
+
+static void
+hns3_cmd_init_regs(struct hns3_hw *hw)
+{
+	hns3_cmd_config_regs(&hw->cmq.csq);
+	hns3_cmd_config_regs(&hw->cmq.crq);
+}
+
+static int
+hns3_cmd_csq_clean(struct hns3_hw *hw)
+{
+	struct hns3_cmq_ring *csq = &hw->cmq.csq;
+	uint32_t head;
+	int clean;
+
+	head = hns3_read_dev(hw, HNS3_CMDQ_TX_HEAD_REG);
+
+	if (!is_valid_csq_clean_head(csq, head)) {
+		hns3_err(hw, "wrong cmd head (%u, %u-%u)", head,
+			    csq->next_to_use, csq->next_to_clean);
+		rte_atomic16_set(&hw->reset.disable_cmd, 1);
+		return -EIO;
+	}
+
+	clean = (head - csq->next_to_clean + csq->desc_num) % csq->desc_num;
+	csq->next_to_clean = head;
+	return clean;
+}
+
+static int
+hns3_cmd_csq_done(struct hns3_hw *hw)
+{
+	uint32_t head = hns3_read_dev(hw, HNS3_CMDQ_TX_HEAD_REG);
+
+	return head == hw->cmq.csq.next_to_use;
+}
+
+static bool
+hns3_is_special_opcode(uint16_t opcode)
+{
+	/*
+	 * These commands have several descriptors,
+	 * and use the first one to save opcode and return value.
+	 */
+	uint16_t spec_opcode[] = {HNS3_OPC_STATS_64_BIT,
+			     HNS3_OPC_STATS_32_BIT,
+			     HNS3_OPC_STATS_MAC,
+			     HNS3_OPC_STATS_MAC_ALL,
+			     HNS3_OPC_QUERY_32_BIT_REG,
+			     HNS3_OPC_QUERY_64_BIT_REG};
+	uint32_t i;
+
+	for (i = 0; i < ARRAY_SIZE(spec_opcode); i++)
+		if (spec_opcode[i] == opcode)
+			return true;
+
+	return false;
+}
+
+static int
+hns3_cmd_get_hardware_reply(struct hns3_hw *hw,
+			    struct hns3_cmd_desc *desc, int num, int ntc)
+{
+	uint16_t opcode, desc_ret;
+	int current_ntc = ntc;
+	int retval = 0;
+	int handle;
+
+	opcode = rte_le_to_cpu_16(desc[0].opcode);
+	handle = 0;
+	while (handle < num) {
+		/* Get the result of hardware write back */
+		desc[handle] = hw->cmq.csq.desc[current_ntc];
+
+		if (likely(!hns3_is_special_opcode(opcode)))
+			desc_ret = rte_le_to_cpu_16(desc[handle].retval);
+		else
+			desc_ret = rte_le_to_cpu_16(desc[0].retval);
+
+		switch (desc_ret) {
+		case HNS3_CMD_EXEC_SUCCESS:
+			retval = 0;
+			break;
+		case HNS3_CMD_NO_AUTH:
+			retval = -EPERM;
+			break;
+		case HNS3_CMD_NOT_SUPPORTED:
+			retval = -EOPNOTSUPP;
+			break;
+		default:
+			retval = -EIO;
+		}
+		hw->cmq.last_status = desc_ret;
+		current_ntc++;
+		handle++;
+		if (current_ntc == hw->cmq.csq.desc_num)
+			current_ntc = 0;
+	}
+
+	return retval;
+}
+
+static int hns3_cmd_poll_reply(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	uint32_t timeout = 0;
+
+	do {
+		if (hns3_cmd_csq_done(hw))
+			return 0;
+
+		if (rte_atomic16_read(&hw->reset.disable_cmd)) {
+			hns3_err(hw,
+				 "Don't wait for reply because of disable_cmd");
+			return -EBUSY;
+		}
+
+		rte_delay_us(1);
+		timeout++;
+	} while (timeout < hw->cmq.tx_timeout);
+	hns3_err(hw, "Wait for reply timeout");
+	return -ETIME;
+}
+
+/*
+ * hns3_cmd_send - send command to command queue
+ * @hw: pointer to the hw struct
+ * @desc: prefilled descriptor for describing the command
+ * @num : the number of descriptors to be sent
+ *
+ * This is the main send routine for the command queue; it cleans the
+ * ring, posts the descriptors and waits for the firmware reply.
+ */
+int
+hns3_cmd_send(struct hns3_hw *hw, struct hns3_cmd_desc *desc, int num)
+{
+	struct hns3_cmd_desc *desc_to_use;
+	int handle = 0;
+	int retval;
+	uint32_t ntc;
+
+	if (rte_atomic16_read(&hw->reset.disable_cmd))
+		return -EBUSY;
+
+	rte_spinlock_lock(&hw->cmq.csq.lock);
+
+	/* Clean the command send queue */
+	retval = hns3_cmd_csq_clean(hw);
+	if (retval < 0) {
+		rte_spinlock_unlock(&hw->cmq.csq.lock);
+		return retval;
+	}
+
+	if (num > hns3_ring_space(&hw->cmq.csq)) {
+		rte_spinlock_unlock(&hw->cmq.csq.lock);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Record the location of the descriptors in the ring; the hardware
+	 * writes its reply back at this position.
+	 */
+	ntc = hw->cmq.csq.next_to_use;
+
+	while (handle < num) {
+		desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use];
+		*desc_to_use = desc[handle];
+		(hw->cmq.csq.next_to_use)++;
+		if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num)
+			hw->cmq.csq.next_to_use = 0;
+		handle++;
+	}
+
+	/* Write to hardware */
+	hns3_write_dev(hw, HNS3_CMDQ_TX_TAIL_REG, hw->cmq.csq.next_to_use);
+
+	/*
+	 * If the command is synchronous, wait for the firmware write-back.
+	 * If multiple descriptors are sent, check the flags of the first one.
+	 */
+	if (HNS3_CMD_SEND_SYNC(rte_le_to_cpu_16(desc->flag))) {
+		retval = hns3_cmd_poll_reply(hw);
+		if (!retval)
+			retval = hns3_cmd_get_hardware_reply(hw, desc, num,
+							     ntc);
+	}
+
+	rte_spinlock_unlock(&hw->cmq.csq.lock);
+	return retval;
+}
+
+static int
+hns3_cmd_query_firmware_version(struct hns3_hw *hw, uint32_t *version)
+{
+	struct hns3_query_version_cmd *resp;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_FW_VER, 1);
+	resp = (struct hns3_query_version_cmd *)desc.data;
+
+	/* Send the query command to the firmware */
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret == 0)
+		*version = rte_le_to_cpu_32(resp->firmware);
+
+	return ret;
+}
+
+int
+hns3_cmd_init_queue(struct hns3_hw *hw)
+{
+	int ret;
+
+	/* Setup the lock for command queue */
+	rte_spinlock_init(&hw->cmq.csq.lock);
+	rte_spinlock_init(&hw->cmq.crq.lock);
+
+	/*
+	 * Clear all command registers,
+	 * in case there are some residual values.
+	 */
+	hns3_cmd_clear_regs(hw);
+
+	/* Set up the number of queue entries used by the cmd queues */
+	hw->cmq.csq.desc_num = HNS3_NIC_CMQ_DESC_NUM;
+	hw->cmq.crq.desc_num = HNS3_NIC_CMQ_DESC_NUM;
+
+	/* Setup Tx write back timeout */
+	hw->cmq.tx_timeout = HNS3_CMDQ_TX_TIMEOUT;
+
+	/* Setup queue rings */
+	ret = hns3_alloc_cmd_queue(hw, HNS3_TYPE_CSQ);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "CSQ ring setup error %d", ret);
+		return ret;
+	}
+
+	ret = hns3_alloc_cmd_queue(hw, HNS3_TYPE_CRQ);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "CRQ ring setup error %d", ret);
+		goto err_crq;
+	}
+
+	return 0;
+
+err_crq:
+	hns3_free_cmd_desc(hw, &hw->cmq.csq);
+
+	return ret;
+}
+
+int
+hns3_cmd_init(struct hns3_hw *hw)
+{
+	int ret;
+
+	rte_spinlock_lock(&hw->cmq.csq.lock);
+	rte_spinlock_lock(&hw->cmq.crq.lock);
+
+	hw->cmq.csq.next_to_clean = 0;
+	hw->cmq.csq.next_to_use = 0;
+	hw->cmq.crq.next_to_clean = 0;
+	hw->cmq.crq.next_to_use = 0;
+	hw->mbx_resp.head = 0;
+	hw->mbx_resp.tail = 0;
+	hw->mbx_resp.lost = 0;
+	hns3_cmd_init_regs(hw);
+
+	rte_spinlock_unlock(&hw->cmq.crq.lock);
+	rte_spinlock_unlock(&hw->cmq.csq.lock);
+
+	rte_atomic16_clear(&hw->reset.disable_cmd);
+
+	ret = hns3_cmd_query_firmware_version(hw, &hw->fw_version);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "firmware version query failed %d", ret);
+		goto err_cmd_init;
+	}
+
+	PMD_INIT_LOG(INFO, "The firmware version is %08x", hw->fw_version);
+
+	return 0;
+
+err_cmd_init:
+	hns3_cmd_uninit(hw);
+	return ret;
+}
+
+static void
+hns3_destroy_queue(struct hns3_hw *hw, struct hns3_cmq_ring *ring)
+{
+	rte_spinlock_lock(&ring->lock);
+
+	hns3_free_cmd_desc(hw, ring);
+
+	rte_spinlock_unlock(&ring->lock);
+}
+
+void
+hns3_cmd_destroy_queue(struct hns3_hw *hw)
+{
+	hns3_destroy_queue(hw, &hw->cmq.csq);
+	hns3_destroy_queue(hw, &hw->cmq.crq);
+}
+
+void
+hns3_cmd_uninit(struct hns3_hw *hw)
+{
+	rte_spinlock_lock(&hw->cmq.csq.lock);
+	rte_spinlock_lock(&hw->cmq.crq.lock);
+	rte_atomic16_set(&hw->reset.disable_cmd, 1);
+	hns3_cmd_clear_regs(hw);
+	rte_spinlock_unlock(&hw->cmq.crq.lock);
+	rte_spinlock_unlock(&hw->cmq.csq.lock);
+}
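
As a usage sketch (the caller is hypothetical, not part of the patch):
a synchronous read is built with hns3_cmd_setup_basic_desc() and sent
with hns3_cmd_send(), which polls HNS3_CMDQ_TX_HEAD_REG until the
firmware writes back. The reply is then read out of desc.data; this
example uses the HNS3_OPC_QUERY_FUNC_STATUS opcode and struct
hns3_func_status_cmd defined in hns3_cmd.h below:

	static int
	hns3_example_query_func_status(struct hns3_hw *hw)
	{
		struct hns3_func_status_cmd *req;
		struct hns3_cmd_desc desc;
		int ret;

		hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_FUNC_STATUS,
					  true);
		ret = hns3_cmd_send(hw, &desc, 1);
		if (ret)
			return ret;

		/* hns3_cmd_send() copied the firmware reply back into desc */
		req = (struct hns3_func_status_cmd *)desc.data;
		return (req->pf_state & HNS3_PF_STATE_DONE) ? 0 : -EAGAIN;
	}
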
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h
new file mode 100644
index 0000000..b72d5cb
--- /dev/null
+++ b/drivers/net/hns3/hns3_cmd.h
@@ -0,0 +1,752 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_CMD_H_
+#define _HNS3_CMD_H_
+
+#define HNS3_CMDQ_TX_TIMEOUT		30000
+#define HNS3_CMDQ_RX_INVLD_B		0
+#define HNS3_CMDQ_RX_OUTVLD_B		1
+#define HNS3_CMD_DESC_ALIGNMENT		4096
+#define HNS3_QUEUE_ID_MASK		0x1ff
+
+struct hns3_hw;
+
+#define HNS3_CMD_DESC_DATA_NUM	6
+struct hns3_cmd_desc {
+	uint16_t opcode;
+	uint16_t flag;
+	uint16_t retval;
+	uint16_t rsv;
+	uint32_t data[HNS3_CMD_DESC_DATA_NUM];
+};
+
+struct hns3_cmq_ring {
+	uint64_t desc_dma_addr;
+	struct hns3_cmd_desc *desc;
+	struct hns3_hw *hw;
+
+	uint16_t buf_size;
+	uint16_t desc_num;       /* max number of cmq descriptor */
+	uint32_t next_to_use;
+	uint32_t next_to_clean;
+	uint8_t ring_type;       /* cmq ring type */
+	rte_spinlock_t lock;     /* Command queue lock */
+
+	const void *zone;        /* memory zone */
+};
+
+enum hns3_cmd_return_status {
+	HNS3_CMD_EXEC_SUCCESS	= 0,
+	HNS3_CMD_NO_AUTH	= 1,
+	HNS3_CMD_NOT_SUPPORTED	= 2,
+	HNS3_CMD_QUEUE_FULL	= 3,
+};
+
+enum hns3_cmd_status {
+	HNS3_STATUS_SUCCESS	= 0,
+	HNS3_ERR_CSQ_FULL	= -1,
+	HNS3_ERR_CSQ_TIMEOUT	= -2,
+	HNS3_ERR_CSQ_ERROR	= -3,
+};
+
+struct hns3_misc_vector {
+	uint8_t *addr;
+	int vector_irq;
+};
+
+struct hns3_cmq {
+	struct hns3_cmq_ring csq;
+	struct hns3_cmq_ring crq;
+	uint16_t tx_timeout;
+	enum hns3_cmd_status last_status;
+};
+
+enum hns3_opcode_type {
+	/* Generic commands */
+	HNS3_OPC_QUERY_FW_VER           = 0x0001,
+	HNS3_OPC_CFG_RST_TRIGGER        = 0x0020,
+	HNS3_OPC_GBL_RST_STATUS         = 0x0021,
+	HNS3_OPC_QUERY_FUNC_STATUS      = 0x0022,
+	HNS3_OPC_QUERY_PF_RSRC          = 0x0023,
+	HNS3_OPC_GET_CFG_PARAM          = 0x0025,
+	HNS3_OPC_PF_RST_DONE            = 0x0026,
+
+	HNS3_OPC_STATS_64_BIT           = 0x0030,
+	HNS3_OPC_STATS_32_BIT           = 0x0031,
+	HNS3_OPC_STATS_MAC              = 0x0032,
+	HNS3_OPC_QUERY_MAC_REG_NUM      = 0x0033,
+	HNS3_OPC_STATS_MAC_ALL          = 0x0034,
+
+	HNS3_OPC_QUERY_REG_NUM          = 0x0040,
+	HNS3_OPC_QUERY_32_BIT_REG       = 0x0041,
+	HNS3_OPC_QUERY_64_BIT_REG       = 0x0042,
+
+	/* MAC command */
+	HNS3_OPC_CONFIG_MAC_MODE        = 0x0301,
+	HNS3_OPC_QUERY_LINK_STATUS      = 0x0307,
+	HNS3_OPC_CONFIG_MAX_FRM_SIZE    = 0x0308,
+	HNS3_OPC_CONFIG_SPEED_DUP       = 0x0309,
+	HNS3_MAC_COMMON_INT_EN          = 0x030E,
+
+	/* PFC/Pause commands */
+	HNS3_OPC_CFG_MAC_PAUSE_EN       = 0x0701,
+	HNS3_OPC_CFG_PFC_PAUSE_EN       = 0x0702,
+	HNS3_OPC_CFG_MAC_PARA           = 0x0703,
+	HNS3_OPC_CFG_PFC_PARA           = 0x0704,
+	HNS3_OPC_QUERY_MAC_TX_PKT_CNT   = 0x0705,
+	HNS3_OPC_QUERY_MAC_RX_PKT_CNT   = 0x0706,
+	HNS3_OPC_QUERY_PFC_TX_PKT_CNT   = 0x0707,
+	HNS3_OPC_QUERY_PFC_RX_PKT_CNT   = 0x0708,
+	HNS3_OPC_PRI_TO_TC_MAPPING      = 0x0709,
+	HNS3_OPC_QOS_MAP                = 0x070A,
+
+	/* ETS/scheduler commands */
+	HNS3_OPC_TM_PG_TO_PRI_LINK      = 0x0804,
+	HNS3_OPC_TM_QS_TO_PRI_LINK      = 0x0805,
+	HNS3_OPC_TM_NQ_TO_QS_LINK       = 0x0806,
+	HNS3_OPC_TM_RQ_TO_QS_LINK       = 0x0807,
+	HNS3_OPC_TM_PORT_WEIGHT         = 0x0808,
+	HNS3_OPC_TM_PG_WEIGHT           = 0x0809,
+	HNS3_OPC_TM_QS_WEIGHT           = 0x080A,
+	HNS3_OPC_TM_PRI_WEIGHT          = 0x080B,
+	HNS3_OPC_TM_PRI_C_SHAPPING      = 0x080C,
+	HNS3_OPC_TM_PRI_P_SHAPPING      = 0x080D,
+	HNS3_OPC_TM_PG_C_SHAPPING       = 0x080E,
+	HNS3_OPC_TM_PG_P_SHAPPING       = 0x080F,
+	HNS3_OPC_TM_PORT_SHAPPING       = 0x0810,
+	HNS3_OPC_TM_PG_SCH_MODE_CFG     = 0x0812,
+	HNS3_OPC_TM_PRI_SCH_MODE_CFG    = 0x0813,
+	HNS3_OPC_TM_QS_SCH_MODE_CFG     = 0x0814,
+	HNS3_OPC_TM_BP_TO_QSET_MAPPING  = 0x0815,
+	HNS3_OPC_ETS_TC_WEIGHT          = 0x0843,
+	HNS3_OPC_QSET_DFX_STS           = 0x0844,
+	HNS3_OPC_PRI_DFX_STS            = 0x0845,
+	HNS3_OPC_PG_DFX_STS             = 0x0846,
+	HNS3_OPC_PORT_DFX_STS           = 0x0847,
+	HNS3_OPC_SCH_NQ_CNT             = 0x0848,
+	HNS3_OPC_SCH_RQ_CNT             = 0x0849,
+	HNS3_OPC_TM_INTERNAL_STS        = 0x0850,
+	HNS3_OPC_TM_INTERNAL_CNT        = 0x0851,
+	HNS3_OPC_TM_INTERNAL_STS_1      = 0x0852,
+
+	/* Mailbox cmd */
+	HNS3_OPC_MBX_VF_TO_PF           = 0x2001,
+
+	/* Packet buffer allocate commands */
+	HNS3_OPC_TX_BUFF_ALLOC          = 0x0901,
+	HNS3_OPC_RX_PRIV_BUFF_ALLOC     = 0x0902,
+	HNS3_OPC_RX_PRIV_WL_ALLOC       = 0x0903,
+	HNS3_OPC_RX_COM_THRD_ALLOC      = 0x0904,
+	HNS3_OPC_RX_COM_WL_ALLOC        = 0x0905,
+
+	/* SSU module INT commands */
+	HNS3_SSU_ECC_INT_CMD            = 0x0989,
+	HNS3_SSU_COMMON_INT_CMD         = 0x098C,
+
+	/* TQP management command */
+	HNS3_OPC_SET_TQP_MAP            = 0x0A01,
+
+	/* TQP commands */
+	HNS3_OPC_QUERY_TX_STATUS        = 0x0B03,
+	HNS3_OPC_QUERY_RX_STATUS        = 0x0B13,
+	HNS3_OPC_CFG_COM_TQP_QUEUE      = 0x0B20,
+	HNS3_OPC_RESET_TQP_QUEUE        = 0x0B22,
+
+	/* PPU module intr commands */
+	HNS3_PPU_MPF_ECC_INT_CMD        = 0x0B40,
+	HNS3_PPU_MPF_OTHER_INT_CMD      = 0x0B41,
+	HNS3_PPU_PF_OTHER_INT_CMD       = 0x0B42,
+
+	/* TSO command */
+	HNS3_OPC_TSO_GENERIC_CONFIG     = 0x0C01,
+	HNS3_OPC_GRO_GENERIC_CONFIG     = 0x0C10,
+
+	/* RSS commands */
+	HNS3_OPC_RSS_GENERIC_CONFIG     = 0x0D01,
+	HNS3_OPC_RSS_INPUT_TUPLE        = 0x0D02,
+	HNS3_OPC_RSS_INDIR_TABLE        = 0x0D07,
+	HNS3_OPC_RSS_TC_MODE            = 0x0D08,
+
+	/* Promiscuous mode command */
+	HNS3_OPC_CFG_PROMISC_MODE       = 0x0E01,
+
+	/* Vlan offload commands */
+	HNS3_OPC_VLAN_PORT_TX_CFG       = 0x0F01,
+	HNS3_OPC_VLAN_PORT_RX_CFG       = 0x0F02,
+
+	/* MAC commands */
+	HNS3_OPC_MAC_VLAN_ADD           = 0x1000,
+	HNS3_OPC_MAC_VLAN_REMOVE        = 0x1001,
+	HNS3_OPC_MAC_VLAN_TYPE_ID       = 0x1002,
+	HNS3_OPC_MAC_VLAN_INSERT        = 0x1003,
+	HNS3_OPC_MAC_VLAN_ALLOCATE      = 0x1004,
+	HNS3_OPC_MAC_ETHTYPE_ADD        = 0x1010,
+
+	/* VLAN commands */
+	HNS3_OPC_VLAN_FILTER_CTRL       = 0x1100,
+	HNS3_OPC_VLAN_FILTER_PF_CFG     = 0x1101,
+	HNS3_OPC_VLAN_FILTER_VF_CFG     = 0x1102,
+
+	/* Flow Director command */
+	HNS3_OPC_FD_MODE_CTRL           = 0x1200,
+	HNS3_OPC_FD_GET_ALLOCATION      = 0x1201,
+	HNS3_OPC_FD_KEY_CONFIG          = 0x1202,
+	HNS3_OPC_FD_TCAM_OP             = 0x1203,
+	HNS3_OPC_FD_AD_OP               = 0x1204,
+	HNS3_OPC_FD_COUNTER_OP          = 0x1205,
+
+	/* SFP command */
+	HNS3_OPC_SFP_GET_SPEED          = 0x7104,
+
+	/* Error INT commands */
+	HNS3_QUERY_MSIX_INT_STS_BD_NUM          = 0x1513,
+	HNS3_QUERY_CLEAR_ALL_MPF_MSIX_INT       = 0x1514,
+	HNS3_QUERY_CLEAR_ALL_PF_MSIX_INT        = 0x1515,
+
+	/* PPP module intr commands */
+	HNS3_PPP_CMD0_INT_CMD                   = 0x2100,
+	HNS3_PPP_CMD1_INT_CMD                   = 0x2101,
+};
+
+#define HNS3_CMD_FLAG_IN	BIT(0)
+#define HNS3_CMD_FLAG_OUT	BIT(1)
+#define HNS3_CMD_FLAG_NEXT	BIT(2)
+#define HNS3_CMD_FLAG_WR	BIT(3)
+#define HNS3_CMD_FLAG_NO_INTR	BIT(4)
+#define HNS3_CMD_FLAG_ERR_INTR	BIT(5)
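+/*
+ * Multi-descriptor commands chain their descriptors by setting
+ * HNS3_CMD_FLAG_NEXT on every descriptor except the last one; see
+ * hns3_rx_priv_wl_config() for a two-descriptor example.
+ */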
+
+#define HNS3_BUF_SIZE_UNIT	256
+#define HNS3_BUF_MUL_BY		2
+#define HNS3_BUF_DIV_BY		2
+#define NEED_RESERVE_TC_NUM	2
+#define BUF_MAX_PERCENT		100
+#define BUF_RESERVE_PERCENT	90
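+/*
+ * When no more than NEED_RESERVE_TC_NUM TCs are enabled, buffer waterlines
+ * are scaled by BUF_RESERVE_PERCENT / BUF_MAX_PERCENT (i.e. 90%) to leave
+ * headroom; see hns3_is_rx_buf_ok() and hns3_only_alloc_priv_buff().
+ */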
+
+#define HNS3_MAX_TC_NUM		8
+#define HNS3_TC0_PRI_BUF_EN_B	15 /* Bit 15 indicates enable or not */
+#define HNS3_BUF_UNIT_S		7  /* Buf size is in units of 128 bytes */
+#define HNS3_TX_BUFF_RSV_NUM	8
+struct hns3_tx_buff_alloc_cmd {
+	uint16_t tx_pkt_buff[HNS3_MAX_TC_NUM];
+	uint8_t tx_buff_rsv[HNS3_TX_BUFF_RSV_NUM];
+};
+
+struct hns3_rx_priv_buff_cmd {
+	uint16_t buf_num[HNS3_MAX_TC_NUM];
+	uint16_t shared_buf;
+	uint8_t rsv[6];
+};
+
+struct hns3_query_version_cmd {
+	uint32_t firmware;
+	uint32_t firmware_rsv[5];
+};
+
+#define HNS3_RX_PRIV_EN_B	15
+#define HNS3_TC_NUM_ONE_DESC	4
+struct hns3_priv_wl {
+	uint16_t high;
+	uint16_t low;
+};
+
+struct hns3_rx_priv_wl_buf {
+	struct hns3_priv_wl tc_wl[HNS3_TC_NUM_ONE_DESC];
+};
+
+struct hns3_rx_com_thrd {
+	struct hns3_priv_wl com_thrd[HNS3_TC_NUM_ONE_DESC];
+};
+
+struct hns3_rx_com_wl {
+	struct hns3_priv_wl com_wl;
+};
+
+struct hns3_waterline {
+	uint32_t low;
+	uint32_t high;
+};
+
+struct hns3_tc_thrd {
+	uint32_t low;
+	uint32_t high;
+};
+
+struct hns3_priv_buf {
+	struct hns3_waterline wl; /* Waterline for low and high */
+	uint32_t buf_size;        /* TC private buffer size */
+	uint32_t tx_buf_size;
+	uint32_t enable;          /* Enable TC private buffer or not */
+};
+
+struct hns3_shared_buf {
+	struct hns3_waterline self;
+	struct hns3_tc_thrd tc_thrd[HNS3_MAX_TC_NUM];
+	uint32_t buf_size;
+};
+
+struct hns3_pkt_buf_alloc {
+	struct hns3_priv_buf priv_buf[HNS3_MAX_TC_NUM];
+	struct hns3_shared_buf s_buf;
+};
+
+#define HNS3_RX_COM_WL_EN_B	15
+struct hns3_rx_com_wl_buf_cmd {
+	uint16_t high_wl;
+	uint16_t low_wl;
+	uint8_t rsv[20];
+};
+
+#define HNS3_RX_PKT_EN_B	15
+struct hns3_rx_pkt_buf_cmd {
+	uint16_t high_pkt;
+	uint16_t low_pkt;
+	uint8_t rsv[20];
+};
+
+#define HNS3_PF_STATE_DONE_B	0
+#define HNS3_PF_STATE_MAIN_B	1
+#define HNS3_PF_STATE_BOND_B	2
+#define HNS3_PF_STATE_MAC_N_B	6
+#define HNS3_PF_MAC_NUM_MASK	0x3
+#define HNS3_PF_STATE_MAIN	BIT(HNS3_PF_STATE_MAIN_B)
+#define HNS3_PF_STATE_DONE	BIT(HNS3_PF_STATE_DONE_B)
+#define HNS3_VF_RST_STATE_NUM	4
+struct hns3_func_status_cmd {
+	uint32_t vf_rst_state[HNS3_VF_RST_STATE_NUM];
+	uint8_t pf_state;
+	uint8_t mac_id;
+	uint8_t rsv1;
+	uint8_t pf_cnt_in_mac;
+	uint8_t pf_num;
+	uint8_t vf_num;
+	uint8_t rsv[2];
+};
+
+#define HNS3_PF_VEC_NUM_S		0
+#define HNS3_PF_VEC_NUM_M		GENMASK(7, 0)
+struct hns3_pf_res_cmd {
+	uint16_t tqp_num;
+	uint16_t buf_size;
+	uint16_t msixcap_localid_ba_nic;
+	uint16_t msixcap_localid_ba_rocee;
+	uint16_t pf_intr_vector_number;
+	uint16_t pf_own_fun_number;
+	uint16_t tx_buf_size;
+	uint16_t dv_buf_size;
+	uint32_t rsv[2];
+};
+
+#define HNS3_UMV_SPC_ALC_B	0
+struct hns3_umv_spc_alc_cmd {
+	uint8_t allocate;
+	uint8_t rsv1[3];
+	uint32_t space_size;
+	uint8_t rsv2[16];
+};
+
+#define HNS3_CFG_OFFSET_S		0
+#define HNS3_CFG_OFFSET_M		GENMASK(19, 0)
+#define HNS3_CFG_RD_LEN_S		24
+#define HNS3_CFG_RD_LEN_M		GENMASK(27, 24)
+#define HNS3_CFG_RD_LEN_BYTES		16
+#define HNS3_CFG_RD_LEN_UNIT		4
+
+#define HNS3_CFG_VMDQ_S			0
+#define HNS3_CFG_VMDQ_M			GENMASK(7, 0)
+#define HNS3_CFG_TC_NUM_S		8
+#define HNS3_CFG_TC_NUM_M		GENMASK(15, 8)
+#define HNS3_CFG_TQP_DESC_N_S		16
+#define HNS3_CFG_TQP_DESC_N_M		GENMASK(31, 16)
+#define HNS3_CFG_PHY_ADDR_S		0
+#define HNS3_CFG_PHY_ADDR_M		GENMASK(7, 0)
+#define HNS3_CFG_MEDIA_TP_S		8
+#define HNS3_CFG_MEDIA_TP_M		GENMASK(15, 8)
+#define HNS3_CFG_RX_BUF_LEN_S		16
+#define HNS3_CFG_RX_BUF_LEN_M		GENMASK(31, 16)
+#define HNS3_CFG_MAC_ADDR_H_S		0
+#define HNS3_CFG_MAC_ADDR_H_M		GENMASK(15, 0)
+#define HNS3_CFG_DEFAULT_SPEED_S	16
+#define HNS3_CFG_DEFAULT_SPEED_M	GENMASK(23, 16)
+#define HNS3_CFG_RSS_SIZE_S		24
+#define HNS3_CFG_RSS_SIZE_M		GENMASK(31, 24)
+#define HNS3_CFG_SPEED_ABILITY_S	0
+#define HNS3_CFG_SPEED_ABILITY_M	GENMASK(7, 0)
+#define HNS3_CFG_UMV_TBL_SPACE_S	16
+#define HNS3_CFG_UMV_TBL_SPACE_M	GENMASK(31, 16)
+
+#define HNS3_ACCEPT_TAG1_B		0
+#define HNS3_ACCEPT_UNTAG1_B		1
+#define HNS3_PORT_INS_TAG1_EN_B		2
+#define HNS3_PORT_INS_TAG2_EN_B		3
+#define HNS3_CFG_NIC_ROCE_SEL_B		4
+#define HNS3_ACCEPT_TAG2_B		5
+#define HNS3_ACCEPT_UNTAG2_B		6
+
+#define HNS3_REM_TAG1_EN_B		0
+#define HNS3_REM_TAG2_EN_B		1
+#define HNS3_SHOW_TAG1_EN_B		2
+#define HNS3_SHOW_TAG2_EN_B		3
+
+/* Factor used to calculate offset and bitmap of VF num */
+#define HNS3_VF_NUM_PER_CMD             64
+#define HNS3_VF_NUM_PER_BYTE            8
+
+struct hns3_cfg_param_cmd {
+	uint32_t offset;
+	uint32_t rsv;
+	uint32_t param[4];
+};
+
+#define HNS3_VPORT_VTAG_RX_CFG_CMD_VF_BITMAP_NUM	8
+struct hns3_vport_vtag_rx_cfg_cmd {
+	uint8_t vport_vlan_cfg;
+	uint8_t vf_offset;
+	uint8_t rsv1[6];
+	uint8_t vf_bitmap[HNS3_VPORT_VTAG_RX_CFG_CMD_VF_BITMAP_NUM];
+	uint8_t rsv2[8];
+};
+
+struct hns3_vport_vtag_tx_cfg_cmd {
+	uint8_t vport_vlan_cfg;
+	uint8_t vf_offset;
+	uint8_t rsv1[2];
+	uint16_t def_vlan_tag1;
+	uint16_t def_vlan_tag2;
+	uint8_t vf_bitmap[8];
+	uint8_t rsv2[8];
+};
+
+struct hns3_vlan_filter_ctrl_cmd {
+	uint8_t vlan_type;
+	uint8_t vlan_fe;
+	uint8_t rsv1[2];
+	uint8_t vf_id;
+	uint8_t rsv2[19];
+};
+
+#define HNS3_VLAN_OFFSET_BITMAP_NUM	20
+struct hns3_vlan_filter_pf_cfg_cmd {
+	uint8_t vlan_offset;
+	uint8_t vlan_cfg;
+	uint8_t rsv[2];
+	uint8_t vlan_offset_bitmap[HNS3_VLAN_OFFSET_BITMAP_NUM];
+};
+
+#define HNS3_VLAN_FILTER_VF_CFG_CMD_VF_BITMAP_NUM	16
+struct hns3_vlan_filter_vf_cfg_cmd {
+	uint16_t vlan_id;
+	uint8_t  resp_code;
+	uint8_t  rsv;
+	uint8_t  vlan_cfg;
+	uint8_t  rsv1[3];
+	uint8_t  vf_bitmap[HNS3_VLAN_FILTER_VF_CFG_CMD_VF_BITMAP_NUM];
+};
+
+struct hns3_tx_vlan_type_cfg_cmd {
+	uint16_t ot_vlan_type;
+	uint16_t in_vlan_type;
+	uint8_t rsv[20];
+};
+
+struct hns3_rx_vlan_type_cfg_cmd {
+	uint16_t ot_fst_vlan_type;
+	uint16_t ot_sec_vlan_type;
+	uint16_t in_fst_vlan_type;
+	uint16_t in_sec_vlan_type;
+	uint8_t rsv[16];
+};
+
+#define HNS3_TSO_MSS_MIN_S	0
+#define HNS3_TSO_MSS_MIN_M	GENMASK(13, 0)
+
+#define HNS3_TSO_MSS_MAX_S	16
+#define HNS3_TSO_MSS_MAX_M	GENMASK(29, 16)
+
+struct hns3_cfg_tso_status_cmd {
+	rte_le16_t tso_mss_min;
+	rte_le16_t tso_mss_max;
+	uint8_t rsv[20];
+};
+
+#define HNS3_GRO_EN_B		0
+struct hns3_cfg_gro_status_cmd {
+	rte_le16_t gro_en;
+	uint8_t rsv[22];
+};
+
+#define HNS3_TSO_MSS_MIN	256
+#define HNS3_TSO_MSS_MAX	9668
+
+#define HNS3_RSS_HASH_KEY_OFFSET_B	4
+
+#define HNS3_RSS_CFG_TBL_SIZE	16
+#define HNS3_RSS_HASH_KEY_NUM	16
+/* Configure the algorithm mode and Hash Key, opcode:0x0D01 */
+struct hns3_rss_generic_config_cmd {
+	/* Hash_algorithm(8.0~8.3), hash_key_offset(8.4~8.7) */
+	uint8_t hash_config;
+	uint8_t rsv[7];
+	uint8_t hash_key[HNS3_RSS_HASH_KEY_NUM];
+};
+
+/* Configure the tuple selection for RSS hash input, opcode:0x0D02 */
+struct hns3_rss_input_tuple_cmd {
+	uint8_t ipv4_tcp_en;
+	uint8_t ipv4_udp_en;
+	uint8_t ipv4_sctp_en;
+	uint8_t ipv4_fragment_en;
+	uint8_t ipv6_tcp_en;
+	uint8_t ipv6_udp_en;
+	uint8_t ipv6_sctp_en;
+	uint8_t ipv6_fragment_en;
+	uint8_t rsv[16];
+};
+
+/* Configure the indirection table, opcode:0x0D07 */
+struct hns3_rss_indirection_table_cmd {
+	uint16_t start_table_index;  /* Bit3~0 must be 0x0. */
+	uint16_t rss_set_bitmap;
+	uint8_t rsv[4];
+	uint8_t rss_result[HNS3_RSS_CFG_TBL_SIZE];
+};
+
+#define HNS3_RSS_TC_OFFSET_S		0
+#define HNS3_RSS_TC_OFFSET_M		(0x3ff << HNS3_RSS_TC_OFFSET_S)
+#define HNS3_RSS_TC_SIZE_S		12
+#define HNS3_RSS_TC_SIZE_M		(0x7 << HNS3_RSS_TC_SIZE_S)
+#define HNS3_RSS_TC_VALID_B		15
+
+/* Configure the tc_size and tc_offset, opcode:0x0D08 */
+struct hns3_rss_tc_mode_cmd {
+	uint16_t rss_tc_mode[HNS3_MAX_TC_NUM];
+	uint8_t rsv[8];
+};
+
+#define HNS3_LINK_STATUS_UP_B	0
+#define HNS3_LINK_STATUS_UP_M	BIT(HNS3_LINK_STATUS_UP_B)
+struct hns3_link_status_cmd {
+	uint8_t status;
+	uint8_t rsv[23];
+};
+
+struct hns3_promisc_param {
+	uint8_t vf_id;
+	uint8_t enable;
+};
+
+#define HNS3_PROMISC_TX_EN_B	BIT(4)
+#define HNS3_PROMISC_RX_EN_B	BIT(5)
+#define HNS3_PROMISC_EN_B	1
+#define HNS3_PROMISC_EN_ALL	0x7
+#define HNS3_PROMISC_EN_UC	0x1
+#define HNS3_PROMISC_EN_MC	0x2
+#define HNS3_PROMISC_EN_BC	0x4
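+/*
+ * The UC/MC/BC enable bits above are shifted into the 'flag' field below by
+ * HNS3_PROMISC_EN_B and combined with HNS3_PROMISC_TX_EN_B/RX_EN_B; see
+ * hns3_cmd_set_promisc_mode().
+ */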
+struct hns3_promisc_cfg_cmd {
+	uint8_t flag;
+	uint8_t vf_id;
+	uint16_t rsv0;
+	uint8_t rsv1[20];
+};
+
+enum hns3_promisc_type {
+	HNS3_UNICAST	= 1,
+	HNS3_MULTICAST	= 2,
+	HNS3_BROADCAST	= 3,
+};
+
+#define HNS3_MAC_TX_EN_B		6
+#define HNS3_MAC_RX_EN_B		7
+#define HNS3_MAC_PAD_TX_B		11
+#define HNS3_MAC_PAD_RX_B		12
+#define HNS3_MAC_1588_TX_B		13
+#define HNS3_MAC_1588_RX_B		14
+#define HNS3_MAC_APP_LP_B		15
+#define HNS3_MAC_LINE_LP_B		16
+#define HNS3_MAC_FCS_TX_B		17
+#define HNS3_MAC_RX_OVERSIZE_TRUNCATE_B	18
+#define HNS3_MAC_RX_FCS_STRIP_B		19
+#define HNS3_MAC_RX_FCS_B		20
+#define HNS3_MAC_TX_UNDER_MIN_ERR_B	21
+#define HNS3_MAC_TX_OVERSIZE_TRUNCATE_B	22
+
+struct hns3_config_mac_mode_cmd {
+	uint32_t txrx_pad_fcs_loop_en;
+	uint8_t  rsv[20];
+};
+
+#define HNS3_CFG_SPEED_10M		6
+#define HNS3_CFG_SPEED_100M		7
+#define HNS3_CFG_SPEED_1G		0
+#define HNS3_CFG_SPEED_10G		1
+#define HNS3_CFG_SPEED_25G		2
+#define HNS3_CFG_SPEED_40G		3
+#define HNS3_CFG_SPEED_50G		4
+#define HNS3_CFG_SPEED_100G		5
+
+#define HNS3_CFG_SPEED_S		0
+#define HNS3_CFG_SPEED_M		GENMASK(5, 0)
+#define HNS3_CFG_DUPLEX_B		7
+#define HNS3_CFG_DUPLEX_M		BIT(HNS3_CFG_DUPLEX_B)
+
+#define HNS3_CFG_MAC_SPEED_CHANGE_EN_B	0
+
+struct hns3_config_mac_speed_dup_cmd {
+	uint8_t speed_dup;
+	uint8_t mac_change_fec_en;
+	uint8_t rsv[22];
+};
+
+#define HNS3_RING_ID_MASK		GENMASK(9, 0)
+#define HNS3_TQP_ENABLE_B		0
+
+#define HNS3_MAC_CFG_AN_EN_B		0
+#define HNS3_MAC_CFG_AN_INT_EN_B	1
+#define HNS3_MAC_CFG_AN_INT_MSK_B	2
+#define HNS3_MAC_CFG_AN_INT_CLR_B	3
+#define HNS3_MAC_CFG_AN_RST_B		4
+
+#define HNS3_MAC_CFG_AN_EN	BIT(HNS3_MAC_CFG_AN_EN_B)
+
+struct hns3_config_auto_neg_cmd {
+	uint32_t  cfg_an_cmd_flag;
+	uint8_t   rsv[20];
+};
+
+struct hns3_sfp_speed_cmd {
+	uint32_t  sfp_speed;
+	uint32_t  rsv[5];
+};
+
+#define HNS3_MAC_MGR_MASK_VLAN_B		BIT(0)
+#define HNS3_MAC_MGR_MASK_MAC_B			BIT(1)
+#define HNS3_MAC_MGR_MASK_ETHERTYPE_B		BIT(2)
+#define HNS3_MAC_ETHERTYPE_LLDP			0x88cc
+
+struct hns3_mac_mgr_tbl_entry_cmd {
+	uint8_t   flags;
+	uint8_t   resp_code;
+	uint16_t  vlan_tag;
+	uint32_t  mac_addr_hi32;
+	uint16_t  mac_addr_lo16;
+	uint16_t  rsv1;
+	uint16_t  ethter_type;
+	uint16_t  egress_port;
+	uint16_t  egress_queue;
+	uint8_t   sw_port_id_aware;
+	uint8_t   rsv2;
+	uint8_t   i_port_bitmap;
+	uint8_t   i_port_direction;
+	uint8_t   rsv3[2];
+};
+
+struct hns3_cfg_com_tqp_queue_cmd {
+	uint16_t tqp_id;
+	uint16_t stream_id;
+	uint8_t enable;
+	uint8_t rsv[19];
+};
+
+#define HNS3_TQP_MAP_TYPE_PF		0
+#define HNS3_TQP_MAP_TYPE_VF		1
+#define HNS3_TQP_MAP_TYPE_B		0
+#define HNS3_TQP_MAP_EN_B		1
+
+struct hns3_tqp_map_cmd {
+	uint16_t tqp_id;        /* Absolute tqp id in this pf */
+	uint8_t tqp_vf;         /* VF id */
+	uint8_t tqp_flag;       /* Indicate it's pf or vf tqp */
+	uint16_t tqp_vid;       /* Virtual id in this pf/vf */
+	uint8_t rsv[18];
+};
+
+struct hns3_config_max_frm_size_cmd {
+	uint16_t max_frm_size;
+	uint8_t min_frm_size;
+	uint8_t rsv[21];
+};
+
+enum hns3_mac_vlan_tbl_opcode {
+	HNS3_MAC_VLAN_ADD,      /* Add new or modify mac_vlan */
+	HNS3_MAC_VLAN_UPDATE,   /* Modify other fields of this table */
+	HNS3_MAC_VLAN_REMOVE,   /* Remove an entry through mac_vlan key */
+	HNS3_MAC_VLAN_LKUP,     /* Look up an entry through mac_vlan key */
+};
+
+enum hns3_mac_vlan_add_resp_code {
+	HNS3_ADD_UC_OVERFLOW = 2,  /* ADD failed for UC overflow */
+	HNS3_ADD_MC_OVERFLOW,      /* ADD failed for MC overflow */
+};
+
+#define HNS3_MC_MAC_VLAN_ADD_DESC_NUM	3
+
+#define HNS3_MAC_VLAN_BIT0_EN_B		0
+#define HNS3_MAC_VLAN_BIT1_EN_B		1
+#define HNS3_MAC_EPORT_SW_EN_B		12
+#define HNS3_MAC_EPORT_TYPE_B		11
+#define HNS3_MAC_EPORT_VFID_S		3
+#define HNS3_MAC_EPORT_VFID_M		GENMASK(10, 3)
+#define HNS3_MAC_EPORT_PFID_S		0
+#define HNS3_MAC_EPORT_PFID_M		GENMASK(2, 0)
+struct hns3_mac_vlan_tbl_entry_cmd {
+	uint8_t	  flags;
+	uint8_t   resp_code;
+	uint16_t  vlan_tag;
+	uint32_t  mac_addr_hi32;
+	uint16_t  mac_addr_lo16;
+	uint16_t  rsv1;
+	uint8_t   entry_type;
+	uint8_t   mc_mac_en;
+	uint16_t  egress_port;
+	uint16_t  egress_queue;
+	uint8_t   rsv2[6];
+};
+
+#define HNS3_TQP_RESET_B	0
+struct hns3_reset_tqp_queue_cmd {
+	uint16_t tqp_id;
+	uint8_t reset_req;
+	uint8_t ready_to_reset;
+	uint8_t rsv[20];
+};
+
+#define HNS3_CFG_RESET_MAC_B		3
+#define HNS3_CFG_RESET_FUNC_B		7
+struct hns3_reset_cmd {
+	uint8_t mac_func_reset;
+	uint8_t fun_reset_vfid;
+	uint8_t rsv[22];
+};
+
+#define HNS3_DEFAULT_TX_BUF		0x4000    /* 16k  bytes */
+#define HNS3_TOTAL_PKT_BUF		0x108000  /* 1.03125M bytes */
+#define HNS3_DEFAULT_DV			0xA000    /* 40k byte */
+#define HNS3_DEFAULT_NON_DCB_DV		0x7800    /* 30K byte */
+#define HNS3_NON_DCB_ADDITIONAL_BUF	0x1400    /* 5120 byte */
+
+#define HNS3_TYPE_CRQ			0
+#define HNS3_TYPE_CSQ			1
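+/*
+ * CSQ: command send queue (driver to firmware); CRQ: command receive
+ * queue (firmware to driver).
+ */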
+
+#define HNS3_NIC_SW_RST_RDY_B		16
+#define HNS3_NIC_SW_RST_RDY			BIT(HNS3_NIC_SW_RST_RDY_B)
+#define HNS3_NIC_CMQ_DESC_NUM		1024
+#define HNS3_NIC_CMQ_DESC_NUM_S		3
+
+#define HNS3_CMD_SEND_SYNC(flag) \
+	((flag) & HNS3_CMD_FLAG_NO_INTR)
+
+void hns3_cmd_reuse_desc(struct hns3_cmd_desc *desc, bool is_read);
+void hns3_cmd_setup_basic_desc(struct hns3_cmd_desc *desc,
+				enum hns3_opcode_type opcode, bool is_read);
+int hns3_cmd_send(struct hns3_hw *hw, struct hns3_cmd_desc *desc, int num);
+int hns3_cmd_init_queue(struct hns3_hw *hw);
+int hns3_cmd_init(struct hns3_hw *hw);
+void hns3_cmd_destroy_queue(struct hns3_hw *hw);
+void hns3_cmd_uninit(struct hns3_hw *hw);
+
+#endif /* _HNS3_CMD_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 0587a9c..4f4de6d 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -29,6 +29,7 @@
 #include <rte_log.h>
 #include <rte_pci.h>
 
+#include "hns3_cmd.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_regs.h"
@@ -36,12 +37,70 @@
 int hns3_logtype_init;
 int hns3_logtype_driver;
 
+static int
+hns3_init_pf(struct rte_eth_dev *eth_dev)
+{
+	struct rte_device *dev = eth_dev->device;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev);
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get hardware io base address from pcie BAR2 IO space */
+	hw->io_base = pci_dev->mem_resource[2].addr;
+
+	/* Firmware command queue initialize */
+	ret = hns3_cmd_init_queue(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init cmd queue: %d", ret);
+		goto err_cmd_init_queue;
+	}
+
+	hns3_clear_all_event_cause(hw);
+
+	/* Firmware command initialize */
+	ret = hns3_cmd_init(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init cmd: %d", ret);
+		goto err_cmd_init;
+	}
+
+	return 0;
+
+err_cmd_init:
+	hns3_cmd_destroy_queue(hw);
+
+err_cmd_init_queue:
+	hw->io_base = NULL;
+
+	return ret;
+}
+
+static void
+hns3_uninit_pf(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct rte_device *dev = eth_dev->device;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev);
+	struct hns3_hw *hw = &hns->hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hns3_cmd_uninit(hw);
+	hns3_cmd_destroy_queue(hw);
+	hw->io_base = NULL;
+}
+
 static void
 hns3_dev_close(struct rte_eth_dev *eth_dev)
 {
 	struct hns3_adapter *hns = eth_dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
 
+	hw->adapter_state = HNS3_NIC_CLOSING;
+	hns3_uninit_pf(eth_dev);
 	hw->adapter_state = HNS3_NIC_CLOSED;
 }
 
@@ -69,9 +128,20 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 
 	hns->is_vf = false;
 	hw->data = eth_dev->data;
+
+	ret = hns3_init_pf(eth_dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init pf: %d", ret);
+		goto err_init_pf;
+	}
+
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
 
 	return 0;
+
+err_init_pf:
+	eth_dev->dev_ops = NULL;
+	return ret;
 }
 
 static int
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index bfb54f2..84fcf34 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -39,7 +39,6 @@
 
 #define HNS3_4_TCS			4
 #define HNS3_8_TCS			8
-#define HNS3_MAX_TC_NUM			8
 
 #define HNS3_MAX_PF_NUM			8
 #define HNS3_UMV_TBL_SIZE		3072
@@ -327,6 +326,7 @@ struct hns3_reset_data {
 struct hns3_hw {
 	struct rte_eth_dev_data *data;
 	void *io_base;
+	struct hns3_cmq cmq;
 	struct hns3_mac mac;
 	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
 	uint32_t fw_version;
-- 
2.7.4



* [dpdk-dev] [PATCH 05/22] net/hns3: add the initialization of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (3 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 04/22] net/hns3: add support for cmd of " Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 06/22] net/hns3: add support for MAC address related operations Wei Hu (Xavier)
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the initialization of the hns3 PF PMD driver.
It gets the configuration from IMP, such as queue information,
configures the queues, initializes the MAC, initializes the
management table, disables GRO, etc.
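
At a glance, the initialization flow added by this patch is as
follows (a simplified sketch derived from the code below; error
handling and the unwind paths are omitted):

    hns3_dev_init()
      hns3_init_pf()
        hns3_cmd_init_queue(), hns3_cmd_init()  /* firmware cmd channel */
        hns3_get_configuration()
          hns3_query_function_status()
          hns3_query_pf_resource()
          hns3_get_board_configuration()
        hns3_init_hardware()
          hns3_map_tqp()
          hns3_init_umv_space()
          hns3_mac_init()        /* speed/duplex, MTU, buffer alloc */
          hns3_init_mgr_tbl()    /* LLDP MC address entry */
          hns3_set_promisc_mode(hw, false, false)
          hns3_config_tso(), hns3_config_gro(hw, false)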

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 1497 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_ethdev.h |    3 +
 2 files changed, 1500 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 4f4de6d..3b5deb0 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -34,10 +34,1469 @@
 #include "hns3_logs.h"
 #include "hns3_regs.h"
 
+#define HNS3_DEFAULT_PORT_CONF_BURST_SIZE	32
+#define HNS3_DEFAULT_PORT_CONF_QUEUES_NUM	1
+
 int hns3_logtype_init;
 int hns3_logtype_driver;
 
 static int
+hns3_config_tso(struct hns3_hw *hw, unsigned int tso_mss_min,
+		unsigned int tso_mss_max)
+{
+	struct hns3_cfg_tso_status_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint16_t tso_mss;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TSO_GENERIC_CONFIG, false);
+
+	req = (struct hns3_cfg_tso_status_cmd *)desc.data;
+
+	tso_mss = 0;
+	hns3_set_field(tso_mss, HNS3_TSO_MSS_MIN_M, HNS3_TSO_MSS_MIN_S,
+		       tso_mss_min);
+	req->tso_mss_min = rte_cpu_to_le_16(tso_mss);
+
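+	/*
+	 * tso_mss_max is a standalone 16-bit field (see struct
+	 * hns3_cfg_tso_status_cmd), so its value also occupies bits 13:0;
+	 * reusing the MIN mask/shift for the max value is intentional.
+	 */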
+	tso_mss = 0;
+	hns3_set_field(tso_mss, HNS3_TSO_MSS_MIN_M, HNS3_TSO_MSS_MIN_S,
+		       tso_mss_max);
+	req->tso_mss_max = rte_cpu_to_le_16(tso_mss);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+int
+hns3_config_gro(struct hns3_hw *hw, bool en)
+{
+	struct hns3_cfg_gro_status_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_GRO_GENERIC_CONFIG, false);
+	req = (struct hns3_cfg_gro_status_cmd *)desc.data;
+
+	req->gro_en = rte_cpu_to_le_16(en ? 1 : 0);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "GRO hardware config cmd failed, ret = %d\n", ret);
+
+	return ret;
+}
+
+static int
+hns3_set_umv_space(struct hns3_hw *hw, uint16_t space_size,
+		   uint16_t *allocated_size, bool is_alloc)
+{
+	struct hns3_umv_spc_alc_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	req = (struct hns3_umv_spc_alc_cmd *)desc.data;
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_ALLOCATE, false);
+	hns3_set_bit(req->allocate, HNS3_UMV_SPC_ALC_B, is_alloc ? 0 : 1);
+	req->space_size = rte_cpu_to_le_32(space_size);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "%s umv space failed for cmd_send, ret =%d",
+			     is_alloc ? "allocate" : "free", ret);
+		return ret;
+	}
+
+	if (is_alloc && allocated_size)
+		*allocated_size = rte_le_to_cpu_32(desc.data[1]);
+
+	return 0;
+}
+
+static int
+hns3_init_umv_space(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	uint16_t allocated_size = 0;
+	int ret;
+
+	ret = hns3_set_umv_space(hw, pf->wanted_umv_size, &allocated_size,
+				 true);
+	if (ret)
+		return ret;
+
+	if (allocated_size < pf->wanted_umv_size)
+		PMD_INIT_LOG(WARNING, "Alloc umv space failed, want %u, get %u",
+			     pf->wanted_umv_size, allocated_size);
+
+	pf->max_umv_size = (!!allocated_size) ? allocated_size :
+						pf->wanted_umv_size;
+	pf->used_umv_size = 0;
+	return 0;
+}
+
+static int
+hns3_uninit_umv_space(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	if (pf->max_umv_size == 0)
+		return 0;
+
+	ret = hns3_set_umv_space(hw, pf->max_umv_size, NULL, false);
+	if (ret)
+		return ret;
+
+	pf->max_umv_size = 0;
+
+	return 0;
+}
+
+static int
+hns3_set_mac_mtu(struct hns3_hw *hw, uint16_t new_mps)
+{
+	struct hns3_config_max_frm_size_cmd *req;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_MAX_FRM_SIZE, false);
+
+	req = (struct hns3_config_max_frm_size_cmd *)desc.data;
+	req->max_frm_size = rte_cpu_to_le_16(new_mps);
+	req->min_frm_size = HNS3_MIN_FRAME_LEN;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_config_mtu(struct hns3_hw *hw, uint16_t mps)
+{
+	int ret;
+
+	ret = hns3_set_mac_mtu(hw, mps);
+	if (ret) {
+		hns3_err(hw, "Failed to set mtu, ret = %d", ret);
+		return ret;
+	}
+
+	ret = hns3_buffer_alloc(hw);
+	if (ret) {
+		hns3_err(hw, "Failed to allocate buffer, ret = %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_parse_func_status(struct hns3_hw *hw, struct hns3_func_status_cmd *status)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+
+	if (!(status->pf_state & HNS3_PF_STATE_DONE))
+		return -EINVAL;
+
+	pf->is_main_pf = (status->pf_state & HNS3_PF_STATE_MAIN) ? true : false;
+
+	return 0;
+}
+
+static int
+hns3_query_function_status(struct hns3_hw *hw)
+{
+#define HNS3_QUERY_MAX_CNT		10
+#define HNS3_QUERY_SLEEP_MSCOEND	1
+	struct hns3_func_status_cmd *req;
+	struct hns3_cmd_desc desc;
+	int timeout = 0;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_FUNC_STATUS, true);
+	req = (struct hns3_func_status_cmd *)desc.data;
+
+	do {
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			PMD_INIT_LOG(ERR, "query function status failed %d",
+				     ret);
+			return ret;
+		}
+
+		/* Check pf reset is done */
+		if (req->pf_state)
+			break;
+
+		rte_delay_ms(HNS3_QUERY_SLEEP_MSCOEND);
+	} while (timeout++ < HNS3_QUERY_MAX_CNT);
+
+	return hns3_parse_func_status(hw, req);
+}
+
+static int
+hns3_query_pf_resource(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_pf_res_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_PF_RSRC, true);
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "query pf resource failed %d", ret);
+		return ret;
+	}
+
+	req = (struct hns3_pf_res_cmd *)desc.data;
+	hw->total_tqps_num = rte_le_to_cpu_16(req->tqp_num);
+	pf->pkt_buf_size = rte_le_to_cpu_16(req->buf_size) << HNS3_BUF_UNIT_S;
+	hw->tqps_num = RTE_MIN(hw->total_tqps_num, HNS3_MAX_TQP_NUM_PER_FUNC);
+
+	if (req->tx_buf_size)
+		pf->tx_buf_size =
+		    rte_le_to_cpu_16(req->tx_buf_size) << HNS3_BUF_UNIT_S;
+	else
+		pf->tx_buf_size = HNS3_DEFAULT_TX_BUF;
+
+	pf->tx_buf_size = roundup(pf->tx_buf_size, HNS3_BUF_SIZE_UNIT);
+
+	if (req->dv_buf_size)
+		pf->dv_buf_size =
+		    rte_le_to_cpu_16(req->dv_buf_size) << HNS3_BUF_UNIT_S;
+	else
+		pf->dv_buf_size = HNS3_DEFAULT_DV;
+
+	pf->dv_buf_size = roundup(pf->dv_buf_size, HNS3_BUF_SIZE_UNIT);
+
+	hw->num_msi =
+	    hns3_get_field(rte_le_to_cpu_16(req->pf_intr_vector_number),
+			   HNS3_PF_VEC_NUM_M, HNS3_PF_VEC_NUM_S);
+
+	return 0;
+}
+
+static void
+hns3_parse_cfg(struct hns3_cfg *cfg, struct hns3_cmd_desc *desc)
+{
+	struct hns3_cfg_param_cmd *req;
+	uint64_t mac_addr_tmp_high;
+	uint64_t mac_addr_tmp;
+	uint32_t i;
+
+	req = (struct hns3_cfg_param_cmd *)desc[0].data;
+
+	/* get the configuration */
+	cfg->vmdq_vport_num = hns3_get_field(rte_le_to_cpu_32(req->param[0]),
+					     HNS3_CFG_VMDQ_M, HNS3_CFG_VMDQ_S);
+	cfg->tc_num = hns3_get_field(rte_le_to_cpu_32(req->param[0]),
+				     HNS3_CFG_TC_NUM_M, HNS3_CFG_TC_NUM_S);
+	cfg->tqp_desc_num = hns3_get_field(rte_le_to_cpu_32(req->param[0]),
+					   HNS3_CFG_TQP_DESC_N_M,
+					   HNS3_CFG_TQP_DESC_N_S);
+
+	cfg->phy_addr = hns3_get_field(rte_le_to_cpu_32(req->param[1]),
+				       HNS3_CFG_PHY_ADDR_M,
+				       HNS3_CFG_PHY_ADDR_S);
+	cfg->media_type = hns3_get_field(rte_le_to_cpu_32(req->param[1]),
+					 HNS3_CFG_MEDIA_TP_M,
+					 HNS3_CFG_MEDIA_TP_S);
+	cfg->rx_buf_len = hns3_get_field(rte_le_to_cpu_32(req->param[1]),
+					 HNS3_CFG_RX_BUF_LEN_M,
+					 HNS3_CFG_RX_BUF_LEN_S);
+	/* get mac address */
+	mac_addr_tmp = rte_le_to_cpu_32(req->param[2]);
+	mac_addr_tmp_high = hns3_get_field(rte_le_to_cpu_32(req->param[3]),
+					   HNS3_CFG_MAC_ADDR_H_M,
+					   HNS3_CFG_MAC_ADDR_H_S);
+
+	mac_addr_tmp |= (mac_addr_tmp_high << 31) << 1;
+
+	cfg->default_speed = hns3_get_field(rte_le_to_cpu_32(req->param[3]),
+					    HNS3_CFG_DEFAULT_SPEED_M,
+					    HNS3_CFG_DEFAULT_SPEED_S);
+	cfg->rss_size_max = hns3_get_field(rte_le_to_cpu_32(req->param[3]),
+					   HNS3_CFG_RSS_SIZE_M,
+					   HNS3_CFG_RSS_SIZE_S);
+
+	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
+		cfg->mac_addr[i] = (mac_addr_tmp >> (8 * i)) & 0xff;
+
+	req = (struct hns3_cfg_param_cmd *)desc[1].data;
+	cfg->numa_node_map = rte_le_to_cpu_32(req->param[0]);
+
+	cfg->speed_ability = hns3_get_field(rte_le_to_cpu_32(req->param[1]),
+					    HNS3_CFG_SPEED_ABILITY_M,
+					    HNS3_CFG_SPEED_ABILITY_S);
+	cfg->umv_space = hns3_get_field(rte_le_to_cpu_32(req->param[1]),
+					HNS3_CFG_UMV_TBL_SPACE_M,
+					HNS3_CFG_UMV_TBL_SPACE_S);
+	if (!cfg->umv_space)
+		cfg->umv_space = HNS3_DEFAULT_UMV_SPACE_PER_PF;
+}
+
+/* hns3_get_board_cfg: query the static parameters from NCL_config file
+ * in flash
+ * @hw: pointer to struct hns3_hw
+ * @hcfg: the config structure to be filled
+ */
+static int
+hns3_get_board_cfg(struct hns3_hw *hw, struct hns3_cfg *hcfg)
+{
+	struct hns3_cmd_desc desc[HNS3_PF_CFG_DESC_NUM];
+	struct hns3_cfg_param_cmd *req;
+	uint32_t offset;
+	uint32_t i;
+	int ret;
+
+	for (i = 0; i < HNS3_PF_CFG_DESC_NUM; i++) {
+		offset = 0;
+		req = (struct hns3_cfg_param_cmd *)desc[i].data;
+		hns3_cmd_setup_basic_desc(&desc[i], HNS3_OPC_GET_CFG_PARAM,
+					  true);
+		hns3_set_field(offset, HNS3_CFG_OFFSET_M, HNS3_CFG_OFFSET_S,
+			       i * HNS3_CFG_RD_LEN_BYTES);
+		/* Len should be divided by 4 when sent to hardware */
+		hns3_set_field(offset, HNS3_CFG_RD_LEN_M, HNS3_CFG_RD_LEN_S,
+			       HNS3_CFG_RD_LEN_BYTES / HNS3_CFG_RD_LEN_UNIT);
+		req->offset = rte_cpu_to_le_32(offset);
+	}
+
+	ret = hns3_cmd_send(hw, desc, HNS3_PF_CFG_DESC_NUM);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "get config failed %d.", ret);
+		return ret;
+	}
+
+	hns3_parse_cfg(hcfg, desc);
+
+	return 0;
+}
+
+static int
+hns3_parse_speed(int speed_cmd, uint32_t *speed)
+{
+	switch (speed_cmd) {
+	case HNS3_CFG_SPEED_10M:
+		*speed = ETH_SPEED_NUM_10M;
+		break;
+	case HNS3_CFG_SPEED_100M:
+		*speed = ETH_SPEED_NUM_100M;
+		break;
+	case HNS3_CFG_SPEED_1G:
+		*speed = ETH_SPEED_NUM_1G;
+		break;
+	case HNS3_CFG_SPEED_10G:
+		*speed = ETH_SPEED_NUM_10G;
+		break;
+	case HNS3_CFG_SPEED_25G:
+		*speed = ETH_SPEED_NUM_25G;
+		break;
+	case HNS3_CFG_SPEED_40G:
+		*speed = ETH_SPEED_NUM_40G;
+		break;
+	case HNS3_CFG_SPEED_50G:
+		*speed = ETH_SPEED_NUM_50G;
+		break;
+	case HNS3_CFG_SPEED_100G:
+		*speed = ETH_SPEED_NUM_100G;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+hns3_get_board_configuration(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_cfg cfg;
+	int ret;
+
+	ret = hns3_get_board_cfg(hw, &cfg);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "get board config failed %d", ret);
+		return ret;
+	}
+
+	if (cfg.media_type == HNS3_MEDIA_TYPE_COPPER) {
+		PMD_INIT_LOG(ERR, "media type is copper, not supported.");
+		return -EOPNOTSUPP;
+	}
+
+	hw->mac.media_type = cfg.media_type;
+	hw->rss_size_max = cfg.rss_size_max;
+	hw->rx_buf_len = cfg.rx_buf_len;
+	memcpy(hw->mac.mac_addr, cfg.mac_addr, RTE_ETHER_ADDR_LEN);
+	hw->mac.phy_addr = cfg.phy_addr;
+	hw->mac.default_addr_setted = false;
+	hw->num_tx_desc = cfg.tqp_desc_num;
+	hw->num_rx_desc = cfg.tqp_desc_num;
+	hw->dcb_info.num_pg = 1;
+	hw->dcb_info.hw_pfc_map = 0;
+
+	ret = hns3_parse_speed(cfg.default_speed, &hw->mac.link_speed);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Get wrong speed %d, ret = %d",
+			     cfg.default_speed, ret);
+		return ret;
+	}
+
+	pf->tc_max = cfg.tc_num;
+	if (pf->tc_max > HNS3_MAX_TC_NUM || pf->tc_max < 1) {
+		PMD_INIT_LOG(WARNING,
+			     "Get TC num(%u) from flash, set TC num to 1",
+			     pf->tc_max);
+		pf->tc_max = 1;
+	}
+
+	/* Dev does not support DCB */
+	if (!hns3_dev_dcb_supported(hw)) {
+		pf->tc_max = 1;
+		pf->pfc_max = 0;
+	} else
+		pf->pfc_max = pf->tc_max;
+
+	hw->dcb_info.num_tc = 1;
+	hw->alloc_rss_size = RTE_MIN(hw->rss_size_max,
+				     hw->tqps_num / hw->dcb_info.num_tc);
+	hns3_set_bit(hw->hw_tc_map, 0, 1);
+	pf->tx_sch_mode = HNS3_FLAG_TC_BASE_SCH_MODE;
+
+	pf->wanted_umv_size = cfg.umv_space;
+
+	return ret;
+}
+
+static int
+hns3_get_configuration(struct hns3_hw *hw)
+{
+	int ret;
+
+	ret = hns3_query_function_status(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to query function status: %d.", ret);
+		return ret;
+	}
+
+	/* Get pf resource */
+	ret = hns3_query_pf_resource(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to query pf resource: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_get_board_configuration(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get board configuration: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_map_tqps_to_func(struct hns3_hw *hw, uint16_t func_id, uint16_t tqp_pid,
+		      uint16_t tqp_vid, bool is_pf)
+{
+	struct hns3_tqp_map_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_SET_TQP_MAP, false);
+
+	req = (struct hns3_tqp_map_cmd *)desc.data;
+	req->tqp_id = rte_cpu_to_le_16(tqp_pid);
+	req->tqp_vf = func_id;
+	req->tqp_flag = 1 << HNS3_TQP_MAP_EN_B;
+	if (!is_pf)
+		req->tqp_flag |= (1 << HNS3_TQP_MAP_TYPE_B);
+	req->tqp_vid = rte_cpu_to_le_16(tqp_vid);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "TQP map failed %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_map_tqp(struct hns3_hw *hw)
+{
+	uint16_t tqps_num = hw->total_tqps_num;
+	uint16_t func_id;
+	uint16_t tqp_id;
+	int num;
+	int ret;
+	int i;
+
+	/*
+	 * In current version VF is not supported when PF is taken over by DPDK,
+	 * so we allocate as many tqps to the PF as possible.
+	 */
+	tqp_id = 0;
+	num = DIV_ROUND_UP(hw->total_tqps_num, HNS3_MAX_TQP_NUM_PER_FUNC);
+	for (func_id = 0; func_id < num; func_id++) {
+		for (i = 0;
+		     i < HNS3_MAX_TQP_NUM_PER_FUNC && tqp_id < tqps_num; i++) {
+			ret = hns3_map_tqps_to_func(hw, func_id, tqp_id++, i,
+						    true);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
+{
+	struct hns3_config_mac_speed_dup_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	req = (struct hns3_config_mac_speed_dup_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_SPEED_DUP, false);
+
+	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex);
+
+	switch (speed) {
+	case ETH_SPEED_NUM_10M:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
+		break;
+	case ETH_SPEED_NUM_100M:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
+		break;
+	case ETH_SPEED_NUM_1G:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
+		break;
+	case ETH_SPEED_NUM_10G:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
+		break;
+	case ETH_SPEED_NUM_25G:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
+		break;
+	case ETH_SPEED_NUM_40G:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
+		break;
+	case ETH_SPEED_NUM_50G:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
+		break;
+	case ETH_SPEED_NUM_100G:
+		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
+			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "invalid speed (%u)", speed);
+		return -EINVAL;
+	}
+
+	hns3_set_bit(req->mac_change_fec_en, HNS3_CFG_MAC_SPEED_CHANGE_EN_B, 1);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "mac speed/duplex config cmd failed %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_tx_buffer_calc(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_priv_buf *priv;
+	uint32_t i, total_size;
+
+	total_size = pf->pkt_buf_size;
+
+	/* alloc tx buffer for all enabled tc */
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		priv = &buf_alloc->priv_buf[i];
+
+		if (hw->hw_tc_map & BIT(i)) {
+			if (total_size < pf->tx_buf_size)
+				return -ENOMEM;
+
+			priv->tx_buf_size = pf->tx_buf_size;
+		} else
+			priv->tx_buf_size = 0;
+
+		total_size -= priv->tx_buf_size;
+	}
+
+	return 0;
+}
+
+static int
+hns3_tx_buffer_alloc(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+/* TX buffer size is in units of 128 bytes */
+#define HNS3_BUF_SIZE_UNIT_SHIFT	7
+#define HNS3_BUF_SIZE_UPDATE_EN_MSK	BIT(15)
+	struct hns3_tx_buff_alloc_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint32_t buf_size;
+	uint32_t i;
+	int ret;
+
+	req = (struct hns3_tx_buff_alloc_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TX_BUFF_ALLOC, false);
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		buf_size = buf_alloc->priv_buf[i].tx_buf_size;
+
+		buf_size = buf_size >> HNS3_BUF_SIZE_UNIT_SHIFT;
+		req->tx_pkt_buff[i] = rte_cpu_to_le_16(buf_size |
+						HNS3_BUF_SIZE_UPDATE_EN_MSK);
+	}
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "tx buffer alloc cmd failed %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_get_tc_num(struct hns3_hw *hw)
+{
+	int cnt = 0;
+	uint8_t i;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++)
+		if (hw->hw_tc_map & BIT(i))
+			cnt++;
+	return cnt;
+}
+
+static uint32_t
+hns3_get_rx_priv_buff_alloced(struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_priv_buf *priv;
+	uint32_t rx_priv = 0;
+	int i;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		priv = &buf_alloc->priv_buf[i];
+		if (priv->enable)
+			rx_priv += priv->buf_size;
+	}
+	return rx_priv;
+}
+
+static uint32_t
+hns3_get_tx_buff_alloced(struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	uint32_t total_tx_size = 0;
+	uint32_t i;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++)
+		total_tx_size += buf_alloc->priv_buf[i].tx_buf_size;
+
+	return total_tx_size;
+}
+
+/* Get the number of pfc enabled TCs, which have private buffer */
+static int
+hns3_get_pfc_priv_num(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_priv_buf *priv;
+	int cnt = 0;
+	uint8_t i;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		priv = &buf_alloc->priv_buf[i];
+		if ((hw->dcb_info.hw_pfc_map & BIT(i)) && priv->enable)
+			cnt++;
+	}
+
+	return cnt;
+}
+
+/* Get the number of pfc disabled TCs, which have private buffer */
+static int
+hns3_get_no_pfc_priv_num(struct hns3_hw *hw,
+			 struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_priv_buf *priv;
+	int cnt = 0;
+	uint8_t i;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		priv = &buf_alloc->priv_buf[i];
+		if (hw->hw_tc_map & BIT(i) &&
+		    !(hw->dcb_info.hw_pfc_map & BIT(i)) && priv->enable)
+			cnt++;
+	}
+
+	return cnt;
+}
+
+static bool
+hns3_is_rx_buf_ok(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc,
+		  uint32_t rx_all)
+{
+	uint32_t shared_buf_min, shared_buf_tc, shared_std, hi_thrd, lo_thrd;
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	uint32_t shared_buf, aligned_mps;
+	uint32_t rx_priv;
+	uint8_t tc_num;
+	uint8_t i;
+
+	tc_num = hns3_get_tc_num(hw);
+	aligned_mps = roundup(pf->mps, HNS3_BUF_SIZE_UNIT);
+
+	if (hns3_dev_dcb_supported(hw))
+		shared_buf_min = HNS3_BUF_MUL_BY * aligned_mps +
+					pf->dv_buf_size;
+	else
+		shared_buf_min = aligned_mps + HNS3_NON_DCB_ADDITIONAL_BUF
+					+ pf->dv_buf_size;
+
+	shared_buf_tc = tc_num * aligned_mps + aligned_mps;
+	shared_std = roundup(max_t(uint32_t, shared_buf_min, shared_buf_tc),
+			     HNS3_BUF_SIZE_UNIT);
+
+	rx_priv = hns3_get_rx_priv_buff_alloced(buf_alloc);
+	if (rx_all < rx_priv + shared_std)
+		return false;
+
+	shared_buf = rounddown(rx_all - rx_priv, HNS3_BUF_SIZE_UNIT);
+	buf_alloc->s_buf.buf_size = shared_buf;
+	if (hns3_dev_dcb_supported(hw)) {
+		buf_alloc->s_buf.self.high = shared_buf - pf->dv_buf_size;
+		buf_alloc->s_buf.self.low = buf_alloc->s_buf.self.high
+			- roundup(aligned_mps / HNS3_BUF_DIV_BY,
+				  HNS3_BUF_SIZE_UNIT);
+	} else {
+		buf_alloc->s_buf.self.high =
+			aligned_mps + HNS3_NON_DCB_ADDITIONAL_BUF;
+		buf_alloc->s_buf.self.low = aligned_mps;
+	}
+
+	if (hns3_dev_dcb_supported(hw)) {
+		hi_thrd = shared_buf - pf->dv_buf_size;
+
+		if (tc_num <= NEED_RESERVE_TC_NUM)
+			hi_thrd = hi_thrd * BUF_RESERVE_PERCENT
+					/ BUF_MAX_PERCENT;
+
+		if (tc_num)
+			hi_thrd = hi_thrd / tc_num;
+
+		hi_thrd = max_t(uint32_t, hi_thrd,
+				HNS3_BUF_MUL_BY * aligned_mps);
+		hi_thrd = rounddown(hi_thrd, HNS3_BUF_SIZE_UNIT);
+		lo_thrd = hi_thrd - aligned_mps / HNS3_BUF_DIV_BY;
+	} else {
+		hi_thrd = aligned_mps + HNS3_NON_DCB_ADDITIONAL_BUF;
+		lo_thrd = aligned_mps;
+	}
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		buf_alloc->s_buf.tc_thrd[i].low = lo_thrd;
+		buf_alloc->s_buf.tc_thrd[i].high = hi_thrd;
+	}
+
+	return true;
+}
+
+static bool
+hns3_rx_buf_calc_all(struct hns3_hw *hw, bool max,
+		     struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_priv_buf *priv;
+	uint32_t aligned_mps;
+	uint32_t rx_all;
+	uint8_t i;
+
+	rx_all = pf->pkt_buf_size - hns3_get_tx_buff_alloced(buf_alloc);
+	aligned_mps = roundup(pf->mps, HNS3_BUF_SIZE_UNIT);
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		priv = &buf_alloc->priv_buf[i];
+
+		priv->enable = 0;
+		priv->wl.low = 0;
+		priv->wl.high = 0;
+		priv->buf_size = 0;
+
+		if (!(hw->hw_tc_map & BIT(i)))
+			continue;
+
+		priv->enable = 1;
+		if (hw->dcb_info.hw_pfc_map & BIT(i)) {
+			priv->wl.low = max ? aligned_mps : HNS3_BUF_SIZE_UNIT;
+			priv->wl.high = roundup(priv->wl.low + aligned_mps,
+						HNS3_BUF_SIZE_UNIT);
+		} else {
+			priv->wl.low = 0;
+			priv->wl.high = max ? (aligned_mps * HNS3_BUF_MUL_BY) :
+					aligned_mps;
+		}
+
+		priv->buf_size = priv->wl.high + pf->dv_buf_size;
+	}
+
+	return hns3_is_rx_buf_ok(hw, buf_alloc, rx_all);
+}
+
+static bool
+hns3_drop_nopfc_buf_till_fit(struct hns3_hw *hw,
+			     struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_priv_buf *priv;
+	int no_pfc_priv_num;
+	uint32_t rx_all;
+	uint8_t mask;
+	int i;
+
+	rx_all = pf->pkt_buf_size - hns3_get_tx_buff_alloced(buf_alloc);
+	no_pfc_priv_num = hns3_get_no_pfc_priv_num(hw, buf_alloc);
+
+	/* let the last one be cleared first */
+	for (i = HNS3_MAX_TC_NUM - 1; i >= 0; i--) {
+		priv = &buf_alloc->priv_buf[i];
+		mask = BIT((uint8_t)i);
+
+		if (hw->hw_tc_map & mask &&
+		    !(hw->dcb_info.hw_pfc_map & mask)) {
+			/* Clear the no pfc TC private buffer */
+			priv->wl.low = 0;
+			priv->wl.high = 0;
+			priv->buf_size = 0;
+			priv->enable = 0;
+			no_pfc_priv_num--;
+		}
+
+		if (hns3_is_rx_buf_ok(hw, buf_alloc, rx_all) ||
+		    no_pfc_priv_num == 0)
+			break;
+	}
+
+	return hns3_is_rx_buf_ok(hw, buf_alloc, rx_all);
+}
+
+static bool
+hns3_drop_pfc_buf_till_fit(struct hns3_hw *hw,
+			   struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_priv_buf *priv;
+	uint32_t rx_all;
+	int pfc_priv_num;
+	uint8_t mask;
+	int i;
+
+	rx_all = pf->pkt_buf_size - hns3_get_tx_buff_alloced(buf_alloc);
+	pfc_priv_num = hns3_get_pfc_priv_num(hw, buf_alloc);
+
+	/* let the last one be cleared first */
+	for (i = HNS3_MAX_TC_NUM - 1; i >= 0; i--) {
+		priv = &buf_alloc->priv_buf[i];
+		mask = BIT((uint8_t)i);
+
+		if (hw->hw_tc_map & mask &&
+		    hw->dcb_info.hw_pfc_map & mask) {
+			/* Reduce the number of pfc TC with private buffer */
+			priv->wl.low = 0;
+			priv->enable = 0;
+			priv->wl.high = 0;
+			priv->buf_size = 0;
+			pfc_priv_num--;
+		}
+		if (hns3_is_rx_buf_ok(hw, buf_alloc, rx_all) ||
+		    pfc_priv_num == 0)
+			break;
+	}
+
+	return hns3_is_rx_buf_ok(hw, buf_alloc, rx_all);
+}
+
+static bool
+hns3_only_alloc_priv_buff(struct hns3_hw *hw,
+			  struct hns3_pkt_buf_alloc *buf_alloc)
+{
+#define COMPENSATE_BUFFER	0x3C00
+#define COMPENSATE_HALF_MPS_NUM	5
+#define PRIV_WL_GAP		0x1800
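+/* COMPENSATE_BUFFER is 0x3C00 = 15K bytes; COMPENSATE_HALF_MPS_NUM half-MPS
+ * units give the "2.5 * MPS" term of the min_rx_priv formula below.
+ */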
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	uint32_t tc_num = hns3_get_tc_num(hw);
+	uint32_t half_mps = pf->mps >> 1;
+	struct hns3_priv_buf *priv;
+	uint32_t min_rx_priv;
+	uint32_t rx_priv;
+	uint8_t i;
+
+	rx_priv = pf->pkt_buf_size - hns3_get_tx_buff_alloced(buf_alloc);
+	if (tc_num)
+		rx_priv = rx_priv / tc_num;
+
+	if (tc_num <= NEED_RESERVE_TC_NUM)
+		rx_priv = rx_priv * BUF_RESERVE_PERCENT / BUF_MAX_PERCENT;
+
+	/*
+	 * Minimum value of private buffer in rx direction (min_rx_priv) is
+	 * equal to "DV + 2.5 * MPS + 15KB". Driver only allocates rx private
+	 * buffer if rx_priv is greater than min_rx_priv.
+	 */
+	min_rx_priv = pf->dv_buf_size + COMPENSATE_BUFFER +
+			COMPENSATE_HALF_MPS_NUM * half_mps;
+	min_rx_priv = roundup(min_rx_priv, HNS3_BUF_SIZE_UNIT);
+	rx_priv = rounddown(rx_priv, HNS3_BUF_SIZE_UNIT);
+
+	if (rx_priv < min_rx_priv)
+		return false;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		priv = &buf_alloc->priv_buf[i];
+
+		priv->enable = 0;
+		priv->wl.low = 0;
+		priv->wl.high = 0;
+		priv->buf_size = 0;
+
+		if (!(hw->hw_tc_map & BIT(i)))
+			continue;
+
+		priv->enable = 1;
+		priv->buf_size = rx_priv;
+		priv->wl.high = rx_priv - pf->dv_buf_size;
+		priv->wl.low = priv->wl.high - PRIV_WL_GAP;
+	}
+
+	buf_alloc->s_buf.buf_size = 0;
+
+	return true;
+}
+
+/*
+ * hns3_rx_buffer_calc: calculate the rx private buffer size for all TCs
+ * @hw: pointer to struct hns3_hw
+ * @buf_alloc: pointer to buffer calculation data
+ * @return: 0: calculation successful, negative: fail
+ */
+static int
+hns3_rx_buffer_calc(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	/* When DCB is not supported, rx private buffer is not allocated. */
+	if (!hns3_dev_dcb_supported(hw)) {
+		struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+		struct hns3_pf *pf = &hns->pf;
+		uint32_t rx_all = pf->pkt_buf_size;
+
+		rx_all -= hns3_get_tx_buff_alloced(buf_alloc);
+		if (!hns3_is_rx_buf_ok(hw, buf_alloc, rx_all))
+			return -ENOMEM;
+
+		return 0;
+	}
+
+	/*
+	 * Try to allocate private packet buffer for all TCs without shared
+	 * buffer.
+	 */
+	if (hns3_only_alloc_priv_buff(hw, buf_alloc))
+		return 0;
+
+	/*
+	 * Try to allocate private packet buffer for all TCs with shared
+	 * buffer.
+	 */
+	if (hns3_rx_buf_calc_all(hw, true, buf_alloc))
+		return 0;
+
+	/*
+	 * For different application scenarios, the number of enabled ports,
+	 * TCs and no_drop TCs differs. To obtain better performance, software
+	 * can allocate the buffer size and configure the waterline by trying
+	 * to decrease the private buffer size in the following order:
+	 * waterline of valid TCs, then PFC-disabled TCs, then PFC-enabled
+	 * TCs.
+	 */
+	if (hns3_rx_buf_calc_all(hw, false, buf_alloc))
+		return 0;
+
+	if (hns3_drop_nopfc_buf_till_fit(hw, buf_alloc))
+		return 0;
+
+	if (hns3_drop_pfc_buf_till_fit(hw, buf_alloc))
+		return 0;
+
+	return -ENOMEM;
+}
+
+static int
+hns3_rx_priv_buf_alloc(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_rx_priv_buff_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint32_t buf_size;
+	int ret;
+	int i;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RX_PRIV_BUFF_ALLOC, false);
+	req = (struct hns3_rx_priv_buff_cmd *)desc.data;
+
+	/* Alloc private buffer TCs */
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		struct hns3_priv_buf *priv = &buf_alloc->priv_buf[i];
+
+		req->buf_num[i] =
+			rte_cpu_to_le_16(priv->buf_size >> HNS3_BUF_UNIT_S);
+		req->buf_num[i] |= rte_cpu_to_le_16(1 << HNS3_TC0_PRI_BUF_EN_B);
+	}
+
+	buf_size = buf_alloc->s_buf.buf_size;
+	req->shared_buf = rte_cpu_to_le_16((buf_size >> HNS3_BUF_UNIT_S) |
+					   (1 << HNS3_TC0_PRI_BUF_EN_B));
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "rx private buffer alloc cmd failed %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_rx_priv_wl_config(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+#define HNS3_RX_PRIV_WL_ALLOC_DESC_NUM 2
+	struct hns3_rx_priv_wl_buf *req;
+	struct hns3_priv_buf *priv;
+	struct hns3_cmd_desc desc[HNS3_RX_PRIV_WL_ALLOC_DESC_NUM];
+	int i, j;
+	int ret;
+
+	for (i = 0; i < HNS3_RX_PRIV_WL_ALLOC_DESC_NUM; i++) {
+		hns3_cmd_setup_basic_desc(&desc[i], HNS3_OPC_RX_PRIV_WL_ALLOC,
+					  false);
+		req = (struct hns3_rx_priv_wl_buf *)desc[i].data;
+
+		/* The first descriptor sets the NEXT bit to 1 */
+		if (i == 0)
+			desc[i].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+		else
+			desc[i].flag &= ~rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+
+		for (j = 0; j < HNS3_TC_NUM_ONE_DESC; j++) {
+			uint32_t idx = i * HNS3_TC_NUM_ONE_DESC + j;
+
+			priv = &buf_alloc->priv_buf[idx];
+			req->tc_wl[j].high = rte_cpu_to_le_16(priv->wl.high >>
+							HNS3_BUF_UNIT_S);
+			req->tc_wl[j].high |=
+				rte_cpu_to_le_16(BIT(HNS3_RX_PRIV_EN_B));
+			req->tc_wl[j].low = rte_cpu_to_le_16(priv->wl.low >>
+							HNS3_BUF_UNIT_S);
+			req->tc_wl[j].low |=
+				rte_cpu_to_le_16(BIT(HNS3_RX_PRIV_EN_B));
+		}
+	}
+
+	/* Send 2 descriptors at one time */
+	ret = hns3_cmd_send(hw, desc, HNS3_RX_PRIV_WL_ALLOC_DESC_NUM);
+	if (ret)
+		PMD_INIT_LOG(ERR, "rx private waterline config cmd failed %d",
+			     ret);
+	return ret;
+}
+
+static int
+hns3_common_thrd_config(struct hns3_hw *hw,
+			struct hns3_pkt_buf_alloc *buf_alloc)
+{
+#define HNS3_RX_COM_THRD_ALLOC_DESC_NUM 2
+	struct hns3_shared_buf *s_buf = &buf_alloc->s_buf;
+	struct hns3_rx_com_thrd *req;
+	struct hns3_cmd_desc desc[HNS3_RX_COM_THRD_ALLOC_DESC_NUM];
+	struct hns3_tc_thrd *tc;
+	int tc_idx;
+	int i, j;
+	int ret;
+
+	for (i = 0; i < HNS3_RX_COM_THRD_ALLOC_DESC_NUM; i++) {
+		hns3_cmd_setup_basic_desc(&desc[i], HNS3_OPC_RX_COM_THRD_ALLOC,
+					  false);
+		req = (struct hns3_rx_com_thrd *)&desc[i].data;
+
+		/* The first descriptor sets the NEXT bit to 1 */
+		if (i == 0)
+			desc[i].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+		else
+			desc[i].flag &= ~rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+
+		for (j = 0; j < HNS3_TC_NUM_ONE_DESC; j++) {
+			tc_idx = i * HNS3_TC_NUM_ONE_DESC + j;
+			tc = &s_buf->tc_thrd[tc_idx];
+
+			req->com_thrd[j].high =
+				rte_cpu_to_le_16(tc->high >> HNS3_BUF_UNIT_S);
+			req->com_thrd[j].high |=
+				 rte_cpu_to_le_16(BIT(HNS3_RX_PRIV_EN_B));
+			req->com_thrd[j].low =
+				rte_cpu_to_le_16(tc->low >> HNS3_BUF_UNIT_S);
+			req->com_thrd[j].low |=
+				 rte_cpu_to_le_16(BIT(HNS3_RX_PRIV_EN_B));
+		}
+	}
+
+	/* Send 2 descriptors at one time */
+	ret = hns3_cmd_send(hw, desc, HNS3_RX_COM_THRD_ALLOC_DESC_NUM);
+	if (ret)
+		PMD_INIT_LOG(ERR, "common threshold config cmd failed %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_common_wl_config(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc)
+{
+	struct hns3_shared_buf *buf = &buf_alloc->s_buf;
+	struct hns3_rx_com_wl *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RX_COM_WL_ALLOC, false);
+
+	req = (struct hns3_rx_com_wl *)desc.data;
+	req->com_wl.high = rte_cpu_to_le_16(buf->self.high >> HNS3_BUF_UNIT_S);
+	req->com_wl.high |= rte_cpu_to_le_16(BIT(HNS3_RX_PRIV_EN_B));
+
+	req->com_wl.low = rte_cpu_to_le_16(buf->self.low >> HNS3_BUF_UNIT_S);
+	req->com_wl.low |= rte_cpu_to_le_16(BIT(HNS3_RX_PRIV_EN_B));
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "common waterline config cmd failed %d", ret);
+
+	return ret;
+}
+
+int
+hns3_buffer_alloc(struct hns3_hw *hw)
+{
+	struct hns3_pkt_buf_alloc pkt_buf;
+	int ret;
+
+	memset(&pkt_buf, 0, sizeof(pkt_buf));
+	ret = hns3_tx_buffer_calc(hw, &pkt_buf);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "could not calc tx buffer size for all TCs %d",
+			     ret);
+		return ret;
+	}
+
+	ret = hns3_tx_buffer_alloc(hw, &pkt_buf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "could not alloc tx buffers %d", ret);
+		return ret;
+	}
+
+	ret = hns3_rx_buffer_calc(hw, &pkt_buf);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "could not calc rx priv buffer size for all TCs %d",
+			     ret);
+		return ret;
+	}
+
+	ret = hns3_rx_priv_buf_alloc(hw, &pkt_buf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "could not alloc rx priv buffer %d", ret);
+		return ret;
+	}
+
+	if (hns3_dev_dcb_supported(hw)) {
+		ret = hns3_rx_priv_wl_config(hw, &pkt_buf);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "could not configure rx private waterline %d",
+				     ret);
+			return ret;
+		}
+
+		ret = hns3_common_thrd_config(hw, &pkt_buf);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "could not configure common threshold %d",
+				     ret);
+			return ret;
+		}
+	}
+
+	ret = hns3_common_wl_config(hw, &pkt_buf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "could not configure common waterline %d",
+			     ret);
+
+	return ret;
+}
+
+static int
+hns3_mac_init(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_mac *mac = &hw->mac;
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	pf->support_sfp_query = true;
+	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
+		return ret;
+	}
+
+	mac->link_status = ETH_LINK_DOWN;
+
+	return hns3_config_mtu(hw, pf->mps);
+}
+
+static int
+hns3_get_mac_ethertype_cmd_status(uint16_t cmdq_resp, uint8_t resp_code)
+{
+#define HNS3_ETHERTYPE_SUCCESS_ADD		0
+#define HNS3_ETHERTYPE_ALREADY_ADD		1
+#define HNS3_ETHERTYPE_MGR_TBL_OVERFLOW		2
+#define HNS3_ETHERTYPE_KEY_CONFLICT		3
+	int return_status;
+
+	if (cmdq_resp) {
+		PMD_INIT_LOG(ERR,
+			     "cmdq execute failed for get_mac_ethertype_cmd_status, status=%d.\n",
+			     cmdq_resp);
+		return -EIO;
+	}
+
+	switch (resp_code) {
+	case HNS3_ETHERTYPE_SUCCESS_ADD:
+	case HNS3_ETHERTYPE_ALREADY_ADD:
+		return_status = 0;
+		break;
+	case HNS3_ETHERTYPE_MGR_TBL_OVERFLOW:
+		PMD_INIT_LOG(ERR,
+			     "add mac ethertype failed for manager table overflow.");
+		return_status = -EIO;
+		break;
+	case HNS3_ETHERTYPE_KEY_CONFLICT:
+		PMD_INIT_LOG(ERR, "add mac ethertype failed for key conflict.");
+		return_status = -EIO;
+		break;
+	default:
+		PMD_INIT_LOG(ERR,
+			     "add mac ethertype failed for undefined, code=%d.",
+			     resp_code);
+		return_status = -EIO;
+	}
+
+	return return_status;
+}
+
+static int
+hns3_add_mgr_tbl(struct hns3_hw *hw,
+		 const struct hns3_mac_mgr_tbl_entry_cmd *req)
+{
+	struct hns3_cmd_desc desc;
+	uint8_t resp_code;
+	uint16_t retval;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_ETHTYPE_ADD, false);
+	memcpy(desc.data, req, sizeof(struct hns3_mac_mgr_tbl_entry_cmd));
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			     "add mac ethertype failed for cmd_send, ret =%d.",
+			     ret);
+		return ret;
+	}
+
+	resp_code = (rte_le_to_cpu_32(desc.data[0]) >> 8) & 0xff;
+	retval = rte_le_to_cpu_16(desc.retval);
+
+	return hns3_get_mac_ethertype_cmd_status(retval, resp_code);
+}
+
+static void
+hns3_prepare_mgr_tbl(struct hns3_mac_mgr_tbl_entry_cmd *mgr_table,
+		     int *table_item_num)
+{
+	struct hns3_mac_mgr_tbl_entry_cmd *tbl;
+
+	/*
+	 * In current version, we add one item in management table as below:
+	 * 0x0180C200000E -- LLDP MC address
+	 */
+	tbl = mgr_table;
+	tbl->flags = HNS3_MAC_MGR_MASK_VLAN_B;
+	tbl->ethter_type = rte_cpu_to_le_16(HNS3_MAC_ETHERTYPE_LLDP);
+	tbl->mac_addr_hi32 = rte_cpu_to_le_32(htonl(0x0180C200));
+	tbl->mac_addr_lo16 = rte_cpu_to_le_16(htons(0x000E));
+	tbl->i_port_bitmap = 0x1;
+	*table_item_num = 1;
+}
+
+static int
+hns3_init_mgr_tbl(struct hns3_hw *hw)
+{
+#define HNS_MAC_MGR_TBL_MAX_SIZE	16
+	struct hns3_mac_mgr_tbl_entry_cmd mgr_table[HNS_MAC_MGR_TBL_MAX_SIZE];
+	int table_item_num;
+	int ret;
+	int i;
+
+	hns3_prepare_mgr_tbl(mgr_table, &table_item_num);
+	for (i = 0; i < table_item_num; i++) {
+		ret = hns3_add_mgr_tbl(hw, &mgr_table[i]);
+		if (ret) {
+			PMD_INIT_LOG(ERR, "add mac ethertype failed, ret =%d",
+				     ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void
+hns3_promisc_param_init(struct hns3_promisc_param *param, bool en_uc,
+			bool en_mc, bool en_bc, int vport_id)
+{
+	if (!param)
+		return;
+
+	memset(param, 0, sizeof(struct hns3_promisc_param));
+	if (en_uc)
+		param->enable = HNS3_PROMISC_EN_UC;
+	if (en_mc)
+		param->enable |= HNS3_PROMISC_EN_MC;
+	if (en_bc)
+		param->enable |= HNS3_PROMISC_EN_BC;
+	param->vf_id = vport_id;
+}
+
+static int
+hns3_cmd_set_promisc_mode(struct hns3_hw *hw, struct hns3_promisc_param *param)
+{
+	struct hns3_promisc_cfg_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_PROMISC_MODE, false);
+
+	req = (struct hns3_promisc_cfg_cmd *)desc.data;
+	req->vf_id = param->vf_id;
+	req->flag = (param->enable << HNS3_PROMISC_EN_B) |
+	    HNS3_PROMISC_TX_EN_B | HNS3_PROMISC_RX_EN_B;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Set promisc mode fail, status is %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_set_promisc_mode(struct hns3_hw *hw, bool en_uc_pmc, bool en_mc_pmc)
+{
+	struct hns3_promisc_param param;
+	bool en_bc_pmc = true;
+	uint8_t vf_id;
+	int ret;
+
+	/*
+	 * In current version VF is not supported when PF is taken over by
+	 * DPDK; the PF-related vf_id is 0, so we only need to configure
+	 * parameters for vf_id 0.
+	 */
+	vf_id = 0;
+
+	hns3_promisc_param_init(&param, en_uc_pmc, en_mc_pmc, en_bc_pmc, vf_id);
+	ret = hns3_cmd_set_promisc_mode(hw, &param);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+hns3_init_hardware(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_map_tqp(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to map tqp: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_init_umv_space(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init umv space: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_mac_init(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MAC: %d", ret);
+		goto err_mac_init;
+	}
+
+	ret = hns3_init_mgr_tbl(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init manager table: %d", ret);
+		goto err_mac_init;
+	}
+
+	ret = hns3_set_promisc_mode(hw, false, false);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set promisc mode: %d", ret);
+		goto err_mac_init;
+	}
+
+	ret = hns3_config_tso(hw, HNS3_TSO_MSS_MIN, HNS3_TSO_MSS_MAX);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to config tso: %d", ret);
+		goto err_mac_init;
+	}
+
+	ret = hns3_config_gro(hw, false);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to config gro: %d", ret);
+		goto err_mac_init;
+	}
+	return 0;
+
+err_mac_init:
+	hns3_uninit_umv_space(hw);
+	return ret;
+}
+
+static int
 hns3_init_pf(struct rte_eth_dev *eth_dev)
 {
 	struct rte_device *dev = eth_dev->device;
@@ -67,8 +1526,24 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
 		goto err_cmd_init;
 	}
 
+	/* Get configuration */
+	ret = hns3_get_configuration(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fetch configuration: %d", ret);
+		goto err_get_config;
+	}
+
+	ret = hns3_init_hardware(hns);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init hardware: %d", ret);
+		goto err_get_config;
+	}
+
 	return 0;
 
+err_get_config:
+	hns3_cmd_uninit(hw);
+
 err_cmd_init:
 	hns3_cmd_destroy_queue(hw);
 
@@ -88,6 +1563,7 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	hns3_uninit_umv_space(hw);
 	hns3_cmd_uninit(hw);
 	hns3_cmd_destroy_queue(hw);
 	hw->io_base = NULL;
@@ -128,6 +1604,7 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 
 	hns->is_vf = false;
 	hw->data = eth_dev->data;
+	hns->pf.mps = HNS3_DEFAULT_FRAME_LEN;
 
 	ret = hns3_init_pf(eth_dev);
 	if (ret) {
@@ -135,10 +1612,30 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 		goto err_init_pf;
 	}
 
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("hns3-mac",
+					       sizeof(struct rte_ether_addr) *
+					       HNS3_UC_MACADDR_NUM, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %zu bytes needed to store"
+			     " MAC addresses",
+			     sizeof(struct rte_ether_addr) *
+			     HNS3_UC_MACADDR_NUM);
+		ret = -ENOMEM;
+		goto err_rte_zmalloc;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.mac_addr,
+			    &eth_dev->data->mac_addrs[0]);
+
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	hns3_info(hw, "hns3 dev initialization successful!");
 
 	return 0;
 
+err_rte_zmalloc:
+	hns3_uninit_pf(eth_dev);
+
 err_init_pf:
 	eth_dev->dev_ops = NULL;
 	return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 84fcf34..d5f62fe 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -606,4 +606,7 @@ hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
 	return __sync_fetch_and_and(addr, ~mask) & mask;
 }
 
+int hns3_buffer_alloc(struct hns3_hw *hw);
+int hns3_config_gro(struct hns3_hw *hw, bool en);
+
 #endif /* _HNS3_ETHDEV_H_ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 06/22] net/hns3: add support for MAC address related operations
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (4 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 05/22] net/hns3: add the initialization " Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:03   ` Ferruh Yigit
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 07/22] net/hns3: add support for some misc operations Wei Hu (Xavier)
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the following mac address related operations defined in
struct eth_dev_ops: mac_addr_add, mac_addr_remove, mac_addr_set
and set_mc_addr_list.
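
For reference, a minimal application-side sketch of driving these ops
through the generic rte_ethdev API (port_id and the addresses below are
placeholders, assuming an already-probed port):

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static void mac_ops_demo(uint16_t port_id)
    {
        /* locally administered unicast and an IPv4 multicast address */
        struct rte_ether_addr uc = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
        struct rte_ether_addr mc = {
            .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } };

        /* exercises mac_addr_add / mac_addr_remove */
        rte_eth_dev_mac_addr_add(port_id, &uc, 0);
        rte_eth_dev_mac_addr_remove(port_id, &uc);
        /* exercises mac_addr_set (default address, index 0) */
        rte_eth_dev_default_mac_addr_set(port_id, &uc);
        /* exercises set_mc_addr_list */
        rte_eth_dev_set_mc_addr_list(port_id, &mc, 1);
    }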

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 816 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 816 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 3b5deb0..44e21ac 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -152,6 +152,818 @@ hns3_uninit_umv_space(struct hns3_hw *hw)
 	return 0;
 }
 
+static bool
+hns3_is_umv_space_full(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	bool is_full;
+
+	is_full = (pf->used_umv_size >= pf->max_umv_size);
+
+	return is_full;
+}
+
+static void
+hns3_update_umv_space(struct hns3_hw *hw, bool is_free)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+
+	if (is_free) {
+		if (pf->used_umv_size > 0)
+			pf->used_umv_size--;
+	} else
+		pf->used_umv_size++;
+}
+
+static void
+hns3_prepare_mac_addr(struct hns3_mac_vlan_tbl_entry_cmd *new_req,
+		      const uint8_t *addr, bool is_mc)
+{
+	const unsigned char *mac_addr = addr;
+	uint32_t high_val = ((uint32_t)mac_addr[3] << 24) |
+			    ((uint32_t)mac_addr[2] << 16) |
+			    ((uint32_t)mac_addr[1] << 8) |
+			    (uint32_t)mac_addr[0];
+	uint32_t low_val = ((uint32_t)mac_addr[5] << 8) | (uint32_t)mac_addr[4];
+
+	hns3_set_bit(new_req->flags, HNS3_MAC_VLAN_BIT0_EN_B, 1);
+	if (is_mc) {
+		hns3_set_bit(new_req->entry_type, HNS3_MAC_VLAN_BIT0_EN_B, 0);
+		hns3_set_bit(new_req->entry_type, HNS3_MAC_VLAN_BIT1_EN_B, 1);
+		hns3_set_bit(new_req->mc_mac_en, HNS3_MAC_VLAN_BIT0_EN_B, 1);
+	}
+
+	new_req->mac_addr_hi32 = rte_cpu_to_le_32(high_val);
+	new_req->mac_addr_lo16 = rte_cpu_to_le_16(low_val & 0xffff);
+}
+
+static int
+hns3_get_mac_vlan_cmd_status(struct hns3_hw *hw, uint16_t cmdq_resp,
+			     uint8_t resp_code,
+			     enum hns3_mac_vlan_tbl_opcode op)
+{
+	if (cmdq_resp) {
+		hns3_err(hw, "cmdq execute failed for get_mac_vlan_cmd_status, status=%u",
+			 cmdq_resp);
+		return -EIO;
+	}
+
+	if (op == HNS3_MAC_VLAN_ADD) {
+		if (resp_code == 0 || resp_code == 1) {
+			return 0;
+		} else if (resp_code == HNS3_ADD_UC_OVERFLOW) {
+			hns3_err(hw, "add mac addr failed for uc_overflow");
+			return -ENOSPC;
+		} else if (resp_code == HNS3_ADD_MC_OVERFLOW) {
+			hns3_err(hw, "add mac addr failed for mc_overflow");
+			return -ENOSPC;
+		}
+
+		hns3_err(hw, "add mac addr failed for undefined, code=%u",
+			 resp_code);
+		return -EIO;
+	} else if (op == HNS3_MAC_VLAN_REMOVE) {
+		if (resp_code == 0) {
+			return 0;
+		} else if (resp_code == 1) {
+			hns3_dbg(hw, "remove mac addr failed for miss");
+			return -ENOENT;
+		}
+
+		hns3_err(hw, "remove mac addr failed for undefined, code=%u",
+			 resp_code);
+		return -EIO;
+	} else if (op == HNS3_MAC_VLAN_LKUP) {
+		if (resp_code == 0) {
+			return 0;
+		} else if (resp_code == 1) {
+			hns3_dbg(hw, "lookup mac addr failed for miss");
+			return -ENOENT;
+		}
+
+		hns3_err(hw, "lookup mac addr failed for undefined, code=%u",
+			 resp_code);
+		return -EIO;
+	}
+
+	hns3_err(hw, "unknown opcode for get_mac_vlan_cmd_status, opcode=%u",
+		 op);
+
+	return -EINVAL;
+}
+
+static int
+hns3_lookup_mac_vlan_tbl(struct hns3_hw *hw,
+			 struct hns3_mac_vlan_tbl_entry_cmd *req,
+			 struct hns3_cmd_desc *desc, bool is_mc)
+{
+	uint8_t resp_code;
+	uint16_t retval;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc[0], HNS3_OPC_MAC_VLAN_ADD, true);
+	if (is_mc) {
+		desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+		memcpy(desc[0].data, req,
+			   sizeof(struct hns3_mac_vlan_tbl_entry_cmd));
+		hns3_cmd_setup_basic_desc(&desc[1], HNS3_OPC_MAC_VLAN_ADD,
+					  true);
+		desc[1].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+		hns3_cmd_setup_basic_desc(&desc[2], HNS3_OPC_MAC_VLAN_ADD,
+					  true);
+		ret = hns3_cmd_send(hw, desc, HNS3_MC_MAC_VLAN_ADD_DESC_NUM);
+	} else {
+		memcpy(desc[0].data, req,
+		       sizeof(struct hns3_mac_vlan_tbl_entry_cmd));
+		ret = hns3_cmd_send(hw, desc, 1);
+	}
+	if (ret) {
+		hns3_err(hw, "lookup mac addr failed for cmd_send, ret = %d.",
+			 ret);
+		return ret;
+	}
+	resp_code = (rte_le_to_cpu_32(desc[0].data[0]) >> 8) & 0xff;
+	retval = rte_le_to_cpu_16(desc[0].retval);
+
+	return hns3_get_mac_vlan_cmd_status(hw, retval, resp_code,
+					    HNS3_MAC_VLAN_LKUP);
+}
+
+static int
+hns3_add_mac_vlan_tbl(struct hns3_hw *hw,
+		      struct hns3_mac_vlan_tbl_entry_cmd *req,
+		      struct hns3_cmd_desc *mc_desc)
+{
+	uint8_t resp_code;
+	uint16_t retval;
+	int cfg_status;
+	int ret;
+
+	if (mc_desc == NULL) {
+		struct hns3_cmd_desc desc;
+
+		hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_ADD, false);
+		memcpy(desc.data, req,
+		       sizeof(struct hns3_mac_vlan_tbl_entry_cmd));
+		ret = hns3_cmd_send(hw, &desc, 1);
+		resp_code = (rte_le_to_cpu_32(desc.data[0]) >> 8) & 0xff;
+		retval = rte_le_to_cpu_16(desc.retval);
+
+		cfg_status = hns3_get_mac_vlan_cmd_status(hw, retval, resp_code,
+							  HNS3_MAC_VLAN_ADD);
+	} else {
+		hns3_cmd_reuse_desc(&mc_desc[0], false);
+		mc_desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+		hns3_cmd_reuse_desc(&mc_desc[1], false);
+		mc_desc[1].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+		hns3_cmd_reuse_desc(&mc_desc[2], false);
+		mc_desc[2].flag &= rte_cpu_to_le_16(~HNS3_CMD_FLAG_NEXT);
+		memcpy(mc_desc[0].data, req,
+		       sizeof(struct hns3_mac_vlan_tbl_entry_cmd));
+		mc_desc[0].retval = 0;
+		ret = hns3_cmd_send(hw, mc_desc, HNS3_MC_MAC_VLAN_ADD_DESC_NUM);
+		resp_code = (rte_le_to_cpu_32(mc_desc[0].data[0]) >> 8) & 0xff;
+		retval = rte_le_to_cpu_16(mc_desc[0].retval);
+
+		cfg_status = hns3_get_mac_vlan_cmd_status(hw, retval, resp_code,
+							  HNS3_MAC_VLAN_ADD);
+	}
+
+	if (ret) {
+		hns3_err(hw, "add mac addr failed for cmd_send, ret = %d", ret);
+		return ret;
+	}
+
+	return cfg_status;
+}
+
+static int
+hns3_remove_mac_vlan_tbl(struct hns3_hw *hw,
+			 struct hns3_mac_vlan_tbl_entry_cmd *req)
+{
+	struct hns3_cmd_desc desc;
+	uint8_t resp_code;
+	uint16_t retval;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_REMOVE, false);
+
+	memcpy(desc.data, req, sizeof(struct hns3_mac_vlan_tbl_entry_cmd));
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "del mac addr failed for cmd_send, ret = %d", ret);
+		return ret;
+	}
+	resp_code = (rte_le_to_cpu_32(desc.data[0]) >> 8) & 0xff;
+	retval = rte_le_to_cpu_16(desc.retval);
+
+	return hns3_get_mac_vlan_cmd_status(hw, retval, resp_code,
+					    HNS3_MAC_VLAN_REMOVE);
+}
+
+static int
+hns3_add_uc_addr_common(struct hns3_hw *hw, struct rte_ether_addr *mac_addr)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_mac_vlan_tbl_entry_cmd req;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_cmd_desc desc;
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	uint16_t egress_port = 0;
+	uint8_t vf_id;
+	int ret;
+
+	/* check if mac addr is valid */
+	if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Add unicast mac addr err! addr(%s) invalid",
+			 mac_str);
+		return -EINVAL;
+	}
+
+	memset(&req, 0, sizeof(req));
+
+	/*
+	 * In the current version VF is not supported when the PF is driven by
+	 * DPDK; the PF-related vf_id is 0, so we only need to configure
+	 * parameters for vf_id 0.
+	 */
+	vf_id = 0;
+	hns3_set_field(egress_port, HNS3_MAC_EPORT_VFID_M,
+		       HNS3_MAC_EPORT_VFID_S, vf_id);
+
+	req.egress_port = rte_cpu_to_le_16(egress_port);
+
+	hns3_prepare_mac_addr(&req, mac_addr->addr_bytes, false);
+
+	/*
+	 * Look up the mac address in the mac_vlan table, and add it if the
+	 * entry does not exist. Duplicate unicast entries are not allowed
+	 * in the mac_vlan table.
+	 */
+	ret = hns3_lookup_mac_vlan_tbl(hw, &req, &desc, false);
+	if (ret == -ENOENT) {
+		if (!hns3_is_umv_space_full(hw)) {
+			ret = hns3_add_mac_vlan_tbl(hw, &req, NULL);
+			if (!ret)
+				hns3_update_umv_space(hw, false);
+			return ret;
+		}
+
+		hns3_err(hw, "UC MAC table full(%u)", pf->used_umv_size);
+
+		return -ENOSPC;
+	}
+
+	rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr);
+
+	/* check if we just hit the duplicate */
+	if (ret == 0) {
+		hns3_dbg(hw, "mac addr(%s) has been in the MAC table", mac_str);
+		return 0;
+	}
+
+	hns3_err(hw, "PF failed to add unicast entry(%s) in the MAC table",
+		 mac_str);
+
+	return ret;
+}
+
+static int
+hns3_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
+		  uint32_t idx, __attribute__ ((unused)) uint32_t pool)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_add_uc_addr_common(hw, mac_addr);
+	if (ret) {
+		rte_spinlock_unlock(&hw->lock);
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to add mac addr(%s): %d", mac_str, ret);
+		return ret;
+	}
+
+	if (idx == 0)
+		hw->mac.default_addr_setted = true;
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+static int
+hns3_remove_uc_addr_common(struct hns3_hw *hw, struct rte_ether_addr *mac_addr)
+{
+	struct hns3_mac_vlan_tbl_entry_cmd req;
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret;
+
+	/* check if mac addr is valid */
+	if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Remove unicast mac addr err! addr(%s) invalid",
+			 mac_str);
+		return -EINVAL;
+	}
+
+	memset(&req, 0, sizeof(req));
+	hns3_set_bit(req.entry_type, HNS3_MAC_VLAN_BIT0_EN_B, 0);
+	hns3_prepare_mac_addr(&req, mac_addr->addr_bytes, false);
+	ret = hns3_remove_mac_vlan_tbl(hw, &req);
+	if (ret == -ENOENT) /* mac addr isn't existent in the mac vlan table. */
+		return 0;
+	else if (ret == 0)
+		hns3_update_umv_space(hw, true);
+
+	return ret;
+}
+
+static void
+hns3_remove_mac_addr(struct rte_eth_dev *dev, uint32_t idx)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	/* index will be checked by upper level rte interface */
+	struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[idx];
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_remove_uc_addr_common(hw, mac_addr);
+	if (ret) {
+		rte_spinlock_unlock(&hw->lock);
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to remove mac addr(%s): %d", mac_str, ret);
+		return;
+	}
+
+	if (idx == 0)
+		hw->mac.default_addr_setted = false;
+	rte_spinlock_unlock(&hw->lock);
+}
+
+static int
+hns3_set_default_mac_addr(struct rte_eth_dev *dev,
+			  struct rte_ether_addr *mac_addr)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_ether_addr *oaddr;
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	bool default_addr_setted;
+	bool rm_success = false;
+	int ret, ret_val;
+
+	/* check if mac addr is valid */
+	if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to set mac addr, addr(%s) invalid",
+			 mac_str);
+		return -EINVAL;
+	}
+
+	oaddr = (struct rte_ether_addr *)hw->mac.mac_addr;
+	default_addr_setted = hw->mac.default_addr_setted;
+	if (default_addr_setted && !!rte_is_same_ether_addr(mac_addr, oaddr))
+		return 0;
+
+	rte_spinlock_lock(&hw->lock);
+	if (default_addr_setted) {
+		ret = hns3_remove_uc_addr_common(hw, oaddr);
+		if (ret) {
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      oaddr);
+			hns3_warn(hw, "Remove old uc mac address(%s) fail: %d",
+				  mac_str, ret);
+			rm_success = false;
+		} else
+			rm_success = true;
+	}
+
+	ret = hns3_add_uc_addr_common(hw, mac_addr);
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to set mac addr(%s): %d", mac_str, ret);
+		goto err_add_uc_addr;
+	}
+
+	ret = hns3_pause_addr_cfg(hw, mac_addr->addr_bytes);
+	if (ret) {
+		hns3_err(hw, "Failed to configure mac pause address: %d", ret);
+		goto err_pause_addr_cfg;
+	}
+
+	rte_ether_addr_copy(mac_addr,
+			    (struct rte_ether_addr *)hw->mac.mac_addr);
+	hw->mac.default_addr_setted = true;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+
+err_pause_addr_cfg:
+	ret_val = hns3_remove_uc_addr_common(hw, mac_addr);
+	if (ret_val) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_warn(hw,
+			  "Failed to roll back and delete the new mac addr(%s): %d",
+			  mac_str, ret_val);
+	}
+
+err_add_uc_addr:
+	if (rm_success) {
+		ret_val = hns3_add_uc_addr_common(hw, oaddr);
+		if (ret_val) {
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      oaddr);
+			hns3_warn(hw,
+				  "Failed to restore old uc mac addr(%s): %d",
+				  mac_str, ret_val);
+			hw->mac.default_addr_setted = false;
+		}
+	}
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+static int
+hns3_configure_all_mac_addr(struct hns3_adapter *hns, bool del)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_ether_addr *addr;
+	int err = 0;
+	int ret;
+	int i;
+
+	for (i = 0; i < HNS3_UC_MACADDR_NUM; i++) {
+		addr = &hw->data->mac_addrs[i];
+		if (!rte_is_valid_assigned_ether_addr(addr))
+			continue;
+		if (del)
+			ret = hns3_remove_uc_addr_common(hw, addr);
+		else
+			ret = hns3_add_uc_addr_common(hw, addr);
+		if (ret) {
+			err = ret;
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      addr);
+			hns3_dbg(hw,
+				 "Failed to %s mac addr(%s). ret:%d i:%d",
+				 del ? "remove" : "restore", mac_str, ret, i);
+		}
+	}
+	return err;
+}
+
+static void
+hns3_update_desc_vfid(struct hns3_cmd_desc *desc, uint8_t vfid, bool clr)
+{
+#define HNS3_VF_NUM_IN_FIRST_DESC 192
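+	/*
+	 * Each command descriptor carries six 32-bit data words, i.e.
+	 * 6 * 32 = 192 VF bits, so VFs 0~191 are located in desc[1] and
+	 * the remaining VFs spill over into desc[2].
+	 */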
+	uint8_t word_num;
+	uint8_t bit_num;
+
+	if (vfid < HNS3_VF_NUM_IN_FIRST_DESC) {
+		word_num = vfid / 32;
+		bit_num = vfid % 32;
+		if (clr)
+			desc[1].data[word_num] &=
+			    rte_cpu_to_le_32(~(1UL << bit_num));
+		else
+			desc[1].data[word_num] |=
+			    rte_cpu_to_le_32(1UL << bit_num);
+	} else {
+		word_num = (vfid - HNS3_VF_NUM_IN_FIRST_DESC) / 32;
+		bit_num = vfid % 32;
+		if (clr)
+			desc[2].data[word_num] &=
+			    rte_cpu_to_le_32(~(1UL << bit_num));
+		else
+			desc[2].data[word_num] |=
+			    rte_cpu_to_le_32(1UL << bit_num);
+	}
+}
+
+static int
+hns3_add_mc_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr)
+{
+	struct hns3_mac_vlan_tbl_entry_cmd req;
+	struct hns3_cmd_desc desc[3];
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	uint8_t vf_id;
+	int ret;
+
+	/* Check if mac addr is valid */
+	if (!rte_is_multicast_ether_addr(mac_addr)) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to add mc mac addr, addr(%s) invalid",
+			 mac_str);
+		return -EINVAL;
+	}
+
+	memset(&req, 0, sizeof(req));
+	hns3_set_bit(req.entry_type, HNS3_MAC_VLAN_BIT0_EN_B, 0);
+	hns3_prepare_mac_addr(&req, mac_addr->addr_bytes, true);
+	ret = hns3_lookup_mac_vlan_tbl(hw, &req, desc, true);
+	if (ret) {
+		/* This mac addr does not exist, add a new entry for it */
+		memset(desc[0].data, 0, sizeof(desc[0].data));
+		memset(desc[1].data, 0, sizeof(desc[1].data));
+		memset(desc[2].data, 0, sizeof(desc[2].data));
+	}
+
+	/*
+	 * In the current version VF is not supported when the PF is driven by
+	 * DPDK; the PF-related vf_id is 0, so we only need to configure
+	 * parameters for vf_id 0.
+	 */
+	vf_id = 0;
+	hns3_update_desc_vfid(desc, vf_id, false);
+	ret = hns3_add_mac_vlan_tbl(hw, &req, desc);
+	if (ret) {
+		if (ret == -ENOSPC)
+			hns3_err(hw, "mc mac vlan table is full");
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to add mc mac addr(%s): %d", mac_str, ret);
+	}
+
+	return ret;
+}
+
+static int
+hns3_remove_mc_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr)
+{
+	struct hns3_mac_vlan_tbl_entry_cmd req;
+	struct hns3_cmd_desc desc[3];
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	uint8_t vf_id;
+	int ret;
+
+	/* Check if mac addr is valid */
+	if (!rte_is_multicast_ether_addr(mac_addr)) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to rm mc mac addr, addr(%s) invalid",
+			 mac_str);
+		return -EINVAL;
+	}
+
+	memset(&req, 0, sizeof(req));
+	hns3_set_bit(req.entry_type, HNS3_MAC_VLAN_BIT0_EN_B, 0);
+	hns3_prepare_mac_addr(&req, mac_addr->addr_bytes, true);
+	ret = hns3_lookup_mac_vlan_tbl(hw, &req, desc, true);
+	if (ret == 0) {
+		/*
+		 * This mac addr exists; remove this handle's VFID for it.
+		 * In the current version VF is not supported when the PF is
+		 * driven by DPDK; the PF-related vf_id is 0, so we only need
+		 * to configure parameters for vf_id 0.
+		 */
+		vf_id = 0;
+		hns3_update_desc_vfid(desc, vf_id, true);
+
+		/* All the vfids are now zero, so this entry needs to be deleted */
+		ret = hns3_remove_mac_vlan_tbl(hw, &req);
+	} else if (ret == -ENOENT) {
+		/* This mac addr doesn't exist. */
+		return 0;
+	}
+
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to rm mc mac addr(%s): %d", mac_str, ret);
+	}
+
+	return ret;
+}
+
+static int
+hns3_set_mc_addr_chk_param(struct hns3_hw *hw,
+			   struct rte_ether_addr *mc_addr_set,
+			   uint32_t nb_mc_addr)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct rte_ether_addr *addr;
+	uint32_t i;
+	uint32_t j;
+
+	if (nb_mc_addr > HNS3_MC_MACADDR_NUM) {
+		hns3_err(hw, "Failed to set mc mac addr, nb_mc_addr(%u) "
+			 "invalid. valid range: 0~%d",
+			 nb_mc_addr, HNS3_MC_MACADDR_NUM);
+		return -EINVAL;
+	}
+
+	/* Check if input mac addresses are valid */
+	for (i = 0; i < nb_mc_addr; i++) {
+		addr = &mc_addr_set[i];
+		if (!rte_is_multicast_ether_addr(addr)) {
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      addr);
+			hns3_err(hw,
+				 "Failed to set mc mac addr, addr(%s) invalid.",
+				 mac_str);
+			return -EINVAL;
+		}
+
+		/* Check if there are duplicate addresses */
+		for (j = i + 1; j < nb_mc_addr; j++) {
+			if (rte_is_same_ether_addr(addr, &mc_addr_set[j])) {
+				rte_ether_format_addr(mac_str,
+						      RTE_ETHER_ADDR_FMT_SIZE,
+						      addr);
+				hns3_err(hw, "Failed to set mc mac addr, "
+					 "addrs invalid. two same addrs(%s).",
+					 mac_str);
+				return -EINVAL;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static void
+hns3_set_mc_addr_calc_addr(struct hns3_hw *hw,
+			   struct rte_ether_addr *mc_addr_set,
+			   int mc_addr_num,
+			   struct rte_ether_addr *reserved_addr_list,
+			   int *reserved_addr_num,
+			   struct rte_ether_addr *add_addr_list,
+			   int *add_addr_num,
+			   struct rte_ether_addr *rm_addr_list,
+			   int *rm_addr_num)
+{
+	struct rte_ether_addr *addr;
+	int current_addr_num;
+	int reserved_num = 0;
+	int add_num = 0;
+	int rm_num = 0;
+	int num;
+	int i;
+	int j;
+	bool same_addr;
+
+	/* Calculate the mc mac address list that should be removed */
+	current_addr_num = hw->mc_addrs_num;
+	for (i = 0; i < current_addr_num; i++) {
+		addr = &hw->mc_addrs[i];
+		same_addr = false;
+		for (j = 0; j < mc_addr_num; j++) {
+			if (rte_is_same_ether_addr(addr, &mc_addr_set[j])) {
+				same_addr = true;
+				break;
+			}
+		}
+
+		if (!same_addr) {
+			rte_ether_addr_copy(addr, &rm_addr_list[rm_num]);
+			rm_num++;
+		} else {
+			rte_ether_addr_copy(addr,
+					    &reserved_addr_list[reserved_num]);
+			reserved_num++;
+		}
+	}
+
+	/* Calculate the mc mac address list that should be added */
+	for (i = 0; i < mc_addr_num; i++) {
+		addr = &mc_addr_set[i];
+		same_addr = false;
+		for (j = 0; j < current_addr_num; j++) {
+			if (rte_is_same_ether_addr(addr, &hw->mc_addrs[j])) {
+				same_addr = true;
+				break;
+			}
+		}
+
+		if (!same_addr) {
+			rte_ether_addr_copy(addr, &add_addr_list[add_num]);
+			add_num++;
+		}
+	}
+
+	/* Reorder the mc mac address list maintained by driver */
+	for (i = 0; i < reserved_num; i++)
+		rte_ether_addr_copy(&reserved_addr_list[i], &hw->mc_addrs[i]);
+
+	for (i = 0; i < rm_num; i++) {
+		num = reserved_num + i;
+		rte_ether_addr_copy(&rm_addr_list[i], &hw->mc_addrs[num]);
+	}
+
+	*reserved_addr_num = reserved_num;
+	*add_addr_num = add_num;
+	*rm_addr_num = rm_num;
+}
+
+static int
+hns3_set_mc_mac_addr_list(struct rte_eth_dev *dev,
+			  struct rte_ether_addr *mc_addr_set,
+			  uint32_t nb_mc_addr)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_ether_addr reserved_addr_list[HNS3_MC_MACADDR_NUM];
+	struct rte_ether_addr add_addr_list[HNS3_MC_MACADDR_NUM];
+	struct rte_ether_addr rm_addr_list[HNS3_MC_MACADDR_NUM];
+	struct rte_ether_addr *addr;
+	int reserved_addr_num;
+	int add_addr_num;
+	int rm_addr_num;
+	int mc_addr_num;
+	int num;
+	int ret;
+	int i;
+
+	/* Check if input parameters are valid */
+	ret = hns3_set_mc_addr_chk_param(hw, mc_addr_set, nb_mc_addr);
+	if (ret)
+		return ret;
+
+	rte_spinlock_lock(&hw->lock);
+
+	/*
+	 * Calculate the mc mac address lists that should be removed and
+	 * added, then reorder the mc mac address list maintained by the
+	 * driver.
+	 */
+	mc_addr_num = (int)nb_mc_addr;
+	hns3_set_mc_addr_calc_addr(hw, mc_addr_set, mc_addr_num,
+				   reserved_addr_list, &reserved_addr_num,
+				   add_addr_list, &add_addr_num,
+				   rm_addr_list, &rm_addr_num);
+
+	/* Remove mc mac addresses */
+	for (i = 0; i < rm_addr_num; i++) {
+		num = rm_addr_num - i - 1;
+		addr = &rm_addr_list[num];
+		ret = hns3_remove_mc_addr(hw, addr);
+		if (ret) {
+			rte_spinlock_unlock(&hw->lock);
+			return ret;
+		}
+		hw->mc_addrs_num--;
+	}
+
+	/* Add mc mac addresses */
+	for (i = 0; i < add_addr_num; i++) {
+		addr = &add_addr_list[i];
+		ret = hns3_add_mc_addr(hw, addr);
+		if (ret) {
+			rte_spinlock_unlock(&hw->lock);
+			return ret;
+		}
+
+		num = reserved_addr_num + i;
+		rte_ether_addr_copy(addr, &hw->mc_addrs[num]);
+		hw->mc_addrs_num++;
+	}
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static int
+hns3_configure_all_mc_mac_addr(struct hns3_adapter *hns, bool del)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_ether_addr *addr;
+	int err = 0;
+	int ret;
+	int i;
+
+	for (i = 0; i < hw->mc_addrs_num; i++) {
+		addr = &hw->mc_addrs[i];
+		if (!rte_is_multicast_ether_addr(addr))
+			continue;
+		if (del)
+			ret = hns3_remove_mc_addr(hw, addr);
+		else
+			ret = hns3_add_mc_addr(hw, addr);
+		if (ret) {
+			err = ret;
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      addr);
+			hns3_dbg(hw, "%s mc mac addr: %s failed",
+				 del ? "Remove" : "Restore", mac_str);
+		}
+	}
+	return err;
+}
+
 static int
 hns3_set_mac_mtu(struct hns3_hw *hw, uint16_t new_mps)
 {
@@ -1582,6 +2394,10 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 
 static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.dev_close          = hns3_dev_close,
+	.mac_addr_add           = hns3_add_mac_addr,
+	.mac_addr_remove        = hns3_remove_mac_addr,
+	.mac_addr_set           = hns3_set_default_mac_addr,
+	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
 };
 
 static int
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 07/22] net/hns3: add support for some misc operations
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (5 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 06/22] net/hns3: add support for MAC address related operations Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:04   ` Ferruh Yigit
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 08/22] net/hns3: add support for link update operation Wei Hu (Xavier)
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the following operations defined in struct eth_dev_ops:
mtu_set, infos_get and fw_version_get for the hns3 PMD driver.
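
As a usage sketch, these map to the generic ethdev calls below
(port_id is a placeholder for an already-probed port):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void misc_ops_demo(uint16_t port_id)
    {
        struct rte_eth_dev_info info;
        char fw[32];

        rte_eth_dev_set_mtu(port_id, 1500);    /* -> mtu_set */
        rte_eth_dev_info_get(port_id, &info);  /* -> dev_infos_get */
        /* -> fw_version_get; the driver formats "0x%08x" plus '\0' */
        if (rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw)) == 0)
            printf("firmware: %s\n", fw);
    }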

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 137 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 1 deletion(-)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 44e21ac..ced9348 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -40,6 +40,8 @@
 int hns3_logtype_init;
 int hns3_logtype_driver;
 
+static int hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+
 static int
 hns3_config_tso(struct hns3_hw *hw, unsigned int tso_mss_min,
 		unsigned int tso_mss_max)
@@ -1000,6 +1002,131 @@ hns3_config_mtu(struct hns3_hw *hw, uint16_t mps)
 }
 
 static int
+hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
+	struct hns3_hw *hw = &hns->hw;
+	bool is_jumbo_frame;
+	int ret;
+
+	if (mtu < RTE_ETHER_MIN_MTU || frame_size > HNS3_MAX_FRAME_LEN) {
+		hns3_err(hw, "Failed to set mtu, mtu(%u) invalid. valid "
+			 "range: %d~%d", mtu, RTE_ETHER_MIN_MTU, HNS3_MAX_MTU);
+		return -EINVAL;
+	}
+
+	if (dev->data->dev_started) {
+		hns3_err(hw, "Failed to set mtu, port %u must be stopped "
+			 "before configuration", dev->data->port_id);
+		return -EBUSY;
+	}
+
+	rte_spinlock_lock(&hw->lock);
+	is_jumbo_frame = frame_size > RTE_ETHER_MAX_LEN;
+	frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
+
+	/*
+	 * The maximum value of frame_size is HNS3_MAX_FRAME_LEN, so it can
+	 * safely be assigned to a "uint16_t" variable.
+	 */
+	ret = hns3_config_mtu(hw, (uint16_t)frame_size);
+	if (ret) {
+		rte_spinlock_unlock(&hw->lock);
+		hns3_err(hw, "Failed to set mtu, port %u mtu %u: %d",
+			 dev->data->port_id, mtu, ret);
+		return ret;
+	}
+	hns->pf.mps = (uint16_t)frame_size;
+	if (is_jumbo_frame)
+		dev->data->dev_conf.rxmode.offloads |=
+						DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static void
+hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	info->max_rx_queues = hw->tqps_num;
+	info->max_tx_queues = hw->tqps_num;
+	info->max_rx_pktlen = HNS3_MAX_FRAME_LEN; /* CRC included */
+	info->min_rx_bufsize = hw->rx_buf_len;
+	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
+	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
+	info->min_mtu = RTE_ETHER_MIN_MTU;
+	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
+				 DEV_RX_OFFLOAD_UDP_CKSUM |
+				 DEV_RX_OFFLOAD_TCP_CKSUM |
+				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 DEV_RX_OFFLOAD_KEEP_CRC |
+				 DEV_RX_OFFLOAD_SCATTER |
+				 DEV_RX_OFFLOAD_VLAN_STRIP |
+				 DEV_RX_OFFLOAD_QINQ_STRIP |
+				 DEV_RX_OFFLOAD_VLAN_FILTER |
+				 DEV_RX_OFFLOAD_VLAN_EXTEND |
+				 DEV_RX_OFFLOAD_JUMBO_FRAME);
+	info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	info->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
+				 DEV_TX_OFFLOAD_UDP_CKSUM |
+				 DEV_TX_OFFLOAD_TCP_CKSUM |
+				 DEV_TX_OFFLOAD_VLAN_INSERT |
+				 DEV_TX_OFFLOAD_QINQ_INSERT |
+				 DEV_TX_OFFLOAD_MULTI_SEGS |
+				 info->tx_queue_offload_capa);
+
+	info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = HNS3_MAX_RING_DESC,
+		.nb_min = HNS3_MIN_RING_DESC,
+		.nb_align = HNS3_ALIGN_RING_DESC,
+	};
+
+	info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = HNS3_MAX_RING_DESC,
+		.nb_min = HNS3_MIN_RING_DESC,
+		.nb_align = HNS3_ALIGN_RING_DESC,
+	};
+
+	info->vmdq_queue_num = 0;
+
+	info->reta_size = HNS3_RSS_IND_TBL_SIZE;
+	info->hash_key_size = HNS3_RSS_KEY_SIZE;
+	info->flow_type_rss_offloads = HNS3_ETH_RSS_SUPPORT;
+
+	info->default_rxportconf.burst_size = HNS3_DEFAULT_PORT_CONF_BURST_SIZE;
+	info->default_txportconf.burst_size = HNS3_DEFAULT_PORT_CONF_BURST_SIZE;
+	info->default_rxportconf.nb_queues = HNS3_DEFAULT_PORT_CONF_QUEUES_NUM;
+	info->default_txportconf.nb_queues = HNS3_DEFAULT_PORT_CONF_QUEUES_NUM;
+	info->default_rxportconf.ring_size = HNS3_DEFAULT_RING_DESC;
+	info->default_txportconf.ring_size = HNS3_DEFAULT_RING_DESC;
+}
+
+static int
+hns3_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
+		    size_t fw_size)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "0x%08x", hw->fw_version);
+	ret += 1; /* add the size of '\0' */
+	if (fw_size < (uint32_t)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 hns3_parse_func_status(struct hns3_hw *hw, struct hns3_func_status_cmd *status)
 {
 	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
@@ -2394,6 +2521,9 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 
 static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.dev_close          = hns3_dev_close,
+	.mtu_set            = hns3_dev_mtu_set,
+	.dev_infos_get          = hns3_dev_infos_get,
+	.fw_version_get         = hns3_fw_version_get,
 	.mac_addr_add           = hns3_add_mac_addr,
 	.mac_addr_remove        = hns3_remove_mac_addr,
 	.mac_addr_set           = hns3_set_default_mac_addr,
@@ -2420,7 +2550,12 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 
 	hns->is_vf = false;
 	hw->data = eth_dev->data;
-	hns->pf.mps = HNS3_DEFAULT_FRAME_LEN;
+
+	/*
+	 * Set the default max packet size according to the default MTU
+	 * value in the DPDK framework.
+	 */
+	hns->pf.mps = hw->data->mtu + HNS3_ETH_OVERHEAD;
 
 	ret = hns3_init_pf(eth_dev);
 	if (ret) {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 08/22] net/hns3: add support for link update operation
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (6 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 07/22] net/hns3: add support for some misc operations Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:04   ` Ferruh Yigit
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver Wei Hu (Xavier)
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the link update operation to the hns3 PMD driver.
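
As a usage sketch, an application reads the link state cached by this
op through the generic API (port_id is a placeholder):

    #include <rte_ethdev.h>

    static uint32_t link_demo(uint16_t port_id)
    {
        struct rte_eth_link link;

        /* invokes the PMD's link_update without waiting for completion */
        rte_eth_link_get_nowait(port_id, &link);
        return link.link_status == ETH_LINK_UP ? link.link_speed : 0;
    }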

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 199 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 199 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ced9348..a162d7f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -1127,6 +1127,40 @@ hns3_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
 }
 
 static int
+hns3_dev_link_update(struct rte_eth_dev *eth_dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_mac *mac = &hw->mac;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+	switch (mac->link_speed) {
+	case ETH_SPEED_NUM_10M:
+	case ETH_SPEED_NUM_100M:
+	case ETH_SPEED_NUM_1G:
+	case ETH_SPEED_NUM_10G:
+	case ETH_SPEED_NUM_25G:
+	case ETH_SPEED_NUM_40G:
+	case ETH_SPEED_NUM_50G:
+	case ETH_SPEED_NUM_100G:
+		new_link.link_speed = mac->link_speed;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	}
+
+	new_link.link_duplex = mac->link_duplex;
+	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_autoneg =
+	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(eth_dev, &new_link);
+}
+
+static int
 hns3_parse_func_status(struct hns3_hw *hw, struct hns3_func_status_cmd *status)
 {
 	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
@@ -2382,6 +2416,169 @@ hns3_set_promisc_mode(struct hns3_hw *hw, bool en_uc_pmc, bool en_mc_pmc)
 }
 
 static int
+hns3_get_sfp_speed(struct hns3_hw *hw, uint32_t *speed)
+{
+	struct hns3_sfp_speed_cmd *resp;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_SFP_GET_SPEED, true);
+	resp = (struct hns3_sfp_speed_cmd *)desc.data;
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret == -EOPNOTSUPP) {
+		hns3_err(hw, "IMP does not support getting SFP speed, ret = %d", ret);
+		return ret;
+	} else if (ret) {
+		hns3_err(hw, "get sfp speed failed %d", ret);
+		return ret;
+	}
+
+	*speed = resp->sfp_speed;
+
+	return 0;
+}
+
+static uint8_t
+hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
+{
+	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
+		duplex = ETH_LINK_FULL_DUPLEX;
+
+	return duplex;
+}
+
+static int
+hns3_cfg_mac_speed_dup(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
+{
+	struct hns3_mac *mac = &hw->mac;
+	int ret;
+
+	duplex = hns3_check_speed_dup(duplex, speed);
+	if (mac->link_speed == speed && mac->link_duplex == duplex)
+		return 0;
+
+	ret = hns3_cfg_mac_speed_dup_hw(hw, speed, duplex);
+	if (ret)
+		return ret;
+
+	mac->link_speed = speed;
+	mac->link_duplex = duplex;
+
+	return 0;
+}
+
+static int
+hns3_update_speed_duplex(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_pf *pf = &hns->pf;
+	uint32_t speed;
+	int ret;
+
+	/* If IMP does not support querying SFP/qSFP speed, return directly */
+	if (!pf->support_sfp_query)
+		return 0;
+
+	ret = hns3_get_sfp_speed(hw, &speed);
+	if (ret == -EOPNOTSUPP) {
+		pf->support_sfp_query = false;
+		return ret;
+	} else if (ret)
+		return ret;
+
+	if (speed == ETH_SPEED_NUM_NONE)
+		return 0; /* do nothing if no SFP */
+
+	/* Config full duplex for SFP */
+	return hns3_cfg_mac_speed_dup(hw, speed, ETH_LINK_FULL_DUPLEX);
+}
+
+static int
+hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
+{
+	struct hns3_config_mac_mode_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint32_t loop_en = 0;
+	uint8_t val = 0;
+	int ret;
+
+	req = (struct hns3_config_mac_mode_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_MAC_MODE, false);
+	if (enable)
+		val = 1;
+	hns3_set_bit(loop_en, HNS3_MAC_TX_EN_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_RX_EN_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_PAD_TX_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_PAD_RX_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_1588_TX_B, 0);
+	hns3_set_bit(loop_en, HNS3_MAC_1588_RX_B, 0);
+	hns3_set_bit(loop_en, HNS3_MAC_APP_LP_B, 0);
+	hns3_set_bit(loop_en, HNS3_MAC_LINE_LP_B, 0);
+	hns3_set_bit(loop_en, HNS3_MAC_FCS_TX_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_TX_OVERSIZE_TRUNCATE_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_RX_OVERSIZE_TRUNCATE_B, val);
+	hns3_set_bit(loop_en, HNS3_MAC_TX_UNDER_MIN_ERR_B, val);
+	req->txrx_pad_fcs_loop_en = rte_cpu_to_le_32(loop_en);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		PMD_INIT_LOG(ERR, "mac enable fail, ret = %d.", ret);
+
+	return ret;
+}
+
+static int
+hns3_get_mac_link_status(struct hns3_hw *hw)
+{
+	struct hns3_link_status_cmd *req;
+	struct hns3_cmd_desc desc;
+	int link_status;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_LINK_STATUS, true);
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "get link status cmd failed %d", ret);
+		return ret;
+	}
+
+	req = (struct hns3_link_status_cmd *)desc.data;
+	link_status = req->status & HNS3_LINK_STATUS_UP_M;
+
+	return !!link_status;
+}
+
+static void
+hns3_update_link_status(struct hns3_hw *hw)
+{
+	int state;
+
+	state = hns3_get_mac_link_status(hw);
+	if (state != hw->mac.link_status)
+		hw->mac.link_status = state;
+}
+
+static void
+hns3_service_handler(void *param)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	if (!hns3_is_reset_pending(hns)) {
+		hns3_update_speed_duplex(eth_dev);
+		hns3_update_link_status(hw);
+	} else
+		hns3_warn(hw, "Cancel the query when reset is pending");
+
+	rte_eal_alarm_set(HNS3_SERVICE_INTERVAL, hns3_service_handler, eth_dev);
+}
+
+static int
 hns3_init_hardware(struct hns3_adapter *hns)
 {
 	struct hns3_hw *hw = &hns->hw;
@@ -2528,6 +2725,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.mac_addr_remove        = hns3_remove_mac_addr,
 	.mac_addr_set           = hns3_set_default_mac_addr,
 	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
+	.link_update            = hns3_dev_link_update,
 };
 
 static int
@@ -2580,6 +2778,7 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 			    &eth_dev->data->mac_addrs[0]);
 
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	rte_eal_alarm_set(HNS3_SERVICE_INTERVAL, hns3_service_handler, eth_dev);
 	hns3_info(hw, "hns3 dev initialization successful!");
 
 	return 0;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (7 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 08/22] net/hns3: add support for link update operation Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:06   ` Ferruh Yigit
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 10/22] net/hns3: add support for RSS " Wei Hu (Xavier)
                   ` (13 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds support for the flow director of the hns3 PMD driver.
The flow director feature is only supported in the hns3 PF driver.
It supports creating, deleting and flushing rules for network L2/L3/L4
and tunnel packets, as well as querying rule hit statistics.
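
A minimal rte_flow sketch of the kind of rule this enables, steering
one IPv4 destination address to Rx queue 1 (port_id, the address and
the queue index are placeholders):

    #include <stdint.h>
    #include <rte_flow.h>
    #include <rte_ip.h>

    static struct rte_flow *fdir_demo(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ipv4 spec = {
            .hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 1)) };
        struct rte_flow_item_ipv4 mask = {
            .hdr.dst_addr = RTE_BE32(UINT32_MAX) };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4,
              .spec = &spec, .mask = &mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        /* matched packets are steered to queue 1 by the fdir table */
        return rte_flow_create(port_id, &attr, pattern, actions, &err);
    }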

Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c    |    1 +
 drivers/net/hns3/hns3_ethdev.c |   33 +
 drivers/net/hns3/hns3_ethdev.h |    3 +
 drivers/net/hns3/hns3_fdir.c   | 1062 +++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_fdir.h   |  203 ++++++
 drivers/net/hns3/hns3_flow.c   | 1450 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 2752 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_fdir.c
 create mode 100644 drivers/net/hns3/hns3_fdir.h
 create mode 100644 drivers/net/hns3/hns3_flow.c

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index f272374..4fc282c 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -27,6 +27,7 @@
 #include <rte_io.h>
 
 #include "hns3_cmd.h"
+#include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_regs.h"
 #include "hns3_logs.h"
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index a162d7f..c1f9bcb 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -30,6 +30,7 @@
 #include <rte_pci.h>
 
 #include "hns3_cmd.h"
+#include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_regs.h"
@@ -2614,6 +2615,12 @@ hns3_init_hardware(struct hns3_adapter *hns)
 		goto err_mac_init;
 	}
 
+	ret = hns3_init_fd_config(hns);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init flow director: %d", ret);
+		goto err_mac_init;
+	}
+
 	ret = hns3_config_tso(hw, HNS3_TSO_MSS_MIN, HNS3_TSO_MSS_MAX);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to config tso: %d", ret);
@@ -2675,8 +2682,18 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
 		goto err_get_config;
 	}
 
+	/* Initialize flow director filter list & hash */
+	ret = hns3_fdir_filter_init(hns);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to alloc hashmap for fdir: %d", ret);
+		goto err_hw_init;
+	}
+
 	return 0;
 
+err_hw_init:
+	hns3_uninit_umv_space(hw);
+
 err_get_config:
 	hns3_cmd_uninit(hw);
 
@@ -2699,6 +2716,7 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	hns3_fdir_filter_uninit(hns);
 	hns3_uninit_umv_space(hw);
 	hns3_cmd_uninit(hw);
 	hns3_cmd_destroy_queue(hw);
@@ -2726,6 +2744,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.mac_addr_set           = hns3_set_default_mac_addr,
 	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
 	.link_update            = hns3_dev_link_update,
+	.filter_ctrl            = hns3_dev_filter_ctrl,
 };
 
 static int
@@ -2739,6 +2758,16 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+	eth_dev->process_private = (struct hns3_process_private *)
+	    rte_zmalloc_socket("hns3_filter_list",
+			       sizeof(struct hns3_process_private),
+			       RTE_CACHE_LINE_SIZE, eth_dev->device->numa_node);
+	if (eth_dev->process_private == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to alloc memory for process private");
+		return -ENOMEM;
+	}
+	/* initialize flow filter lists */
+	hns3_filterlist_init(eth_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -2788,6 +2817,8 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 
 err_init_pf:
 	eth_dev->dev_ops = NULL;
+	rte_free(eth_dev->process_private);
+	eth_dev->process_private = NULL;
 	return ret;
 }
 
@@ -2808,6 +2839,8 @@ hns3_dev_uninit(struct rte_eth_dev *eth_dev)
 	if (hw->adapter_state < HNS3_NIC_CLOSING)
 		hns3_dev_close(eth_dev);
 
+	rte_free(eth_dev->process_private);
+	eth_dev->process_private = NULL;
 	hw->adapter_state = HNS3_NIC_REMOVED;
 	return 0;
 }
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index d5f62fe..c46211a 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -463,6 +463,9 @@ struct hns3_pf {
 	struct hns3_vtag_cfg vtag_config;
 	struct hns3_port_base_vlan_config port_base_vlan_cfg;
 	LIST_HEAD(vlan_tbl, hns3_user_vlan_table) vlan_list;
+
+	struct hns3_fdir_info fdir; /* flow director info */
+	LIST_HEAD(counters, hns3_flow_counter) flow_counters;
 };
 
 struct hns3_vf {
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
new file mode 100644
index 0000000..aa7d968
--- /dev/null
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -0,0 +1,1062 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdbool.h>
+#include <sys/queue.h>
+#include <rte_ethdev_driver.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "hns3_cmd.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+
+#define HNS3_VLAN_TAG_TYPE_NONE		0
+#define HNS3_VLAN_TAG_TYPE_TAG2		1
+#define HNS3_VLAN_TAG_TYPE_TAG1		2
+#define HNS3_VLAN_TAG_TYPE_TAG1_2	3
+
+#define HNS3_PF_ID_S			0
+#define HNS3_PF_ID_M			GENMASK(2, 0)
+#define HNS3_VF_ID_S			3
+#define HNS3_VF_ID_M			GENMASK(10, 3)
+#define HNS3_PORT_TYPE_B		11
+#define HNS3_NETWORK_PORT_ID_S		0
+#define HNS3_NETWORK_PORT_ID_M		GENMASK(3, 0)
+
+#define HNS3_FD_EPORT_SW_EN_B		0
+
+#define HNS3_FD_AD_DATA_S		32
+#define HNS3_FD_AD_DROP_B		0
+#define HNS3_FD_AD_DIRECT_QID_B	1
+#define HNS3_FD_AD_QID_S		2
+#define HNS3_FD_AD_QID_M		GENMASK(12, 2)
+#define HNS3_FD_AD_USE_COUNTER_B	12
+#define HNS3_FD_AD_COUNTER_NUM_S	13
+#define HNS3_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
+#define HNS3_FD_AD_NXT_STEP_B		20
+#define HNS3_FD_AD_NXT_KEY_S		21
+#define HNS3_FD_AD_NXT_KEY_M		GENMASK(26, 21)
+#define HNS3_FD_AD_WR_RULE_ID_B	0
+#define HNS3_FD_AD_RULE_ID_S		1
+#define HNS3_FD_AD_RULE_ID_M		GENMASK(13, 1)
+
+enum HNS3_PORT_TYPE {
+	HOST_PORT,
+	NETWORK_PORT
+};
+
+enum HNS3_FD_MODE {
+	HNS3_FD_MODE_DEPTH_2K_WIDTH_400B_STAGE_1,
+	HNS3_FD_MODE_DEPTH_1K_WIDTH_400B_STAGE_2,
+	HNS3_FD_MODE_DEPTH_4K_WIDTH_200B_STAGE_1,
+	HNS3_FD_MODE_DEPTH_2K_WIDTH_200B_STAGE_2,
+};
+
+enum HNS3_FD_KEY_TYPE {
+	HNS3_FD_KEY_BASE_ON_PTYPE,
+	HNS3_FD_KEY_BASE_ON_TUPLE,
+};
+
+enum HNS3_FD_META_DATA {
+	PACKET_TYPE_ID,
+	IP_FRAGEMENT,
+	ROCE_TYPE,
+	NEXT_KEY,
+	VLAN_NUMBER,
+	SRC_VPORT,
+	DST_VPORT,
+	TUNNEL_PACKET,
+	MAX_META_DATA,
+};
+
+struct key_info {
+	uint8_t key_type;
+	uint8_t key_length;
+};
+
+static const struct key_info meta_data_key_info[] = {
+	{PACKET_TYPE_ID, 6},
+	{IP_FRAGEMENT, 1},
+	{ROCE_TYPE, 1},
+	{NEXT_KEY, 5},
+	{VLAN_NUMBER, 2},
+	{SRC_VPORT, 12},
+	{DST_VPORT, 12},
+	{TUNNEL_PACKET, 1},
+};
+
+static const struct key_info tuple_key_info[] = {
+	{OUTER_DST_MAC, 48},
+	{OUTER_SRC_MAC, 48},
+	{OUTER_VLAN_TAG_FST, 16},
+	{OUTER_VLAN_TAG_SEC, 16},
+	{OUTER_ETH_TYPE, 16},
+	{OUTER_L2_RSV, 16},
+	{OUTER_IP_TOS, 8},
+	{OUTER_IP_PROTO, 8},
+	{OUTER_SRC_IP, 32},
+	{OUTER_DST_IP, 32},
+	{OUTER_L3_RSV, 16},
+	{OUTER_SRC_PORT, 16},
+	{OUTER_DST_PORT, 16},
+	{OUTER_L4_RSV, 32},
+	{OUTER_TUN_VNI, 24},
+	{OUTER_TUN_FLOW_ID, 8},
+	{INNER_DST_MAC, 48},
+	{INNER_SRC_MAC, 48},
+	{INNER_VLAN_TAG1, 16},
+	{INNER_VLAN_TAG2, 16},
+	{INNER_ETH_TYPE, 16},
+	{INNER_L2_RSV, 16},
+	{INNER_IP_TOS, 8},
+	{INNER_IP_PROTO, 8},
+	{INNER_SRC_IP, 32},
+	{INNER_DST_IP, 32},
+	{INNER_L3_RSV, 16},
+	{INNER_SRC_PORT, 16},
+	{INNER_DST_PORT, 16},
+	{INNER_SCTP_TAG, 32},
+};
+
+#define HNS3_BITS_PER_BYTE	8
+#define MAX_KEY_LENGTH		400
+#define MAX_200B_KEY_LENGTH	200
+#define MAX_META_DATA_LENGTH	16
+#define MAX_KEY_DWORDS	DIV_ROUND_UP(MAX_KEY_LENGTH / HNS3_BITS_PER_BYTE, 4)
+#define MAX_KEY_BYTES	(MAX_KEY_DWORDS * 4)
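+/* 400 key bits are 50 bytes, rounded up to 13 dwords, i.e. 52 key bytes */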
+
+enum HNS3_FD_PACKET_TYPE {
+	NIC_PACKET,
+	ROCE_PACKET,
+};
+
+/* For each bit of TCAM entry, it uses a pair of 'x' and
+ * 'y' to indicate which value to match, like below:
+ * ----------------------------------
+ * | bit x | bit y |  search value  |
+ * ----------------------------------
+ * |   0   |   0   |   always hit   |
+ * ----------------------------------
+ * |   1   |   0   |   match '0'    |
+ * ----------------------------------
+ * |   0   |   1   |   match '1'    |
+ * ----------------------------------
+ * |   1   |   1   |   invalid      |
+ * ----------------------------------
+ * Then for input key(k) and mask(v), we can calculate the value by
+ * the formulae:
+ *	x = (~k) & v
+ *	y = k & v
+ */
+#define calc_x(x, k, v) ((x) = (~(k) & (v)))
+#define calc_y(y, k, v) ((y) = ((k) & (v)))
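+/*
+ * Worked example: for key k = 0b1010 with mask v = 0b1100, the formulae
+ * give x = ~k & v = 0b0100 and y = k & v = 0b1000, so bit 3 matches '1',
+ * bit 2 matches '0', and the unmasked bits 1..0 (x = y = 0) always hit.
+ */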
+
+struct hns3_fd_tcam_config_1_cmd {
+	uint8_t stage;
+	uint8_t xy_sel;
+	uint8_t port_info;
+	uint8_t rsv1[1];
+	rte_le32_t index;
+	uint8_t entry_vld;
+	uint8_t rsv2[7];
+	uint8_t tcam_data[8];
+};
+
+struct hns3_fd_tcam_config_2_cmd {
+	uint8_t tcam_data[24];
+};
+
+struct hns3_fd_tcam_config_3_cmd {
+	uint8_t tcam_data[20];
+	uint8_t rsv[4];
+};
+
+struct hns3_get_fd_mode_cmd {
+	uint8_t mode;
+	uint8_t enable;
+	uint8_t rsv[22];
+};
+
+struct hns3_get_fd_allocation_cmd {
+	rte_le32_t stage1_entry_num;
+	rte_le32_t stage2_entry_num;
+	rte_le16_t stage1_counter_num;
+	rte_le16_t stage2_counter_num;
+	uint8_t rsv[12];
+};
+
+struct hns3_set_fd_key_config_cmd {
+	uint8_t stage;
+	uint8_t key_select;
+	uint8_t inner_sipv6_word_en;
+	uint8_t inner_dipv6_word_en;
+	uint8_t outer_sipv6_word_en;
+	uint8_t outer_dipv6_word_en;
+	uint8_t rsv1[2];
+	rte_le32_t tuple_mask;
+	rte_le32_t meta_data_mask;
+	uint8_t rsv2[8];
+};
+
+struct hns3_fd_ad_config_cmd {
+	uint8_t stage;
+	uint8_t rsv1[3];
+	rte_le32_t index;
+	rte_le64_t ad_data;
+	uint8_t rsv2[8];
+};
+
+struct hns3_fd_get_cnt_cmd {
+	uint8_t stage;
+	uint8_t rsv1[3];
+	rte_le16_t index;
+	uint8_t rsv2[2];
+	rte_le64_t value;
+	uint8_t rsv3[8];
+};
+
+static int hns3_get_fd_mode(struct hns3_hw *hw, uint8_t *fd_mode)
+{
+	struct hns3_get_fd_mode_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_FD_MODE_CTRL, true);
+
+	req = (struct hns3_get_fd_mode_cmd *)desc.data;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Get fd mode fail, ret=%d", ret);
+		return ret;
+	}
+
+	*fd_mode = req->mode;
+
+	return ret;
+}
+
+static int hns3_get_fd_allocation(struct hns3_hw *hw,
+				  uint32_t *stage1_entry_num,
+				  uint32_t *stage2_entry_num,
+				  uint16_t *stage1_counter_num,
+				  uint16_t *stage2_counter_num)
+{
+	struct hns3_get_fd_allocation_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_FD_GET_ALLOCATION, true);
+
+	req = (struct hns3_get_fd_allocation_cmd *)desc.data;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Query fd allocation fail, ret=%d", ret);
+		return ret;
+	}
+
+	*stage1_entry_num = rte_le_to_cpu_32(req->stage1_entry_num);
+	*stage2_entry_num = rte_le_to_cpu_32(req->stage2_entry_num);
+	*stage1_counter_num = rte_le_to_cpu_16(req->stage1_counter_num);
+	*stage2_counter_num = rte_le_to_cpu_16(req->stage2_counter_num);
+
+	return ret;
+}
+
+static int hns3_set_fd_key_config(struct hns3_adapter *hns)
+{
+	struct hns3_set_fd_key_config_cmd *req;
+	struct hns3_fd_key_cfg *key_cfg;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_FD_KEY_CONFIG, false);
+
+	req = (struct hns3_set_fd_key_config_cmd *)desc.data;
+	key_cfg = &pf->fdir.fd_cfg.key_cfg[HNS3_FD_STAGE_1];
+	req->stage = HNS3_FD_STAGE_1;
+	req->key_select = key_cfg->key_sel;
+	req->inner_sipv6_word_en = key_cfg->inner_sipv6_word_en;
+	req->inner_dipv6_word_en = key_cfg->inner_dipv6_word_en;
+	req->outer_sipv6_word_en = key_cfg->outer_sipv6_word_en;
+	req->outer_dipv6_word_en = key_cfg->outer_dipv6_word_en;
+	req->tuple_mask = rte_cpu_to_le_32(~key_cfg->tuple_active);
+	req->meta_data_mask = rte_cpu_to_le_32(~key_cfg->meta_data_active);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Set fd key fail, ret=%d", ret);
+
+	return ret;
+}
+
+int hns3_init_fd_config(struct hns3_adapter *hns)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_fd_key_cfg *key_cfg;
+	int ret;
+
+	ret = hns3_get_fd_mode(hw, &pf->fdir.fd_cfg.fd_mode);
+	if (ret)
+		return ret;
+
+	switch (pf->fdir.fd_cfg.fd_mode) {
+	case HNS3_FD_MODE_DEPTH_2K_WIDTH_400B_STAGE_1:
+		pf->fdir.fd_cfg.max_key_length = MAX_KEY_LENGTH;
+		break;
+	case HNS3_FD_MODE_DEPTH_4K_WIDTH_200B_STAGE_1:
+		pf->fdir.fd_cfg.max_key_length = MAX_200B_KEY_LENGTH;
+		hns3_warn(hw, "Unsupported tunnel filter in 4K*200Bit");
+		break;
+	default:
+		hns3_err(hw, "Unsupported flow director mode %d",
+			    pf->fdir.fd_cfg.fd_mode);
+		return -EOPNOTSUPP;
+	}
+
+	key_cfg = &pf->fdir.fd_cfg.key_cfg[HNS3_FD_STAGE_1];
+	key_cfg->key_sel = HNS3_FD_KEY_BASE_ON_TUPLE;
+	key_cfg->inner_sipv6_word_en = IPV6_ADDR_WORD_MASK;
+	key_cfg->inner_dipv6_word_en = IPV6_ADDR_WORD_MASK;
+	key_cfg->outer_sipv6_word_en = 0;
+	key_cfg->outer_dipv6_word_en = 0;
+
+	key_cfg->tuple_active = BIT(INNER_VLAN_TAG1) | BIT(INNER_ETH_TYPE) |
+	    BIT(INNER_IP_PROTO) | BIT(INNER_IP_TOS) |
+	    BIT(INNER_SRC_IP) | BIT(INNER_DST_IP) |
+	    BIT(INNER_SRC_PORT) | BIT(INNER_DST_PORT);
+
+	/* If the max 400-bit key is used, we can support tuples for ether type */
+	if (pf->fdir.fd_cfg.max_key_length == MAX_KEY_LENGTH) {
+		key_cfg->tuple_active |=
+		    BIT(INNER_DST_MAC) | BIT(INNER_SRC_MAC) |
+		    BIT(OUTER_SRC_PORT) | BIT(INNER_SCTP_TAG) |
+		    BIT(OUTER_DST_PORT) | BIT(INNER_VLAN_TAG2) |
+		    BIT(OUTER_TUN_VNI) | BIT(OUTER_TUN_FLOW_ID) |
+		    BIT(OUTER_ETH_TYPE) | BIT(OUTER_IP_PROTO);
+	}
+
+	/* roce_type is used to filter roce frames;
+	 * dst_vport is used to specify the rule.
+	 */
+	key_cfg->meta_data_active = BIT(DST_VPORT) | BIT(TUNNEL_PACKET) |
+	    BIT(VLAN_NUMBER);
+
+	ret = hns3_get_fd_allocation(hw,
+				     &pf->fdir.fd_cfg.rule_num[HNS3_FD_STAGE_1],
+				     &pf->fdir.fd_cfg.rule_num[HNS3_FD_STAGE_2],
+				     &pf->fdir.fd_cfg.cnt_num[HNS3_FD_STAGE_1],
+				     &pf->fdir.fd_cfg.cnt_num[HNS3_FD_STAGE_2]);
+	if (ret)
+		return ret;
+
+	return hns3_set_fd_key_config(hns);
+}
+
+static int hns3_fd_tcam_config(struct hns3_hw *hw, bool sel_x, int loc,
+			       uint8_t *key, bool is_add)
+{
+#define	FD_TCAM_CMD_NUM 3
+	struct hns3_fd_tcam_config_1_cmd *req1;
+	struct hns3_fd_tcam_config_2_cmd *req2;
+	struct hns3_fd_tcam_config_3_cmd *req3;
+	struct hns3_cmd_desc desc[FD_TCAM_CMD_NUM];
+	int len;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc[0], HNS3_OPC_FD_TCAM_OP, false);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+	hns3_cmd_setup_basic_desc(&desc[1], HNS3_OPC_FD_TCAM_OP, false);
+	desc[1].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+	hns3_cmd_setup_basic_desc(&desc[2], HNS3_OPC_FD_TCAM_OP, false);
+
+	req1 = (struct hns3_fd_tcam_config_1_cmd *)desc[0].data;
+	req2 = (struct hns3_fd_tcam_config_2_cmd *)desc[1].data;
+	req3 = (struct hns3_fd_tcam_config_3_cmd *)desc[2].data;
+
+	req1->stage = HNS3_FD_STAGE_1;
+	req1->xy_sel = sel_x ? 1 : 0;
+	hns3_set_bit(req1->port_info, HNS3_FD_EPORT_SW_EN_B, 0);
+	req1->index = rte_cpu_to_le_32(loc);
+	req1->entry_vld = sel_x ? is_add : 0;
+
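+	/*
+	 * The key bytes are split across the tcam_data areas of the three
+	 * descriptors.
+	 */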
+	if (key) {
+		len = sizeof(req1->tcam_data);
+		memcpy(req1->tcam_data, key, len);
+		key += len;
+
+		len = sizeof(req2->tcam_data);
+		memcpy(req2->tcam_data, key, len);
+		key += len;
+
+		len = sizeof(req3->tcam_data);
+		memcpy(req3->tcam_data, key, len);
+	}
+
+	ret = hns3_cmd_send(hw, desc, FD_TCAM_CMD_NUM);
+	if (ret)
+		hns3_err(hw, "Config tcam key fail, ret=%d loc=%d add=%d",
+			    ret, loc, is_add);
+	return ret;
+}
+
+static int hns3_fd_ad_config(struct hns3_hw *hw, int loc,
+			     struct hns3_fd_ad_data *action)
+{
+	struct hns3_fd_ad_config_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint64_t ad_data = 0;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_FD_AD_OP, false);
+
+	req = (struct hns3_fd_ad_config_cmd *)desc.data;
+	req->index = rte_cpu_to_le_32(loc);
+	req->stage = HNS3_FD_STAGE_1;
+
+	hns3_set_bit(ad_data, HNS3_FD_AD_WR_RULE_ID_B,
+		     action->write_rule_id_to_bd);
+	hns3_set_field(ad_data, HNS3_FD_AD_RULE_ID_M, HNS3_FD_AD_RULE_ID_S,
+		       action->rule_id);
+	ad_data <<= HNS3_FD_AD_DATA_S;
+	hns3_set_bit(ad_data, HNS3_FD_AD_DROP_B, action->drop_packet);
+	hns3_set_bit(ad_data, HNS3_FD_AD_DIRECT_QID_B,
+		     action->forward_to_direct_queue);
+	hns3_set_field(ad_data, HNS3_FD_AD_QID_M, HNS3_FD_AD_QID_S,
+		       action->queue_id);
+	hns3_set_bit(ad_data, HNS3_FD_AD_USE_COUNTER_B, action->use_counter);
+	hns3_set_field(ad_data, HNS3_FD_AD_COUNTER_NUM_M,
+		       HNS3_FD_AD_COUNTER_NUM_S, action->counter_id);
+	hns3_set_bit(ad_data, HNS3_FD_AD_NXT_STEP_B, action->use_next_stage);
+	hns3_set_field(ad_data, HNS3_FD_AD_NXT_KEY_M, HNS3_FD_AD_NXT_KEY_S,
+		       action->next_input_key);
+
+	req->ad_data = rte_cpu_to_le_64(ad_data);
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Config fd ad fail, ret=%d loc=%d", ret, loc);
+
+	return ret;
+}
+
+static inline void hns3_fd_convert_mac(uint8_t *key, uint8_t *mask,
+				       uint8_t *mac_x, uint8_t *mac_y)
+{
+	uint8_t tmp;
+	int i;
+
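+	/*
+	 * Store the MAC bytes into the key in reverse order; calc_x() and
+	 * calc_y() derive the per-byte TCAM x/y halves from address and mask.
+	 */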
+	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+		tmp = RTE_ETHER_ADDR_LEN - 1 - i;
+		calc_x(mac_x[tmp], key[i], mask[i]);
+		calc_y(mac_y[tmp], key[i], mask[i]);
+	}
+}
+
+static void hns3_fd_convert_int16(uint32_t tuple, struct hns3_fdir_rule *rule,
+				  uint8_t *val_x, uint8_t *val_y)
+{
+	uint16_t tmp_x_s;
+	uint16_t tmp_y_s;
+	uint16_t mask;
+	uint16_t key;
+
+	switch (tuple) {
+	case OUTER_SRC_PORT:
+		key = rule->key_conf.spec.outer_src_port;
+		mask = rule->key_conf.mask.outer_src_port;
+		break;
+	case OUTER_DST_PORT:
+		key = rule->key_conf.spec.tunnel_type;
+		mask = rule->key_conf.mask.tunnel_type;
+		break;
+	case OUTER_ETH_TYPE:
+		key = rule->key_conf.spec.outer_ether_type;
+		mask = rule->key_conf.mask.outer_ether_type;
+		break;
+	case INNER_SRC_PORT:
+		key = rule->key_conf.spec.src_port;
+		mask = rule->key_conf.mask.src_port;
+		break;
+	case INNER_DST_PORT:
+		key = rule->key_conf.spec.dst_port;
+		mask = rule->key_conf.mask.dst_port;
+		break;
+	case INNER_VLAN_TAG1:
+		key = rule->key_conf.spec.vlan_tag1;
+		mask = rule->key_conf.mask.vlan_tag1;
+		break;
+	case INNER_VLAN_TAG2:
+		key = rule->key_conf.spec.vlan_tag2;
+		mask = rule->key_conf.mask.vlan_tag2;
+		break;
+	default:
+		/* INNER_ETH_TYPE */
+		key = rule->key_conf.spec.ether_type;
+		mask = rule->key_conf.mask.ether_type;
+		break;
+	}
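+	/* Split the 16 bit x/y values into two key bytes, low byte first. */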
+	calc_x(tmp_x_s, key, mask);
+	calc_y(tmp_y_s, key, mask);
+	val_x[0] = rte_cpu_to_le_16(tmp_x_s) & 0xFF;
+	val_x[1] = rte_cpu_to_le_16(tmp_x_s) >> HNS3_BITS_PER_BYTE;
+	val_y[0] = rte_cpu_to_le_16(tmp_y_s) & 0xFF;
+	val_y[1] = rte_cpu_to_le_16(tmp_y_s) >> HNS3_BITS_PER_BYTE;
+}
+
+static inline void hns3_fd_convert_int32(uint32_t key, uint32_t mask,
+					 uint8_t *val_x, uint8_t *val_y)
+{
+	uint32_t tmp_x_l;
+	uint32_t tmp_y_l;
+
+	calc_x(tmp_x_l, key, mask);
+	calc_y(tmp_y_l, key, mask);
+	memcpy(val_x, &tmp_x_l, sizeof(tmp_x_l));
+	memcpy(val_y, &tmp_y_l, sizeof(tmp_y_l));
+}
+
+static bool hns3_fd_convert_tuple(uint32_t tuple, uint8_t *key_x,
+				  uint8_t *key_y, struct hns3_fdir_rule *rule)
+{
+	struct hns3_fdir_key_conf *key_conf;
+	int tmp;
+	int i;
+
+	if ((rule->input_set & BIT(tuple)) == 0)
+		return true;
+
+	key_conf = &rule->key_conf;
+	switch (tuple) {
+	case INNER_DST_MAC:
+		hns3_fd_convert_mac(key_conf->spec.dst_mac,
+				    key_conf->mask.dst_mac, key_x, key_y);
+		break;
+	case INNER_SRC_MAC:
+		hns3_fd_convert_mac(key_conf->spec.src_mac,
+				    key_conf->mask.src_mac, key_x, key_y);
+		break;
+	case OUTER_SRC_PORT:
+	case OUTER_DST_PORT:
+	case OUTER_ETH_TYPE:
+	case INNER_SRC_PORT:
+	case INNER_DST_PORT:
+	case INNER_VLAN_TAG1:
+	case INNER_VLAN_TAG2:
+	case INNER_ETH_TYPE:
+		hns3_fd_convert_int16(tuple, rule, key_x, key_y);
+		break;
+	case INNER_SRC_IP:
+		hns3_fd_convert_int32(key_conf->spec.src_ip[IP_ADDR_KEY_ID],
+				      key_conf->mask.src_ip[IP_ADDR_KEY_ID],
+				      key_x, key_y);
+		break;
+	case INNER_DST_IP:
+		hns3_fd_convert_int32(key_conf->spec.dst_ip[IP_ADDR_KEY_ID],
+				      key_conf->mask.dst_ip[IP_ADDR_KEY_ID],
+				      key_x, key_y);
+		break;
+	case INNER_SCTP_TAG:
+		hns3_fd_convert_int32(key_conf->spec.sctp_tag,
+				      key_conf->mask.sctp_tag, key_x, key_y);
+		break;
+	case OUTER_TUN_VNI:
+		for (i = 0; i < VNI_OR_TNI_LEN; i++) {
+			tmp = VNI_OR_TNI_LEN - 1 - i;
+			calc_x(key_x[tmp],
+			       key_conf->spec.outer_tun_vni[i],
+			       key_conf->mask.outer_tun_vni[i]);
+			calc_y(key_y[tmp],
+			       key_conf->spec.outer_tun_vni[i],
+			       key_conf->mask.outer_tun_vni[i]);
+		}
+		break;
+	case OUTER_TUN_FLOW_ID:
+		calc_x(*key_x, key_conf->spec.outer_tun_flow_id,
+		       key_conf->mask.outer_tun_flow_id);
+		calc_y(*key_y, key_conf->spec.outer_tun_flow_id,
+		       key_conf->mask.outer_tun_flow_id);
+		break;
+	case INNER_IP_TOS:
+		calc_x(*key_x, key_conf->spec.ip_tos, key_conf->mask.ip_tos);
+		calc_y(*key_y, key_conf->spec.ip_tos, key_conf->mask.ip_tos);
+		break;
+	case OUTER_IP_PROTO:
+		calc_x(*key_x, key_conf->spec.outer_proto,
+		       key_conf->mask.outer_proto);
+		calc_y(*key_y, key_conf->spec.outer_proto,
+		       key_conf->mask.outer_proto);
+		break;
+	case INNER_IP_PROTO:
+		calc_x(*key_x, key_conf->spec.ip_proto,
+		       key_conf->mask.ip_proto);
+		calc_y(*key_y, key_conf->spec.ip_proto,
+		       key_conf->mask.ip_proto);
+		break;
+	}
+	return true;
+}
+
+static uint32_t hns3_get_port_number(uint8_t pf_id, uint8_t vf_id)
+{
+	uint32_t port_number = 0;
+
+	hns3_set_field(port_number, HNS3_PF_ID_M, HNS3_PF_ID_S, pf_id);
+	hns3_set_field(port_number, HNS3_VF_ID_M, HNS3_VF_ID_S, vf_id);
+	hns3_set_bit(port_number, HNS3_PORT_TYPE_B, HOST_PORT);
+
+	return port_number;
+}
+
+static void hns3_fd_convert_meta_data(struct hns3_fd_key_cfg *cfg,
+				      uint8_t vf_id,
+				      struct hns3_fdir_rule *rule,
+				      uint8_t *key_x, uint8_t *key_y)
+{
+	uint16_t meta_data = 0;
+	uint16_t port_number;
+	uint8_t cur_pos = 0;
+	uint8_t tuple_size;
+	uint8_t shift_bits;
+	uint32_t tmp_x;
+	uint32_t tmp_y;
+	uint8_t i;
+
+	for (i = 0; i < MAX_META_DATA; i++) {
+		if ((cfg->meta_data_active & BIT(i)) == 0)
+			continue;
+
+		tuple_size = meta_data_key_info[i].key_length;
+		if (i == TUNNEL_PACKET) {
+			hns3_set_bit(meta_data, cur_pos,
+				     rule->key_conf.spec.tunnel_type ? 1 : 0);
+			cur_pos += tuple_size;
+		} else if (i == VLAN_NUMBER) {
+			uint8_t vlan_tag;
+			uint8_t vlan_num;
+			if (rule->key_conf.spec.tunnel_type == 0)
+				vlan_num = rule->key_conf.vlan_num;
+			else
+				vlan_num = rule->key_conf.outer_vlan_num;
+			if (vlan_num == 1)
+				vlan_tag = HNS3_VLAN_TAG_TYPE_TAG1;
+			else if (vlan_num == VLAN_TAG_NUM_MAX)
+				vlan_tag = HNS3_VLAN_TAG_TYPE_TAG1_2;
+			else
+				vlan_tag = HNS3_VLAN_TAG_TYPE_NONE;
+			hns3_set_field(meta_data,
+				       GENMASK(cur_pos + tuple_size,
+					       cur_pos), cur_pos, vlan_tag);
+			cur_pos += tuple_size;
+		} else if (i == DST_VPORT) {
+			port_number = hns3_get_port_number(0, vf_id);
+			hns3_set_field(meta_data,
+				       GENMASK(cur_pos + tuple_size, cur_pos),
+				       cur_pos, port_number);
+			cur_pos += tuple_size;
+		}
+	}
+
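+	/*
+	 * Left-justify the collected bits within the 16 bit meta data region
+	 * before storing the low two bytes of the x/y words into the key.
+	 */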
+	calc_x(tmp_x, meta_data, 0xFFFF);
+	calc_y(tmp_y, meta_data, 0xFFFF);
+	shift_bits = sizeof(meta_data) * HNS3_BITS_PER_BYTE - cur_pos;
+
+	tmp_x = rte_cpu_to_le_32(tmp_x << shift_bits);
+	tmp_y = rte_cpu_to_le_32(tmp_y << shift_bits);
+	key_x[0] = tmp_x & 0xFF;
+	key_x[1] = (tmp_x >> HNS3_BITS_PER_BYTE) & 0xFF;
+	key_y[0] = tmp_y & 0xFF;
+	key_y[1] = (tmp_y >> HNS3_BITS_PER_BYTE) & 0xFF;
+}
+
+/* A complete key is combined with the meta data key and the tuple key.
+ * The meta data key is stored at the MSB region, the tuple key at the
+ * LSB region, and unused bits are filled with 0.
+ */
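+/*
+ * An illustrative layout, inferred from the offset arithmetic below rather
+ * than from a hardware spec:
+ *
+ *   byte 0            meta_data_region         MAX_KEY_BYTES
+ *   | tuple keys ... zero padding | meta data (MSB region) |
+ *
+ * The active tuples are written from offset 0 upwards, and the meta data
+ * is placed at (max_key_length - MAX_META_DATA_LENGTH) / 8 bytes.
+ */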
+static int hns3_config_key(struct hns3_adapter *hns,
+			   struct hns3_fdir_rule *rule)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_fd_key_cfg *key_cfg;
+	uint8_t *cur_key_x;
+	uint8_t *cur_key_y;
+	uint8_t key_x[MAX_KEY_BYTES] __attribute__((aligned(4)));
+	uint8_t key_y[MAX_KEY_BYTES] __attribute__((aligned(4)));
+	uint8_t vf_id = rule->vf_id;
+	uint8_t meta_data_region;
+	uint8_t tuple_size;
+	uint8_t i;
+	int ret;
+
+	memset(key_x, 0, sizeof(key_x));
+	memset(key_y, 0, sizeof(key_y));
+	cur_key_x = key_x;
+	cur_key_y = key_y;
+
+	key_cfg = &pf->fdir.fd_cfg.key_cfg[HNS3_FD_STAGE_1];
+	for (i = 0; i < MAX_TUPLE; i++) {
+		bool tuple_valid;
+
+		tuple_size = tuple_key_info[i].key_length / HNS3_BITS_PER_BYTE;
+		if (key_cfg->tuple_active & BIT(i)) {
+			tuple_valid = hns3_fd_convert_tuple(i, cur_key_x,
+							    cur_key_y, rule);
+			if (tuple_valid) {
+				cur_key_x += tuple_size;
+				cur_key_y += tuple_size;
+			}
+		}
+	}
+
+	meta_data_region = pf->fdir.fd_cfg.max_key_length / HNS3_BITS_PER_BYTE -
+	    MAX_META_DATA_LENGTH / HNS3_BITS_PER_BYTE;
+
+	hns3_fd_convert_meta_data(key_cfg, vf_id, rule,
+				  key_x + meta_data_region,
+				  key_y + meta_data_region);
+
+	ret = hns3_fd_tcam_config(hw, false, rule->location, key_y, true);
+	if (ret) {
+		hns3_err(hw, "Config fd key_y fail, loc=%d, ret=%d",
+			    rule->location, ret);
+		return ret;
+	}
+
+	ret = hns3_fd_tcam_config(hw, true, rule->location, key_x, true);
+	if (ret)
+		hns3_err(hw, "Config fd key_x fail, loc=%d, ret=%d",
+			    rule->location, ret);
+	return ret;
+}
+
+static int hns3_config_action(struct hns3_hw *hw, struct hns3_fdir_rule *rule)
+{
+	struct hns3_fd_ad_data ad_data;
+
+	ad_data.ad_id = rule->location;
+
+	if (rule->action == HNS3_FD_ACTION_DROP_PACKET) {
+		ad_data.drop_packet = true;
+		ad_data.forward_to_direct_queue = false;
+		ad_data.queue_id = 0;
+	} else {
+		ad_data.drop_packet = false;
+		ad_data.forward_to_direct_queue = true;
+		ad_data.queue_id = rule->queue_id;
+	}
+
+	if (unlikely(rule->flags & HNS3_RULE_FLAG_COUNTER)) {
+		ad_data.use_counter = true;
+		ad_data.counter_id = rule->act_cnt.id;
+	} else {
+		ad_data.use_counter = false;
+		ad_data.counter_id = 0;
+	}
+
+	if (unlikely(rule->flags & HNS3_RULE_FLAG_FDID))
+		ad_data.rule_id = rule->fd_id;
+	else
+		ad_data.rule_id = rule->location;
+
+	ad_data.use_next_stage = false;
+	ad_data.next_input_key = 0;
+
+	ad_data.write_rule_id_to_bd = true;
+
+	return hns3_fd_ad_config(hw, ad_data.ad_id, &ad_data);
+}
+
+int hns3_fdir_filter_init(struct hns3_adapter *hns)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_fdir_info *fdir_info = &pf->fdir;
+	uint32_t rule_num = fdir_info->fd_cfg.rule_num[HNS3_FD_STAGE_1];
+	char fdir_hash_name[RTE_HASH_NAMESIZE];
+	struct rte_hash_parameters fdir_hash_params = {
+		.name = fdir_hash_name,
+		.entries = rule_num,
+		.key_len = sizeof(struct hns3_fdir_key_conf),
+		.hash_func = rte_hash_crc,
+		.hash_func_init_val = 0,
+	};
+
+	fdir_hash_params.socket_id = rte_socket_id();
+	TAILQ_INIT(&fdir_info->fdir_list);
+	rte_spinlock_init(&fdir_info->flows_lock);
+	snprintf(fdir_hash_name, RTE_HASH_NAMESIZE, "%s", hns->hw.data->name);
+	fdir_info->hash_handle = rte_hash_create(&fdir_hash_params);
+	if (fdir_info->hash_handle == NULL) {
+		PMD_INIT_LOG(ERR, "Create FDIR hash handle fail!");
+		return -EINVAL;
+	}
+	fdir_info->hash_map = rte_zmalloc("hns3 FDIR hash",
+					  rule_num *
+					  sizeof(struct hns3_fdir_rule_ele *),
+					  0);
+	if (fdir_info->hash_map == NULL) {
+		PMD_INIT_LOG(ERR, "Allocate memory for FDIR hash map fail!");
+		rte_hash_free(fdir_info->hash_handle);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+void hns3_fdir_filter_uninit(struct hns3_adapter *hns)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_fdir_info *fdir_info = &pf->fdir;
+	struct hns3_fdir_rule_ele *fdir_filter;
+
+	rte_spinlock_lock(&fdir_info->flows_lock);
+	if (fdir_info->hash_map) {
+		rte_free(fdir_info->hash_map);
+		fdir_info->hash_map = NULL;
+	}
+	if (fdir_info->hash_handle) {
+		rte_hash_free(fdir_info->hash_handle);
+		fdir_info->hash_handle = NULL;
+	}
+	rte_spinlock_unlock(&fdir_info->flows_lock);
+
+	fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list);
+	while (fdir_filter) {
+		TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+		hns3_fd_tcam_config(&hns->hw, true,
+				    fdir_filter->fdir_conf.location, NULL,
+				    false);
+		rte_free(fdir_filter);
+		fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list);
+	}
+}
+
+/*
+ * Find a key in the hash table.
+ * @return
+ *   - Zero and positive values are the key location.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOENT if the key is not found.
+ */
+static int hns3_fdir_filter_lookup(struct hns3_fdir_info *fdir_info,
+				    struct hns3_fdir_key_conf *key)
+{
+	hash_sig_t sig;
+	int ret;
+
+	rte_spinlock_lock(&fdir_info->flows_lock);
+	sig = rte_hash_crc(key, sizeof(*key), 0);
+	ret = rte_hash_lookup_with_hash(fdir_info->hash_handle, key, sig);
+	rte_spinlock_unlock(&fdir_info->flows_lock);
+
+	return ret;
+}
+
+static int hns3_insert_fdir_filter(struct hns3_hw *hw,
+				   struct hns3_fdir_info *fdir_info,
+				   struct hns3_fdir_rule_ele *fdir_filter)
+{
+	struct hns3_fdir_key_conf *key;
+	hash_sig_t sig;
+	int ret;
+
+	key = &fdir_filter->fdir_conf.key_conf;
+	rte_spinlock_lock(&fdir_info->flows_lock);
+	sig = rte_hash_crc(key, sizeof(*key), 0);
+	ret = rte_hash_add_key_with_hash(fdir_info->hash_handle, key, sig);
+	if (ret < 0) {
+		rte_spinlock_unlock(&fdir_info->flows_lock);
+		hns3_err(hw, "Hash table full? err:%d(%s)!", ret,
+			 strerror(ret));
+		return ret;
+	}
+
+	fdir_info->hash_map[ret] = fdir_filter;
+	TAILQ_INSERT_TAIL(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_spinlock_unlock(&fdir_info->flows_lock);
+
+	return ret;
+}
+
+static int hns3_remove_fdir_filter(struct hns3_hw *hw,
+				   struct hns3_fdir_info *fdir_info,
+				   struct hns3_fdir_key_conf *key)
+{
+	struct hns3_fdir_rule_ele *fdir_filter;
+	hash_sig_t sig;
+	int ret;
+
+	rte_spinlock_lock(&fdir_info->flows_lock);
+	sig = rte_hash_crc(key, sizeof(*key), 0);
+	ret = rte_hash_del_key_with_hash(fdir_info->hash_handle, key, sig);
+	if (ret < 0) {
+		rte_spinlock_unlock(&fdir_info->flows_lock);
+		hns3_err(hw, "Delete hash key fail ret=%d", ret);
+		return ret;
+	}
+
+	fdir_filter = fdir_info->hash_map[ret];
+	fdir_info->hash_map[ret] = NULL;
+	TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+	rte_spinlock_unlock(&fdir_info->flows_lock);
+
+	rte_free(fdir_filter);
+
+	return 0;
+}
+
+int hns3_fdir_filter_program(struct hns3_adapter *hns,
+			     struct hns3_fdir_rule *rule, bool del)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_fdir_info *fdir_info = &pf->fdir;
+	struct hns3_fdir_rule_ele *node;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	if (del) {
+		ret = hns3_fd_tcam_config(hw, true, rule->location, NULL,
+					  false);
+		if (ret)
+			hns3_err(hw, "Failed to delete fdir: %d src_ip:%x "
+				 "dst_ip:%x src_port:%d dst_port:%d",
+				 rule->location,
+				 rule->key_conf.spec.src_ip[IP_ADDR_KEY_ID],
+				 rule->key_conf.spec.dst_ip[IP_ADDR_KEY_ID],
+				 rule->key_conf.spec.src_port,
+				 rule->key_conf.spec.dst_port);
+		else
+			hns3_remove_fdir_filter(hw, fdir_info, &rule->key_conf);
+
+		return ret;
+	}
+
+	ret = hns3_fdir_filter_lookup(fdir_info, &rule->key_conf);
+	if (ret >= 0) {
+		hns3_err(hw, "Conflict with existing fdir loc: %d", ret);
+		return -EINVAL;
+	}
+
+	node = rte_zmalloc("hns3 fdir rule", sizeof(struct hns3_fdir_rule_ele),
+			   0);
+	if (node == NULL) {
+		hns3_err(hw, "Failed to allocate fdir_rule memory");
+		return -ENOMEM;
+	}
+
+	rte_memcpy(&node->fdir_conf, rule, sizeof(struct hns3_fdir_rule));
+	ret = hns3_insert_fdir_filter(hw, fdir_info, node);
+	if (ret < 0) {
+		rte_free(node);
+		return ret;
+	}
+	rule->location = ret;
+	node->fdir_conf.location = ret;
+
+	rte_spinlock_lock(&fdir_info->flows_lock);
+	ret = hns3_config_action(hw, rule);
+	if (!ret)
+		ret = hns3_config_key(hns, rule);
+	rte_spinlock_unlock(&fdir_info->flows_lock);
+	if (ret) {
+		hns3_err(hw, "Failed to config fdir: %d src_ip:%x dst_ip:%x "
+			 "src_port:%d dst_port:%d",
+			 rule->location,
+			 rule->key_conf.spec.src_ip[IP_ADDR_KEY_ID],
+			 rule->key_conf.spec.dst_ip[IP_ADDR_KEY_ID],
+			 rule->key_conf.spec.src_port,
+			 rule->key_conf.spec.dst_port);
+		(void)hns3_remove_fdir_filter(hw, fdir_info, &rule->key_conf);
+	}
+
+	return ret;
+}
+
+/* remove all the flow director filters */
+int hns3_clear_all_fdir_filter(struct hns3_adapter *hns)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_fdir_info *fdir_info = &pf->fdir;
+	struct hns3_fdir_rule_ele *fdir_filter;
+	struct hns3_hw *hw = &hns->hw;
+	int ret = 0;
+
+	/* flush flow director */
+	rte_spinlock_lock(&fdir_info->flows_lock);
+	rte_hash_reset(fdir_info->hash_handle);
+	rte_spinlock_unlock(&fdir_info->flows_lock);
+
+	fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list);
+	while (fdir_filter) {
+		TAILQ_REMOVE(&fdir_info->fdir_list, fdir_filter, entries);
+		ret += hns3_fd_tcam_config(hw, true,
+					   fdir_filter->fdir_conf.location,
+					   NULL, false);
+		rte_free(fdir_filter);
+		fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list);
+	}
+
+	if (ret) {
+		hns3_err(hw, "Fail to delete FDIR filter!");
+		ret = -EIO;
+	}
+	return ret;
+}
+
+int hns3_restore_all_fdir_filter(struct hns3_adapter *hns)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_fdir_info *fdir_info = &pf->fdir;
+	struct hns3_fdir_rule_ele *fdir_filter;
+	struct hns3_hw *hw = &hns->hw;
+	bool err = false;
+	int ret;
+
+	TAILQ_FOREACH(fdir_filter, &fdir_info->fdir_list, entries) {
+		ret = hns3_config_action(hw, &fdir_filter->fdir_conf);
+		if (!ret)
+			ret = hns3_config_key(hns, &fdir_filter->fdir_conf);
+		if (ret) {
+			err = true;
+			if (ret == -EBUSY)
+				break;
+		}
+	}
+
+	if (err) {
+		hns3_err(hw, "Fail to restore FDIR filter!");
+		return -EIO;
+	}
+	return 0;
+}
+
+int hns3_get_count(struct hns3_hw *hw, uint32_t id, uint64_t *value)
+{
+	struct hns3_fd_get_cnt_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_FD_COUNTER_OP, true);
+
+	req = (struct hns3_fd_get_cnt_cmd *)desc.data;
+	req->stage = HNS3_FD_STAGE_1;
+	req->index = rte_cpu_to_le_32(id);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Read counter fail, ret=%d", ret);
+		return ret;
+	}
+
+	*value = req->value;
+
+	return ret;
+}
diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h
new file mode 100644
index 0000000..4825086
--- /dev/null
+++ b/drivers/net/hns3/hns3_fdir.h
@@ -0,0 +1,203 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_FDIR_H_
+#define _HNS3_FDIR_H_
+
+#include <rte_flow.h>
+
+struct hns3_fd_key_cfg {
+	uint8_t key_sel;
+	uint8_t inner_sipv6_word_en;
+	uint8_t inner_dipv6_word_en;
+	uint8_t outer_sipv6_word_en;
+	uint8_t outer_dipv6_word_en;
+	uint32_t tuple_active;
+	uint32_t meta_data_active;
+};
+
+enum HNS3_FD_STAGE {
+	HNS3_FD_STAGE_1,
+	HNS3_FD_STAGE_2,
+	HNS3_FD_STAGE_NUM,
+};
+
+enum HNS3_FD_ACTION {
+	HNS3_FD_ACTION_ACCEPT_PACKET,
+	HNS3_FD_ACTION_DROP_PACKET,
+};
+
+struct hns3_fd_cfg {
+	uint8_t fd_mode;
+	uint16_t max_key_length;
+	uint32_t rule_num[HNS3_FD_STAGE_NUM]; /* rule entry number */
+	uint16_t cnt_num[HNS3_FD_STAGE_NUM];  /* rule hit counter number */
+	struct hns3_fd_key_cfg key_cfg[HNS3_FD_STAGE_NUM];
+};
+
+/* OUTER_XXX indicates tuples in the tunnel header of a tunnel packet.
+ * INNER_XXX indicates tuples in the tunneled header of a tunnel packet,
+ *           or tuples of a non-tunnel packet.
+ */
+enum HNS3_FD_TUPLE {
+	OUTER_DST_MAC,
+	OUTER_SRC_MAC,
+	OUTER_VLAN_TAG_FST,
+	OUTER_VLAN_TAG_SEC,
+	OUTER_ETH_TYPE,
+	OUTER_L2_RSV,
+	OUTER_IP_TOS,
+	OUTER_IP_PROTO,
+	OUTER_SRC_IP,
+	OUTER_DST_IP,
+	OUTER_L3_RSV,
+	OUTER_SRC_PORT,
+	OUTER_DST_PORT,
+	OUTER_L4_RSV,
+	OUTER_TUN_VNI,
+	OUTER_TUN_FLOW_ID,
+	INNER_DST_MAC,
+	INNER_SRC_MAC,
+	INNER_VLAN_TAG1,
+	INNER_VLAN_TAG2,
+	INNER_ETH_TYPE,
+	INNER_L2_RSV,
+	INNER_IP_TOS,
+	INNER_IP_PROTO,
+	INNER_SRC_IP,
+	INNER_DST_IP,
+	INNER_L3_RSV,
+	INNER_SRC_PORT,
+	INNER_DST_PORT,
+	INNER_SCTP_TAG,
+	MAX_TUPLE,
+};
+
+#define VLAN_TAG_NUM_MAX 2
+#define VNI_OR_TNI_LEN 3
+#define IP_ADDR_LEN    4 /* Length of an IPv6 address in 32-bit words. */
+#define IP_ADDR_KEY_ID 3 /* Use the last 32 bits of the IP as FDIR key */
+#define IPV6_ADDR_WORD_MASK 3 /* Use the last two IPv6 words as FDIR key */
+
+struct hns3_fd_rule_tuples {
+	uint8_t src_mac[RTE_ETHER_ADDR_LEN];
+	uint8_t dst_mac[RTE_ETHER_ADDR_LEN];
+	uint32_t src_ip[IP_ADDR_LEN];
+	uint32_t dst_ip[IP_ADDR_LEN];
+	uint16_t src_port;
+	uint16_t dst_port;
+	uint16_t vlan_tag1;
+	uint16_t vlan_tag2;
+	uint16_t ether_type;
+	uint8_t ip_tos;
+	uint8_t ip_proto;
+	uint32_t sctp_tag;
+	uint16_t outer_src_port;
+	uint16_t tunnel_type;
+	uint16_t outer_ether_type;
+	uint8_t outer_proto;
+	uint8_t outer_tun_vni[VNI_OR_TNI_LEN];
+	uint8_t outer_tun_flow_id;
+};
+
+struct hns3_fd_ad_data {
+	uint16_t ad_id;
+	uint8_t drop_packet;
+	uint8_t forward_to_direct_queue;
+	uint16_t queue_id;
+	uint8_t use_counter;
+	uint8_t counter_id;
+	uint8_t use_next_stage;
+	uint8_t write_rule_id_to_bd;
+	uint8_t next_input_key;
+	uint16_t rule_id;
+};
+
+struct hns3_flow_counter {
+	LIST_ENTRY(hns3_flow_counter) next; /* Pointer to the next counter. */
+	uint32_t shared:1;   /* Share counter ID with other flow rules. */
+	uint32_t ref_cnt:31; /* Reference counter. */
+	uint16_t id;   /* Counter ID. */
+	uint64_t hits; /* Number of packets matched by the rule. */
+};
+
+#define HNS3_RULE_FLAG_FDID		0x1
+#define HNS3_RULE_FLAG_VF_ID		0x2
+#define HNS3_RULE_FLAG_COUNTER		0x4
+
+struct hns3_fdir_key_conf {
+	struct hns3_fd_rule_tuples spec;
+	struct hns3_fd_rule_tuples mask;
+	uint8_t vlan_num;
+	uint8_t outer_vlan_num;
+};
+
+struct hns3_fdir_rule {
+	struct hns3_fdir_key_conf key_conf;
+	uint32_t input_set;
+	uint32_t flags;
+	uint32_t fd_id; /* APP marked unique value for this rule. */
+	uint8_t action;
+	/* VF id, available when flags contain HNS3_RULE_FLAG_VF_ID. */
+	uint8_t vf_id;
+	uint16_t queue_id;
+	uint16_t location;
+	struct rte_flow_action_count act_cnt;
+};
+
+/* FDIR filter list structure */
+struct hns3_fdir_rule_ele {
+	TAILQ_ENTRY(hns3_fdir_rule_ele) entries;
+	struct hns3_fdir_rule fdir_conf;
+};
+/* rss filter list structure */
+struct hns3_rss_conf_ele {
+	TAILQ_ENTRY(hns3_rss_conf_ele) entries;
+	struct hns3_rss_conf filter_info;
+};
+/* hns3_flow memory list structure */
+struct hns3_flow_mem {
+	TAILQ_ENTRY(hns3_flow_mem) entries;
+	struct rte_flow *flow;
+};
+
+TAILQ_HEAD(hns3_fdir_rule_list, hns3_fdir_rule_ele);
+TAILQ_HEAD(hns3_rss_filter_list, hns3_rss_conf_ele);
+TAILQ_HEAD(hns3_flow_mem_list, hns3_flow_mem);
+
+struct hns3_process_private {
+	struct hns3_fdir_rule_list fdir_list;
+	struct hns3_rss_filter_list filter_rss_list;
+	struct hns3_flow_mem_list flow_list;
+};
+
+/*
+ * A structure holding flow director related info.
+ */
+struct hns3_fdir_info {
+	rte_spinlock_t flows_lock;
+	struct hns3_fdir_rule_list fdir_list;
+	struct hns3_fdir_rule_ele **hash_map;
+	struct rte_hash *hash_handle;
+	struct hns3_fd_cfg fd_cfg;
+};
+
+struct rte_flow {
+	enum rte_filter_type filter_type;
+	void *rule;
+	uint32_t counter_id;
+};
+struct hns3_adapter;
+
+int hns3_init_fd_config(struct hns3_adapter *hns);
+int hns3_fdir_filter_init(struct hns3_adapter *hns);
+void hns3_fdir_filter_uninit(struct hns3_adapter *hns);
+int hns3_fdir_filter_program(struct hns3_adapter *hns,
+			     struct hns3_fdir_rule *rule, bool del);
+int hns3_clear_all_fdir_filter(struct hns3_adapter *hns);
+int hns3_get_count(struct hns3_hw *hw, uint32_t id, uint64_t *value);
+void hns3_filterlist_init(struct rte_eth_dev *dev);
+int hns3_restore_all_fdir_filter(struct hns3_adapter *hns);
+
+#endif /* _HNS3_FDIR_H_ */
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
new file mode 100644
index 0000000..98ed818
--- /dev/null
+++ b/drivers/net/hns3/hns3_flow.c
@@ -0,0 +1,1450 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdbool.h>
+#include <sys/queue.h>
+#include <rte_flow_driver.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "hns3_cmd.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+
+/* Default RSS hash key */
+static uint8_t hns3_hash_key[] = {
+	0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
+	0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
+	0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
+	0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
+	0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA
+};
+
+static const uint8_t full_mask[VNI_OR_TNI_LEN] = { 0xFF, 0xFF, 0xFF };
+static const uint8_t zero_mask[VNI_OR_TNI_LEN] = { 0x00, 0x00, 0x00 };
+
+/* Special Filter id for non-specific packet flagging. Don't change value */
+#define HNS3_MAX_FILTER_ID	0x0FFF
+
+#define ETHER_TYPE_MASK		0xFFFF
+#define IPPROTO_MASK		0xFF
+#define TUNNEL_TYPE_MASK	0xFFFF
+
+#define HNS3_TUNNEL_TYPE_VXLAN		0x12B5
+#define HNS3_TUNNEL_TYPE_VXLAN_GPE	0x12B6
+#define HNS3_TUNNEL_TYPE_GENEVE		0x17C1
+#define HNS3_TUNNEL_TYPE_NVGRE		0x6558
+
+static enum rte_flow_item_type first_items[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_SCTP,
+	RTE_FLOW_ITEM_TYPE_ICMP,
+	RTE_FLOW_ITEM_TYPE_NVGRE,
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_GENEVE,
+	RTE_FLOW_ITEM_TYPE_VXLAN_GPE,
+	RTE_FLOW_ITEM_TYPE_MPLS
+};
+
+static enum rte_flow_item_type L2_next_items[] = {
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV4,
+	RTE_FLOW_ITEM_TYPE_IPV6
+};
+
+static enum rte_flow_item_type L3_next_items[] = {
+	RTE_FLOW_ITEM_TYPE_TCP,
+	RTE_FLOW_ITEM_TYPE_UDP,
+	RTE_FLOW_ITEM_TYPE_SCTP,
+	RTE_FLOW_ITEM_TYPE_NVGRE,
+	RTE_FLOW_ITEM_TYPE_ICMP
+};
+
+static enum rte_flow_item_type L4_next_items[] = {
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_GENEVE,
+	RTE_FLOW_ITEM_TYPE_VXLAN_GPE,
+	RTE_FLOW_ITEM_TYPE_MPLS
+};
+
+static enum rte_flow_item_type tunnel_next_items[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN
+};
+
+struct items_step_mngr {
+	enum rte_flow_item_type *items;
+	int count;
+};
+
+static inline void
+net_addr_to_host(uint32_t *dst, const rte_be32_t *src, size_t len)
+{
+	size_t i;
+
+	for (i = 0; i < len; i++)
+		dst[i] = rte_be_to_cpu_32(src[i]);
+}
+
+static inline const struct rte_flow_action *
+find_rss_action(const struct rte_flow_action actions[])
+{
+	const struct rte_flow_action *next = &actions[0];
+
+	for (; next->type != RTE_FLOW_ACTION_TYPE_END; next++) {
+		if (next->type == RTE_FLOW_ACTION_TYPE_RSS)
+			return next;
+	}
+	return NULL;
+}
+
+static inline struct hns3_flow_counter *
+hns3_counter_lookup(struct rte_eth_dev *dev, uint32_t id)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_flow_counter *cnt;
+
+	LIST_FOREACH(cnt, &pf->flow_counters, next) {
+		if (cnt->id == id)
+			return cnt;
+	}
+	return NULL;
+}
+
+static int
+hns3_counter_new(struct rte_eth_dev *dev, uint32_t shared, uint32_t id,
+		 struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_flow_counter *cnt;
+
+	cnt = hns3_counter_lookup(dev, id);
+	if (cnt) {
+		if (!cnt->shared || cnt->shared != shared)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  cnt,
+						  "Counter id is used,shared flag not match");
+		cnt->ref_cnt++;
+		return 0;
+	}
+
+	cnt = rte_zmalloc("hns3 counter", sizeof(*cnt), 0);
+	if (cnt == NULL)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_ACTION, cnt,
+					  "Alloc mem for counter failed");
+	cnt->id = id;
+	cnt->shared = shared;
+	cnt->ref_cnt = 1;
+	cnt->hits = 0;
+	LIST_INSERT_HEAD(&pf->flow_counters, cnt, next);
+	return 0;
+}
+
+static int
+hns3_counter_query(struct rte_eth_dev *dev, struct rte_flow *flow,
+		   struct rte_flow_query_count *qc,
+		   struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_flow_counter *cnt;
+	uint64_t value;
+	int ret;
+
+	/* FDIR is available only in PF driver */
+	if (hns->is_vf)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					  "Fdir is not supported in VF");
+	cnt = hns3_counter_lookup(dev, flow->counter_id);
+	if (cnt == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Can't find counter id");
+
+	ret = hns3_get_count(&hns->hw, flow->counter_id, &value);
+	if (ret) {
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Read counter fail.");
+		return ret;
+	}
+	qc->hits_set = 1;
+	qc->hits = value;
+
+	return 0;
+}
+
+static int
+hns3_counter_release(struct rte_eth_dev *dev, uint32_t id)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_flow_counter *cnt;
+
+	cnt = hns3_counter_lookup(dev, id);
+	if (cnt == NULL) {
+		hns3_err(hw, "Can't find available counter to release");
+		return -EINVAL;
+	}
+	cnt->ref_cnt--;
+	if (cnt->ref_cnt == 0) {
+		LIST_REMOVE(cnt, next);
+		rte_free(cnt);
+	}
+	return 0;
+}
+
+static void
+hns3_counter_flush(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_flow_counter *cnt_ptr;
+
+	cnt_ptr = LIST_FIRST(&pf->flow_counters);
+	while (cnt_ptr) {
+		LIST_REMOVE(cnt_ptr, next);
+		rte_free(cnt_ptr);
+		cnt_ptr = LIST_FIRST(&pf->flow_counters);
+	}
+}
+
+static int
+hns3_handle_action_queue(struct rte_eth_dev *dev,
+			 const struct rte_flow_action *action,
+			 struct hns3_fdir_rule *rule,
+			 struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	const struct rte_flow_action_queue *queue;
+
+	queue = (const struct rte_flow_action_queue *)action->conf;
+	if (queue->index >= hw->data->nb_rx_queues)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Invalid queue ID in PF");
+	rule->queue_id = queue->index;
+	rule->action = HNS3_FD_ACTION_ACCEPT_PACKET;
+	return 0;
+}
+
+/*
+ * Parse the actions from the provided list.
+ * The actions are validated as they are parsed.
+ *
+ * @param actions[in]
+ * @param rule[out]
+ *   NIC specific actions derived from the actions.
+ * @param error[out]
+ */
+static int
+hns3_handle_actions(struct rte_eth_dev *dev,
+		    const struct rte_flow_action actions[],
+		    struct hns3_fdir_rule *rule, struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const struct rte_flow_action_count *act_count;
+	const struct rte_flow_action_mark *mark;
+	struct hns3_pf *pf = &hns->pf;
+	uint32_t counter_num;
+	int ret;
+
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			ret = hns3_handle_action_queue(dev, actions, rule,
+						       error);
+			if (ret)
+				return ret;
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
+			rule->action = HNS3_FD_ACTION_DROP_PACKET;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MARK:
+			mark =
+			    (const struct rte_flow_action_mark *)actions->conf;
+			if (mark->id >= HNS3_MAX_FILTER_ID)
+				return rte_flow_error_set(error, EINVAL,
+						     RTE_FLOW_ERROR_TYPE_ACTION,
+						     actions,
+						     "Invalid Mark ID");
+			rule->fd_id = mark->id;
+			rule->flags |= HNS3_RULE_FLAG_FDID;
+			break;
+		case RTE_FLOW_ACTION_TYPE_FLAG:
+			rule->fd_id = HNS3_MAX_FILTER_ID;
+			rule->flags |= HNS3_RULE_FLAG_FDID;
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			act_count =
+			    (const struct rte_flow_action_count *)actions->conf;
+			counter_num = pf->fdir.fd_cfg.cnt_num[HNS3_FD_STAGE_1];
+			if (act_count->id >= counter_num)
+				return rte_flow_error_set(error, EINVAL,
+						     RTE_FLOW_ERROR_TYPE_ACTION,
+						     actions,
+						     "Invalid counter id");
+			rule->act_cnt = *act_count;
+			rule->flags |= HNS3_RULE_FLAG_COUNTER;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		default:
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  NULL, "Unsupported action");
+		}
+	}
+
+	return 0;
+}
+
+/* Parse to get the attr and action info of flow director rule. */
+static int
+hns3_check_attr(const struct rte_flow_attr *attr, struct rte_flow_error *error)
+{
+	if (!attr->ingress)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+					  attr, "Ingress can't be zero");
+	if (attr->egress)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+					  attr, "Not support egress");
+	if (attr->transfer)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER,
+					  attr, "No support for transfer");
+	if (attr->priority)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+					  attr, "Not support priority");
+	if (attr->group)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+					  attr, "Not support group");
+	return 0;
+}
+
+static int
+hns3_parse_eth(const struct rte_flow_item *item,
+		   struct hns3_fdir_rule *rule, struct rte_flow_error *error)
+{
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		eth_mask = item->mask;
+		if (eth_mask->type) {
+			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
+			rule->key_conf.mask.ether_type =
+			    rte_be_to_cpu_16(eth_mask->type);
+		}
+		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
+			memcpy(rule->key_conf.mask.src_mac,
+			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+		}
+		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
+			memcpy(rule->key_conf.mask.dst_mac,
+			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		}
+	}
+
+	eth_spec = item->spec;
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	       RTE_ETHER_ADDR_LEN);
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	       RTE_ETHER_ADDR_LEN);
+	return 0;
+}
+
+static int
+hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	rule->key_conf.vlan_num++;
+	if (rule->key_conf.vlan_num > VLAN_TAG_NUM_MAX)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Vlan_num is more than 2");
+
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		vlan_mask = item->mask;
+		if (vlan_mask->tci) {
+			if (rule->key_conf.vlan_num == 1) {
+				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
+					     1);
+				rule->key_conf.mask.vlan_tag1 =
+				    rte_be_to_cpu_16(vlan_mask->tci);
+			} else {
+				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
+					     1);
+				rule->key_conf.mask.vlan_tag2 =
+				    rte_be_to_cpu_16(vlan_mask->tci);
+			}
+		}
+	}
+
+	vlan_spec = item->spec;
+	if (rule->key_conf.vlan_num == 1)
+		rule->key_conf.spec.vlan_tag1 =
+		    rte_be_to_cpu_16(vlan_spec->tci);
+	else
+		rule->key_conf.spec.vlan_tag2 =
+		    rte_be_to_cpu_16(vlan_spec->tci);
+	return 0;
+}
+
+static int
+hns3_parse_ipv4(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
+	rule->key_conf.spec.ether_type = RTE_ETHER_TYPE_IPV4;
+	rule->key_conf.mask.ether_type = ETHER_TYPE_MASK;
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		ipv4_mask = item->mask;
+
+		if (ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Only support src & dst ip,tos,proto in IPV4");
+		}
+
+		if (ipv4_mask->hdr.src_addr) {
+			hns3_set_bit(rule->input_set, INNER_SRC_IP, 1);
+			rule->key_conf.mask.src_ip[IP_ADDR_KEY_ID] =
+			    rte_be_to_cpu_32(ipv4_mask->hdr.src_addr);
+		}
+
+		if (ipv4_mask->hdr.dst_addr) {
+			hns3_set_bit(rule->input_set, INNER_DST_IP, 1);
+			rule->key_conf.mask.dst_ip[IP_ADDR_KEY_ID] =
+			    rte_be_to_cpu_32(ipv4_mask->hdr.dst_addr);
+		}
+
+		if (ipv4_mask->hdr.type_of_service) {
+			hns3_set_bit(rule->input_set, INNER_IP_TOS, 1);
+			rule->key_conf.mask.ip_tos =
+			    ipv4_mask->hdr.type_of_service;
+		}
+
+		if (ipv4_mask->hdr.next_proto_id) {
+			hns3_set_bit(rule->input_set, INNER_IP_PROTO, 1);
+			rule->key_conf.mask.ip_proto =
+			    ipv4_mask->hdr.next_proto_id;
+		}
+	}
+
+	ipv4_spec = item->spec;
+	rule->key_conf.spec.src_ip[IP_ADDR_KEY_ID] =
+	    rte_be_to_cpu_32(ipv4_spec->hdr.src_addr);
+	rule->key_conf.spec.dst_ip[IP_ADDR_KEY_ID] =
+	    rte_be_to_cpu_32(ipv4_spec->hdr.dst_addr);
+	rule->key_conf.spec.ip_tos = ipv4_spec->hdr.type_of_service;
+	rule->key_conf.spec.ip_proto = ipv4_spec->hdr.next_proto_id;
+	return 0;
+}
+
+static int
+hns3_parse_ipv6(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *ipv6_spec;
+	const struct rte_flow_item_ipv6 *ipv6_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
+	rule->key_conf.spec.ether_type = RTE_ETHER_TYPE_IPV6;
+	rule->key_conf.mask.ether_type = ETHER_TYPE_MASK;
+
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		ipv6_mask = item->mask;
+		if (ipv6_mask->hdr.vtc_flow ||
+		    ipv6_mask->hdr.payload_len || ipv6_mask->hdr.hop_limits) {
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Only support src & dst ip,proto in IPV6");
+		}
+		net_addr_to_host(rule->key_conf.mask.src_ip,
+				 (const rte_be32_t *)ipv6_mask->hdr.src_addr,
+				 IP_ADDR_LEN);
+		net_addr_to_host(rule->key_conf.mask.dst_ip,
+				 (const rte_be32_t *)ipv6_mask->hdr.dst_addr,
+				 IP_ADDR_LEN);
+		rule->key_conf.mask.ip_proto = ipv6_mask->hdr.proto;
+		if (rule->key_conf.mask.src_ip[IP_ADDR_KEY_ID])
+			hns3_set_bit(rule->input_set, INNER_SRC_IP, 1);
+		if (rule->key_conf.mask.dst_ip[IP_ADDR_KEY_ID])
+			hns3_set_bit(rule->input_set, INNER_DST_IP, 1);
+		if (ipv6_mask->hdr.proto)
+			hns3_set_bit(rule->input_set, INNER_IP_PROTO, 1);
+	}
+
+	ipv6_spec = item->spec;
+	net_addr_to_host(rule->key_conf.spec.src_ip,
+			 (const rte_be32_t *)ipv6_spec->hdr.src_addr,
+			 IP_ADDR_LEN);
+	net_addr_to_host(rule->key_conf.spec.dst_ip,
+			 (const rte_be32_t *)ipv6_spec->hdr.dst_addr,
+			 IP_ADDR_LEN);
+	rule->key_conf.spec.ip_proto = ipv6_spec->hdr.proto;
+
+	return 0;
+}
+
+static int
+hns3_parse_tcp(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	hns3_set_bit(rule->input_set, INNER_IP_PROTO, 1);
+	rule->key_conf.spec.ip_proto = IPPROTO_TCP;
+	rule->key_conf.mask.ip_proto = IPPROTO_MASK;
+
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		tcp_mask = item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum || tcp_mask->hdr.tcp_urp) {
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Only support src & dst port in TCP");
+		}
+
+		if (tcp_mask->hdr.src_port) {
+			hns3_set_bit(rule->input_set, INNER_SRC_PORT, 1);
+			rule->key_conf.mask.src_port =
+			    rte_be_to_cpu_16(tcp_mask->hdr.src_port);
+		}
+		if (tcp_mask->hdr.dst_port) {
+			hns3_set_bit(rule->input_set, INNER_DST_PORT, 1);
+			rule->key_conf.mask.dst_port =
+			    rte_be_to_cpu_16(tcp_mask->hdr.dst_port);
+		}
+	}
+
+	tcp_spec = item->spec;
+	rule->key_conf.spec.src_port = rte_be_to_cpu_16(tcp_spec->hdr.src_port);
+	rule->key_conf.spec.dst_port = rte_be_to_cpu_16(tcp_spec->hdr.dst_port);
+
+	return 0;
+}
+
+static int
+hns3_parse_udp(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+	       struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	hns3_set_bit(rule->input_set, INNER_IP_PROTO, 1);
+	rule->key_conf.spec.ip_proto = IPPROTO_UDP;
+	rule->key_conf.mask.ip_proto = IPPROTO_MASK;
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		udp_mask = item->mask;
+		if (udp_mask->hdr.dgram_len || udp_mask->hdr.dgram_cksum) {
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Only support src & dst port in UDP");
+		}
+		if (udp_mask->hdr.src_port) {
+			hns3_set_bit(rule->input_set, INNER_SRC_PORT, 1);
+			rule->key_conf.mask.src_port =
+			    rte_be_to_cpu_16(udp_mask->hdr.src_port);
+		}
+		if (udp_mask->hdr.dst_port) {
+			hns3_set_bit(rule->input_set, INNER_DST_PORT, 1);
+			rule->key_conf.mask.dst_port =
+			    rte_be_to_cpu_16(udp_mask->hdr.dst_port);
+		}
+	}
+
+	udp_spec = item->spec;
+	rule->key_conf.spec.src_port = rte_be_to_cpu_16(udp_spec->hdr.src_port);
+	rule->key_conf.spec.dst_port = rte_be_to_cpu_16(udp_spec->hdr.dst_port);
+
+	return 0;
+}
+
+static int
+hns3_parse_sctp(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		struct rte_flow_error *error)
+{
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+
+	hns3_set_bit(rule->input_set, INNER_IP_PROTO, 1);
+	rule->key_conf.spec.ip_proto = IPPROTO_SCTP;
+	rule->key_conf.mask.ip_proto = IPPROTO_MASK;
+
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	if (item->mask) {
+		sctp_mask = item->mask;
+		if (sctp_mask->hdr.cksum)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Only support src & dst port in SCTP");
+
+		if (sctp_mask->hdr.src_port) {
+			hns3_set_bit(rule->input_set, INNER_SRC_PORT, 1);
+			rule->key_conf.mask.src_port =
+			    rte_be_to_cpu_16(sctp_mask->hdr.src_port);
+		}
+		if (sctp_mask->hdr.dst_port) {
+			hns3_set_bit(rule->input_set, INNER_DST_PORT, 1);
+			rule->key_conf.mask.dst_port =
+			    rte_be_to_cpu_16(sctp_mask->hdr.dst_port);
+		}
+		if (sctp_mask->hdr.tag) {
+			hns3_set_bit(rule->input_set, INNER_SCTP_TAG, 1);
+			rule->key_conf.mask.sctp_tag =
+			    rte_be_to_cpu_32(sctp_mask->hdr.tag);
+		}
+	}
+
+	sctp_spec = item->spec;
+	rule->key_conf.spec.src_port =
+	    rte_be_to_cpu_16(sctp_spec->hdr.src_port);
+	rule->key_conf.spec.dst_port =
+	    rte_be_to_cpu_16(sctp_spec->hdr.dst_port);
+	rule->key_conf.spec.sctp_tag = rte_be_to_cpu_32(sctp_spec->hdr.tag);
+
+	return 0;
+}
+
+/*
+ * Check items before the tunnel item, save inner configs to outer
+ * configs, and clear the inner configs.
+ * The key consists of two parts: meta data and tuple keys.
+ * Meta data uses 15 bits, including vlan_num(2bit), dst_vport(12bit) and
+ * tunnel packet(1bit).
+ * Tuple keys use 384 bits, including ot_dst-mac(48bit), ot_dst-port(16bit),
+ * ot_tun_vni(24bit), ot_flow_id(8bit), src-mac(48bit), dst-mac(48bit),
+ * src-ip(32/128bit), dst-ip(32/128bit), src-port(16bit), dst-port(16bit),
+ * tos(8bit), ether-proto(16bit), ip-proto(8bit), vlantag1(16bit),
+ * vlantag2(16bit) and sctp-tag(32bit).
+ */
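+/*
+ * A quick check of the numbers above (taking the 32 bit IP case): the tuple
+ * key bits sum to 48+16+24+8+48+48+32+32+16+16+8+16+8+16+16+32 = 384, which
+ * together with the 15 meta data bits fits within the 400 bit key.
+ */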
+static int
+hns3_handle_tunnel(const struct rte_flow_item *item,
+		   struct hns3_fdir_rule *rule, struct rte_flow_error *error)
+{
+	/* check eth config */
+	if (rule->input_set & (BIT(INNER_SRC_MAC) | BIT(INNER_DST_MAC)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item, "Outer eth mac is unsupported");
+	if (rule->input_set & BIT(INNER_ETH_TYPE)) {
+		hns3_set_bit(rule->input_set, OUTER_ETH_TYPE, 1);
+		rule->key_conf.spec.outer_ether_type =
+		    rule->key_conf.spec.ether_type;
+		rule->key_conf.mask.outer_ether_type =
+		    rule->key_conf.mask.ether_type;
+		hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 0);
+		rule->key_conf.spec.ether_type = 0;
+		rule->key_conf.mask.ether_type = 0;
+	}
+
+	/* check vlan config */
+	if (rule->input_set & (BIT(INNER_VLAN_TAG1) | BIT(INNER_VLAN_TAG2)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Outer vlan tags is unsupported");
+
+	/* clear vlan_num for inner vlan select */
+	rule->key_conf.outer_vlan_num = rule->key_conf.vlan_num;
+	rule->key_conf.vlan_num = 0;
+
+	/* check L3 config */
+	if (rule->input_set &
+	    (BIT(INNER_SRC_IP) | BIT(INNER_DST_IP) | BIT(INNER_IP_TOS)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item, "Outer ip is unsupported");
+	if (rule->input_set & BIT(INNER_IP_PROTO)) {
+		hns3_set_bit(rule->input_set, OUTER_IP_PROTO, 1);
+		rule->key_conf.spec.outer_proto = rule->key_conf.spec.ip_proto;
+		rule->key_conf.mask.outer_proto = rule->key_conf.mask.ip_proto;
+		hns3_set_bit(rule->input_set, INNER_IP_PROTO, 0);
+		rule->key_conf.spec.ip_proto = 0;
+		rule->key_conf.mask.ip_proto = 0;
+	}
+
+	/* check L4 config */
+	if (rule->input_set & BIT(INNER_SCTP_TAG))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Outer sctp tag is unsupported");
+
+	if (rule->input_set & BIT(INNER_SRC_PORT)) {
+		hns3_set_bit(rule->input_set, OUTER_SRC_PORT, 1);
+		rule->key_conf.spec.outer_src_port =
+		    rule->key_conf.spec.src_port;
+		rule->key_conf.mask.outer_src_port =
+		    rule->key_conf.mask.src_port;
+		hns3_set_bit(rule->input_set, INNER_SRC_PORT, 0);
+		rule->key_conf.spec.src_port = 0;
+		rule->key_conf.mask.src_port = 0;
+	}
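+	/*
+	 * OUTER_DST_PORT is reserved for the tunnel type (see the tunnel
+	 * parsers below), so the inner dst port tuple is simply cleared.
+	 */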
+	if (rule->input_set & BIT(INNER_DST_PORT)) {
+		hns3_set_bit(rule->input_set, INNER_DST_PORT, 0);
+		rule->key_conf.spec.dst_port = 0;
+		rule->key_conf.mask.dst_port = 0;
+	}
+	return 0;
+}
+
+static int
+hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		 struct rte_flow_error *error)
+{
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+	else if (item->spec && (item->mask == NULL))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Tunnel packets must configure with mask");
+
+	hns3_set_bit(rule->input_set, OUTER_DST_PORT, 1);
+	rule->key_conf.mask.tunnel_type = TUNNEL_TYPE_MASK;
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+		rule->key_conf.spec.tunnel_type = HNS3_TUNNEL_TYPE_VXLAN;
+	else
+		rule->key_conf.spec.tunnel_type = HNS3_TUNNEL_TYPE_VXLAN_GPE;
+
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	vxlan_mask = item->mask;
+	vxlan_spec = item->spec;
+
+	if (vxlan_mask->flags)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Flags is not supported in VxLAN");
+
+	/* VNI must be totally masked or not. */
+	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "VNI must be totally masked or not in VxLAN");
+	if (vxlan_mask->vni[0]) {
+		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+			   VNI_OR_TNI_LEN);
+	}
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+		   VNI_OR_TNI_LEN);
+	return 0;
+}
+
+static int
+hns3_parse_nvgre(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		 struct rte_flow_error *error)
+{
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+	else if (item->spec && (item->mask == NULL))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Tunnel packets must configure with mask");
+
+	hns3_set_bit(rule->input_set, OUTER_IP_PROTO, 1);
+	rule->key_conf.spec.outer_proto = IPPROTO_GRE;
+	rule->key_conf.mask.outer_proto = IPPROTO_MASK;
+
+	hns3_set_bit(rule->input_set, OUTER_DST_PORT, 1);
+	rule->key_conf.spec.tunnel_type = HNS3_TUNNEL_TYPE_NVGRE;
+	rule->key_conf.mask.tunnel_type = ~HNS3_TUNNEL_TYPE_NVGRE;
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	nvgre_mask = item->mask;
+	nvgre_spec = item->spec;
+
+	if (nvgre_mask->protocol || nvgre_mask->c_k_s_rsvd0_ver)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Ver/protocal is not supported in NVGRE");
+
+	/* TNI must be totally masked or not. */
+	if (memcmp(nvgre_mask->tni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(nvgre_mask->tni, zero_mask, VNI_OR_TNI_LEN))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "TNI must be totally masked or not in NVGRE");
+
+	if (nvgre_mask->tni[0]) {
+		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
+		memcpy(rule->key_conf.mask.outer_tun_vni, nvgre_mask->tni,
+			   VNI_OR_TNI_LEN);
+	}
+	memcpy(rule->key_conf.spec.outer_tun_vni, nvgre_spec->tni,
+		   VNI_OR_TNI_LEN);
+
+	if (nvgre_mask->flow_id) {
+		hns3_set_bit(rule->input_set, OUTER_TUN_FLOW_ID, 1);
+		rule->key_conf.mask.outer_tun_flow_id = nvgre_mask->flow_id;
+	}
+	rule->key_conf.spec.outer_tun_flow_id = nvgre_spec->flow_id;
+	return 0;
+}
+
+static int
+hns3_parse_geneve(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		  struct rte_flow_error *error)
+{
+	const struct rte_flow_item_geneve *geneve_spec;
+	const struct rte_flow_item_geneve *geneve_mask;
+
+	if (item->spec == NULL && item->mask)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Can't configure FDIR with mask but without spec");
+	else if (item->spec && (item->mask == NULL))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Tunnel packets must configure with mask");
+
+	hns3_set_bit(rule->input_set, OUTER_DST_PORT, 1);
+	rule->key_conf.spec.tunnel_type = HNS3_TUNNEL_TYPE_GENEVE;
+	rule->key_conf.mask.tunnel_type = TUNNEL_TYPE_MASK;
+	/* Only used to describe the protocol stack. */
+	if (item->spec == NULL && item->mask == NULL)
+		return 0;
+
+	geneve_mask = item->mask;
+	geneve_spec = item->spec;
+
+	if (geneve_mask->ver_opt_len_o_c_rsvd0 || geneve_mask->protocol)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "Ver/protocal is not supported in GENEVE");
+	/* VNI must be totally masked or not. */
+	if (memcmp(geneve_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(geneve_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "VNI must be totally masked or not in GENEVE");
+	if (geneve_mask->vni[0]) {
+		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
+		memcpy(rule->key_conf.mask.outer_tun_vni, geneve_mask->vni,
+			   VNI_OR_TNI_LEN);
+	}
+	memcpy(rule->key_conf.spec.outer_tun_vni, geneve_spec->vni,
+		   VNI_OR_TNI_LEN);
+	return 0;
+}
+
+static int
+hns3_parse_tunnel(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+		  struct rte_flow_error *error)
+{
+	int ret;
+
+	switch (item->type) {
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		ret = hns3_parse_vxlan(item, rule, error);
+		break;
+	case RTE_FLOW_ITEM_TYPE_NVGRE:
+		ret = hns3_parse_nvgre(item, rule, error);
+		break;
+	case RTE_FLOW_ITEM_TYPE_GENEVE:
+		ret = hns3_parse_geneve(item, rule, error);
+		break;
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_HANDLE,
+					  NULL, "Unsupported tunnel type!");
+	}
+	if (ret)
+		return ret;
+	return hns3_handle_tunnel(item, rule, error);
+}
+
+static int
+hns3_parse_normal(const struct rte_flow_item *item,
+		  struct hns3_fdir_rule *rule,
+		  struct items_step_mngr *step_mngr,
+		  struct rte_flow_error *error)
+{
+	int ret;
+
+	switch (item->type) {
+	case RTE_FLOW_ITEM_TYPE_ETH:
+		ret = hns3_parse_eth(item, rule, error);
+		step_mngr->items = L2_next_items;
+		step_mngr->count = ARRAY_SIZE(L2_next_items);
+		break;
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+		ret = hns3_parse_vlan(item, rule, error);
+		step_mngr->items = L2_next_items;
+		step_mngr->count = ARRAY_SIZE(L2_next_items);
+		break;
+	case RTE_FLOW_ITEM_TYPE_IPV4:
+		ret = hns3_parse_ipv4(item, rule, error);
+		step_mngr->items = L3_next_items;
+		step_mngr->count = ARRAY_SIZE(L3_next_items);
+		break;
+	case RTE_FLOW_ITEM_TYPE_IPV6:
+		ret = hns3_parse_ipv6(item, rule, error);
+		step_mngr->items = L3_next_items;
+		step_mngr->count = ARRAY_SIZE(L3_next_items);
+		break;
+	case RTE_FLOW_ITEM_TYPE_TCP:
+		ret = hns3_parse_tcp(item, rule, error);
+		step_mngr->items = L4_next_items;
+		step_mngr->count = ARRAY_SIZE(L4_next_items);
+		break;
+	case RTE_FLOW_ITEM_TYPE_UDP:
+		ret = hns3_parse_udp(item, rule, error);
+		step_mngr->items = L4_next_items;
+		step_mngr->count = ARRAY_SIZE(L4_next_items);
+		break;
+	case RTE_FLOW_ITEM_TYPE_SCTP:
+		ret = hns3_parse_sctp(item, rule, error);
+		step_mngr->items = L4_next_items;
+		step_mngr->count = ARRAY_SIZE(L4_next_items);
+		break;
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_HANDLE,
+					  NULL, "Unsupported normal type!");
+	}
+
+	return ret;
+}
+
+static int
+hns3_validate_item(const struct rte_flow_item *item,
+		   struct items_step_mngr step_mngr,
+		   struct rte_flow_error *error)
+{
+	int i;
+
+	if (item->last)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, item,
+					  "Not supported last point for range");
+
+	for (i = 0; i < step_mngr.count; i++) {
+		if (item->type == step_mngr.items[i])
+			break;
+	}
+
+	if (i == step_mngr.count) {
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item, "Invalid or missing item");
+	}
+	return 0;
+}
+
+static inline bool
+is_tunnel_packet(enum rte_flow_item_type type)
+{
+	if (type == RTE_FLOW_ITEM_TYPE_VXLAN_GPE ||
+	    type == RTE_FLOW_ITEM_TYPE_VXLAN ||
+	    type == RTE_FLOW_ITEM_TYPE_NVGRE ||
+	    type == RTE_FLOW_ITEM_TYPE_GENEVE ||
+	    type == RTE_FLOW_ITEM_TYPE_MPLS)
+		return true;
+	return false;
+}
+
+/*
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule,
+ * and get the flow director filter info along the way.
+ * UDP/TCP/SCTP PATTERN:
+ * The first not void item can be ETH or IPV4 or IPV6
+ * The second not void item must be IPV4 or IPV6 if the first one is ETH.
+ * The next not void item could be UDP or TCP or SCTP (optional)
+ * The next not void item could be RAW (for flexbyte, optional)
+ * The next not void item must be END.
+ * A Fuzzy Match pattern can appear at any place before END.
+ * Fuzzy Match is optional for IPV4 but is required for IPV6.
+ * MAC VLAN PATTERN:
+ * The first not void item must be ETH.
+ * The second not void item must be MAC VLAN.
+ * The next not void item must be END.
+ * ACTION:
+ * The first not void action should be QUEUE or DROP.
+ * The second not void optional action should be MARK,
+ * mark_id is a uint32_t number.
+ * The next not void action should be END.
+ * UDP/TCP/SCTP pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4		src_addr 192.168.1.20	0xFFFFFFFF
+ *		dst_addr 192.167.3.50	0xFFFFFFFF
+ * UDP/TCP/SCTP	src_port	80	0xFFFF
+ *		dst_port	80	0xFFFF
+ * END
+ * MAC VLAN pattern example:
+ * ITEM		Spec			Mask
+ * ETH		dst_addr
+		{0xAC, 0x7B, 0xA1,	{0xFF, 0xFF, 0xFF,
+		0x2C, 0x6D, 0x36}	0xFF, 0xFF, 0xFF}
+ * MAC VLAN	tci	0x2016		0xEFFF
+ * END
+ * Other members in mask and spec should be set to 0x00.
+ * Item->last should be NULL.
+ */
+static int
+hns3_parse_fdir_filter(struct rte_eth_dev *dev,
+		       const struct rte_flow_item pattern[],
+		       const struct rte_flow_action actions[],
+		       struct hns3_fdir_rule *rule,
+		       struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const struct rte_flow_item *item;
+	struct items_step_mngr step_mngr;
+	int ret;
+
+	/* FDIR is available only in PF driver */
+	if (hns->is_vf)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+					  "Fdir not supported in VF");
+
+	if (dev->data->dev_conf.fdir_conf.mode != RTE_FDIR_MODE_PERFECT)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
+					  "fdir_conf.mode isn't perfect");
+
+	step_mngr.items = first_items;
+	step_mngr.count = ARRAY_SIZE(first_items);
+	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		if (item->type == RTE_FLOW_ITEM_TYPE_VOID)
+			continue;
+
+		ret = hns3_validate_item(item, step_mngr, error);
+		if (ret)
+			return ret;
+
+		if (is_tunnel_packet(item->type)) {
+			ret = hns3_parse_tunnel(item, rule, error);
+			if (ret)
+				return ret;
+			step_mngr.items = tunnel_next_items;
+			step_mngr.count = ARRAY_SIZE(tunnel_next_items);
+		} else {
+			ret = hns3_parse_normal(item, rule, &step_mngr, error);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return hns3_handle_actions(dev, actions, rule, error);
+}
+
+void
+hns3_filterlist_init(struct rte_eth_dev *dev)
+{
+	struct hns3_process_private *process_list = dev->process_private;
+
+	TAILQ_INIT(&process_list->fdir_list);
+	TAILQ_INIT(&process_list->filter_rss_list);
+	TAILQ_INIT(&process_list->flow_list);
+}
+
+static void
+hns3_filterlist_flush(struct rte_eth_dev *dev)
+{
+	struct hns3_process_private *process_list = dev->process_private;
+	struct hns3_fdir_rule_ele *fdir_rule_ptr;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
+	struct hns3_flow_mem *flow_node;
+
+	fdir_rule_ptr = TAILQ_FIRST(&process_list->fdir_list);
+	while (fdir_rule_ptr) {
+		TAILQ_REMOVE(&process_list->fdir_list, fdir_rule_ptr, entries);
+		rte_free(fdir_rule_ptr);
+		fdir_rule_ptr = TAILQ_FIRST(&process_list->fdir_list);
+	}
+
+	rss_filter_ptr = TAILQ_FIRST(&process_list->filter_rss_list);
+	while (rss_filter_ptr) {
+		TAILQ_REMOVE(&process_list->filter_rss_list, rss_filter_ptr,
+			     entries);
+		rte_free(rss_filter_ptr);
+		rss_filter_ptr = TAILQ_FIRST(&process_list->filter_rss_list);
+	}
+
+	flow_node = TAILQ_FIRST(&process_list->flow_list);
+	while (flow_node) {
+		TAILQ_REMOVE(&process_list->flow_list, flow_node, entries);
+		rte_free(flow_node->flow);
+		rte_free(flow_node);
+		flow_node = TAILQ_FIRST(&process_list->flow_list);
+	}
+}
+
+static int
+hns3_flow_args_check(const struct rte_flow_attr *attr,
+		     const struct rte_flow_item pattern[],
+		     const struct rte_flow_action actions[],
+		     struct rte_flow_error *error)
+{
+	if (pattern == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+					  NULL, "NULL pattern.");
+
+	if (actions == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+					  NULL, "NULL action.");
+
+	if (attr == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR,
+					  NULL, "NULL attribute.");
+
+	return hns3_check_attr(attr, error);
+}
+
+/*
+ * Check if the flow rule is supported by hns3.
+ * It only checks the format; it doesn't guarantee that the rule can be
+ * programmed into the HW, because there may not be enough room for it.
+ */
+static int
+hns3_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct hns3_rss_conf rss_conf;
+	struct hns3_fdir_rule fdir_rule;
+	int ret;
+
+	ret = hns3_flow_args_check(attr, pattern, actions, error);
+	if (ret)
+		return ret;
+
+	memset(&fdir_rule, 0, sizeof(struct hns3_fdir_rule));
+	return hns3_parse_fdir_filter(dev, pattern, actions, &fdir_rule, error);
+}
+
+/*
+ * Create a flow rule.
+ * Theoretically, one rule can match more than one filter.
+ * We will let it use the first filter it hits, so the sequence matters.
+ */
+static struct rte_flow *
+hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct hns3_process_private *process_list = dev->process_private;
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_fdir_rule_ele *fdir_rule_ptr;
+	struct hns3_flow_mem *flow_node;
+	struct rte_flow *flow;
+	struct hns3_fdir_rule fdir_rule;
+	int ret;
+
+	ret = hns3_flow_args_check(attr, pattern, actions, error);
+	if (ret)
+		return NULL;
+
+	flow = rte_zmalloc("hns3 flow", sizeof(struct rte_flow), 0);
+	if (flow == NULL) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to allocate flow memory");
+		return NULL;
+	}
+	flow_node = rte_zmalloc("hns3 flow node",
+				sizeof(struct hns3_flow_mem), 0);
+	if (flow_node == NULL) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to allocate flow list memory");
+		rte_free(flow);
+		return NULL;
+	}
+
+	flow_node->flow = flow;
+	TAILQ_INSERT_TAIL(&process_list->flow_list, flow_node, entries);
+
+	memset(&fdir_rule, 0, sizeof(struct hns3_fdir_rule));
+	ret = hns3_parse_fdir_filter(dev, pattern, actions, &fdir_rule, error);
+	if (ret)
+		goto out;
+
+	if (fdir_rule.flags & HNS3_RULE_FLAG_COUNTER) {
+		ret = hns3_counter_new(dev, fdir_rule.act_cnt.shared,
+				       fdir_rule.act_cnt.id, error);
+		if (ret)
+			goto out;
+
+		flow->counter_id = fdir_rule.act_cnt.id;
+	}
+	ret = hns3_fdir_filter_program(hns, &fdir_rule, false);
+	if (!ret) {
+		fdir_rule_ptr = rte_zmalloc("hns3 fdir rule",
+					    sizeof(struct hns3_fdir_rule_ele),
+					    0);
+		if (fdir_rule_ptr == NULL) {
+			hns3_err(hw, "Failed to allocate fdir_rule memory");
+			ret = -ENOMEM;
+			goto err;
+		}
+		memcpy(&fdir_rule_ptr->fdir_conf, &fdir_rule,
+			sizeof(struct hns3_fdir_rule));
+		TAILQ_INSERT_TAIL(&process_list->fdir_list,
+				  fdir_rule_ptr, entries);
+		flow->rule = fdir_rule_ptr;
+		flow->filter_type = RTE_ETH_FILTER_FDIR;
+
+		return flow;
+	}
+
+	if (fdir_rule.flags & HNS3_RULE_FLAG_COUNTER)
+		hns3_counter_release(dev, fdir_rule.act_cnt.id);
+
+err:
+	rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to create flow");
+out:
+	TAILQ_REMOVE(&process_list->flow_list, flow_node, entries);
+	rte_free(flow_node);
+	rte_free(flow);
+	return NULL;
+}
+
+/* Destroy a flow rule on hns3. */
+static int
+hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct hns3_process_private *process_list = dev->process_private;
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_fdir_rule_ele *fdir_rule_ptr;
+	struct hns3_flow_mem *flow_node;
+	struct hns3_hw *hw = &hns->hw;
+	enum rte_filter_type filter_type;
+	struct hns3_fdir_rule fdir_rule;
+	int ret;
+
+	if (flow == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_HANDLE,
+					  flow, "Flow is NULL");
+	filter_type = flow->filter_type;
+	switch (filter_type) {
+	case RTE_ETH_FILTER_FDIR:
+		fdir_rule_ptr = (struct hns3_fdir_rule_ele *)flow->rule;
+		memcpy(&fdir_rule, &fdir_rule_ptr->fdir_conf,
+			   sizeof(struct hns3_fdir_rule));
+
+		ret = hns3_fdir_filter_program(hns, &fdir_rule, true);
+		if (ret)
+			return rte_flow_error_set(error, EIO,
+						  RTE_FLOW_ERROR_TYPE_HANDLE,
+						  flow,
+						  "Destroy FDIR failed. Try again");
+		if (fdir_rule.flags & HNS3_RULE_FLAG_COUNTER)
+			hns3_counter_release(dev, fdir_rule.act_cnt.id);
+		TAILQ_REMOVE(&process_list->fdir_list, fdir_rule_ptr, entries);
+		rte_free(fdir_rule_ptr);
+		fdir_rule_ptr = NULL;
+		break;
+	default:
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_HANDLE, flow,
+					  "Unsupported filter type");
+	}
+
+	TAILQ_FOREACH(flow_node, &process_list->flow_list, entries) {
+		if (flow_node->flow == flow) {
+			TAILQ_REMOVE(&process_list->flow_list, flow_node,
+				     entries);
+			rte_free(flow_node);
+			flow_node = NULL;
+			break;
+		}
+	}
+	rte_free(flow);
+	flow = NULL;
+
+	return 0;
+}
+
+/*  Destroy all flow rules associated with a port on hns3. */
+static int
+hns3_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	int ret;
+
+	/* FDIR is available only in PF driver */
+	if (!hns->is_vf) {
+		ret = hns3_clear_all_fdir_filter(hns);
+		if (ret) {
+			rte_flow_error_set(error, ret,
+					   RTE_FLOW_ERROR_TYPE_HANDLE,
+					   NULL, "Failed to flush rule");
+			return ret;
+		}
+		hns3_counter_flush(dev);
+	}
+
+	hns3_filterlist_flush(dev);
+
+	return 0;
+}
+
+/* Query an existing flow rule. */
+static int
+hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
+		const struct rte_flow_action *actions, void *data,
+		struct rte_flow_error *error)
+{
+	struct rte_flow_query_count *qc;
+	int ret;
+
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			qc = (struct rte_flow_query_count *)data;
+			ret = hns3_counter_query(dev, flow, qc, error);
+			if (ret)
+				return ret;
+			break;
+		default:
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  actions,
+						  "Query action only support count");
+		}
+	}
+	return 0;
+}
+
+const struct rte_flow_ops hns3_flow_ops = {
+	.validate = hns3_flow_validate,
+	.create = hns3_flow_create,
+	.destroy = hns3_flow_destroy,
+	.flush = hns3_flow_flush,
+	.query = hns3_flow_query,
+	.isolate = NULL,
+};
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 10/22] net/hns3: add support for RSS of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (8 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver Wei Hu (Xavier)
@ 2019-08-23 13:46 ` Wei Hu (Xavier)
  2019-08-30 15:07   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 11/22] net/hns3: add support for flow control " Wei Hu (Xavier)
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:46 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds support for RSS in the hns3 PMD driver.
It includes the following functions in file hns3_rss.c:
1) Set/query hash key, rss_hf by .rss_hash_update/.rss_hash_conf_get ops
   callback functions.
2) Set/query redirection table by .reta_update/.reta_query. ops callback
   functions.
3) Set/query hash algorithm by .filter_ctrl ops callback function when
   the 'filter_type' is RTE_ETH_FILTER_HASH.

And it includes the following functions in file hns3_flow.c (see the
usage sketch after this list):
1) Set hash key, rss_hf, redirection table and algorithm by .create ops
   callback function.
2) Disable RSS by .destroy or .flush ops callback function.
3) Check the validity of the RSS configuration by .validate ops
   callback function.
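
As a rough sketch (not part of this patch), an application could drive
the .rss_hash_update path through the generic ethdev API as follows;
the port id, key bytes, helper name and rss_hf selection are
placeholders:

  #include <rte_ethdev.h>

  /* Minimal sketch: program a 40-byte hash key and IP/TCP/UDP tuples. */
  static int
  example_rss_hash_update(uint16_t port_id)
  {
  	static uint8_t key[40]; /* all-zero here only for brevity */
  	struct rte_eth_rss_conf conf = {
  		.rss_key = key,
  		.rss_key_len = sizeof(key),
  		.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
  	};

  	/* Reaches hns3_dev_rss_hash_update() through the ops table. */
  	return rte_eth_dev_rss_hash_update(port_id, &conf);
  }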

Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c    |    1 +
 drivers/net/hns3/hns3_ethdev.c |    8 +
 drivers/net/hns3/hns3_ethdev.h |    3 +
 drivers/net/hns3/hns3_fdir.c   |    1 +
 drivers/net/hns3/hns3_flow.c   |  426 ++++++++++++
 drivers/net/hns3/hns3_rss.c    | 1386 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_rss.h    |  139 ++++
 7 files changed, 1964 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_rss.c
 create mode 100644 drivers/net/hns3/hns3_rss.h

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 4fc282c..07fe92e 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -27,6 +27,7 @@
 #include <rte_io.h>
 
 #include "hns3_cmd.h"
+#include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_regs.h"
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index c1f9bcb..723ada1 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -30,6 +30,7 @@
 #include <rte_pci.h>
 
 #include "hns3_cmd.h"
+#include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
@@ -2689,6 +2690,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
 		goto err_hw_init;
 	}
 
+	hns3_set_default_rss_args(hw);
+
 	return 0;
 
 err_hw_init:
@@ -2716,6 +2719,7 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	hns3_rss_uninit(hns);
 	hns3_fdir_filter_uninit(hns);
 	hns3_uninit_umv_space(hw);
 	hns3_cmd_uninit(hw);
@@ -2744,6 +2748,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.mac_addr_set           = hns3_set_default_mac_addr,
 	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
 	.link_update            = hns3_dev_link_update,
+	.rss_hash_update        = hns3_dev_rss_hash_update,
+	.rss_hash_conf_get      = hns3_dev_rss_hash_conf_get,
+	.reta_update            = hns3_dev_rss_reta_update,
+	.reta_query             = hns3_dev_rss_reta_query,
 	.filter_ctrl            = hns3_dev_filter_ctrl,
 };
 
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index c46211a..31301af 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -342,6 +342,9 @@ struct hns3_hw {
 	struct rte_ether_addr mc_addrs[HNS3_MC_MACADDR_NUM];
 	int mc_addrs_num; /* Multicast mac addresses number */
 
+	/* The configuration info of RSS */
+	struct hns3_rss_conf rss_info;
+
 	uint8_t num_tc;             /* Total number of enabled TCs */
 	uint8_t hw_tc_map;
 	enum hns3_fc_mode current_mode;
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index aa7d968..77c93ba 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -11,6 +11,7 @@
 #include <rte_malloc.h>
 
 #include "hns3_cmd.h"
+#include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 98ed818..9717fbf 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -9,6 +9,7 @@
 #include <rte_malloc.h>
 
 #include "hns3_cmd.h"
+#include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
@@ -1191,6 +1192,379 @@ hns3_filterlist_flush(struct rte_eth_dev *dev)
 	}
 }
 
+static bool
+hns3_action_rss_same(const struct rte_flow_action_rss *comp,
+		     const struct rte_flow_action_rss *with)
+{
+	return (comp->func == with->func &&
+		comp->level == with->level &&
+		comp->types == with->types &&
+		comp->key_len == with->key_len &&
+		comp->queue_num == with->queue_num &&
+		!memcmp(comp->key, with->key, with->key_len) &&
+		!memcmp(comp->queue, with->queue,
+			sizeof(*with->queue) * with->queue_num));
+}
+
+static int
+hns3_rss_conf_copy(struct hns3_rss_conf *out,
+		   const struct rte_flow_action_rss *in)
+{
+	if (in->key_len > RTE_DIM(out->key) ||
+	    in->queue_num > RTE_DIM(out->queue))
+		return -EINVAL;
+	if (in->key == NULL && in->key_len)
+		return -EINVAL;
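+	/*
+	 * Deep-copy the key and queue arrays so the stored rule stays
+	 * valid after the caller's rte_flow_action_rss goes out of scope.
+	 */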
+	out->conf = (struct rte_flow_action_rss) {
+		.func = in->func,
+		.level = in->level,
+		.types = in->types,
+		.key_len = in->key_len,
+		.queue_num = in->queue_num,
+	};
+	out->conf.queue =
+		memcpy(out->queue, in->queue,
+		       sizeof(*in->queue) * in->queue_num);
+	if (in->key)
+		out->conf.key = memcpy(out->key, in->key, in->key_len);
+
+	return 0;
+}
+
+/*
+ * This function is used to parse and validate the RSS action.
+ */
+static int
+hns3_parse_rss_filter(struct rte_eth_dev *dev,
+		      const struct rte_flow_action *actions,
+		      struct rte_flow_error *error)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_conf = &hw->rss_info;
+	const struct rte_flow_action_rss *rss;
+	const struct rte_flow_action *act;
+	uint32_t act_index = 0;
+	uint64_t flow_types;
+	uint16_t n;
+
+	NEXT_ITEM_OF_ACTION(act, actions, act_index);
+	/* Get configuration args from APP cmdline input */
+	rss = act->conf;
+
+	if (rss == NULL || rss->queue_num == 0) {
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  act, "no valid queues");
+	}
+
+	for (n = 0; n < rss->queue_num; n++) {
+		if (rss->queue[n] < dev->data->nb_rx_queues)
+			continue;
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  act,
+					  "queue id > max number of queues");
+	}
+
+	/* Parse flow types of RSS */
+	if (!(rss->types & HNS3_ETH_RSS_SUPPORT) && rss->types)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  act,
+					  "Flow types are unsupported by "
+					  "hns3's RSS");
+
+	flow_types = rss->types & HNS3_ETH_RSS_SUPPORT;
+	if (flow_types != rss->types)
+		hns3_warn(hw, "RSS flow types(%lu) include unsupported flow "
+			  "types", rss->types);
+
+	/* Parse RSS related parameters from RSS configuration */
+	switch (rss->func) {
+	case RTE_ETH_HASH_FUNCTION_DEFAULT:
+	case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+	case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+		break;
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, act,
+					  "input RSS hash functions are not supported");
+	}
+
+	if (rss->level)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, act,
+					  "a nonzero RSS encapsulation level is not supported");
+	if (rss->key_len && rss->key_len != RTE_DIM(rss_conf->key))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, act,
+					  "RSS hash key must be exactly 40 bytes");
+	if (rss->queue_num > RTE_DIM(rss_conf->queue))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, act,
+					  "too many queues for RSS context");
+
+	act_index++;
+
+	/* Check if the next not void action is END */
+	NEXT_ITEM_OF_ACTION(act, actions, act_index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(rss_conf, 0, sizeof(struct hns3_rss_conf));
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  act, "Not supported action.");
+	}
+
+	return 0;
+}
+
+static int
+hns3_disable_rss(struct hns3_hw *hw)
+{
+	int ret;
+
+	/* Redirect the redirection table to queue 0 */
+	ret = hns3_rss_reset_indir_table(hw);
+	if (ret)
+		return ret;
+
+	/* Disable RSS */
+	hw->rss_info.conf.types = 0;
+
+	return 0;
+}
+
+static void
+hns3_parse_rss_key(struct hns3_hw *hw, struct rte_flow_action_rss *rss_conf)
+{
+	if (rss_conf->key == NULL ||
+	    rss_conf->key_len < HNS3_RSS_KEY_SIZE) {
+		hns3_info(hw, "Default RSS hash key to be set");
+		rss_conf->key = hns3_hash_key;
+		rss_conf->key_len = HNS3_RSS_KEY_SIZE;
+	}
+}
+
+static int
+hns3_parse_rss_algorithm(struct hns3_hw *hw, enum rte_eth_hash_function *func,
+			 uint8_t *hash_algo)
+{
+	enum rte_eth_hash_function algo_func = *func;
+	switch (algo_func) {
+	case RTE_ETH_HASH_FUNCTION_DEFAULT:
+		/* Keep *hash_algo as what it used to be */
+		algo_func = hw->rss_info.conf.func;
+		break;
+	case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+		*hash_algo = HNS3_RSS_HASH_ALGO_TOEPLITZ;
+		break;
+	case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+		*hash_algo = HNS3_RSS_HASH_ALGO_SIMPLE;
+		break;
+	default:
+		hns3_err(hw, "Invalid RSS algorithm configuration(%u)",
+			 algo_func);
+		return -EINVAL;
+	}
+	*func = algo_func;
+
+	return 0;
+}
+
+static int
+hns3_hw_rss_hash_set(struct hns3_hw *hw, struct rte_flow_action_rss *rss_config)
+{
+	uint8_t hash_algo =
+		(hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_TOEPLITZ ?
+		 HNS3_RSS_HASH_ALGO_TOEPLITZ : HNS3_RSS_HASH_ALGO_SIMPLE);
+	struct hns3_rss_tuple_cfg *tuple;
+	int ret;
+
+	/* Parse hash key */
+	hns3_parse_rss_key(hw, rss_config);
+
+	/* Parse hash algorithm */
+	ret = hns3_parse_rss_algorithm(hw, &rss_config->func, &hash_algo);
+	if (ret)
+		return ret;
+
+	ret = hns3_set_rss_algo_key(hw, hash_algo, rss_config->key);
+	if (ret)
+		return ret;
+
+	/* Update algorithm of hw */
+	hw->rss_info.conf.func = rss_config->func;
+
+	/* Set flow type supported */
+	tuple = &hw->rss_info.rss_tuple_sets;
+	ret = hns3_set_rss_tuple_by_rss_hf(hw, tuple, rss_config->types);
+	if (ret)
+		hns3_err(hw, "Update RSS tuples by rss hf failed %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_update_indir_table(struct rte_eth_dev *dev,
+			const struct rte_flow_action_rss *conf, uint16_t num)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t indir_tbl[HNS3_RSS_IND_TBL_SIZE];
+	uint16_t j, allow_rss_queues;
+	uint8_t queue_id;
+	uint32_t i;
+
+	if (num == 0) {
+		hns3_err(hw, "No PF queues are configured to enable RSS");
+		return -ENOTSUP;
+	}
+
+	allow_rss_queues = RTE_MIN(dev->data->nb_rx_queues, hw->rss_size_max);
+	/* Fill in redirection table */
+	memcpy(indir_tbl, hw->rss_info.rss_indirection_tbl,
+	       HNS3_RSS_IND_TBL_SIZE);
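+	/* Round-robin fill: indir_tbl[i] = conf->queue[i % num]. */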
+	for (i = 0, j = 0; i < HNS3_RSS_IND_TBL_SIZE; i++, j++) {
+		j %= num;
+		if (conf->queue[j] >= allow_rss_queues) {
+			hns3_err(hw, "Invalid queue id(%u) to be set in "
+				     "redirection table, max number of rss "
+				     "queues: %u", conf->queue[j],
+				 allow_rss_queues);
+			return -EINVAL;
+		}
+		queue_id = conf->queue[j];
+		indir_tbl[i] = queue_id;
+	}
+
+	return hns3_set_rss_indir_table(hw, indir_tbl, HNS3_RSS_IND_TBL_SIZE);
+}
+
+static int
+hns3_config_rss_filter(struct rte_eth_dev *dev,
+		       const struct hns3_rss_conf *conf, bool add)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_info;
+	uint64_t flow_types;
+	uint16_t num;
+	int ret;
+
+	struct rte_flow_action_rss rss_flow_conf = {
+		.func = conf->conf.func,
+		.level = conf->conf.level,
+		.types = conf->conf.types,
+		.key_len = conf->conf.key_len,
+		.queue_num = conf->conf.queue_num,
+		.key = conf->conf.key_len ?
+		    (void *)(uintptr_t)conf->conf.key : NULL,
+		.queue = conf->conf.queue,
+	};
+
+	/* The types are unsupported by hns3's RSS */
+	if (!(rss_flow_conf.types & HNS3_ETH_RSS_SUPPORT) &&
+	    rss_flow_conf.types) {
+		hns3_err(hw, "Flow types(%lu) are unsupported by hns3's RSS",
+			 rss_flow_conf.types);
+		return -EINVAL;
+	}
+
+	/* Filter the unsupported flow types */
+	flow_types = rss_flow_conf.types & HNS3_ETH_RSS_SUPPORT;
+	if (flow_types != rss_flow_conf.types)
+		hns3_warn(hw, "modified RSS types based on hardware "
+			      "support, requested:%lu configured:%lu",
+			      rss_flow_conf.types, flow_types);
+	/* Update the useful flow types */
+	rss_flow_conf.types = flow_types;
+
+	if ((rss_flow_conf.types & ETH_RSS_PROTO_MASK) == 0)
+		return hns3_disable_rss(hw);
+
+	rss_info = &hw->rss_info;
+	if (!add) {
+		if (hns3_action_rss_same(&rss_info->conf, &rss_flow_conf)) {
+			ret = hns3_disable_rss(hw);
+			if (ret) {
+				hns3_err(hw, "RSS disable failed(%d)", ret);
+				return ret;
+			}
+			memset(rss_info, 0, sizeof(struct hns3_rss_conf));
+			return 0;
+		}
+		return -EINVAL;
+	}
+
+	/* Get rx queues num */
+	num = dev->data->nb_rx_queues;
+
+	/* Set rx queues to use */
+	num = RTE_MIN(num, rss_flow_conf.queue_num);
+	if (rss_flow_conf.queue_num > num)
+		hns3_warn(hw, "Requested queue num(%u) exceeds usable queues and is truncated",
+			  rss_flow_conf.queue_num);
+	hns3_info(hw, "%u contiguous PF queues are configured", num);
+
+	rte_spinlock_lock(&hw->lock);
+	/* Update the redirection table of RSS */
+	ret = hns3_update_indir_table(dev, &rss_flow_conf, num);
+	if (ret)
+		goto rss_config_err;
+
+	/* Set hash algorithm and flow types by the user's config */
+	ret = hns3_hw_rss_hash_set(hw, &rss_flow_conf);
+	if (ret)
+		goto rss_config_err;
+
+	ret = hns3_rss_conf_copy(rss_info, &rss_flow_conf);
+	if (ret) {
+		hns3_err(hw, "RSS config init failed(%d)", ret);
+		goto rss_config_err;
+	}
+
+rss_config_err:
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+/* Remove the rss filter */
+static int
+hns3_clear_rss_filter(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	if (hw->rss_info.conf.queue_num == 0)
+		return 0;
+
+	return hns3_config_rss_filter(dev, &hw->rss_info, false);
+}
+
+static int
+hns3_flow_parse_rss(struct rte_eth_dev *dev,
+		    const struct hns3_rss_conf *conf, bool add)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	bool ret;
+
+	/* Reject a duplicate of the current RSS configuration */
+	ret = hns3_action_rss_same(&hw->rss_info.conf, &conf->conf);
+	if (ret) {
+		hns3_err(hw, "Duplicate RSS configuration entered: %d", ret);
+		return -EINVAL;
+	}
+
+	return hns3_config_rss_filter(dev, conf, add);
+}
+
 static int
 hns3_flow_args_check(const struct rte_flow_attr *attr,
 		     const struct rte_flow_item pattern[],
@@ -1234,6 +1608,11 @@ hns3_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	if (ret)
 		return ret;
 
+	if (find_rss_action(actions)) {
+		memset(&rss_conf, 0, sizeof(struct hns3_rss_conf));
+		return hns3_parse_rss_filter(dev, actions, error);
+	}
+
 	memset(&fdir_rule, 0, sizeof(struct hns3_fdir_rule));
 	return hns3_parse_fdir_filter(dev, pattern, actions, &fdir_rule, error);
 }
@@ -1253,8 +1632,11 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct hns3_process_private *process_list = dev->process_private;
 	struct hns3_adapter *hns = dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
+	const struct hns3_rss_conf *rss_conf;
 	struct hns3_fdir_rule_ele *fdir_rule_ptr;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_flow_mem *flow_node;
+	const struct rte_flow_action *act;
 	struct rte_flow *flow;
 	struct hns3_fdir_rule fdir_rule;
 	int ret;
@@ -1283,6 +1665,32 @@ hns3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	flow_node->flow = flow;
 	TAILQ_INSERT_TAIL(&process_list->flow_list, flow_node, entries);
 
+	act = find_rss_action(actions);
+	if (act) {
+		rss_conf = act->conf;
+
+		ret = hns3_flow_parse_rss(dev, rss_conf, true);
+		if (ret)
+			goto err;
+
+		rss_filter_ptr = rte_zmalloc("hns3 rss filter",
+					     sizeof(struct hns3_rss_conf_ele),
+					     0);
+		if (rss_filter_ptr == NULL) {
+			hns3_err(hw,
+				    "Failed to allocate hns3_rss_filter memory");
+			ret = -ENOMEM;
+			goto err;
+		}
+
+		TAILQ_INSERT_TAIL(&process_list->filter_rss_list,
+				  rss_filter_ptr, entries);
+
+		flow->rule = rss_filter_ptr;
+		flow->filter_type = RTE_ETH_FILTER_HASH;
+		return flow;
+	}
+
 	memset(&fdir_rule, 0, sizeof(struct hns3_fdir_rule));
 	ret = hns3_parse_fdir_filter(dev, pattern, actions, &fdir_rule, error);
 	if (ret)
@@ -1337,6 +1745,7 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	struct hns3_process_private *process_list = dev->process_private;
 	struct hns3_adapter *hns = dev->data->dev_private;
 	struct hns3_fdir_rule_ele *fdir_rule_ptr;
+	struct hns3_rss_conf_ele *rss_filter_ptr;
 	struct hns3_flow_mem *flow_node;
 	struct hns3_hw *hw = &hns->hw;
 	enum rte_filter_type filter_type;
@@ -1366,6 +1775,19 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		rte_free(fdir_rule_ptr);
 		fdir_rule_ptr = NULL;
 		break;
+	case RTE_ETH_FILTER_HASH:
+		rss_filter_ptr = (struct hns3_rss_conf_ele *)flow->rule;
+		ret = hns3_config_rss_filter(dev, &hw->rss_info, false);
+		if (ret)
+			return rte_flow_error_set(error, EIO,
+						  RTE_FLOW_ERROR_TYPE_HANDLE,
+						  flow,
+						  "Destroy RSS failed. Try again");
+		TAILQ_REMOVE(&process_list->filter_rss_list, rss_filter_ptr,
+			     entries);
+		rte_free(rss_filter_ptr);
+		rss_filter_ptr = NULL;
+		break;
 	default:
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_HANDLE, flow,
@@ -1406,6 +1828,10 @@ hns3_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
 		hns3_counter_flush(dev);
 	}
 
+	ret = hns3_clear_rss_filter(dev);
+	if (ret)
+		return ret;
+
 	hns3_filterlist_flush(dev);
 
 	return 0;
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
new file mode 100644
index 0000000..b0cd161
--- /dev/null
+++ b/drivers/net/hns3/hns3_rss.c
@@ -0,0 +1,1386 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdbool.h>
+#include <rte_ethdev.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_spinlock.h>
+
+#include "hns3_cmd.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+
+/*
+ * The default hash key used for RSS initialization (the common 40-byte
+ * Toeplitz key used by many NIC drivers).
+ */
+static const uint8_t hns3_hash_key[] = {
+	0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
+	0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
+	0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
+	0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
+	0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA
+};
+
+/*
+ * rss_generic_config command function, opcode:0x0D01.
+ * Used to set algorithm, key_offset and hash key of rss.
+ */
+int
+hns3_set_rss_algo_key(struct hns3_hw *hw, uint8_t hash_algo, const uint8_t *key)
+{
+#define HNS3_KEY_OFFSET_MAX	3
+#define HNS3_SET_HASH_KEY_BYTE_FOUR	2
+
+	struct hns3_rss_generic_config_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint32_t key_offset, key_size;
+	const uint8_t *key_cur;
+	uint8_t cur_offset;
+	int ret;
+
+	req = (struct hns3_rss_generic_config_cmd *)desc.data;
+
+	/*
+	 * key_offset=0, hash key byte0~15 is set to hardware.
+	 * key_offset=1, hash key byte16~31 is set to hardware.
+	 * key_offset=2, hash key byte32~39 is set to hardware.
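+	 * I.e. the 40-byte key is written in three chunks of 16, 16 and
+	 * 8 bytes respectively.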
+	 */
+	for (key_offset = 0; key_offset < HNS3_KEY_OFFSET_MAX; key_offset++) {
+		hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_GENERIC_CONFIG,
+					  false);
+
+		req->hash_config |= (hash_algo & HNS3_RSS_HASH_ALGO_MASK);
+		req->hash_config |= (key_offset << HNS3_RSS_HASH_KEY_OFFSET_B);
+
+		if (key_offset == HNS3_SET_HASH_KEY_BYTE_FOUR)
+			key_size = HNS3_RSS_KEY_SIZE - HNS3_RSS_HASH_KEY_NUM *
+			HNS3_SET_HASH_KEY_BYTE_FOUR;
+		else
+			key_size = HNS3_RSS_HASH_KEY_NUM;
+
+		cur_offset = key_offset * HNS3_RSS_HASH_KEY_NUM;
+		key_cur = key + cur_offset;
+		memcpy(req->hash_key, key_cur, key_size);
+
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			hns3_err(hw, "Configure RSS algo key failed %d", ret);
+			return ret;
+		}
+	}
+	/* Update the shadow RSS key with the user-specified key */
+	memcpy(hw->rss_info.key, key, HNS3_RSS_KEY_SIZE);
+	return 0;
+}
+
+/*
+ * Used to configure the tuple selection for RSS hash input.
+ */
+static int
+hns3_set_rss_input_tuple(struct hns3_hw *hw)
+{
+	struct hns3_rss_conf *rss_config = &hw->rss_info;
+	struct hns3_rss_input_tuple_cmd *req;
+	struct hns3_cmd_desc desc_tuple;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc_tuple, HNS3_OPC_RSS_INPUT_TUPLE, false);
+
+	req = (struct hns3_rss_input_tuple_cmd *)desc_tuple.data;
+
+	req->ipv4_tcp_en = rss_config->rss_tuple_sets.ipv4_tcp_en;
+	req->ipv4_udp_en = rss_config->rss_tuple_sets.ipv4_udp_en;
+	req->ipv4_sctp_en = rss_config->rss_tuple_sets.ipv4_sctp_en;
+	req->ipv4_fragment_en = rss_config->rss_tuple_sets.ipv4_fragment_en;
+	req->ipv6_tcp_en = rss_config->rss_tuple_sets.ipv6_tcp_en;
+	req->ipv6_udp_en = rss_config->rss_tuple_sets.ipv6_udp_en;
+	req->ipv6_sctp_en = rss_config->rss_tuple_sets.ipv6_sctp_en;
+	req->ipv6_fragment_en = rss_config->rss_tuple_sets.ipv6_fragment_en;
+
+	ret = hns3_cmd_send(hw, &desc_tuple, 1);
+	if (ret)
+		hns3_err(hw, "Configure RSS input tuple mode failed %d", ret);
+
+	return ret;
+}
+
+/*
+ * rss_indirection_table command function, opcode:0x0D07.
+ * Used to configure the indirection table of rss.
+ */
+int
+hns3_set_rss_indir_table(struct hns3_hw *hw, uint8_t *indir, uint16_t size)
+{
+	struct hns3_rss_indirection_table_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret, i, j, num;
+
+	req = (struct hns3_rss_indirection_table_cmd *)desc.data;
+
+	for (i = 0; i < size / HNS3_RSS_CFG_TBL_SIZE; i++) {
+		hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_INDIR_TABLE,
+					  false);
+		req->start_table_index =
+				rte_cpu_to_le_16(i * HNS3_RSS_CFG_TBL_SIZE);
+		req->rss_set_bitmap = rte_cpu_to_le_16(HNS3_RSS_SET_BITMAP_MSK);
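+		/* Each entry is reduced modulo alloc_rss_size before writing. */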
+		for (j = 0; j < HNS3_RSS_CFG_TBL_SIZE; j++) {
+			num = i * HNS3_RSS_CFG_TBL_SIZE + j;
+			req->rss_result[j] = indir[num] % hw->alloc_rss_size;
+		}
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			hns3_err(hw,
+				 "Set RSS indirection table failed %d size %u",
+				 ret, size);
+			return ret;
+		}
+	}
+
+	/* Update redirection table of hw */
+	memcpy(hw->rss_info.rss_indirection_tbl, indir, HNS3_RSS_IND_TBL_SIZE);
+
+	return 0;
+}
+
+int
+hns3_rss_reset_indir_table(struct hns3_hw *hw)
+{
+	uint8_t *lut;
+	int ret;
+
+	lut = rte_zmalloc("hns3_rss_lut", HNS3_RSS_IND_TBL_SIZE, 0);
+	if (lut == NULL) {
+		hns3_err(hw, "Failed to allocate hns3_rss_lut memory");
+		return -ENOMEM;
+	}
+
+	ret = hns3_set_rss_indir_table(hw, lut, HNS3_RSS_IND_TBL_SIZE);
+	if (ret)
+		hns3_err(hw, "RSS uninit indir table failed: %d", ret);
+	rte_free(lut);
+
+	return ret;
+}
+
+static void
+hns3_set_dst_port(uint8_t *tuple, enum rte_filter_input_set_op op)
+{
+	uint8_t set = *tuple;
+	hns3_set_bit(set, HNS3_D_PORT_BIT_SHIFT, 1);
+	if (op == RTE_ETH_INPUT_SET_SELECT) {
+		hns3_set_bit(set, HNS3_S_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_D_IP_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_IP_BIT_SHIFT, 0);
+	}
+	*tuple = set;
+}
+
+static void
+hns3_set_src_port(uint8_t *tuple, enum rte_filter_input_set_op op)
+{
+	uint8_t set = *tuple;
+	hns3_set_bit(set, HNS3_S_PORT_BIT_SHIFT, 1);
+	if (op == RTE_ETH_INPUT_SET_SELECT) {
+		hns3_set_bit(set, HNS3_D_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_D_IP_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_IP_BIT_SHIFT, 0);
+	}
+	*tuple = set;
+}
+
+static void
+hns3_set_dst_ip(uint8_t *tuple, enum rte_filter_input_set_op op)
+{
+	uint8_t set = *tuple;
+	hns3_set_bit(set, HNS3_D_IP_BIT_SHIFT, 1);
+	if (op == RTE_ETH_INPUT_SET_SELECT) {
+		hns3_set_bit(set, HNS3_D_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_IP_BIT_SHIFT, 0);
+	}
+	*tuple = set;
+}
+
+static void
+hns3_set_src_ip(uint8_t *tuple, enum rte_filter_input_set_op op)
+{
+	uint8_t set = *tuple;
+	hns3_set_bit(set, HNS3_S_IP_BIT_SHIFT, 1);
+	if (op == RTE_ETH_INPUT_SET_SELECT) {
+		hns3_set_bit(set, HNS3_D_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_D_IP_BIT_SHIFT, 0);
+	}
+	*tuple = set;
+}
+
+static void
+hns3_set_v_tag(uint8_t *tuple, enum rte_filter_input_set_op op)
+{
+	uint8_t set = *tuple;
+	/* Set V_TAG_BIT */
+	hns3_set_bit(set, HNS3_V_TAG_BIT_SHIFT, 1);
+	if (op == RTE_ETH_INPUT_SET_SELECT) {
+		hns3_set_bit(set, HNS3_D_IP_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_IP_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_D_PORT_BIT_SHIFT, 0);
+		hns3_set_bit(set, HNS3_S_PORT_BIT_SHIFT, 0);
+	}
+	*tuple = set;
+}
+
+static void
+hns3_rss_ipv4_tcp_tuple_set(struct hns3_hw *hw,
+				  struct rte_eth_input_set_conf *conf,
+				  uint8_t *new_tuple)
+{
+	uint8_t ipv4_tcp_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L4_TCP_DST_PORT:
+		hns3_set_dst_port(&ipv4_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT:
+		hns3_set_src_port(&ipv4_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_DST_IP4:
+		hns3_set_dst_ip(&ipv4_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP4:
+		hns3_set_src_ip(&ipv4_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv4-tcp tuple");
+		ipv4_tcp_tuple = 0;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv4-tcp four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv4_tcp_tuple;
+}
+
+static void
+hns3_rss_ipv4_udp_tuple_set(struct hns3_hw *hw,
+				   struct rte_eth_input_set_conf *conf,
+				   uint8_t *new_tuple)
+{
+	uint8_t ipv4_udp_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L4_UDP_DST_PORT:
+		hns3_set_dst_port(&ipv4_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT:
+		hns3_set_src_port(&ipv4_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_DST_IP4:
+		hns3_set_dst_ip(&ipv4_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP4:
+		hns3_set_src_ip(&ipv4_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv4-udp tuple");
+		ipv4_udp_tuple = 0;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv4-udp four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv4_udp_tuple;
+}
+
+static void
+hns3_rss_ipv4_sctp_tuple_set(struct hns3_hw *hw,
+			     struct rte_eth_input_set_conf *conf,
+			     uint8_t *new_tuple)
+{
+	uint8_t ipv4_sctp_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L4_SCTP_VERIFICATION_TAG:
+		hns3_set_v_tag(&ipv4_sctp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L4_SCTP_DST_PORT:
+		hns3_set_bit(ipv4_sctp_tuple, HNS3_D_PORT_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete S_PORT");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_S_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_V_TAG_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete VERI_TAG");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L4_SCTP_SRC_PORT:
+		hns3_set_bit(ipv4_sctp_tuple, HNS3_S_PORT_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_V_TAG_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete VERI_TAG");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_D_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete D_PORT");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_S_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_DST_IP4:
+		hns3_set_bit(ipv4_sctp_tuple, HNS3_D_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_D_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete D_PORT");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete S_PORT");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_S_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_V_TAG_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete VERI_TAG");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP4:
+		hns3_set_bit(ipv4_sctp_tuple, HNS3_S_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_D_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete D_PORT");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete S_PORT");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+			hns3_set_bit(ipv4_sctp_tuple, HNS3_V_TAG_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete VERI_TAG");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv4-sctp tuple");
+		ipv4_sctp_tuple = 0;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv4-sctp four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv4_sctp_tuple;
+}
+
+static void
+hns3_rss_ipv4_frag_tuple_set(struct hns3_hw *hw,
+			     struct rte_eth_input_set_conf *conf,
+			     uint8_t *new_tuple)
+{
+	uint8_t ipv4_frag_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L3_DST_IP4:
+		/* Set dst-ip(bit2) */
+		hns3_set_bit(ipv4_frag_tuple, HNS3_D_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear src-ip(bit3) */
+			hns3_set_bit(ipv4_frag_tuple, HNS3_S_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP4:
+		/* Set src-ip(bit3) */
+		hns3_set_bit(ipv4_frag_tuple, HNS3_S_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear dst-ip(bit2) */
+			hns3_set_bit(ipv4_frag_tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv4-frag tuple");
+		/* Clear dst_ip(bit2) and src_ip(bit3) */
+		ipv4_frag_tuple &= ~HNS3_IP_FRAG_BIT_MASK;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv4-frag four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv4_frag_tuple;
+}
+
+static void
+hns3_rss_ipv4_other_tuple_set(struct hns3_hw *hw,
+			      struct rte_eth_input_set_conf *conf,
+			      uint8_t *new_tuple)
+{
+	uint8_t tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L3_DST_IP4:
+		/* Set dst_ip(bit0) */
+		hns3_set_bit(tuple, HNS3_D_PORT_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear src_ip(bit1) */
+			hns3_set_bit(tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP4:
+		/* Set src_ip(bit1) */
+		hns3_set_bit(tuple, HNS3_S_PORT_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear dst_ip(bit0) */
+			hns3_set_bit(tuple, HNS3_D_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv4-other tuple");
+		/* Clear dst_ip(bit0) and src_ip(bit1) */
+		tuple &= ~HNS3_IP_OTHER_BIT_MASK;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv4-other four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = tuple;
+}
+
+static void
+hns3_rss_ipv6_tcp_tuple_set(struct hns3_hw *hw,
+			    struct rte_eth_input_set_conf *conf,
+			    uint8_t *new_tuple)
+{
+	uint8_t ipv6_tcp_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L4_TCP_DST_PORT:
+		hns3_set_dst_port(&ipv6_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L4_TCP_SRC_PORT:
+		hns3_set_src_port(&ipv6_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_DST_IP6:
+		hns3_set_dst_ip(&ipv6_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP6:
+		hns3_set_src_ip(&ipv6_tcp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv6-tcp tuple");
+		ipv6_tcp_tuple = 0;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv6-tcp four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv6_tcp_tuple;
+}
+
+static void
+hns3_rss_ipv6_udp_tuple_set(struct hns3_hw *hw,
+			    struct rte_eth_input_set_conf *conf,
+			    uint8_t *new_tuple)
+{
+	uint8_t ipv6_udp_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L4_UDP_DST_PORT:
+		hns3_set_dst_port(&ipv6_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L4_UDP_SRC_PORT:
+		hns3_set_src_port(&ipv6_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_DST_IP6:
+		hns3_set_dst_ip(&ipv6_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP6:
+		hns3_set_src_ip(&ipv6_udp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv6-udp tuple");
+		ipv6_udp_tuple = 0;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv6-udp four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv6_udp_tuple;
+}
+
+static void
+hns3_rss_ipv6_sctp_tuple_set(struct hns3_hw *hw,
+			     struct rte_eth_input_set_conf *conf,
+			     uint8_t *new_tuple)
+{
+	uint8_t ipv6_sctp_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L4_SCTP_VERIFICATION_TAG:
+		hns3_set_v_tag(&ipv6_sctp_tuple, conf->op);
+		break;
+	case RTE_ETH_INPUT_SET_L3_DST_IP6:
+		hns3_set_bit(ipv6_sctp_tuple, HNS3_D_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_D_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete D_PORT");
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete S_PORT");
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_S_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_V_TAG_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete VERI_TAG");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP6:
+		hns3_set_bit(ipv6_sctp_tuple, HNS3_S_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_D_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete D_PORT");
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete S_PORT");
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+			hns3_set_bit(ipv6_sctp_tuple, HNS3_V_TAG_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete VERI_TAG");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select |add none */
+		hns3_info(hw, "Disable ipv6-sctp tuple");
+		ipv6_sctp_tuple = 0;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv6-sctp four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv6_sctp_tuple;
+}
+
+static void
+hns3_rss_ipv6_frag_tuple_set(struct hns3_hw *hw,
+			     struct rte_eth_input_set_conf *conf,
+			     uint8_t *new_tuple)
+{
+	uint8_t ipv6_frag_tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L3_DST_IP6:
+		/* Set dst-ip(bit2) */
+		hns3_set_bit(ipv6_frag_tuple, HNS3_D_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear src-ip(bit3) */
+			hns3_set_bit(ipv6_frag_tuple, HNS3_S_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP6:
+		/* Set src-ip(bit3) */
+		hns3_set_bit(ipv6_frag_tuple, HNS3_S_IP_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear dst-ip(bit2) */
+			hns3_set_bit(ipv6_frag_tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select|add none */
+		hns3_info(hw, "Disable ipv6-frag tuple");
+		/* Clear dst-ip(bit2) and src-ip(bit3) */
+		ipv6_frag_tuple &= ~HNS3_IP_FRAG_BIT_MASK;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv6-frag four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = ipv6_frag_tuple;
+}
+
+static void
+hns3_rss_ipv6_other_tuple_set(struct hns3_hw *hw,
+			      struct rte_eth_input_set_conf *conf,
+			      uint8_t *new_tuple)
+{
+	uint8_t tuple = *new_tuple;
+	switch (conf->field[0]) {
+	case RTE_ETH_INPUT_SET_L3_DST_IP6:
+		/* Set dst_ip(bit0) */
+		hns3_set_bit(tuple, HNS3_D_PORT_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear src_ip(bit1) */
+			hns3_set_bit(tuple, HNS3_S_PORT_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete SRC_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_L3_SRC_IP6:
+		/* Set src_ip(bit1) */
+		hns3_set_bit(tuple, HNS3_S_PORT_BIT_SHIFT, 1);
+		if (conf->op == RTE_ETH_INPUT_SET_SELECT) {
+			/* Clear dst_ip(bit0) */
+			hns3_set_bit(tuple, HNS3_D_IP_BIT_SHIFT, 0);
+			hns3_info(hw, "Delete DST_IP");
+		}
+		break;
+	case RTE_ETH_INPUT_SET_NONE: /* select|add none */
+		hns3_info(hw, "Disable ipv6-other tuple");
+		/* Clear dst-ip(bit0) and src-ip(bit1) */
+		tuple &= ~HNS3_IP_OTHER_BIT_MASK;
+		break;
+	default:
+		hns3_err(hw, "Invalid ipv6-other four tuples set: %u",
+			 conf->field[0]);
+		return;
+	}
+	*new_tuple = tuple;
+}
+
+static void
+hns3_update_rss_hash_by_tuple_cfg(struct hns3_hw *hw)
+{
+	struct hns3_rss_tuple_cfg *tuple_cfg = &hw->rss_info.rss_tuple_sets;
+	uint32_t flow_types = 0;
+
+	if (tuple_cfg->ipv4_tcp_en)
+		flow_types |= ETH_RSS_NONFRAG_IPV4_TCP;
+	if (tuple_cfg->ipv4_udp_en)
+		flow_types |= ETH_RSS_NONFRAG_IPV4_UDP;
+	if (tuple_cfg->ipv4_sctp_en)
+		flow_types |= ETH_RSS_NONFRAG_IPV4_SCTP;
+	if (tuple_cfg->ipv4_fragment_en & HNS3_IP_FRAG_BIT_MASK)
+		flow_types |= ETH_RSS_FRAG_IPV4;
+	if (tuple_cfg->ipv4_fragment_en & HNS3_IP_OTHER_BIT_MASK)
+		flow_types |= ETH_RSS_NONFRAG_IPV4_OTHER;
+	if (tuple_cfg->ipv6_tcp_en)
+		flow_types |= ETH_RSS_NONFRAG_IPV6_TCP;
+	if (tuple_cfg->ipv6_udp_en)
+		flow_types |= ETH_RSS_NONFRAG_IPV6_UDP;
+	if (tuple_cfg->ipv6_sctp_en)
+		flow_types |= ETH_RSS_NONFRAG_IPV6_SCTP;
+	if (tuple_cfg->ipv6_fragment_en & HNS3_IP_FRAG_BIT_MASK)
+		flow_types |= ETH_RSS_FRAG_IPV6;
+	if (tuple_cfg->ipv6_fragment_en & HNS3_IP_OTHER_BIT_MASK)
+		flow_types |= ETH_RSS_NONFRAG_IPV6_OTHER;
+
+	hw->rss_info.conf.types = flow_types;
+}
+
+static int
+hns3_set_rss_input_tuple_parse(struct hns3_hw *hw,
+			       struct rte_eth_input_set_conf *conf,
+			       uint64_t rss_hf)
+{
+	struct hns3_rss_tuple_cfg *tuple_cfg = &hw->rss_info.rss_tuple_sets;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	struct hns3_rss_input_tuple_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint64_t flow_types;
+	uint32_t i;
+	int ret;
+
+	/* Filter the unsupported flow types */
+	flow_types = rss_hf & HNS3_ETH_RSS_SUPPORT;
+	if (flow_types == 0) {
+		hns3_err(hw, "RSS tuple command(%lu) is unsupported,"
+			 " the supported mask is %llu",
+			 rss_hf, HNS3_ETH_RSS_SUPPORT);
+		return -ENOTSUP;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_INPUT_TUPLE, false);
+
+	req = (struct hns3_rss_input_tuple_cmd *)desc.data;
+
+	req->ipv4_tcp_en = rss_cfg->rss_tuple_sets.ipv4_tcp_en;
+	req->ipv4_udp_en = rss_cfg->rss_tuple_sets.ipv4_udp_en;
+	req->ipv4_sctp_en = rss_cfg->rss_tuple_sets.ipv4_sctp_en;
+	req->ipv4_fragment_en = rss_cfg->rss_tuple_sets.ipv4_fragment_en;
+	req->ipv6_tcp_en = rss_cfg->rss_tuple_sets.ipv6_tcp_en;
+	req->ipv6_udp_en = rss_cfg->rss_tuple_sets.ipv6_udp_en;
+	req->ipv6_sctp_en = rss_cfg->rss_tuple_sets.ipv6_sctp_en;
+	req->ipv6_fragment_en = rss_cfg->rss_tuple_sets.ipv6_fragment_en;
+
+	/* Enable ipv4 or ipv6 tuple by flow type */
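+	/*
+	 * Each case label is a single-bit flow flag; a case is hit only
+	 * when bit i is that flag.
+	 */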
+	for (i = 0; i < RTE_ETH_FLOW_MAX; i++) {
+		switch (flow_types & (1ULL << i)) {
+		case ETH_RSS_NONFRAG_IPV4_TCP:
+			hns3_rss_ipv4_tcp_tuple_set(hw, conf,
+						    &req->ipv4_tcp_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV4_UDP:
+			hns3_rss_ipv4_udp_tuple_set(hw, conf,
+						    &req->ipv4_udp_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV4_SCTP:
+			hns3_rss_ipv4_sctp_tuple_set(hw, conf,
+						     &req->ipv4_sctp_en);
+			break;
+		case ETH_RSS_FRAG_IPV4:
+			hns3_rss_ipv4_frag_tuple_set(hw, conf,
+						     &req->ipv4_fragment_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV4_OTHER:
+			hns3_rss_ipv4_other_tuple_set(hw, conf,
+						      &req->ipv4_fragment_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV6_TCP:
+			hns3_rss_ipv6_tcp_tuple_set(hw, conf,
+						    &req->ipv6_tcp_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV6_UDP:
+			hns3_rss_ipv6_udp_tuple_set(hw, conf,
+						    &req->ipv6_udp_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV6_SCTP:
+			hns3_rss_ipv6_sctp_tuple_set(hw, conf,
+						     &req->ipv6_sctp_en);
+			break;
+		case ETH_RSS_FRAG_IPV6:
+			hns3_rss_ipv6_frag_tuple_set(hw, conf,
+						     &req->ipv6_fragment_en);
+			break;
+		case ETH_RSS_NONFRAG_IPV6_OTHER:
+			hns3_rss_ipv6_other_tuple_set(hw, conf,
+						      &req->ipv6_fragment_en);
+			break;
+		default:
+			/*
+			 * Other unsupported flow types won't change tuples set
+			 * of RSS.
+			 */
+			break;
+		}
+	}
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "RSS update tuple failed: %d", ret);
+		return ret;
+	}
+
+	/* Update the tuple of hw */
+	tuple_cfg->ipv4_tcp_en = req->ipv4_tcp_en;
+	tuple_cfg->ipv4_udp_en = req->ipv4_udp_en;
+	tuple_cfg->ipv4_sctp_en = req->ipv4_sctp_en;
+	tuple_cfg->ipv4_fragment_en = req->ipv4_fragment_en;
+	tuple_cfg->ipv6_tcp_en = req->ipv6_tcp_en;
+	tuple_cfg->ipv6_udp_en = req->ipv6_udp_en;
+	tuple_cfg->ipv6_sctp_en = req->ipv6_sctp_en;
+	tuple_cfg->ipv6_fragment_en = req->ipv6_fragment_en;
+
+	hns3_update_rss_hash_by_tuple_cfg(hw);
+
+	return 0;
+}
+
+int
+hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
+			     struct hns3_rss_tuple_cfg *tuple, uint64_t rss_hf)
+{
+	struct hns3_rss_input_tuple_cmd *req;
+	struct hns3_cmd_desc desc;
+	uint32_t i;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_INPUT_TUPLE, false);
+
+	req = (struct hns3_rss_input_tuple_cmd *)desc.data;
+
+	/* Enable ipv4 or ipv6 tuple by flow type */
+	for (i = 0; i < RTE_ETH_FLOW_MAX; i++) {
+		switch (rss_hf & (1ULL << i)) {
+		case ETH_RSS_NONFRAG_IPV4_TCP:
+			req->ipv4_tcp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
+			break;
+		case ETH_RSS_NONFRAG_IPV4_UDP:
+			req->ipv4_udp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
+			break;
+		case ETH_RSS_NONFRAG_IPV4_SCTP:
+			req->ipv4_sctp_en = HNS3_RSS_INPUT_TUPLE_SCTP;
+			break;
+		case ETH_RSS_FRAG_IPV4:
+			req->ipv4_fragment_en |= HNS3_IP_FRAG_BIT_MASK;
+			break;
+		case ETH_RSS_NONFRAG_IPV4_OTHER:
+			req->ipv4_fragment_en |= HNS3_IP_OTHER_BIT_MASK;
+			break;
+		case ETH_RSS_NONFRAG_IPV6_TCP:
+			req->ipv6_tcp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
+			break;
+		case ETH_RSS_NONFRAG_IPV6_UDP:
+			req->ipv6_udp_en = HNS3_RSS_INPUT_TUPLE_OTHER;
+			break;
+		case ETH_RSS_NONFRAG_IPV6_SCTP:
+			req->ipv6_sctp_en = HNS3_RSS_INPUT_TUPLE_SCTP;
+			break;
+		case ETH_RSS_FRAG_IPV6:
+			req->ipv6_fragment_en |= HNS3_IP_FRAG_BIT_MASK;
+			break;
+		case ETH_RSS_NONFRAG_IPV6_OTHER:
+			req->ipv6_fragment_en |= HNS3_IP_OTHER_BIT_MASK;
+			break;
+		default:
+			/* Other unsupported flow types won't change tuples */
+			break;
+		}
+	}
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Update RSS flow types tuples failed %d", ret);
+		return ret;
+	}
+
+	tuple->ipv4_tcp_en = req->ipv4_tcp_en;
+	tuple->ipv4_udp_en = req->ipv4_udp_en;
+	tuple->ipv4_sctp_en = req->ipv4_sctp_en;
+	tuple->ipv4_fragment_en = req->ipv4_fragment_en;
+	tuple->ipv6_tcp_en = req->ipv6_tcp_en;
+	tuple->ipv6_udp_en = req->ipv6_udp_en;
+	tuple->ipv6_sctp_en = req->ipv6_sctp_en;
+	tuple->ipv6_fragment_en = req->ipv6_fragment_en;
+
+	return 0;
+}
+
+/*
+ * Configure the RSS hash flow types and hash key.
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param rss_conf
+ *   The RSS configuration: hash key, key length and flow types (rss_hf).
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+int
+hns3_dev_rss_hash_update(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_conf *rss_conf)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_tuple_cfg *tuple = &hw->rss_info.rss_tuple_sets;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	uint8_t algo = rss_cfg->conf.func;
+	uint8_t key_len = rss_conf->rss_key_len;
+	uint64_t rss_hf = rss_conf->rss_hf;
+	uint8_t *key = rss_conf->rss_key;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_set_rss_tuple_by_rss_hf(hw, tuple, rss_hf);
+	if (ret)
+		goto conf_err;
+
+	if (rss_cfg->conf.types && rss_hf == 0) {
+		/* Disable RSS: reset the hardware indirection table */
+		ret = hns3_rss_reset_indir_table(hw);
+		if (ret)
+			goto conf_err;
+	} else if (rss_hf && rss_cfg->conf.types == 0) {
+		/* Enable RSS: restore the indirection table from the shadow copy */
+		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
+					       HNS3_RSS_IND_TBL_SIZE);
+		if (ret)
+			goto conf_err;
+	}
+
+	/* Update the supported flow types after tuples are set successfully */
+	rss_cfg->conf.types = rss_hf;
+
+	if (key) {
+		if (key_len != HNS3_RSS_KEY_SIZE) {
+			hns3_err(hw, "The hash key len(%u) is invalid",
+				 key_len);
+			ret = -EINVAL;
+			goto conf_err;
+		}
+		ret = hns3_set_rss_algo_key(hw, algo, key);
+		if (ret)
+			goto conf_err;
+	}
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+
+conf_err:
+	rte_spinlock_unlock(&hw->lock);
+	return ret;
+}
+
+/*
+ * Get the RSS hash key and the enabled flow types (rss_hf).
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param rss_conf
+ *   The buffer that receives the hash key and the enabled flow types.
+ * @return
+ *   0 on success.
+ */
+int
+hns3_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			   struct rte_eth_rss_conf *rss_conf)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+
+	rte_spinlock_lock(&hw->lock);
+	rss_conf->rss_hf = rss_cfg->conf.types;
+
+	/* Copy out the hash key if the user provided a buffer */
+	if (rss_conf->rss_key)
+		memcpy(rss_conf->rss_key, rss_cfg->key, HNS3_RSS_KEY_SIZE);
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+/*
+ * Update the RSS redirection (indirection) table.
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param reta_conf
+ *   Pointer to the mask and redirection table entries to apply.
+ * @param reta_size
+ *   Redirection table size.
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+int
+hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_reta_entry64 *reta_conf,
+			 uint16_t reta_size)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	uint16_t i, indir_size = HNS3_RSS_IND_TBL_SIZE; /* Table size is 512 */
+	uint8_t indirection_tbl[HNS3_RSS_IND_TBL_SIZE];
+	uint16_t idx, shift, allow_rss_queues;
+	int ret;
+
+	if (reta_size != indir_size || reta_size > ETH_RSS_RETA_SIZE_512) {
+		hns3_err(hw, "The size of the configured hash lookup table "
+			 "(%u) doesn't match what the hardware supports (%u)",
+			 reta_size, indir_size);
+		return -EINVAL;
+	}
+	rte_spinlock_lock(&hw->lock);
+	memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
+		HNS3_RSS_IND_TBL_SIZE);
+	allow_rss_queues = RTE_MIN(dev->data->nb_rx_queues, hw->rss_size_max);
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].reta[shift] >= allow_rss_queues) {
+			rte_spinlock_unlock(&hw->lock);
+			hns3_err(hw, "Invalid queue id(%u) to be set in "
+				 "redirection table, max number of rss "
+				 "queues: %u", reta_conf[idx].reta[shift],
+				 allow_rss_queues);
+			return -EINVAL;
+		}
+
+		if (reta_conf[idx].mask & (1ULL << shift))
+			indirection_tbl[i] = reta_conf[idx].reta[shift];
+	}
+
+	ret = hns3_set_rss_indir_table(hw, indirection_tbl,
+				       HNS3_RSS_IND_TBL_SIZE);
+
+	rte_spinlock_unlock(&hw->lock);
+	return ret;
+}
+
+/*
+ * Get the RSS redirection (indirection) table.
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param reta_conf
+ *   Pointer to the mask and the buffer receiving the redirection entries.
+ * @param reta_size
+ *   Redirection table size.
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+int
+hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	uint16_t i, indir_size = HNS3_RSS_IND_TBL_SIZE; /* Table size is 512 */
+	uint16_t idx, shift;
+
+	if (reta_size != indir_size || reta_size > ETH_RSS_RETA_SIZE_512) {
+		hns3_err(hw, "The size of the configured hash lookup table "
+			 "(%u) doesn't match what the hardware supports (%u)",
+			 reta_size, indir_size);
+		return -EINVAL;
+	}
+	rte_spinlock_lock(&hw->lock);
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] =
+			  rss_cfg->rss_indirection_tbl[i] % hw->alloc_rss_size;
+	}
+	rte_spinlock_unlock(&hw->lock);
+	return 0;
+}
+
+static void
+hns3_hash_filter_global_config_get(struct hns3_hw *hw,
+				   struct rte_eth_hash_filter_info *info)
+{
+	/* Get algorithm configuration */
+	info->info.global_conf.hash_func = hw->rss_info.conf.func;
+}
+
+static int
+hns3_hash_filter_get(struct hns3_hw *hw,
+		     struct rte_eth_hash_filter_info *info)
+{
+	int ret = 0;
+
+	if (info == NULL) {
+		hns3_err(hw, "Invalid filter info pointer");
+		return -EINVAL;
+	}
+
+	switch (info->info_type) {
+	case RTE_ETH_HASH_FILTER_GLOBAL_CONFIG:
+		hns3_hash_filter_global_config_get(hw, info);
+		break;
+	default:
+		hns3_err(hw, "Hash filter info type (%d) not supported",
+			 info->info_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+hns3_hash_filter_global_config_set(struct hns3_hw *hw,
+				   struct rte_eth_hash_filter_info *info)
+{
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	uint8_t *hash_key = rss_cfg->key;
+	uint8_t hash_algo;
+	int ret;
+
+	/* Set hash algorithm */
+	switch (info->info.global_conf.hash_func) {
+	case RTE_ETH_HASH_FUNCTION_DEFAULT:
+		/* Keep the current algorithm unchanged */
+		return 0;
+	case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+		hash_algo = HNS3_RSS_HASH_ALGO_TOEPLITZ;
+		break;
+	case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+		hash_algo = HNS3_RSS_HASH_ALGO_SIMPLE;
+		break;
+	default:
+		hns3_err(hw, "Invalid RSS hash_algo configuration %u",
+			 info->info.global_conf.hash_func);
+		return -EINVAL;
+	}
+	ret = hns3_set_rss_algo_key(hw, hash_algo, hash_key);
+	if (ret)
+		return ret;
+	/* Update hash algorithm after config success */
+	rss_cfg->conf.func = info->info.global_conf.hash_func;
+
+	return 0;
+}
+
+static int
+hns3_hash_filter_inset_select_set(struct hns3_hw *hw,
+				  struct rte_eth_hash_filter_info *info)
+{
+	struct rte_eth_input_set_conf *conf = &info->info.input_set_conf;
+	uint64_t rss_hf = 1ULL << conf->flow_type;
+
+	if (conf->op != RTE_ETH_INPUT_SET_SELECT &&
+	    conf->op != RTE_ETH_INPUT_SET_ADD) {
+		hns3_err(hw, "Unsupported input set operation(%u)", conf->op);
+		return -ENOTSUP;
+	}
+
+	if (conf->flow_type >= RTE_ETH_FLOW_MAX) {
+		hns3_err(hw, "Unsupported flow_type(%u)", conf->flow_type);
+		return -ENOTSUP;
+	}
+
+	/* Set the bit of input tuple */
+	return hns3_set_rss_input_tuple_parse(hw, conf, rss_hf);
+}
+
+static int
+hns3_hash_filter_set(struct hns3_hw *hw,
+		     struct rte_eth_hash_filter_info *info)
+{
+	int ret = 0;
+
+	if (info == NULL) {
+		hns3_err(hw, "Invalid filter info pointer");
+		return -EINVAL;
+	}
+
+	switch (info->info_type) {
+	case RTE_ETH_HASH_FILTER_GLOBAL_CONFIG:
+		rte_spinlock_lock(&hw->lock);
+		ret = hns3_hash_filter_global_config_set(hw, info);
+		rte_spinlock_unlock(&hw->lock);
+		break;
+	case RTE_ETH_HASH_FILTER_INPUT_SET_SELECT:
+		rte_spinlock_lock(&hw->lock);
+		ret = hns3_hash_filter_inset_select_set(hw, info);
+		rte_spinlock_unlock(&hw->lock);
+		break;
+	default:
+		hns3_err(hw, "Hash filter info type (%d) not supported",
+			 info->info_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/* Operations for hash function */
+static int
+hns3_hash_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_op filter_op,
+		      void *arg)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_hash_filter_info *input_arg = arg;
+	int ret = 0;
+
+	switch (filter_op) {
+	case RTE_ETH_FILTER_NOP:
+		break;
+	case RTE_ETH_FILTER_GET:
+		/* Get the hash algorithm */
+		ret = hns3_hash_filter_get(hw, input_arg);
+		break;
+	case RTE_ETH_FILTER_SET:
+		/* Set the hash algorithm or the input set */
+		ret = hns3_hash_filter_set(hw, input_arg);
+		break;
+	default:
+		hns3_err(hw, "Filter operation (%d) not supported", filter_op);
+		ret = -ENOTSUP;
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * Filter control entry point, currently used for RSS hash configuration
+ * and for exposing the rte_flow ops.
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param filter_type
+ *   Feature filter type. Select RTE_ETH_FILTER_HASH to operate on the
+ *   hash function, or RTE_ETH_FILTER_GENERIC to get the rte_flow ops.
+ * @param filter_op
+ *   Generic operation on the filter. Select RTE_ETH_FILTER_GET/SET to
+ *   get/set the hash algorithm.
+ * @param arg
+ *   Pointer to struct rte_eth_hash_filter_info.
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+int
+hns3_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_type filter_type,
+		     enum rte_filter_op filter_op, void *arg)
+{
+	struct hns3_hw *hw;
+	int ret = 0;
+
+	if (dev == NULL)
+		return -EINVAL;
+	hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	switch (filter_type) {
+	case RTE_ETH_FILTER_HASH:
+		ret = hns3_hash_filter_ctrl(dev, filter_op, arg);
+		break;
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		if (hw->adapter_state >= HNS3_NIC_CLOSED)
+			return -ENODEV;
+		*(const void **)arg = &hns3_flow_ops;
+		break;
+	default:
+		hns3_err(hw, "Filter type (%d) not supported", filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * Used to configure the tc_size and tc_offset.
+ */
+static int
+hns3_set_rss_tc_mode(struct hns3_hw *hw)
+{
+	uint16_t rss_size = hw->alloc_rss_size;
+	struct hns3_rss_tc_mode_cmd *req;
+	uint16_t tc_offset[HNS3_MAX_TC_NUM];
+	uint8_t tc_valid[HNS3_MAX_TC_NUM];
+	uint16_t tc_size[HNS3_MAX_TC_NUM];
+	struct hns3_cmd_desc desc;
+	uint16_t roundup_size;
+	uint16_t i;
+	int ret;
+
+	req = (struct hns3_rss_tc_mode_cmd *)desc.data;
+
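+	/*
+	 * The hardware takes tc_size as the log2 of the per-TC queue count,
+	 * e.g. an rss_size of 16 is programmed as tc_size 4.
+	 */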
+	roundup_size = roundup_pow_of_two(rss_size);
+	roundup_size = ilog2(roundup_size);
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		tc_valid[i] = !!(hw->hw_tc_map & BIT(i));
+		tc_size[i] = roundup_size;
+		tc_offset[i] = rss_size * i;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_TC_MODE, false);
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		uint16_t mode = 0;
+
+		hns3_set_bit(mode, HNS3_RSS_TC_VALID_B, (tc_valid[i] & 0x1));
+		hns3_set_field(mode, HNS3_RSS_TC_SIZE_M, HNS3_RSS_TC_SIZE_S,
+			       tc_size[i]);
+		hns3_set_field(mode, HNS3_RSS_TC_OFFSET_M, HNS3_RSS_TC_OFFSET_S,
+			       tc_offset[i]);
+
+		req->rss_tc_mode[i] = rte_cpu_to_le_16(mode);
+	}
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Set rss tc mode failed: %d", ret);
+
+	return ret;
+}
+
+static void
+hns3_rss_tuple_uninit(struct hns3_hw *hw)
+{
+	struct hns3_rss_input_tuple_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_INPUT_TUPLE, false);
+
+	req = (struct hns3_rss_input_tuple_cmd *)desc.data;
+
+	memset(req, 0, sizeof(struct hns3_rss_tuple_cfg));
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "RSS uninit tuple failed %d", ret);
+		return;
+	}
+}
+
+/*
+ * Set the default RSS configuration during driver initialization.
+ */
+void
+hns3_set_default_rss_args(struct hns3_hw *hw)
+{
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	uint16_t queue_num = hw->alloc_rss_size;
+	int i;
+
+	/* Default hash algorithm */
+	rss_cfg->conf.func = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR;
+	memcpy(rss_cfg->key, hns3_hash_key, HNS3_RSS_KEY_SIZE);
+
+	/* Initialize RSS indirection table */
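+	/* e.g. with 4 queues the 512 entries repeat the pattern 0,1,2,3 */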
+	for (i = 0; i < HNS3_RSS_IND_TBL_SIZE; i++)
+		rss_cfg->rss_indirection_tbl[i] = i % queue_num;
+}
+
+/*
+ * RSS initialization for hns3 pmd driver.
+ */
+int
+hns3_config_rss(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	uint8_t hash_algo =
+		(hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_TOEPLITZ ?
+		 HNS3_RSS_HASH_ALGO_TOEPLITZ : HNS3_RSS_HASH_ALGO_SIMPLE);
+	uint8_t *hash_key = rss_cfg->key;
+	int ret, ret1;
+
+	enum rte_eth_rx_mq_mode mq_mode = hw->data->dev_conf.rxmode.mq_mode;
+
+	/* When RSS is not enabled, direct all packets to queue 0 */
+	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) {
+		hns3_rss_uninit(hns);
+		return 0;
+	}
+
+	/* Configure RSS hash algorithm and hash key offset */
+	ret = hns3_set_rss_algo_key(hw, hash_algo, hash_key);
+	if (ret)
+		return ret;
+
+	/* Configure the tuple selection for RSS hash input */
+	ret = hns3_set_rss_input_tuple(hw);
+	if (ret)
+		return ret;
+
+	ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
+				       HNS3_RSS_IND_TBL_SIZE);
+	if (ret)
+		goto rss_tuple_uninit;
+
+	ret = hns3_set_rss_tc_mode(hw);
+	if (ret)
+		goto rss_indir_table_uninit;
+
+	return ret;
+
+rss_indir_table_uninit:
+	ret1 = hns3_rss_reset_indir_table(hw);
+	if (ret1 != 0)
+		return ret;
+
+rss_tuple_uninit:
+	hns3_rss_tuple_uninit(hw);
+
+	/* Disable RSS */
+	hw->rss_info.conf.types = 0;
+
+	return ret;
+}
+
+/*
+ * RSS uninitialization for hns3 pmd driver.
+ */
+void
+hns3_rss_uninit(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	hns3_rss_tuple_uninit(hw);
+	ret = hns3_rss_reset_indir_table(hw);
+	if (ret != 0)
+		return;
+
+	/* Disable RSS */
+	hw->rss_info.conf.types = 0;
+}
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
new file mode 100644
index 0000000..217339b
--- /dev/null
+++ b/drivers/net/hns3/hns3_rss.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_RSS_H_
+#define _HNS3_RSS_H_
+#include <rte_ethdev.h>
+#include <rte_flow.h>
+
+#define HNS3_ETH_RSS_SUPPORT ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER)
+
+#define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
+#define HNS3_RSS_KEY_SIZE	40
+#define HNS3_RSS_CFG_TBL_NUM \
+	(HNS3_RSS_IND_TBL_SIZE / HNS3_RSS_CFG_TBL_SIZE)
+#define HNS3_RSS_SET_BITMAP_MSK	0xffff
+
+#define HNS3_RSS_HASH_ALGO_TOEPLITZ	0
+#define HNS3_RSS_HASH_ALGO_SIMPLE	1
+#define HNS3_RSS_HASH_ALGO_SYMMETRIC	2
+#define HNS3_RSS_HASH_ALGO_MASK		0xf
+
+#define HNS3_D_PORT_BIT_SHIFT	0
+#define HNS3_S_PORT_BIT_SHIFT	1
+#define HNS3_D_IP_BIT_SHIFT	2
+#define HNS3_S_IP_BIT_SHIFT	3
+#define HNS3_V_TAG_BIT_SHIFT	4
+#define HNS3_D_PORT_BIT		BIT(HNS3_D_PORT_BIT_SHIFT)
+#define HNS3_S_PORT_BIT		BIT(HNS3_S_PORT_BIT_SHIFT)
+#define HNS3_D_IP_BIT		BIT(HNS3_D_IP_BIT_SHIFT)
+#define HNS3_S_IP_BIT		BIT(HNS3_S_IP_BIT_SHIFT)
+#define HNS3_V_TAG_BIT		BIT(HNS3_V_TAG_BIT_SHIFT)
+
+#define HNS3_RSS_INPUT_TUPLE_OTHER	GENMASK(3, 0)
+#define HNS3_RSS_INPUT_TUPLE_SCTP	GENMASK(4, 0)
+#define HNS3_IP_FRAG_BIT_MASK		GENMASK(3, 2)
+#define HNS3_IP_OTHER_BIT_MASK		GENMASK(1, 0)
+
+struct hns3_rss_tuple_cfg {
+	uint8_t ipv4_tcp_en;      /* Bit8.0~8.3 */
+	uint8_t ipv4_udp_en;      /* Bit9.0~9.3 */
+	uint8_t ipv4_sctp_en;     /* Bit10.0~10.4 */
+	uint8_t ipv4_fragment_en; /* Bit11.0~11.3 */
+	uint8_t ipv6_tcp_en;      /* Bit12.0~12.3 */
+	uint8_t ipv6_udp_en;      /* Bit13.0~13.3 */
+	uint8_t ipv6_sctp_en;     /* Bit14.0~14.4 */
+	uint8_t ipv6_fragment_en; /* Bit15.0~15.3 */
+};
+
+#define HNS3_RSS_QUEUES_BUFFER_NUM	64 /* Same as the Max rx/tx queue num */
+struct hns3_rss_conf {
+	/* RSS parameters: algorithm, flow_types, key, queue */
+	struct rte_flow_action_rss conf;
+	uint8_t key[HNS3_RSS_KEY_SIZE];  /* Hash key */
+	struct hns3_rss_tuple_cfg rss_tuple_sets;
+	uint8_t rss_indirection_tbl[HNS3_RSS_IND_TBL_SIZE]; /* Shadow table */
+	uint16_t queue[HNS3_RSS_QUEUES_BUFFER_NUM]; /* Queue indices to use */
+};
+
+/* Bit 8 ~Bit 15 */
+#define HNS3_INSET_IPV4_SRC        0x00000100UL
+#define HNS3_INSET_IPV4_DST        0x00000200UL
+#define HNS3_INSET_IPV6_SRC        0x00000400UL
+#define HNS3_INSET_IPV6_DST        0x00000800UL
+#define HNS3_INSET_SRC_PORT        0x00001000UL
+#define HNS3_INSET_DST_PORT        0x00002000UL
+#define HNS3_INSET_SCTP_VT         0x00004000UL
+
+#ifndef ilog2
+static inline int rss_ilog2(uint32_t x)
+{
+	int log = 0;
+	x >>= 1;
+
+	while (x) {
+		log++;
+		x >>= 1;
+	}
+	return log;
+}
+#define ilog2(x) rss_ilog2(x)
+#endif
+
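+/*
+ * Return the 1-based index of the most significant set bit of x, or 0
+ * when x is 0, e.g. fls(0x20) == 6. Used by roundup_pow_of_two() below.
+ */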
+static inline uint32_t fls(uint32_t x)
+{
+	uint32_t position;
+	uint32_t i;
+
+	if (x == 0)
+		return 0;
+
+	for (i = (x >> 1), position = 0; i != 0; ++position)
+		i >>= 1;
+
+	return position + 1;
+}
+
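+/* Round x up to the next power of two, e.g. roundup_pow_of_two(48) == 64. */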
+static inline uint32_t roundup_pow_of_two(uint32_t x)
+{
+	return 1UL << fls(x - 1);
+}
+
+struct hns3_adapter;
+extern const struct rte_flow_ops hns3_flow_ops;
+
+int hns3_dev_rss_hash_update(struct rte_eth_dev *dev,
+			     struct rte_eth_rss_conf *rss_conf);
+int hns3_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+int hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
+			     struct rte_eth_rss_reta_entry64 *reta_conf,
+			     uint16_t reta_size);
+int hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_reta_entry64 *reta_conf,
+			    uint16_t reta_size);
+int hns3_dev_filter_ctrl(struct rte_eth_dev *dev,
+			 enum rte_filter_type filter_type,
+			 enum rte_filter_op filter_op, void *arg);
+void hns3_set_default_rss_args(struct hns3_hw *hw);
+int hns3_set_rss_indir_table(struct hns3_hw *hw, uint8_t *indir, uint16_t size);
+int hns3_rss_reset_indir_table(struct hns3_hw *hw);
+int hns3_config_rss(struct hns3_adapter *hns);
+void hns3_rss_uninit(struct hns3_adapter *hns);
+int hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
+				 struct hns3_rss_tuple_cfg *tuple,
+				 uint64_t rss_hf);
+int hns3_set_rss_algo_key(struct hns3_hw *hw, uint8_t hash_algo,
+			  const uint8_t *key);
+#endif /* _HNS3_RSS_H_ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 11/22] net/hns3: add support for flow control of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (9 preceding siblings ...)
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 10/22] net/hns3: add support for RSS " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:07   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 12/22] net/hns3: add support for VLAN " Wei Hu (Xavier)
                   ` (11 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds support for MAC PAUSE flow control and priority flow
control (PFC) to the hns3 PMD driver. When MAC PAUSE flow control is
enabled, all user priorities (UPs) must be mapped to tc0; when PFC is
enabled, the driver permits UPs to be mapped to other TCs. Flow control
is turned off by default so that the application startup state is the
same on every run.
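
As a rough usage sketch (not part of this patch; port_id and the
priority value are placeholders), an application reaches the PFC path
added here through the generic ethdev API, assuming these ops are
wired up as the driver's flow control callbacks:

	struct rte_eth_pfc_conf pfc_conf = {
		.fc.mode = RTE_FC_FULL,
		.fc.pause_time = 0xffff,
		.priority = 0,
	};
	int ret = rte_eth_dev_priority_flow_ctrl_set(port_id, &pfc_conf);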

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
---
 drivers/net/hns3/hns3_dcb.c    | 1647 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_dcb.h    |  166 ++++
 drivers/net/hns3/hns3_ethdev.c |  202 +++++
 3 files changed, 2015 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_dcb.c
 create mode 100644 drivers/net/hns3/hns3_dcb.h

diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
new file mode 100644
index 0000000..0644299
--- /dev/null
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -0,0 +1,1647 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <errno.h>
+#include <inttypes.h>
+#include <stdbool.h>
+#include <string.h>
+#include <unistd.h>
+#include <rte_io.h>
+#include <rte_common.h>
+#include <rte_ethdev.h>
+#include <rte_memcpy.h>
+#include <rte_spinlock.h>
+
+#include "hns3_logs.h"
+#include "hns3_cmd.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_regs.h"
+#include "hns3_ethdev.h"
+#include "hns3_dcb.h"
+
+#define HNS3_SHAPER_BS_U_DEF	5
+#define HNS3_SHAPER_BS_S_DEF	20
+#define BW_MAX_PERCENT		100
+#define HNS3_ETHER_MAX_RATE	100000
+
+/*
+ * hns3_shaper_para_calc: calculate ir parameter for the shaper
+ * @ir: rate to be configured, in Mbps
+ * @shaper_level: the shaper level, e.g. port, pg, priority, queueset
+ * @shaper_para: shaper parameter of IR shaper
+ *
+ * the formula:
+ *
+ *		IR_b * (2 ^ IR_u) * 8
+ * IR(Mbps) = -------------------------  *  CLOCK(1000Mbps)
+ *		Tick * (2 ^ IR_s)
+ *
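+ * For example, at the priority level Tick = 6 * 256 = 1536, so with
+ * IR_b = 126, IR_u = 0 and IR_s = 0 the formula gives roughly
+ * 126 * 8 * 1000 / 1536 ~= 656 Mbps, the reference point from which
+ * the search below starts.
+ *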
+ * @return: 0 if the calculation succeeds, negative on failure
+ */
+static int
+hns3_shaper_para_calc(struct hns3_hw *hw, uint32_t ir, uint8_t shaper_level,
+		      struct hns3_shaper_parameter *shaper_para)
+{
+#define SHAPER_DEFAULT_IR_B	126
+#define DIVISOR_CLK		(1000 * 8)
+#define DIVISOR_IR_B_126	(126 * DIVISOR_CLK)
+
+	const uint16_t tick_array[HNS3_SHAPER_LVL_CNT] = {
+		6 * 256,    /* Priority level */
+		6 * 32,     /* Priority group level */
+		6 * 8,      /* Port level */
+		6 * 256     /* Qset level */
+	};
+	uint8_t ir_u_calc = 0;
+	uint8_t ir_s_calc = 0;
+	uint32_t denominator;
+	uint32_t ir_calc;
+	uint32_t tick;
+
+	/* Calc tick */
+	if (shaper_level >= HNS3_SHAPER_LVL_CNT) {
+		hns3_err(hw,
+			 "shaper_level(%d) is greater than HNS3_SHAPER_LVL_CNT(%d)",
+			 shaper_level, HNS3_SHAPER_LVL_CNT);
+		return -EINVAL;
+	}
+
+	if (ir > HNS3_ETHER_MAX_RATE) {
+		hns3_err(hw, "rate (%d) exceeds the maximum rate the driver "
+			 "supports, HNS3_ETHER_MAX_RATE(%d)", ir,
+			 HNS3_ETHER_MAX_RATE);
+		return -EINVAL;
+	}
+
+	tick = tick_array[shaper_level];
+
+	/*
+	 * Calc the speed if ir_b = 126, ir_u = 0 and ir_s = 0
+	 * the formula is changed to:
+	 *		126 * 1 * 8
+	 * ir_calc = ---------------- * 1000
+	 *		tick * 1
+	 */
+	ir_calc = (DIVISOR_IR_B_126 + (tick >> 1) - 1) / tick;
+
+	if (ir_calc == ir) {
+		shaper_para->ir_b = SHAPER_DEFAULT_IR_B;
+	} else if (ir_calc > ir) {
+		/* Increasing the denominator to select ir_s value */
+		do {
+			ir_s_calc++;
+			ir_calc = DIVISOR_IR_B_126 / (tick * (1 << ir_s_calc));
+		} while (ir_calc > ir);
+
+		if (ir_calc == ir)
+			shaper_para->ir_b = SHAPER_DEFAULT_IR_B;
+		else
+			shaper_para->ir_b = (ir * tick * (1 << ir_s_calc) +
+				 (DIVISOR_CLK >> 1)) / DIVISOR_CLK;
+	} else {
+		/*
+		 * Increasing the numerator to select ir_u value. ir_u_calc will
+		 * get maximum value when ir_calc is minimum and ir is maximum.
+		 * ir_calc gets minimum value when tick is the maximum value.
+		 * At the same time, value of ir_u_calc can only be increased up
+		 * to eight after the while loop if the value of ir is equal
+		 * to HNS3_ETHER_MAX_RATE.
+		 */
+		uint32_t numerator;
+		do {
+			ir_u_calc++;
+			numerator = DIVISOR_IR_B_126 * (1 << ir_u_calc);
+			ir_calc = (numerator + (tick >> 1)) / tick;
+		} while (ir_calc < ir);
+
+		if (ir_calc == ir) {
+			shaper_para->ir_b = SHAPER_DEFAULT_IR_B;
+		} else {
+			--ir_u_calc;
+
+			/*
+			 * The maximum value of ir_u_calc in this branch is
+			 * seven in all cases. Thus, value of denominator can
+			 * not be zero here.
+			 */
+			denominator = DIVISOR_CLK * (1 << ir_u_calc);
+			shaper_para->ir_b =
+				(ir * tick + (denominator >> 1)) / denominator;
+		}
+	}
+
+	shaper_para->ir_u = ir_u_calc;
+	shaper_para->ir_s = ir_s_calc;
+
+	return 0;
+}
+
+static int
+hns3_fill_pri_array(struct hns3_hw *hw, uint8_t *pri, uint8_t pri_id)
+{
+#define HNS3_HALF_BYTE_BIT_OFFSET 4
+	uint8_t tc = hw->dcb_info.prio_tc[pri_id];
+
+	if (tc >= hw->dcb_info.num_tc)
+		return -EINVAL;
+
+	/*
+	 * The register for priority has four bytes, the first bytes includes
+	 *  priority0 and priority1, the higher 4bit stands for priority1
+	 *  while the lower 4bit stands for priority0, as below:
+	 * first byte:	| pri_1 | pri_0 |
+	 * second byte:	| pri_3 | pri_2 |
+	 * third byte:	| pri_5 | pri_4 |
+	 * fourth byte:	| pri_7 | pri_6 |
+	 */
+	pri[pri_id >> 1] |= tc << ((pri_id & 1) * HNS3_HALF_BYTE_BIT_OFFSET);
+
+	return 0;
+}
+
+static int
+hns3_up_to_tc_map(struct hns3_hw *hw)
+{
+	struct hns3_cmd_desc desc;
+	uint8_t *pri = (uint8_t *)desc.data;
+	uint8_t pri_id;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_PRI_TO_TC_MAPPING, false);
+
+	for (pri_id = 0; pri_id < HNS3_MAX_USER_PRIO; pri_id++) {
+		ret = hns3_fill_pri_array(hw, pri, pri_id);
+		if (ret)
+			return ret;
+	}
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_pg_to_pri_map_cfg(struct hns3_hw *hw, uint8_t pg_id, uint8_t pri_bit_map)
+{
+	struct hns3_pg_to_pri_link_cmd *map;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PG_TO_PRI_LINK, false);
+
+	map = (struct hns3_pg_to_pri_link_cmd *)desc.data;
+
+	map->pg_id = pg_id;
+	map->pri_bit_map = pri_bit_map;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_pg_to_pri_map(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_pg_info *pg_info;
+	int ret, i;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE)
+		return -EINVAL;
+
+	for (i = 0; i < hw->dcb_info.num_pg; i++) {
+		/* Cfg pg to priority mapping */
+		pg_info = &hw->dcb_info.pg_info[i];
+		ret = hns3_pg_to_pri_map_cfg(hw, i, pg_info->tc_bit_map);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_qs_to_pri_map_cfg(struct hns3_hw *hw, uint16_t qs_id, uint8_t pri)
+{
+	struct hns3_qs_to_pri_link_cmd *map;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_QS_TO_PRI_LINK, false);
+
+	map = (struct hns3_qs_to_pri_link_cmd *)desc.data;
+
+	map->qs_id = rte_cpu_to_le_16(qs_id);
+	map->priority = pri;
+	map->link_vld = HNS3_DCB_QS_PRI_LINK_VLD_MSK;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_qs_weight_cfg(struct hns3_hw *hw, uint16_t qs_id, uint8_t dwrr)
+{
+	struct hns3_qs_weight_cmd *weight;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_QS_WEIGHT, false);
+
+	weight = (struct hns3_qs_weight_cmd *)desc.data;
+
+	weight->qs_id = rte_cpu_to_le_16(qs_id);
+	weight->dwrr = dwrr;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_ets_tc_dwrr_cfg(struct hns3_hw *hw)
+{
+#define DEFAULT_TC_WEIGHT	1
+#define DEFAULT_TC_OFFSET	14
+	struct hns3_ets_tc_weight_cmd *ets_weight;
+	struct hns3_cmd_desc desc;
+	uint8_t i;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_ETS_TC_WEIGHT, false);
+	ets_weight = (struct hns3_ets_tc_weight_cmd *)desc.data;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		struct hns3_pg_info *pg_info;
+
+		ets_weight->tc_weight[i] = DEFAULT_TC_WEIGHT;
+
+		if (!(hw->hw_tc_map & BIT(i)))
+			continue;
+
+		pg_info = &hw->dcb_info.pg_info[hw->dcb_info.tc_info[i].pgid];
+		ets_weight->tc_weight[i] = pg_info->tc_dwrr[i];
+	}
+
+	ets_weight->weight_offset = DEFAULT_TC_OFFSET;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pri_weight_cfg(struct hns3_hw *hw, uint8_t pri_id, uint8_t dwrr)
+{
+	struct hns3_priority_weight_cmd *weight;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PRI_WEIGHT, false);
+
+	weight = (struct hns3_priority_weight_cmd *)desc.data;
+
+	weight->pri_id = pri_id;
+	weight->dwrr = dwrr;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pg_weight_cfg(struct hns3_hw *hw, uint8_t pg_id, uint8_t dwrr)
+{
+	struct hns3_pg_weight_cmd *weight;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PG_WEIGHT, false);
+
+	weight = (struct hns3_pg_weight_cmd *)desc.data;
+
+	weight->pg_id = pg_id;
+	weight->dwrr = dwrr;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pg_schd_mode_cfg(struct hns3_hw *hw, uint8_t pg_id)
+{
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PG_SCH_MODE_CFG, false);
+
+	if (hw->dcb_info.pg_info[pg_id].pg_sch_mode == HNS3_SCH_MODE_DWRR)
+		desc.data[1] = rte_cpu_to_le_32(HNS3_DCB_TX_SCHD_DWRR_MSK);
+	else
+		desc.data[1] = 0;
+
+	desc.data[0] = rte_cpu_to_le_32(pg_id);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static uint32_t
+hns3_dcb_get_shapping_para(uint8_t ir_b, uint8_t ir_u, uint8_t ir_s,
+			   uint8_t bs_b, uint8_t bs_s)
+{
+	uint32_t shapping_para = 0;
+
+	hns3_dcb_set_field(shapping_para, IR_B, ir_b);
+	hns3_dcb_set_field(shapping_para, IR_U, ir_u);
+	hns3_dcb_set_field(shapping_para, IR_S, ir_s);
+	hns3_dcb_set_field(shapping_para, BS_B, bs_b);
+	hns3_dcb_set_field(shapping_para, BS_S, bs_s);
+
+	return shapping_para;
+}
+
+static int
+hns3_dcb_port_shaper_cfg(struct hns3_hw *hw)
+{
+	struct hns3_port_shapping_cmd *shap_cfg_cmd;
+	struct hns3_shaper_parameter shaper_parameter;
+	uint32_t shapping_para;
+	uint32_t ir_u, ir_b, ir_s;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	ret = hns3_shaper_para_calc(hw, hw->mac.link_speed,
+				    HNS3_SHAPER_LVL_PORT, &shaper_parameter);
+	if (ret) {
+		hns3_err(hw, "calculate shaper parameter failed: %d", ret);
+		return ret;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PORT_SHAPPING, false);
+	shap_cfg_cmd = (struct hns3_port_shapping_cmd *)desc.data;
+
+	ir_b = shaper_parameter.ir_b;
+	ir_u = shaper_parameter.ir_u;
+	ir_s = shaper_parameter.ir_s;
+	shapping_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s,
+						   HNS3_SHAPER_BS_U_DEF,
+						   HNS3_SHAPER_BS_S_DEF);
+
+	shap_cfg_cmd->port_shapping_para = rte_cpu_to_le_32(shapping_para);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pg_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
+			 uint8_t pg_id, uint32_t shapping_para)
+{
+	struct hns3_pg_shapping_cmd *shap_cfg_cmd;
+	enum hns3_opcode_type opcode;
+	struct hns3_cmd_desc desc;
+
+	opcode = bucket ? HNS3_OPC_TM_PG_P_SHAPPING :
+		 HNS3_OPC_TM_PG_C_SHAPPING;
+	hns3_cmd_setup_basic_desc(&desc, opcode, false);
+
+	shap_cfg_cmd = (struct hns3_pg_shapping_cmd *)desc.data;
+
+	shap_cfg_cmd->pg_id = pg_id;
+
+	shap_cfg_cmd->pg_shapping_para = rte_cpu_to_le_32(shapping_para);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pg_shaper_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_shaper_parameter shaper_parameter;
+	struct hns3_pf *pf = &hns->pf;
+	uint32_t ir_u, ir_b, ir_s;
+	uint32_t shaper_para;
+	uint8_t i;
+	int ret;
+
+	/* Cfg pg schd */
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE)
+		return -EINVAL;
+
+	/* Configure the shaper of each pg */
+	for (i = 0; i < hw->dcb_info.num_pg; i++) {
+		/* Calc shaper para */
+		ret = hns3_shaper_para_calc(hw,
+					    hw->dcb_info.pg_info[i].bw_limit,
+					    HNS3_SHAPER_LVL_PG,
+					    &shaper_parameter);
+		if (ret) {
+			hns3_err(hw, "calculate shaper parameter failed: %d",
+				 ret);
+			return ret;
+		}
+
+		shaper_para = hns3_dcb_get_shapping_para(0, 0, 0,
+							 HNS3_SHAPER_BS_U_DEF,
+							 HNS3_SHAPER_BS_S_DEF);
+
+		ret = hns3_dcb_pg_shapping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, i,
+					       shaper_para);
+		if (ret) {
+			hns3_err(hw,
+				 "config PG CIR shaper parameter failed: %d",
+				 ret);
+			return ret;
+		}
+
+		ir_b = shaper_parameter.ir_b;
+		ir_u = shaper_parameter.ir_u;
+		ir_s = shaper_parameter.ir_s;
+		shaper_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s,
+							 HNS3_SHAPER_BS_U_DEF,
+							 HNS3_SHAPER_BS_S_DEF);
+
+		ret = hns3_dcb_pg_shapping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, i,
+					       shaper_para);
+		if (ret) {
+			hns3_err(hw,
+				 "config PG PIR shaper parameter failed: %d",
+				 ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_qs_schd_mode_cfg(struct hns3_hw *hw, uint16_t qs_id, uint8_t mode)
+{
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_QS_SCH_MODE_CFG, false);
+
+	if (mode == HNS3_SCH_MODE_DWRR)
+		desc.data[1] = rte_cpu_to_le_32(HNS3_DCB_TX_SCHD_DWRR_MSK);
+	else
+		desc.data[1] = 0;
+
+	desc.data[0] = rte_cpu_to_le_32(qs_id);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pri_schd_mode_cfg(struct hns3_hw *hw, uint8_t pri_id)
+{
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PRI_SCH_MODE_CFG, false);
+
+	if (hw->dcb_info.tc_info[pri_id].tc_sch_mode == HNS3_SCH_MODE_DWRR)
+		desc.data[1] = rte_cpu_to_le_32(HNS3_DCB_TX_SCHD_DWRR_MSK);
+	else
+		desc.data[1] = 0;
+
+	desc.data[0] = rte_cpu_to_le_32(pri_id);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pri_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
+			  uint8_t pri_id, uint32_t shapping_para)
+{
+	struct hns3_pri_shapping_cmd *shap_cfg_cmd;
+	enum hns3_opcode_type opcode;
+	struct hns3_cmd_desc desc;
+
+	opcode = bucket ? HNS3_OPC_TM_PRI_P_SHAPPING :
+		 HNS3_OPC_TM_PRI_C_SHAPPING;
+
+	hns3_cmd_setup_basic_desc(&desc, opcode, false);
+
+	shap_cfg_cmd = (struct hns3_pri_shapping_cmd *)desc.data;
+
+	shap_cfg_cmd->pri_id = pri_id;
+
+	shap_cfg_cmd->pri_shapping_para = rte_cpu_to_le_32(shapping_para);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_dcb_pri_tc_base_shaper_cfg(struct hns3_hw *hw)
+{
+	struct hns3_shaper_parameter shaper_parameter;
+	uint32_t ir_u, ir_b, ir_s;
+	uint32_t shaper_para;
+	int ret, i;
+
+	for (i = 0; i < hw->dcb_info.num_tc; i++) {
+		ret = hns3_shaper_para_calc(hw,
+					    hw->dcb_info.tc_info[i].bw_limit,
+					    HNS3_SHAPER_LVL_PRI,
+					    &shaper_parameter);
+		if (ret) {
+			hns3_err(hw, "calculate shaper parameter failed: %d",
+				 ret);
+			return ret;
+		}
+
+		shaper_para = hns3_dcb_get_shapping_para(0, 0, 0,
+							 HNS3_SHAPER_BS_U_DEF,
+							 HNS3_SHAPER_BS_S_DEF);
+
+		ret = hns3_dcb_pri_shapping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, i,
+						shaper_para);
+		if (ret) {
+			hns3_err(hw,
+				 "config priority CIR shaper parameter failed: %d",
+				 ret);
+			return ret;
+		}
+
+		ir_b = shaper_parameter.ir_b;
+		ir_u = shaper_parameter.ir_u;
+		ir_s = shaper_parameter.ir_s;
+		shaper_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s,
+							 HNS3_SHAPER_BS_U_DEF,
+							 HNS3_SHAPER_BS_S_DEF);
+
+		ret = hns3_dcb_pri_shapping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, i,
+						shaper_para);
+		if (ret) {
+			hns3_err(hw,
+				 "config priority PIR shaper parameter failed: %d",
+				 ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_pri_shaper_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE)
+		return -EINVAL;
+
+	ret = hns3_dcb_pri_tc_base_shaper_cfg(hw);
+	if (ret)
+		hns3_err(hw, "config priority shaper failed: %d", ret);
+
+	return ret;
+}
+
+void
+hns3_tc_queue_mapping_cfg(struct hns3_hw *hw)
+{
+	struct hns3_tc_queue_info *tc_queue;
+	uint8_t i;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		tc_queue = &hw->tc_queue[i];
+		if (hw->hw_tc_map & BIT(i) && i < hw->num_tc) {
+			tc_queue->enable = true;
+			tc_queue->tqp_offset = i * hw->alloc_rss_size;
+			tc_queue->tqp_count = hw->alloc_rss_size;
+			tc_queue->tc = i;
+		} else {
+			/* Set to default queue if TC is disabled */
+			tc_queue->enable = false;
+			tc_queue->tqp_offset = 0;
+			tc_queue->tqp_count = 0;
+			tc_queue->tc = 0;
+		}
+	}
+}
+
+static void
+hns3_dcb_update_tc_queue_mapping(struct hns3_hw *hw, uint16_t queue_num)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	uint16_t tqpnum_per_tc;
+	uint16_t alloc_tqps;
+
+	alloc_tqps = RTE_MIN(hw->tqps_num, queue_num);
+	hw->num_tc = RTE_MIN(alloc_tqps, hw->dcb_info.num_tc);
+	tqpnum_per_tc = RTE_MIN(hw->rss_size_max, alloc_tqps / hw->num_tc);
+
+	if (hw->alloc_rss_size != tqpnum_per_tc) {
+		PMD_INIT_LOG(INFO, "rss size changes from %d to %d",
+			     hw->alloc_rss_size, tqpnum_per_tc);
+		hw->alloc_rss_size = tqpnum_per_tc;
+	}
+	hw->alloc_tqps = hw->num_tc * hw->alloc_rss_size;
+
+	hns3_tc_queue_mapping_cfg(hw);
+
+	memcpy(pf->prio_tc, hw->dcb_info.prio_tc, HNS3_MAX_USER_PRIO);
+}
+
+int
+hns3_dcb_info_init(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int i, k;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE &&
+	    hw->dcb_info.num_pg != 1)
+		return -EINVAL;
+
+	/* Initializing PG information */
+	memset(hw->dcb_info.pg_info, 0,
+	       sizeof(struct hns3_pg_info) * HNS3_PG_NUM);
+	for (i = 0; i < hw->dcb_info.num_pg; i++) {
+		hw->dcb_info.pg_dwrr[i] = i ? 0 : BW_MAX_PERCENT;
+		hw->dcb_info.pg_info[i].pg_id = i;
+		hw->dcb_info.pg_info[i].pg_sch_mode = HNS3_SCH_MODE_DWRR;
+		hw->dcb_info.pg_info[i].bw_limit = HNS3_ETHER_MAX_RATE;
+
+		if (i != 0)
+			continue;
+
+		hw->dcb_info.pg_info[i].tc_bit_map = hw->hw_tc_map;
+		for (k = 0; k < hw->dcb_info.num_tc; k++)
+			hw->dcb_info.pg_info[i].tc_dwrr[k] = BW_MAX_PERCENT;
+	}
+
+	/* All UPs mapping to TC0 */
+	for (i = 0; i < HNS3_MAX_USER_PRIO; i++)
+		hw->dcb_info.prio_tc[i] = 0;
+
+	/* Initializing tc information */
+	memset(hw->dcb_info.tc_info, 0,
+	       sizeof(struct hns3_tc_info) * HNS3_MAX_TC_NUM);
+	for (i = 0; i < hw->dcb_info.num_tc; i++) {
+		hw->dcb_info.tc_info[i].tc_id = i;
+		hw->dcb_info.tc_info[i].tc_sch_mode = HNS3_SCH_MODE_DWRR;
+		hw->dcb_info.tc_info[i].pgid = 0;
+		hw->dcb_info.tc_info[i].bw_limit =
+			hw->dcb_info.pg_info[0].bw_limit;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_lvl2_schd_mode_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret, i;
+
+	/* Only configured in TC-based scheduler mode */
+	if (pf->tx_sch_mode == HNS3_FLAG_VNET_BASE_SCH_MODE)
+		return -EINVAL;
+
+	for (i = 0; i < hw->dcb_info.num_pg; i++) {
+		ret = hns3_dcb_pg_schd_mode_cfg(hw, i);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_lvl34_schd_mode_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	uint8_t i;
+	int ret;
+
+	if (pf->tx_sch_mode == HNS3_FLAG_TC_BASE_SCH_MODE) {
+		for (i = 0; i < hw->dcb_info.num_tc; i++) {
+			ret = hns3_dcb_pri_schd_mode_cfg(hw, i);
+			if (ret)
+				return ret;
+
+			ret = hns3_dcb_qs_schd_mode_cfg(hw, i,
+							HNS3_SCH_MODE_DWRR);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_schd_mode_cfg(struct hns3_hw *hw)
+{
+	int ret;
+
+	ret = hns3_dcb_lvl2_schd_mode_cfg(hw);
+	if (ret) {
+		hns3_err(hw, "config lvl2_schd_mode failed: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_dcb_lvl34_schd_mode_cfg(hw);
+	if (ret) {
+		hns3_err(hw, "config lvl34_schd_mode failed: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_pri_tc_base_dwrr_cfg(struct hns3_hw *hw)
+{
+	struct hns3_pg_info *pg_info;
+	uint8_t dwrr;
+	int ret, i;
+
+	for (i = 0; i < hw->dcb_info.num_tc; i++) {
+		pg_info = &hw->dcb_info.pg_info[hw->dcb_info.tc_info[i].pgid];
+		dwrr = pg_info->tc_dwrr[i];
+
+		ret = hns3_dcb_pri_weight_cfg(hw, i, dwrr);
+		if (ret) {
+			hns3_err(hw, "fail to send priority weight cmd for tc %d", i);
+			return ret;
+		}
+
+		ret = hns3_dcb_qs_weight_cfg(hw, i, BW_MAX_PERCENT);
+		if (ret) {
+			hns3_err(hw, "fail to send qs_weight cmd for qs %d", i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_pri_dwrr_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE)
+		return -EINVAL;
+
+	ret = hns3_dcb_pri_tc_base_dwrr_cfg(hw);
+	if (ret)
+		return ret;
+
+	if (!hns3_dev_dcb_supported(hw))
+		return 0;
+
+	ret = hns3_dcb_ets_tc_dwrr_cfg(hw);
+	if (ret == -EOPNOTSUPP) {
+		hns3_warn(hw, "fw %08x doesn't support ets tc weight cmd",
+			  hw->fw_version);
+		ret = 0;
+	}
+
+	return ret;
+}
+
+static int
+hns3_dcb_pg_dwrr_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret, i;
+
+	/* Cfg pg schd */
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE)
+		return -EINVAL;
+
+	/* Cfg the dwrr weight of each pg */
+	for (i = 0; i < hw->dcb_info.num_pg; i++) {
+		ret = hns3_dcb_pg_weight_cfg(hw, i, hw->dcb_info.pg_dwrr[i]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_dwrr_cfg(struct hns3_hw *hw)
+{
+	int ret;
+
+	ret = hns3_dcb_pg_dwrr_cfg(hw);
+	if (ret) {
+		hns3_err(hw, "config pg_dwrr failed: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_dcb_pri_dwrr_cfg(hw);
+	if (ret) {
+		hns3_err(hw, "config pri_dwrr failed: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_shaper_cfg(struct hns3_hw *hw)
+{
+	int ret;
+
+	ret = hns3_dcb_port_shaper_cfg(hw);
+	if (ret) {
+		hns3_err(hw, "config port shaper failed: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_dcb_pg_shaper_cfg(hw);
+	if (ret) {
+		hns3_err(hw, "config pg shaper failed: %d", ret);
+		return ret;
+	}
+
+	return hns3_dcb_pri_shaper_cfg(hw);
+}
+
+static int
+hns3_q_to_qs_map_cfg(struct hns3_hw *hw, uint16_t q_id, uint16_t qs_id)
+{
+	struct hns3_nq_to_qs_link_cmd *map;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_NQ_TO_QS_LINK, false);
+
+	map = (struct hns3_nq_to_qs_link_cmd *)desc.data;
+
+	map->nq_id = rte_cpu_to_le_16(q_id);
+	map->qset_id = rte_cpu_to_le_16(qs_id | HNS3_DCB_Q_QS_LINK_VLD_MSK);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_q_to_qs_map(struct hns3_hw *hw)
+{
+	struct hns3_tc_queue_info *tc_queue;
+	uint16_t q_id;
+	uint32_t i, j;
+	int ret;
+
+	for (i = 0; i < hw->num_tc; i++) {
+		tc_queue = &hw->tc_queue[i];
+		for (j = 0; j < tc_queue->tqp_count; j++) {
+			q_id = tc_queue->tqp_offset + j;
+			ret = hns3_q_to_qs_map_cfg(hw, q_id, i);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hns3_pri_q_qs_cfg(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	uint32_t i;
+	int ret;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE)
+		return -EINVAL;
+
+	/* Cfg qs -> pri mapping */
+	for (i = 0; i < hw->num_tc; i++) {
+		ret = hns3_qs_to_pri_map_cfg(hw, i, i);
+		if (ret) {
+			hns3_err(hw, "qs_to_pri mapping fail: %d", ret);
+			return ret;
+		}
+	}
+
+	/* Cfg q -> qs mapping */
+	ret = hns3_q_to_qs_map(hw);
+	if (ret) {
+		hns3_err(hw, "nq_to_qs mapping fail: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_map_cfg(struct hns3_hw *hw)
+{
+	int ret;
+
+	ret = hns3_up_to_tc_map(hw);
+	if (ret) {
+		hns3_err(hw, "up_to_tc mapping fail: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_pg_to_pri_map(hw);
+	if (ret) {
+		hns3_err(hw, "pri_to_pg mapping fail: %d", ret);
+		return ret;
+	}
+
+	return hns3_pri_q_qs_cfg(hw);
+}
+
+static int
+hns3_dcb_schd_setup_hw(struct hns3_hw *hw)
+{
+	int ret;
+
+	/* Cfg dcb mapping  */
+	ret = hns3_dcb_map_cfg(hw);
+	if (ret)
+		return ret;
+
+	/* Cfg dcb shaper */
+	ret = hns3_dcb_shaper_cfg(hw);
+	if (ret)
+		return ret;
+
+	/* Cfg dwrr */
+	ret = hns3_dcb_dwrr_cfg(hw);
+	if (ret)
+		return ret;
+
+	/* Cfg schd mode for each level schd */
+	return hns3_dcb_schd_mode_cfg(hw);
+}
+
+static int
+hns3_pause_param_cfg(struct hns3_hw *hw, const uint8_t *addr,
+		     uint8_t pause_trans_gap, uint16_t pause_trans_time)
+{
+	struct hns3_cfg_pause_param_cmd *pause_param;
+	struct hns3_cmd_desc desc;
+
+	pause_param = (struct hns3_cfg_pause_param_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_MAC_PARA, false);
+
+	memcpy(pause_param->mac_addr, addr, RTE_ETHER_ADDR_LEN);
+	memcpy(pause_param->mac_addr_extra, addr, RTE_ETHER_ADDR_LEN);
+	pause_param->pause_trans_gap = pause_trans_gap;
+	pause_param->pause_trans_time = rte_cpu_to_le_16(pause_trans_time);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+int
+hns3_pause_addr_cfg(struct hns3_hw *hw, const uint8_t *mac_addr)
+{
+	struct hns3_cfg_pause_param_cmd *pause_param;
+	struct hns3_cmd_desc desc;
+	uint16_t trans_time;
+	uint8_t trans_gap;
+	int ret;
+
+	pause_param = (struct hns3_cfg_pause_param_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_MAC_PARA, true);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		return ret;
+
+	trans_gap = pause_param->pause_trans_gap;
+	trans_time = rte_le_to_cpu_16(pause_param->pause_trans_time);
+
+	return hns3_pause_param_cfg(hw, mac_addr, trans_gap, trans_time);
+}
+
+static int
+hns3_pause_param_setup_hw(struct hns3_hw *hw, uint16_t pause_time)
+{
+#define PAUSE_TIME_DIV_BY	2
+#define PAUSE_TIME_MIN_VALUE	0x4
+
+	struct hns3_mac *mac = &hw->mac;
+	uint8_t pause_trans_gap;
+
+	/*
+	 * Pause transmit gap must be less than "pause_time / 2", otherwise
+	 * the behavior of MAC is undefined.
+	 */
+	if (pause_time > PAUSE_TIME_DIV_BY * HNS3_DEFAULT_PAUSE_TRANS_GAP)
+		pause_trans_gap = HNS3_DEFAULT_PAUSE_TRANS_GAP;
+	else if (pause_time >= PAUSE_TIME_MIN_VALUE &&
+		 pause_time <= PAUSE_TIME_DIV_BY * HNS3_DEFAULT_PAUSE_TRANS_GAP)
+		pause_trans_gap = pause_time / PAUSE_TIME_DIV_BY - 1;
+	else {
+		hns3_warn(hw, "pause_time(%d) is adjusted to 4", pause_time);
+		pause_time = PAUSE_TIME_MIN_VALUE;
+		pause_trans_gap = pause_time / PAUSE_TIME_DIV_BY - 1;
+	}
+
+	return hns3_pause_param_cfg(hw, mac->mac_addr,
+				    pause_trans_gap, pause_time);
+}
+
+static int
+hns3_mac_pause_en_cfg(struct hns3_hw *hw, bool tx, bool rx)
+{
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_MAC_PAUSE_EN, false);
+
+	desc.data[0] = rte_cpu_to_le_32((tx ? HNS3_TX_MAC_PAUSE_EN_MSK : 0) |
+		(rx ? HNS3_RX_MAC_PAUSE_EN_MSK : 0));
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_pfc_pause_en_cfg(struct hns3_hw *hw, uint8_t pfc_bitmap, bool tx, bool rx)
+{
+	struct hns3_cmd_desc desc;
+	struct hns3_pfc_en_cmd *pfc = (struct hns3_pfc_en_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_PFC_PAUSE_EN, false);
+
+	pfc->tx_rx_en_bitmap = (uint8_t)((tx ? HNS3_TX_MAC_PAUSE_EN_MSK : 0) |
+					(rx ? HNS3_RX_MAC_PAUSE_EN_MSK : 0));
+
+	pfc->pri_en_bitmap = pfc_bitmap;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_qs_bp_cfg(struct hns3_hw *hw, uint8_t tc, uint8_t grp_id, uint32_t bit_map)
+{
+	struct hns3_bp_to_qs_map_cmd *bp_to_qs_map_cmd;
+	struct hns3_cmd_desc desc;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_BP_TO_QSET_MAPPING, false);
+
+	bp_to_qs_map_cmd = (struct hns3_bp_to_qs_map_cmd *)desc.data;
+
+	bp_to_qs_map_cmd->tc_id = tc;
+	bp_to_qs_map_cmd->qs_group_id = grp_id;
+	bp_to_qs_map_cmd->qs_bit_map = rte_cpu_to_le_32(bit_map);
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static void
+hns3_get_rx_tx_en_status(struct hns3_hw *hw, bool *tx_en, bool *rx_en)
+{
+	switch (hw->current_mode) {
+	case HNS3_FC_NONE:
+		*tx_en = false;
+		*rx_en = false;
+		break;
+	case HNS3_FC_RX_PAUSE:
+		*tx_en = false;
+		*rx_en = true;
+		break;
+	case HNS3_FC_TX_PAUSE:
+		*tx_en = true;
+		*rx_en = false;
+		break;
+	case HNS3_FC_FULL:
+		*tx_en = true;
+		*rx_en = true;
+		break;
+	default:
+		*tx_en = false;
+		*rx_en = false;
+		break;
+	}
+}
+
+static int
+hns3_mac_pause_setup_hw(struct hns3_hw *hw)
+{
+	bool tx_en, rx_en;
+
+	if (hw->current_fc_status == HNS3_FC_STATUS_MAC_PAUSE)
+		hns3_get_rx_tx_en_status(hw, &tx_en, &rx_en);
+	else {
+		tx_en = false;
+		rx_en = false;
+	}
+
+	return hns3_mac_pause_en_cfg(hw, tx_en, rx_en);
+}
+
+static int
+hns3_pfc_setup_hw(struct hns3_hw *hw)
+{
+	bool tx_en, rx_en;
+
+	if (hw->current_fc_status == HNS3_FC_STATUS_PFC)
+		hns3_get_rx_tx_en_status(hw, &tx_en, &rx_en);
+	else {
+		tx_en = false;
+		rx_en = false;
+	}
+
+	return hns3_pfc_pause_en_cfg(hw, hw->dcb_info.pfc_en, tx_en, rx_en);
+}
+
+/*
+ * Each TC has 1024 queue sets for backpressure. They are divided into
+ * 32 groups of 32 queue sets each, so each group can be represented
+ * by a uint32_t bitmap.
+ */
+static int
+hns3_bp_setup_hw(struct hns3_hw *hw, uint8_t tc)
+{
+	uint32_t qs_bitmap;
+	int ret;
+	int i;
+
+	for (i = 0; i < HNS3_BP_GRP_NUM; i++) {
+		uint8_t grp, sub_grp;
+		qs_bitmap = 0;
+
+		grp = hns3_get_field(tc, HNS3_BP_GRP_ID_M, HNS3_BP_GRP_ID_S);
+		sub_grp = hns3_get_field(tc, HNS3_BP_SUB_GRP_ID_M,
+					 HNS3_BP_SUB_GRP_ID_S);
+		if (i == grp)
+			qs_bitmap |= (1 << sub_grp);
+
+		ret = hns3_qs_bp_cfg(hw, tc, i, qs_bitmap);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_bp_setup(struct hns3_hw *hw)
+{
+	int ret, i;
+
+	for (i = 0; i < hw->dcb_info.num_tc; i++) {
+		ret = hns3_bp_setup_hw(hw, i);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_dcb_pause_setup_hw(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	ret = hns3_pause_param_setup_hw(hw, pf->pause_time);
+	if (ret) {
+		hns3_err(hw, "Fail to set pause parameter. ret = %d", ret);
+		return ret;
+	}
+
+	ret = hns3_mac_pause_setup_hw(hw);
+	if (ret) {
+		hns3_err(hw, "Fail to setup MAC pause. ret = %d", ret);
+		return ret;
+	}
+
+	/* Only DCB-supported dev supports qset back pressure and pfc cmd */
+	if (!hns3_dev_dcb_supported(hw))
+		return 0;
+
+	ret = hns3_pfc_setup_hw(hw);
+	if (ret) {
+		hns3_err(hw, "config pfc failed! ret = %d", ret);
+		return ret;
+	}
+
+	return hns3_dcb_bp_setup(hw);
+}
+
+static uint8_t
+hns3_dcb_undrop_tc_map(struct hns3_hw *hw, uint8_t pfc_en)
+{
+	uint8_t pfc_map = 0;
+	uint8_t *prio_tc;
+	uint8_t i, j;
+
+	prio_tc = hw->dcb_info.prio_tc;
+	for (i = 0; i < hw->dcb_info.num_tc; i++) {
+		for (j = 0; j < HNS3_MAX_USER_PRIO; j++) {
+			if (prio_tc[j] == i && pfc_en & BIT(j)) {
+				pfc_map |= BIT(i);
+				break;
+			}
+		}
+	}
+
+	return pfc_map;
+}
+
+static void
+hns3_dcb_cfg_validate(struct hns3_adapter *hns, uint8_t *tc, bool *changed)
+{
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t max_tc = 0;
+	uint8_t pfc_en;
+	int i;
+
+	dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	for (i = 0; i < HNS3_MAX_USER_PRIO; i++) {
+		if (dcb_rx_conf->dcb_tc[i] != hw->dcb_info.prio_tc[i])
+			*changed = true;
+
+		if (dcb_rx_conf->dcb_tc[i] > max_tc)
+			max_tc = dcb_rx_conf->dcb_tc[i];
+	}
+	*tc = max_tc + 1;
+	if (*tc != hw->dcb_info.num_tc)
+		*changed = true;
+
+	/*
+	 * We ensure that the dcb information can be reconfigured after
+	 * the hns3_priority_flow_ctrl_set function is called.
+	 */
+	if (hw->current_mode != HNS3_FC_FULL)
+		*changed = true;
+	pfc_en = RTE_LEN2MASK((uint8_t)dcb_rx_conf->nb_tcs, uint8_t);
+	if (hw->dcb_info.pfc_en != pfc_en)
+		*changed = true;
+}
+
+static void
+hns3_dcb_info_cfg(struct hns3_adapter *hns)
+{
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t tc_bw, bw_rest;
+	uint8_t i, j;
+
+	dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	pf->local_max_tc = (uint8_t)dcb_rx_conf->nb_tcs;
+	pf->pfc_max = (uint8_t)dcb_rx_conf->nb_tcs;
+
+	/* Config pg0 */
+	memset(hw->dcb_info.pg_info, 0,
+	       sizeof(struct hns3_pg_info) * HNS3_PG_NUM);
+	hw->dcb_info.pg_dwrr[0] = BW_MAX_PERCENT;
+	hw->dcb_info.pg_info[0].pg_id = 0;
+	hw->dcb_info.pg_info[0].pg_sch_mode = HNS3_SCH_MODE_DWRR;
+	hw->dcb_info.pg_info[0].bw_limit = HNS3_ETHER_MAX_RATE;
+	hw->dcb_info.pg_info[0].tc_bit_map = hw->hw_tc_map;
+
+	/* By default, each valid tc gets the same bandwidth */
+	tc_bw = BW_MAX_PERCENT / hw->dcb_info.num_tc;
+	for (i = 0; i < hw->dcb_info.num_tc; i++)
+		hw->dcb_info.pg_info[0].tc_dwrr[i] = tc_bw;
+	/* To ensure the sum of tc_dwrr is equal to 100 */
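+	/* e.g. three TCs sharing 100% get the weights 34/33/33 */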
+	bw_rest = BW_MAX_PERCENT % hw->dcb_info.num_tc;
+	for (j = 0; j < bw_rest; j++)
+		hw->dcb_info.pg_info[0].tc_dwrr[j]++;
+	for (; i < dcb_rx_conf->nb_tcs; i++)
+		hw->dcb_info.pg_info[0].tc_dwrr[i] = 0;
+
+	/* All tcs map to pg0 */
+	memset(hw->dcb_info.tc_info, 0,
+	       sizeof(struct hns3_tc_info) * HNS3_MAX_TC_NUM);
+	for (i = 0; i < hw->dcb_info.num_tc; i++) {
+		hw->dcb_info.tc_info[i].tc_id = i;
+		hw->dcb_info.tc_info[i].tc_sch_mode = HNS3_SCH_MODE_DWRR;
+		hw->dcb_info.tc_info[i].pgid = 0;
+		hw->dcb_info.tc_info[i].bw_limit =
+					hw->dcb_info.pg_info[0].bw_limit;
+	}
+
+	for (i = 0; i < HNS3_MAX_USER_PRIO; i++)
+		hw->dcb_info.prio_tc[i] = dcb_rx_conf->dcb_tc[i];
+
+	hns3_dcb_update_tc_queue_mapping(hw, hw->data->nb_rx_queues);
+}
+
+static void
+hns3_dcb_info_update(struct hns3_adapter *hns, uint8_t num_tc)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t bit_map = 0;
+	uint8_t i;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE &&
+	    hw->dcb_info.num_pg != 1)
+		return;
+
+	/* Discontinuous TCs are not currently supported */
+	hw->dcb_info.num_tc = num_tc;
+	for (i = 0; i < hw->dcb_info.num_tc; i++)
+		bit_map |= BIT(i);
+
+	if (!bit_map) {
+		bit_map = 1;
+		hw->dcb_info.num_tc = 1;
+	}
+
+	hw->hw_tc_map = bit_map;
+
+	hns3_dcb_info_cfg(hns);
+}
+
+static int
+hns3_dcb_hw_configure(struct hns3_adapter *hns)
+{
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_fc_status fc_status = hw->current_fc_status;
+	enum hns3_fc_mode current_mode = hw->current_mode;
+	uint8_t hw_pfc_map = hw->dcb_info.hw_pfc_map;
+	int ret, status;
+
+	if (pf->tx_sch_mode != HNS3_FLAG_TC_BASE_SCH_MODE &&
+	    pf->tx_sch_mode != HNS3_FLAG_VNET_BASE_SCH_MODE)
+		return -ENOTSUP;
+
+	ret = hns3_dcb_schd_setup_hw(hw);
+	if (ret) {
+		hns3_err(hw, "dcb schedule configure failed! ret = %d", ret);
+		return ret;
+	}
+
+	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+		if (dcb_rx_conf->nb_tcs == 0)
+			hw->dcb_info.pfc_en = 1; /* tc0 only */
+		else
+			hw->dcb_info.pfc_en =
+			RTE_LEN2MASK((uint8_t)dcb_rx_conf->nb_tcs, uint8_t);
+
+		hw->dcb_info.hw_pfc_map =
+				hns3_dcb_undrop_tc_map(hw, hw->dcb_info.pfc_en);
+
+		ret = hns3_buffer_alloc(hw);
+		if (ret)
+			return ret;
+
+		hw->current_fc_status = HNS3_FC_STATUS_PFC;
+		hw->current_mode = HNS3_FC_FULL;
+		ret = hns3_dcb_pause_setup_hw(hw);
+		if (ret) {
+			hns3_err(hw, "setup pfc failed! ret = %d", ret);
+			goto pfc_setup_fail;
+		}
+	} else {
+		/*
+		 * Although dcb_capability_en lacks the ETH_DCB_PFC_SUPPORT
+		 * flag, the DCB information, such as the number of tcs, has
+		 * been configured, so the packet buffer allocation still
+		 * needs to be refreshed.
+		 */
+		ret = hns3_buffer_alloc(hw);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+
+pfc_setup_fail:
+	hw->current_mode = current_mode;
+	hw->current_fc_status = fc_status;
+	hw->dcb_info.hw_pfc_map = hw_pfc_map;
+	status = hns3_buffer_alloc(hw);
+	if (status)
+		hns3_err(hw, "recover packet buffer fail! status = %d", status);
+
+	return ret;
+}
+
+/*
+ * hns3_dcb_configure - setup dcb related config
+ * @hns: pointer to hns3 adapter
+ * Returns 0 on success, negative value on failure.
+ */
+int
+hns3_dcb_configure(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	bool map_changed = false;
+	uint8_t num_tc = 0;
+	int ret;
+
+	hns3_dcb_cfg_validate(hns, &num_tc, &map_changed);
+	if (map_changed || rte_atomic16_read(&hw->reset.resetting)) {
+		hns3_dcb_info_update(hns, num_tc);
+		ret = hns3_dcb_hw_configure(hns);
+		if (ret) {
+			hns3_err(hw, "dcb sw configure fails: %d", ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+int
+hns3_dcb_init_hw(struct hns3_hw *hw)
+{
+	int ret;
+
+	ret = hns3_dcb_schd_setup_hw(hw);
+	if (ret) {
+		hns3_err(hw, "dcb schedule setup failed: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_dcb_pause_setup_hw(hw);
+	if (ret)
+		hns3_err(hw, "PAUSE setup failed: %d", ret);
+
+	return ret;
+}
+
+int
+hns3_dcb_init(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/*
+	 * Based on the 'adapter_state' identifier, the following branch is
+	 * only executed to initialize the default DCB configuration during
+	 * driver initialization. Because the driver saves DCB-related
+	 * information before a reset is triggered, the device-reinit stage
+	 * of the reset process must not enter this branch, otherwise that
+	 * information would be overwritten.
+	 */
+	if (hw->adapter_state == HNS3_NIC_UNINITIALIZED) {
+		hw->requested_mode = HNS3_FC_NONE;
+		hw->current_mode = hw->requested_mode;
+		pf->pause_time = HNS3_DEFAULT_PAUSE_TRANS_TIME;
+		hw->current_fc_status = HNS3_FC_STATUS_NONE;
+
+		ret = hns3_dcb_info_init(hw);
+		if (ret) {
+			hns3_err(hw, "dcb info init failed: %d", ret);
+			return ret;
+		}
+		hns3_dcb_update_tc_queue_mapping(hw, hw->tqps_num);
+	}
+
+	/*
+	 * The following function configures the DCB hardware during both
+	 * driver initialization and the reset process. Once driver
+	 * initialization has finished and a reset occurs, the driver
+	 * restores the DCB hardware configuration directly from the
+	 * DCB-related information maintained in software.
+	 */
+	ret = hns3_dcb_init_hw(hw);
+	if (ret) {
+		hns3_err(hw, "dcb init hardware failed: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3_update_queue_map_configure(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t queue_num = hw->data->nb_rx_queues;
+	int ret;
+
+	hns3_dcb_update_tc_queue_mapping(hw, queue_num);
+	ret = hns3_q_to_qs_map(hw);
+	if (ret) {
+		hns3_err(hw, "failed to map nq to qs! ret = %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+int
+hns3_dcb_cfg_update(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	enum rte_eth_rx_mq_mode mq_mode = hw->data->dev_conf.rxmode.mq_mode;
+	int ret;
+
+	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+		ret = hns3_dcb_configure(hns);
+		if (ret) {
+			hns3_err(hw, "Failed to config dcb: %d", ret);
+			return ret;
+		}
+	} else {
+		/*
+		 * Update queue map without PFC configuration,
+		 * due to queues reconfigured by user.
+		 */
+		ret = hns3_update_queue_map_configure(hns);
+		if (ret)
+			hns3_err(hw,
+				 "Failed to update queue mapping configure: %d",
+				 ret);
+	}
+
+	return ret;
+}
+
+/*
+ * hns3_dcb_pfc_enable - Enable priority flow control
+ * @dev: pointer to ethernet device
+ *
+ * Configures the PFC settings for one priority.
+ */
+int
+hns3_dcb_pfc_enable(struct rte_eth_dev *dev, struct rte_eth_pfc_conf *pfc_conf)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	enum hns3_fc_status fc_status = hw->current_fc_status;
+	enum hns3_fc_mode current_mode = hw->current_mode;
+	uint8_t hw_pfc_map = hw->dcb_info.hw_pfc_map;
+	uint8_t pfc_en = hw->dcb_info.pfc_en;
+	uint8_t priority = pfc_conf->priority;
+	uint16_t pause_time = pf->pause_time;
+	int ret, status;
+
+	pf->pause_time = pfc_conf->fc.pause_time;
+	hw->current_mode = hw->requested_mode;
+	hw->current_fc_status = HNS3_FC_STATUS_PFC;
+	hw->dcb_info.pfc_en |= BIT(priority);
+	hw->dcb_info.hw_pfc_map =
+			hns3_dcb_undrop_tc_map(hw, hw->dcb_info.pfc_en);
+	ret = hns3_buffer_alloc(hw);
+	if (ret)
+		goto pfc_setup_fail;
+
+	/*
+	 * The flow control mode of all user priorities (UPs) will be
+	 * changed based on the current_mode requested by the user.
+	 */
+	ret = hns3_dcb_pause_setup_hw(hw);
+	if (ret) {
+		hns3_err(hw, "enable pfc failed! ret = %d", ret);
+		goto pfc_setup_fail;
+	}
+
+	return 0;
+
+pfc_setup_fail:
+	hw->current_mode = current_mode;
+	hw->current_fc_status = fc_status;
+	pf->pause_time = pause_time;
+	hw->dcb_info.pfc_en = pfc_en;
+	hw->dcb_info.hw_pfc_map = hw_pfc_map;
+	status = hns3_buffer_alloc(hw);
+	if (status)
+		hns3_err(hw, "recover packet buffer fail: %d", status);
+
+	return ret;
+}
+
+/*
+ * hns3_fc_enable - Enable MAC pause
+ * @dev: pointer to ethernet device
+ *
+ * Configures the MAC pause settings.
+ */
+int
+hns3_fc_enable(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	enum hns3_fc_status fc_status = hw->current_fc_status;
+	enum hns3_fc_mode current_mode = hw->current_mode;
+	uint16_t pause_time = pf->pause_time;
+	int ret;
+
+	pf->pause_time = fc_conf->pause_time;
+	hw->current_mode = hw->requested_mode;
+
+	/*
+	 * current_fc_status is HNS3_FC_STATUS_NONE when the flow control
+	 * mode is configured as HNS3_FC_NONE.
+	 */
+	if (hw->current_mode == HNS3_FC_NONE)
+		hw->current_fc_status = HNS3_FC_STATUS_NONE;
+	else
+		hw->current_fc_status = HNS3_FC_STATUS_MAC_PAUSE;
+
+	ret = hns3_dcb_pause_setup_hw(hw);
+	if (ret) {
+		hns3_err(hw, "enable MAC Pause failed! ret = %d", ret);
+		goto setup_fc_fail;
+	}
+
+	return 0;
+
+setup_fc_fail:
+	hw->current_mode = current_mode;
+	hw->current_fc_status = fc_status;
+	pf->pause_time = pause_time;
+
+	return ret;
+}
diff --git a/drivers/net/hns3/hns3_dcb.h b/drivers/net/hns3/hns3_dcb.h
new file mode 100644
index 0000000..9ec4e70
--- /dev/null
+++ b/drivers/net/hns3/hns3_dcb.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_DCB_H_
+#define _HNS3_DCB_H_
+
+/* MAC Pause */
+#define HNS3_TX_MAC_PAUSE_EN_MSK	BIT(0)
+#define HNS3_RX_MAC_PAUSE_EN_MSK	BIT(1)
+
+#define HNS3_DEFAULT_PAUSE_TRANS_GAP	0x18
+#define HNS3_DEFAULT_PAUSE_TRANS_TIME	0xFFFF
+
+/* SP or DWRR */
+#define HNS3_DCB_TX_SCHD_DWRR_MSK	BIT(0)
+#define HNS3_DCB_TX_SCHD_SP_MSK		(0xFE)
+
+enum hns3_shap_bucket {
+	HNS3_DCB_SHAP_C_BUCKET = 0,
+	HNS3_DCB_SHAP_P_BUCKET,
+};
+
+struct hns3_priority_weight_cmd {
+	uint8_t pri_id;
+	uint8_t dwrr;
+};
+
+struct hns3_qs_weight_cmd {
+	uint16_t qs_id;
+	uint8_t dwrr;
+};
+
+struct hns3_pg_weight_cmd {
+	uint8_t pg_id;
+	uint8_t dwrr;
+};
+
+struct hns3_ets_tc_weight_cmd {
+	uint8_t tc_weight[HNS3_MAX_TC_NUM];
+	uint8_t weight_offset;
+	uint8_t rsvd[15];
+};
+
+struct hns3_qs_to_pri_link_cmd {
+	uint16_t qs_id;
+	uint16_t rsvd;
+	uint8_t priority;
+#define HNS3_DCB_QS_PRI_LINK_VLD_MSK	BIT(0)
+	uint8_t link_vld;
+};
+
+struct hns3_nq_to_qs_link_cmd {
+	uint16_t nq_id;
+	uint16_t rsvd;
+#define HNS3_DCB_Q_QS_LINK_VLD_MSK	BIT(10)
+	uint16_t qset_id;
+};
+
+#define HNS3_DCB_SHAP_IR_B_MSK  GENMASK(7, 0)
+#define HNS3_DCB_SHAP_IR_B_LSH	0
+#define HNS3_DCB_SHAP_IR_U_MSK  GENMASK(11, 8)
+#define HNS3_DCB_SHAP_IR_U_LSH	8
+#define HNS3_DCB_SHAP_IR_S_MSK  GENMASK(15, 12)
+#define HNS3_DCB_SHAP_IR_S_LSH	12
+#define HNS3_DCB_SHAP_BS_B_MSK  GENMASK(20, 16)
+#define HNS3_DCB_SHAP_BS_B_LSH	16
+#define HNS3_DCB_SHAP_BS_S_MSK  GENMASK(25, 21)
+#define HNS3_DCB_SHAP_BS_S_LSH	21
+
+struct hns3_pri_shapping_cmd {
+	uint8_t pri_id;
+	uint8_t rsvd[3];
+	uint32_t pri_shapping_para;
+};
+
+struct hns3_pg_shapping_cmd {
+	uint8_t pg_id;
+	uint8_t rsvd[3];
+	uint32_t pg_shapping_para;
+};
+
+#define HNS3_BP_GRP_NUM		32
+#define HNS3_BP_SUB_GRP_ID_S		0
+#define HNS3_BP_SUB_GRP_ID_M		GENMASK(4, 0)
+#define HNS3_BP_GRP_ID_S		5
+#define HNS3_BP_GRP_ID_M		GENMASK(9, 5)
+struct hns3_bp_to_qs_map_cmd {
+	uint8_t tc_id;
+	uint8_t rsvd[2];
+	uint8_t qs_group_id;
+	uint32_t qs_bit_map;
+	uint32_t rsvd1;
+};
+
+struct hns3_pfc_en_cmd {
+	uint8_t tx_rx_en_bitmap;
+	uint8_t pri_en_bitmap;
+};
+
+struct hns3_port_shapping_cmd {
+	uint32_t port_shapping_para;
+};
+
+struct hns3_cfg_pause_param_cmd {
+	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t pause_trans_gap;
+	uint8_t rsvd;
+	uint16_t pause_trans_time;
+	uint8_t rsvd1[6];
+	/* extra mac address to do double check for pause frame */
+	uint8_t mac_addr_extra[RTE_ETHER_ADDR_LEN];
+	uint16_t rsvd2;
+};
+
+struct hns3_pg_to_pri_link_cmd {
+	uint8_t pg_id;
+	uint8_t rsvd1[3];
+	uint8_t pri_bit_map;
+};
+
+enum hns3_shaper_level {
+	HNS3_SHAPER_LVL_PRI	= 0,
+	HNS3_SHAPER_LVL_PG	= 1,
+	HNS3_SHAPER_LVL_PORT	= 2,
+	HNS3_SHAPER_LVL_QSET	= 3,
+	HNS3_SHAPER_LVL_CNT	= 4,
+	HNS3_SHAPER_LVL_VF	= 0,
+	HNS3_SHAPER_LVL_PF	= 1,
+};
+
+struct hns3_shaper_parameter {
+	uint32_t ir_b;  /* IR_B parameter of IR shaper */
+	uint32_t ir_u;  /* IR_U parameter of IR shaper */
+	uint32_t ir_s;  /* IR_S parameter of IR shaper */
+};
+
+#define hns3_dcb_set_field(dest, string, val) \
+			   hns3_set_field((dest), \
+			   (HNS3_DCB_SHAP_##string##_MSK), \
+			   (HNS3_DCB_SHAP_##string##_LSH), val)
+#define hns3_dcb_get_field(src, string) \
+			hns3_get_field((src), (HNS3_DCB_SHAP_##string##_MSK), \
+				       (HNS3_DCB_SHAP_##string##_LSH))
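+
+/*
+ * For example,
+ *	hns3_dcb_set_field(shap_para, IR_B, ir_b);
+ * expands to
+ *	hns3_set_field(shap_para, HNS3_DCB_SHAP_IR_B_MSK,
+ *		       HNS3_DCB_SHAP_IR_B_LSH, ir_b);
+ * i.e. it writes ir_b into bits 7:0 of the shaper parameter word.
+ */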
+
+int hns3_pause_addr_cfg(struct hns3_hw *hw, const uint8_t *mac_addr);
+
+int hns3_dcb_configure(struct hns3_adapter *hns);
+
+int hns3_dcb_init(struct hns3_hw *hw);
+
+int hns3_dcb_init_hw(struct hns3_hw *hw);
+
+int hns3_dcb_info_init(struct hns3_hw *hw);
+
+int
+hns3_fc_enable(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf);
+
+int
+hns3_dcb_pfc_enable(struct rte_eth_dev *dev, struct rte_eth_pfc_conf *pfc_conf);
+
+void hns3_tc_queue_mapping_cfg(struct hns3_hw *hw);
+
+int hns3_dcb_cfg_update(struct hns3_adapter *hns);
+
+#endif /* _HNS3_DCB_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 723ada1..a91c1cd 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -35,6 +35,7 @@
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_regs.h"
+#include "hns3_dcb.h"
 
 #define HNS3_DEFAULT_PORT_CONF_BURST_SIZE	32
 #define HNS3_DEFAULT_PORT_CONF_QUEUES_NUM	1
@@ -2616,6 +2617,12 @@ hns3_init_hardware(struct hns3_adapter *hns)
 		goto err_mac_init;
 	}
 
+	ret = hns3_dcb_init(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init dcb: %d", ret);
+		goto err_mac_init;
+	}
+
 	ret = hns3_init_fd_config(hns);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to init flow director: %d", ret);
@@ -2738,11 +2745,200 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 	hw->adapter_state = HNS3_NIC_CLOSED;
 }
 
+static int
+hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	fc_conf->pause_time = pf->pause_time;
+
+	/* return the current flow control mode */
+	switch (hw->current_mode) {
+	case HNS3_FC_FULL:
+		fc_conf->mode = RTE_FC_FULL;
+		break;
+	case HNS3_FC_TX_PAUSE:
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+		break;
+	case HNS3_FC_RX_PAUSE:
+		fc_conf->mode = RTE_FC_RX_PAUSE;
+		break;
+	case HNS3_FC_NONE:
+	default:
+		fc_conf->mode = RTE_FC_NONE;
+		break;
+	}
+
+	return 0;
+}
+
+static void
+hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
+{
+	switch (mode) {
+	case RTE_FC_NONE:
+		hw->requested_mode = HNS3_FC_NONE;
+		break;
+	case RTE_FC_RX_PAUSE:
+		hw->requested_mode = HNS3_FC_RX_PAUSE;
+		break;
+	case RTE_FC_TX_PAUSE:
+		hw->requested_mode = HNS3_FC_TX_PAUSE;
+		break;
+	case RTE_FC_FULL:
+		hw->requested_mode = HNS3_FC_FULL;
+		break;
+	default:
+		hw->requested_mode = HNS3_FC_NONE;
+		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
+			  "configured to RTE_FC_NONE", mode);
+		break;
+	}
+}
+
+static int
+hns3_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	if (fc_conf->high_water || fc_conf->low_water ||
+	    fc_conf->send_xon || fc_conf->mac_ctrl_frame_fwd) {
+		hns3_err(hw, "Unsupported flow control settings specified, "
+			 "high_water(%u), low_water(%u), send_xon(%u) and "
+			 "mac_ctrl_frame_fwd(%u) must be set to '0'",
+			 fc_conf->high_water, fc_conf->low_water,
+			 fc_conf->send_xon, fc_conf->mac_ctrl_frame_fwd);
+		return -EINVAL;
+	}
+	if (fc_conf->autoneg) {
+		hns3_err(hw, "Unsupported fc auto-negotiation setting.");
+		return -EINVAL;
+	}
+	if (!fc_conf->pause_time) {
+		hns3_err(hw, "Invalid pause time %d setting.",
+			 fc_conf->pause_time);
+		return -EINVAL;
+	}
+
+	if (!(hw->current_fc_status == HNS3_FC_STATUS_NONE ||
+	    hw->current_fc_status == HNS3_FC_STATUS_MAC_PAUSE)) {
+		hns3_err(hw, "PFC is enabled. Cannot set MAC pause. "
+			 "current_fc_status = %d", hw->current_fc_status);
+		return -EOPNOTSUPP;
+	}
+
+	hns3_get_fc_mode(hw, fc_conf->mode);
+	if (hw->requested_mode == hw->current_mode &&
+	    pf->pause_time == fc_conf->pause_time)
+		return 0;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_fc_enable(dev, fc_conf);
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+static int
+hns3_priority_flow_ctrl_set(struct rte_eth_dev *dev,
+			    struct rte_eth_pfc_conf *pfc_conf)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint8_t priority;
+	int ret;
+
+	if (!hns3_dev_dcb_supported(hw)) {
+		hns3_err(hw, "This port does not support dcb configurations.");
+		return -EOPNOTSUPP;
+	}
+
+	if (pfc_conf->fc.high_water || pfc_conf->fc.low_water ||
+	    pfc_conf->fc.send_xon || pfc_conf->fc.mac_ctrl_frame_fwd) {
+		hns3_err(hw, "Unsupported flow control settings specified, "
+			 "high_water(%u), low_water(%u), send_xon(%u) and "
+			 "mac_ctrl_frame_fwd(%u) must be set to '0'",
+			 pfc_conf->fc.high_water, pfc_conf->fc.low_water,
+			 pfc_conf->fc.send_xon,
+			 pfc_conf->fc.mac_ctrl_frame_fwd);
+		return -EINVAL;
+	}
+	if (pfc_conf->fc.autoneg) {
+		hns3_err(hw, "Unsupported fc auto-negotiation setting.");
+		return -EINVAL;
+	}
+	if (pfc_conf->fc.pause_time == 0) {
+		hns3_err(hw, "Invalid pause time %d setting.",
+			 pfc_conf->fc.pause_time);
+		return -EINVAL;
+	}
+
+	if (!(hw->current_fc_status == HNS3_FC_STATUS_NONE ||
+	    hw->current_fc_status == HNS3_FC_STATUS_PFC)) {
+		hns3_err(hw, "MAC pause is enabled. Cannot set PFC."
+			     "current_fc_status = %d", hw->current_fc_status);
+		return -EOPNOTSUPP;
+	}
+
+	priority = pfc_conf->priority;
+	hns3_get_fc_mode(hw, pfc_conf->fc.mode);
+	if (hw->dcb_info.pfc_en & BIT(priority) &&
+	    hw->requested_mode == hw->current_mode &&
+	    pfc_conf->fc.pause_time == pf->pause_time)
+		return 0;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_dcb_pfc_enable(dev, pfc_conf);
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+static int
+hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+	int i;
+
+	rte_spinlock_lock(&hw->lock);
+	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs = pf->local_max_tc;
+	else
+		dcb_info->nb_tcs = 1;
+
+	for (i = 0; i < HNS3_MAX_USER_PRIO; i++)
+		dcb_info->prio_tc[i] = hw->dcb_info.prio_tc[i];
+	for (i = 0; i < dcb_info->nb_tcs; i++)
+		dcb_info->tc_bws[i] = hw->dcb_info.pg_info[0].tc_dwrr[i];
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		dcb_info->tc_queue.tc_rxq[0][i].base =
+					hw->tc_queue[i].tqp_offset;
+		dcb_info->tc_queue.tc_txq[0][i].base =
+					hw->tc_queue[i].tqp_offset;
+		dcb_info->tc_queue.tc_rxq[0][i].nb_queue =
+					hw->tc_queue[i].tqp_count;
+		dcb_info->tc_queue.tc_txq[0][i].nb_queue =
+					hw->tc_queue[i].tqp_count;
+	}
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
 static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.dev_close          = hns3_dev_close,
 	.mtu_set            = hns3_dev_mtu_set,
 	.dev_infos_get          = hns3_dev_infos_get,
 	.fw_version_get         = hns3_fw_version_get,
+	.flow_ctrl_get          = hns3_flow_ctrl_get,
+	.flow_ctrl_set          = hns3_flow_ctrl_set,
+	.priority_flow_ctrl_set = hns3_priority_flow_ctrl_set,
 	.mac_addr_add           = hns3_add_mac_addr,
 	.mac_addr_remove        = hns3_remove_mac_addr,
 	.mac_addr_set           = hns3_set_default_mac_addr,
@@ -2753,6 +2949,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.reta_update            = hns3_dev_rss_reta_update,
 	.reta_query             = hns3_dev_rss_reta_query,
 	.filter_ctrl            = hns3_dev_filter_ctrl,
+	.get_dcb_info           = hns3_get_dcb_info,
 };
 
 static int
@@ -2783,6 +2980,11 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->dev_ops = &hns3_eth_dev_ops;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
+	if (device_id == HNS3_DEV_ID_25GE_RDMA ||
+	    device_id == HNS3_DEV_ID_50GE_RDMA ||
+	    device_id == HNS3_DEV_ID_100G_RDMA_MACSEC)
+		hns3_set_bit(hw->flag, HNS3_DEV_SUPPORT_DCB_B, 1);
+
 	hns->is_vf = false;
 	hw->data = eth_dev->data;
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 12/22] net/hns3: add support for VLAN of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (10 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 11/22] net/hns3: add support for flow control " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:08   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 13/22] net/hns3: add support for mailbox " Wei Hu (Xavier)
                   ` (10 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds support for VLAN-related operations of the hns3 PMD driver.
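
For illustration, the new dev ops are reached through the generic
rte_ethdev API, so an application can exercise them roughly as follows
(a minimal sketch using standard DPDK ethdev calls; port_id and the
VLAN/PVID values are illustrative):

	#include <rte_ethdev.h>

	static int
	example_vlan_setup(uint16_t port_id)
	{
		int ret;

		/* admit VLAN 100 through the port VLAN filter */
		ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
		if (ret != 0)
			return ret;

		/* request VLAN stripping on rx (other VLAN offloads off) */
		ret = rte_eth_dev_set_vlan_offload(port_id,
						   ETH_VLAN_STRIP_OFFLOAD);
		if (ret != 0)
			return ret;

		/* insert PVID 100 on transmit */
		return rte_eth_dev_set_vlan_pvid(port_id, 100, 1);
	}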

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c | 670 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 670 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index a91c1cd..cdade9d 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -40,10 +40,670 @@
 #define HNS3_DEFAULT_PORT_CONF_BURST_SIZE	32
 #define HNS3_DEFAULT_PORT_CONF_QUEUES_NUM	1
 
+#define HNS3_PORT_BASE_VLAN_DISABLE	0
+#define HNS3_PORT_BASE_VLAN_ENABLE	1
+#define HNS3_INVLID_PVID		0xFFFF
+
+#define HNS3_FILTER_TYPE_VF		0
+#define HNS3_FILTER_TYPE_PORT		1
+#define HNS3_FILTER_FE_EGRESS_V1_B	BIT(0)
+#define HNS3_FILTER_FE_NIC_INGRESS_B	BIT(0)
+#define HNS3_FILTER_FE_NIC_EGRESS_B	BIT(1)
+#define HNS3_FILTER_FE_ROCE_INGRESS_B	BIT(2)
+#define HNS3_FILTER_FE_ROCE_EGRESS_B	BIT(3)
+#define HNS3_FILTER_FE_EGRESS		(HNS3_FILTER_FE_NIC_EGRESS_B \
+					| HNS3_FILTER_FE_ROCE_EGRESS_B)
+#define HNS3_FILTER_FE_INGRESS		(HNS3_FILTER_FE_NIC_INGRESS_B \
+					| HNS3_FILTER_FE_ROCE_INGRESS_B)
+
 int hns3_logtype_init;
 int hns3_logtype_driver;
 
 static int hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid,
+				    int on);
+
+static int
+hns3_set_port_vlan_filter(struct hns3_adapter *hns, uint16_t vlan_id, int on)
+{
+#define HNS3_VLAN_OFFSET_160		160
+	struct hns3_vlan_filter_pf_cfg_cmd *req;
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t vlan_offset_byte_val;
+	struct hns3_cmd_desc desc;
+	uint8_t vlan_offset_byte;
+	uint8_t vlan_offset_160;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_VLAN_FILTER_PF_CFG, false);
+
+	vlan_offset_160 = vlan_id / HNS3_VLAN_OFFSET_160;
+	vlan_offset_byte = (vlan_id % HNS3_VLAN_OFFSET_160) / 8;
+	vlan_offset_byte_val = 1 << (vlan_id % 8);
+
+	req = (struct hns3_vlan_filter_pf_cfg_cmd *)desc.data;
+	req->vlan_offset = vlan_offset_160;
+	req->vlan_cfg = on ? 0 : 1;
+	req->vlan_offset_bitmap[vlan_offset_byte] = vlan_offset_byte_val;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "set port vlan id failed, vlan_id =%u, ret =%d",
+			 vlan_id, ret);
+
+	return ret;
+}
+
+static void
+hns3_rm_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id)
+{
+	struct hns3_user_vlan_table *vlan_entry;
+	struct hns3_pf *pf = &hns->pf;
+
+	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
+		if (vlan_entry->vlan_id == vlan_id) {
+			if (vlan_entry->hd_tbl_status)
+				hns3_set_port_vlan_filter(hns, vlan_id, 0);
+			LIST_REMOVE(vlan_entry, next);
+			rte_free(vlan_entry);
+			break;
+		}
+	}
+}
+
+static void
+hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id,
+			bool written_to_tbl)
+{
+	struct hns3_user_vlan_table *vlan_entry;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_pf *pf = &hns->pf;
+
+	vlan_entry = rte_zmalloc("hns3_vlan_tbl", sizeof(*vlan_entry), 0);
+	if (vlan_entry == NULL) {
+		hns3_err(hw, "Failed to malloc hns3 vlan table");
+		return;
+	}
+
+	vlan_entry->hd_tbl_status = written_to_tbl;
+	vlan_entry->vlan_id = vlan_id;
+
+	LIST_INSERT_HEAD(&pf->vlan_list, vlan_entry, next);
+}
+
+static int
+hns3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_vlan_filter_configure(hns, vlan_id, on);
+	rte_spinlock_unlock(&hw->lock);
+	return ret;
+}
+
+static int
+hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
+			 uint16_t tpid)
+{
+	struct hns3_rx_vlan_type_cfg_cmd *rx_req;
+	struct hns3_tx_vlan_type_cfg_cmd *tx_req;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
+	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
+		return -EINVAL;
+	}
+
+	if (tpid != RTE_ETHER_TYPE_VLAN) {
+		hns3_err(hw, "Unsupported vlan tpid, vlan_type =%d", vlan_type);
+		return -EINVAL;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
+	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
+
+	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
+		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
+	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
+		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
+		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
+		rx_req->in_sec_vlan_type = rte_cpu_to_le_16(tpid);
+	}
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Send rxvlan protocol type command fail, ret =%d",
+			 ret);
+		return ret;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_INSERT, false);
+
+	tx_req = (struct hns3_tx_vlan_type_cfg_cmd *)desc.data;
+	tx_req->ot_vlan_type = rte_cpu_to_le_16(tpid);
+	tx_req->in_vlan_type = rte_cpu_to_le_16(tpid);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Send txvlan protocol type command fail, ret =%d",
+			 ret);
+	return ret;
+}
+
+static int
+hns3_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
+		   uint16_t tpid)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_vlan_tpid_configure(hns, vlan_type, tpid);
+	rte_spinlock_unlock(&hw->lock);
+	return ret;
+}
+
+static int
+hns3_set_vlan_rx_offload_cfg(struct hns3_adapter *hns,
+			     struct hns3_rx_vtag_cfg *vcfg)
+{
+	struct hns3_vport_vtag_rx_cfg_cmd *req;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc;
+	uint16_t vport_id;
+	uint8_t bitmap;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_VLAN_PORT_RX_CFG, false);
+
+	req = (struct hns3_vport_vtag_rx_cfg_cmd *)desc.data;
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_REM_TAG1_EN_B,
+		     vcfg->strip_tag1_en ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_REM_TAG2_EN_B,
+		     vcfg->strip_tag2_en ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_SHOW_TAG1_EN_B,
+		     vcfg->vlan1_vlan_prionly ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_SHOW_TAG2_EN_B,
+		     vcfg->vlan2_vlan_prionly ? 1 : 0);
+
+	/*
+	 * In the current version, VFs are not supported when the PF is
+	 * driven by DPDK. The vf_id related to the PF is 0, so only the
+	 * parameters for vport_id 0 need to be configured.
+	 */
+	vport_id = 0;
+	req->vf_offset = vport_id / HNS3_VF_NUM_PER_CMD;
+	bitmap = 1 << (vport_id % HNS3_VF_NUM_PER_BYTE);
+	req->vf_bitmap[req->vf_offset] = bitmap;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Send port rxvlan cfg command fail, ret =%d", ret);
+	return ret;
+}
+
+static void
+hns3_update_rx_offload_cfg(struct hns3_adapter *hns,
+			   struct hns3_rx_vtag_cfg *vcfg)
+{
+	struct hns3_pf *pf = &hns->pf;
+	memcpy(&pf->vtag_config.rx_vcfg, vcfg, sizeof(pf->vtag_config.rx_vcfg));
+}
+
+static void
+hns3_update_tx_offload_cfg(struct hns3_adapter *hns,
+			   struct hns3_tx_vtag_cfg *vcfg)
+{
+	struct hns3_pf *pf = &hns->pf;
+	memcpy(&pf->vtag_config.tx_vcfg, vcfg, sizeof(pf->vtag_config.tx_vcfg));
+}
+
+static int
+hns3_en_hw_strip_rxvtag(struct hns3_adapter *hns, bool enable)
+{
+	struct hns3_rx_vtag_cfg rxvlan_cfg;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	if (pf->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_DISABLE) {
+		rxvlan_cfg.strip_tag1_en = false;
+		rxvlan_cfg.strip_tag2_en = enable;
+	} else {
+		rxvlan_cfg.strip_tag1_en = enable;
+		rxvlan_cfg.strip_tag2_en = true;
+	}
+
+	rxvlan_cfg.vlan1_vlan_prionly = false;
+	rxvlan_cfg.vlan2_vlan_prionly = false;
+	rxvlan_cfg.rx_vlan_offload_en = enable;
+
+	ret = hns3_set_vlan_rx_offload_cfg(hns, &rxvlan_cfg);
+	if (ret) {
+		hns3_err(hw, "enable strip rx vtag failed, ret =%d", ret);
+		return ret;
+	}
+
+	hns3_update_rx_offload_cfg(hns, &rxvlan_cfg);
+
+	return ret;
+}
+
+static int
+hns3_set_vlan_filter_ctrl(struct hns3_hw *hw, uint8_t vlan_type,
+			  uint8_t fe_type, bool filter_en, uint8_t vf_id)
+{
+	struct hns3_vlan_filter_ctrl_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_VLAN_FILTER_CTRL, false);
+
+	req = (struct hns3_vlan_filter_ctrl_cmd *)desc.data;
+	req->vlan_type = vlan_type;
+	req->vlan_fe = filter_en ? fe_type : 0;
+	req->vf_id = vf_id;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "set vlan filter fail, ret =%d", ret);
+
+	return ret;
+}
+
+static int
+hns3_enable_vlan_filter(struct hns3_adapter *hns, bool enable)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_set_vlan_filter_ctrl(hw, HNS3_FILTER_TYPE_VF,
+					HNS3_FILTER_FE_EGRESS, false, 0);
+	if (ret) {
+		hns3_err(hw, "hns3 enable filter fail, ret =%d", ret);
+		return ret;
+	}
+
+	ret = hns3_set_vlan_filter_ctrl(hw, HNS3_FILTER_TYPE_PORT,
+					HNS3_FILTER_FE_INGRESS, enable, 0);
+	if (ret)
+		hns3_err(hw, "hns3 enable filter fail, ret =%d", ret);
+
+	return ret;
+}
+
+static int
+hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_rxmode *rxmode;
+	unsigned int tmp_mask;
+	bool enable;
+	int ret = 0;
+
+	rte_spinlock_lock(&hw->lock);
+	rxmode = &dev->data->dev_conf.rxmode;
+	tmp_mask = (unsigned int)mask;
+	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		    true : false;
+
+		ret = hns3_en_hw_strip_rxvtag(hns, enable);
+		if (ret) {
+			rte_spinlock_unlock(&hw->lock);
+			hns3_err(hw, "failed to enable rx strip, ret =%d", ret);
+			return ret;
+		}
+	}
+
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+static int
+hns3_set_vlan_tx_offload_cfg(struct hns3_adapter *hns,
+			     struct hns3_tx_vtag_cfg *vcfg)
+{
+	struct hns3_vport_vtag_tx_cfg_cmd *req;
+	struct hns3_cmd_desc desc;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t vport_id;
+	uint8_t bitmap;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_VLAN_PORT_TX_CFG, false);
+
+	req = (struct hns3_vport_vtag_tx_cfg_cmd *)desc.data;
+	req->def_vlan_tag1 = vcfg->default_tag1;
+	req->def_vlan_tag2 = vcfg->default_tag2;
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_ACCEPT_TAG1_B,
+		     vcfg->accept_tag1 ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_ACCEPT_UNTAG1_B,
+		     vcfg->accept_untag1 ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_ACCEPT_TAG2_B,
+		     vcfg->accept_tag2 ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_ACCEPT_UNTAG2_B,
+		     vcfg->accept_untag2 ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_PORT_INS_TAG1_EN_B,
+		     vcfg->insert_tag1_en ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_PORT_INS_TAG2_EN_B,
+		     vcfg->insert_tag2_en ? 1 : 0);
+	hns3_set_bit(req->vport_vlan_cfg, HNS3_CFG_NIC_ROCE_SEL_B, 0);
+
+	/*
+	 * In the current version, VFs are not supported when the PF is
+	 * driven by DPDK. The vf_id related to the PF is 0, so only the
+	 * parameters for vport_id 0 need to be configured.
+	 */
+	vport_id = 0;
+	req->vf_offset = vport_id / HNS3_VF_NUM_PER_CMD;
+	bitmap = 1 << (vport_id % HNS3_VF_NUM_PER_BYTE);
+	req->vf_bitmap[req->vf_offset] = bitmap;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Send port txvlan cfg command fail, ret =%d", ret);
+
+	return ret;
+}
+
+static int
+hns3_vlan_txvlan_cfg(struct hns3_adapter *hns, uint16_t port_base_vlan_state,
+		     uint16_t pvid)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_vtag_cfg txvlan_cfg;
+	int ret;
+
+	if (port_base_vlan_state == HNS3_PORT_BASE_VLAN_DISABLE) {
+		txvlan_cfg.accept_tag1 = true;
+		txvlan_cfg.insert_tag1_en = false;
+		txvlan_cfg.default_tag1 = 0;
+	} else {
+		txvlan_cfg.accept_tag1 = false;
+		txvlan_cfg.insert_tag1_en = true;
+		txvlan_cfg.default_tag1 = pvid;
+	}
+
+	txvlan_cfg.accept_untag1 = true;
+	txvlan_cfg.accept_tag2 = true;
+	txvlan_cfg.accept_untag2 = true;
+	txvlan_cfg.insert_tag2_en = false;
+	txvlan_cfg.default_tag2 = 0;
+
+	ret = hns3_set_vlan_tx_offload_cfg(hns, &txvlan_cfg);
+	if (ret) {
+		hns3_err(hw, "pf vlan set pvid failed, pvid =%u ,ret =%d", pvid,
+			 ret);
+		return ret;
+	}
+
+	hns3_update_tx_offload_cfg(hns, &txvlan_cfg);
+	return ret;
+}
+
+static void
+hns3_store_port_base_vlan_info(struct hns3_adapter *hns, uint16_t pvid, int on)
+{
+	struct hns3_pf *pf = &hns->pf;
+
+	pf->port_base_vlan_cfg.state = on ?
+	    HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE;
+
+	pf->port_base_vlan_cfg.pvid = pvid;
+}
+
+static void
+hns3_rm_all_vlan_table(struct hns3_adapter *hns, bool is_del_list)
+{
+	struct hns3_user_vlan_table *vlan_entry;
+	struct hns3_pf *pf = &hns->pf;
+
+	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
+		if (vlan_entry->hd_tbl_status)
+			hns3_set_port_vlan_filter(hns, vlan_entry->vlan_id, 0);
+
+		vlan_entry->hd_tbl_status = false;
+	}
+
+	if (is_del_list) {
+		vlan_entry = LIST_FIRST(&pf->vlan_list);
+		while (vlan_entry) {
+			LIST_REMOVE(vlan_entry, next);
+			rte_free(vlan_entry);
+			vlan_entry = LIST_FIRST(&pf->vlan_list);
+		}
+	}
+}
+
+static void
+hns3_add_all_vlan_table(struct hns3_adapter *hns)
+{
+	struct hns3_user_vlan_table *vlan_entry;
+	struct hns3_pf *pf = &hns->pf;
+
+	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
+		if (!vlan_entry->hd_tbl_status)
+			hns3_set_port_vlan_filter(hns, vlan_entry->vlan_id, 1);
+
+		vlan_entry->hd_tbl_status = true;
+	}
+}
+
+static void
+hns3_remove_all_vlan_table(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_pf *pf = &hns->pf;
+	int ret;
+
+	hns3_rm_all_vlan_table(hns, true);
+	if (pf->port_base_vlan_cfg.pvid != HNS3_INVLID_PVID) {
+		ret = hns3_set_port_vlan_filter(hns,
+						pf->port_base_vlan_cfg.pvid, 0);
+		if (ret) {
+			hns3_err(hw, "Failed to remove all vlan table, ret =%d",
+				 ret);
+			return;
+		}
+	}
+}
+
+static int
+hns3_update_vlan_filter_entries(struct hns3_adapter *hns,
+				uint16_t port_base_vlan_state,
+				uint16_t new_pvid, uint16_t old_pvid)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	int ret = 0;
+
+	if (port_base_vlan_state == HNS3_PORT_BASE_VLAN_ENABLE) {
+		if (old_pvid != HNS3_INVLID_PVID && old_pvid != 0) {
+			ret = hns3_set_port_vlan_filter(hns, old_pvid, 0);
+			if (ret) {
+				hns3_err(hw,
+					 "Failed to clear clear old pvid filter, ret =%d",
+					 ret);
+				return ret;
+			}
+		}
+
+		hns3_rm_all_vlan_table(hns, false);
+		return hns3_set_port_vlan_filter(hns, new_pvid, 1);
+	}
+
+	if (new_pvid != 0) {
+		ret = hns3_set_port_vlan_filter(hns, new_pvid, 0);
+		if (ret) {
+			hns3_err(hw, "Failed to set port vlan filter, ret =%d",
+				 ret);
+			return ret;
+		}
+	}
+
+	if (new_pvid == pf->port_base_vlan_cfg.pvid)
+		hns3_add_all_vlan_table(hns);
+
+	return ret;
+}
+
+static int
+hns3_en_rx_strip_all(struct hns3_adapter *hns, int on)
+{
+	struct hns3_rx_vtag_cfg rx_vlan_cfg;
+	struct hns3_hw *hw = &hns->hw;
+	bool rx_strip_en;
+	int ret;
+
+	rx_strip_en = on ? true : false;
+	rx_vlan_cfg.strip_tag1_en = rx_strip_en;
+	rx_vlan_cfg.strip_tag2_en = rx_strip_en;
+	rx_vlan_cfg.vlan1_vlan_prionly = false;
+	rx_vlan_cfg.vlan2_vlan_prionly = false;
+	rx_vlan_cfg.rx_vlan_offload_en = rx_strip_en;
+
+	ret = hns3_set_vlan_rx_offload_cfg(hns, &rx_vlan_cfg);
+	if (ret) {
+		hns3_err(hw, "enable strip rx failed, ret =%d", ret);
+		return ret;
+	}
+
+	hns3_update_rx_offload_cfg(hns, &rx_vlan_cfg);
+	return ret;
+}
+
+static int
+hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid, int on)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t port_base_vlan_state;
+	uint16_t old_pvid;
+	int ret;
+
+	if (on == 0 && pvid != pf->port_base_vlan_cfg.pvid) {
+		if (pf->port_base_vlan_cfg.pvid != HNS3_INVLID_PVID)
+			hns3_warn(hw, "Invalid operation! As current pvid set "
+				  "is %u, disable pvid %u is invalid",
+				  pf->port_base_vlan_cfg.pvid, pvid);
+		return 0;
+	}
+
+	port_base_vlan_state = on ? HNS3_PORT_BASE_VLAN_ENABLE :
+				    HNS3_PORT_BASE_VLAN_DISABLE;
+	ret = hns3_vlan_txvlan_cfg(hns, port_base_vlan_state, pvid);
+	if (ret) {
+		hns3_err(hw, "Failed to config tx vlan, ret =%d", ret);
+		return ret;
+	}
+
+	ret = hns3_en_rx_strip_all(hns, on);
+	if (ret) {
+		hns3_err(hw, "Failed to config rx vlan strip, ret =%d", ret);
+		return ret;
+	}
+
+	if (pvid == HNS3_INVLID_PVID)
+		goto out;
+	old_pvid = pf->port_base_vlan_cfg.pvid;
+	ret = hns3_update_vlan_filter_entries(hns, port_base_vlan_state, pvid,
+					      old_pvid);
+	if (ret) {
+		hns3_err(hw, "Failed to update vlan filter entries, ret =%d",
+			 ret);
+		return ret;
+	}
+
+out:
+	hns3_store_port_base_vlan_info(hns, pvid, on);
+	return ret;
+}
+
+static int
+hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_vlan_pvid_configure(hns, pvid, on);
+	rte_spinlock_unlock(&hw->lock);
+	return ret;
+}
+
+static void
+init_port_base_vlan_info(struct hns3_hw *hw)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_pf *pf = &hns->pf;
+
+	pf->port_base_vlan_cfg.state = HNS3_PORT_BASE_VLAN_DISABLE;
+	pf->port_base_vlan_cfg.pvid = HNS3_INVLID_PVID;
+}
+
+static int
+hns3_default_vlan_config(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_set_port_vlan_filter(hns, 0, 1);
+	if (ret)
+		hns3_err(hw, "default vlan 0 config failed, ret =%d", ret);
+	return ret;
+}
+
+static int
+hns3_init_vlan_config(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	init_port_base_vlan_info(hw);
+
+	ret = hns3_enable_vlan_filter(hns, true);
+	if (ret) {
+		hns3_err(hw, "vlan init fail in pf, ret =%d", ret);
+		return ret;
+	}
+
+	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+				       RTE_ETHER_TYPE_VLAN);
+	if (ret) {
+		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
+		return ret;
+	}
+
+	ret = hns3_vlan_pvid_configure(hns, HNS3_INVLID_PVID, 0);
+	if (ret) {
+		hns3_err(hw, "pvid set fail in pf, ret =%d", ret);
+		return ret;
+	}
+
+	ret = hns3_en_hw_strip_rxvtag(hns, false);
+	if (ret) {
+		hns3_err(hw, "rx strip configure fail in pf, ret =%d",
+			 ret);
+		return ret;
+	}
+
+	return hns3_default_vlan_config(hns);
+}
+
 
 static int
 hns3_config_tso(struct hns3_hw *hw, unsigned int tso_mss_min,
@@ -2617,6 +3277,12 @@ hns3_init_hardware(struct hns3_adapter *hns)
 		goto err_mac_init;
 	}
 
+	ret = hns3_init_vlan_config(hns);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init vlan: %d", ret);
+		goto err_mac_init;
+	}
+
 	ret = hns3_dcb_init(hw);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to init dcb: %d", ret);
@@ -2949,6 +3615,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.reta_update            = hns3_dev_rss_reta_update,
 	.reta_query             = hns3_dev_rss_reta_query,
 	.filter_ctrl            = hns3_dev_filter_ctrl,
+	.vlan_filter_set        = hns3_vlan_filter_set,
+	.vlan_tpid_set          = hns3_vlan_tpid_set,
+	.vlan_offload_set       = hns3_vlan_offload_set,
+	.vlan_pvid_set          = hns3_vlan_pvid_set,
 	.get_dcb_info           = hns3_get_dcb_info,
 };
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 13/22] net/hns3: add support for mailbox of hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (11 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 12/22] net/hns3: add support for VLAN " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:08   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF " Wei Hu (Xavier)
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds support for the mailbox of the hns3 PMD driver. The
mailbox is used for communication between the PF and VF drivers.
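
As an illustration of the API added here, a VF-side caller can issue a
synchronous request and wait for the PF's reply roughly as follows (a
hypothetical sketch, not part of the patch):

	static int
	example_vf_get_link_status(struct hns3_hw *hw)
	{
		uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE];

		/*
		 * Sends HNS3_MBX_GET_LINK_STATUS with no payload and blocks
		 * in hns3_get_mbx_resp() until an HNS3_MBX_PF_VF_RESP
		 * message arrives or the 500ms timeout expires.
		 */
		return hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0,
					 NULL, 0, true, resp_data,
					 sizeof(resp_data));
	}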

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c    |   1 +
 drivers/net/hns3/hns3_dcb.c    |   1 +
 drivers/net/hns3/hns3_ethdev.c |   1 +
 drivers/net/hns3/hns3_ethdev.h |   2 +
 drivers/net/hns3/hns3_fdir.c   |   1 +
 drivers/net/hns3/hns3_flow.c   |   1 +
 drivers/net/hns3/hns3_mbx.c    | 337 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_mbx.h    | 136 +++++++++++++++++
 drivers/net/hns3/hns3_rss.c    |   1 +
 9 files changed, 481 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_mbx.c
 create mode 100644 drivers/net/hns3/hns3_mbx.h

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 07fe92e..2a58f95 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -27,6 +27,7 @@
 #include <rte_io.h>
 
 #include "hns3_cmd.h"
+#include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 0644299..6fb97de 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -15,6 +15,7 @@
 
 #include "hns3_logs.h"
 #include "hns3_cmd.h"
+#include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_regs.h"
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index cdade9d..df4749e 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -30,6 +30,7 @@
 #include <rte_pci.h>
 
 #include "hns3_cmd.h"
+#include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 31301af..413db04 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -327,6 +327,8 @@ struct hns3_hw {
 	struct rte_eth_dev_data *data;
 	void *io_base;
 	struct hns3_cmq cmq;
+	struct hns3_mbx_resp_status mbx_resp; /* mailbox response */
+	struct hns3_mbx_arq_ring arq;         /* mailbox async rx queue */
 	struct hns3_mac mac;
 	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
 	uint32_t fw_version;
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 77c93ba..544ac7e 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -11,6 +11,7 @@
 #include <rte_malloc.h>
 
 #include "hns3_cmd.h"
+#include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 9717fbf..47c9b3a 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -9,6 +9,7 @@
 #include <rte_malloc.h>
 
 #include "hns3_cmd.h"
+#include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
new file mode 100644
index 0000000..485d810
--- /dev/null
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -0,0 +1,337 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <errno.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_io.h>
+#include <rte_spinlock.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_regs.h"
+#include "hns3_logs.h"
+
+#define HNS3_REG_MSG_DATA_OFFSET	4
+#define HNS3_CMD_CODE_OFFSET		2
+
+static const struct errno_respcode_map err_code_map[] = {
+	{0, 0},
+	{1, -EPERM},
+	{2, -ENOENT},
+	{5, -EIO},
+	{11, -EAGAIN},
+	{12, -ENOMEM},
+	{16, -EBUSY},
+	{22, -EINVAL},
+	{28, -ENOSPC},
+	{95, -EOPNOTSUPP},
+};
+
+static int
+hns3_resp_to_errno(uint16_t resp_code)
+{
+	uint32_t i, num;
+
+	num = sizeof(err_code_map) / sizeof(struct errno_respcode_map);
+	for (i = 0; i < num; i++) {
+		if (err_code_map[i].resp_code == resp_code)
+			return err_code_map[i].err_no;
+	}
+
+	return -EIO;
+}
+
+static void
+hns3_poll_all_sync_msg(void)
+{
+	struct rte_eth_dev *eth_dev;
+	struct hns3_adapter *adapter;
+	const char *name;
+	uint16_t port_id;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		eth_dev = &rte_eth_devices[port_id];
+		name = eth_dev->device->driver->name;
+		if (strcmp(name, "net_hns3") && strcmp(name, "net_hns3_vf"))
+			continue;
+		adapter = eth_dev->data->dev_private;
+		if (!adapter || adapter->hw.adapter_state == HNS3_NIC_CLOSED)
+			continue;
+		/* Synchronous msg, the mbx_resp.req_msg_data is non-zero */
+		if (adapter->hw.mbx_resp.req_msg_data)
+			hns3_dev_handle_mbx_msg(&adapter->hw);
+	}
+}
+
+static int
+hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code0, uint16_t code1,
+		  uint8_t *resp_data, uint16_t resp_len)
+{
+#define HNS3_MAX_RETRY_MS	500
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	struct hns3_mbx_resp_status *mbx_resp;
+	bool in_irq = false;
+	uint64_t now;
+	uint64_t end;
+
+	if (resp_len > HNS3_MBX_MAX_RESP_DATA_SIZE) {
+		hns3_err(hw, "VF mbx response len(=%d) exceeds maximum(=%d)",
+			 resp_len, HNS3_MBX_MAX_RESP_DATA_SIZE);
+		return -EINVAL;
+	}
+
+	now = get_timeofday_ms();
+	end = now + HNS3_MAX_RETRY_MS;
+	while ((hw->mbx_resp.head != hw->mbx_resp.tail + hw->mbx_resp.lost) &&
+	       (now < end)) {
+		rte_delay_ms(HNS3_POLL_RESPONE_MS);
+		now = get_timeofday_ms();
+	}
+	hw->mbx_resp.req_msg_data = 0;
+	if (now >= end) {
+		hw->mbx_resp.lost++;
+		hns3_err(hw,
+			 "VF could not get mbx(%d,%d) head(%d) tail(%d) lost(%d) from PF in_irq:%d",
+			 code0, code1, hw->mbx_resp.head, hw->mbx_resp.tail,
+			 hw->mbx_resp.lost, in_irq);
+		return -ETIME;
+	}
+	rte_io_rmb();
+	mbx_resp = &hw->mbx_resp;
+
+	if (mbx_resp->resp_status)
+		return mbx_resp->resp_status;
+
+	if (resp_data)
+		memcpy(resp_data, &mbx_resp->additional_info[0], resp_len);
+
+	return 0;
+}
+
+int
+hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
+		  const uint8_t *msg_data, uint8_t msg_len, bool need_resp,
+		  uint8_t *resp_data, uint16_t resp_len)
+{
+	struct hns3_mbx_vf_to_pf_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
+
+	/* first two bytes are reserved for code & subcode */
+	if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) {
+		hns3_err(hw,
+			 "VF send mbx msg fail, msg len %d exceeds max payload len %d",
+			 msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET);
+		return -EINVAL;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
+	req->msg[0] = code;
+	req->msg[1] = subcode;
+	if (msg_data)
+		memcpy(&req->msg[HNS3_CMD_CODE_OFFSET], msg_data, msg_len);
+
+	/* synchronous send */
+	if (need_resp) {
+		req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT;
+		rte_spinlock_lock(&hw->mbx_resp.lock);
+		hw->mbx_resp.req_msg_data = (uint32_t)code << 16 | subcode;
+		hw->mbx_resp.head++;
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			rte_spinlock_unlock(&hw->mbx_resp.lock);
+			hns3_err(hw, "VF failed(=%d) to send mbx message to PF",
+				 ret);
+			return ret;
+		}
+
+		ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len);
+		rte_spinlock_unlock(&hw->mbx_resp.lock);
+	} else {
+		/* asynchronous send */
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			hns3_err(hw, "VF failed(=%d) to send mbx message to PF",
+				 ret);
+			return ret;
+		}
+	}
+
+	return ret;
+}
+
+static bool
+hns3_cmd_crq_empty(struct hns3_hw *hw)
+{
+	uint32_t tail = hns3_read_dev(hw, HNS3_CMDQ_RX_TAIL_REG);
+
+	return tail == hw->cmq.crq.next_to_use;
+}
+
+static void
+hns3_mbx_handler(struct hns3_hw *hw)
+{
+	struct hns3_mac *mac = &hw->mac;
+	enum hns3_reset_level reset_level;
+	uint16_t *msg_q;
+	uint32_t tail;
+
+	tail = hw->arq.tail;
+
+	/* process all the async queue messages */
+	while (tail != hw->arq.head) {
+		msg_q = hw->arq.msg_q[hw->arq.head];
+
+		switch (msg_q[0]) {
+		case HNS3_MBX_LINK_STAT_CHANGE:
+			memcpy(&mac->link_speed, &msg_q[2],
+				   sizeof(mac->link_speed));
+			mac->link_status = rte_le_to_cpu_16(msg_q[1]);
+			mac->link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]);
+			break;
+		case HNS3_MBX_ASSERTING_RESET:
+			/* The PF has asserted a reset, so the VF should enter
+			 * the pending state and poll the hardware reset status
+			 * until the reset completes. After that, the stack
+			 * should eventually be re-initialized.
+			 */
+			reset_level = rte_le_to_cpu_16(msg_q[1]);
+			hns3_atomic_set_bit(reset_level, &hw->reset.pending);
+
+			hns3_warn(hw, "PF inform reset level %d", reset_level);
+			hw->reset.stats.request_cnt++;
+			break;
+		default:
+			hns3_err(hw, "Fetched unsupported(%d) message from arq",
+				 msg_q[0]);
+			break;
+		}
+
+		hns3_mbx_head_ptr_move_arq(hw->arq);
+		msg_q = hw->arq.msg_q[hw->arq.head];
+	}
+}
+
+/*
+ * Case 1: a response arrives after the timeout; req_msg_data is 0 and does
+ *         not equal resp_msg, so decrement 'lost'.
+ * Case 2: the previous response arrives while a new send_mbx_msg is in
+ *         progress; req_msg_data differs from resp_msg, so decrement 'lost'
+ *         and continue waiting for the response.
+ */
+static void
+hns3_update_resp_position(struct hns3_hw *hw, uint32_t resp_msg)
+{
+	struct hns3_mbx_resp_status *resp = &hw->mbx_resp;
+	uint32_t tail = resp->tail + 1;
+
+	if (tail > resp->head)
+		tail = resp->head;
+	if (resp->req_msg_data != resp_msg) {
+		if (resp->lost)
+			resp->lost--;
+		hns3_warn(hw, "Received a mismatched response req_msg(%x) "
+			  "resp_msg(%x) head(%d) tail(%d) lost(%d)",
+			  resp->req_msg_data, resp_msg, resp->head, tail,
+			  resp->lost);
+	} else if (tail + resp->lost > resp->head) {
+		resp->lost--;
+		hns3_warn(hw, "Received a new response again resp_msg(%x) "
+			  "head(%d) tail(%d) lost(%d)", resp_msg,
+			  resp->head, tail, resp->lost);
+	}
+	rte_io_wmb();
+	resp->tail = tail;
+}
+
+void
+hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
+{
+	struct hns3_mbx_resp_status *resp = &hw->mbx_resp;
+	struct hns3_cmq_ring *crq = &hw->cmq.crq;
+	struct hns3_mbx_pf_to_vf_cmd *req;
+	struct hns3_cmd_desc *desc;
+	uint32_t msg_data;
+	uint16_t *msg_q;
+	uint16_t flag;
+	uint8_t *temp;
+	int i;
+
+	while (!hns3_cmd_crq_empty(hw)) {
+		if (rte_atomic16_read(&hw->reset.disable_cmd))
+			return;
+
+		desc = &crq->desc[crq->next_to_use];
+		req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data;
+
+		flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag);
+		if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) {
+			hns3_warn(hw,
+				  "dropped invalid mailbox message, code = %d",
+				  req->msg[0]);
+
+			/* dropping/not processing this invalid message */
+			crq->desc[crq->next_to_use].flag = 0;
+			hns3_mbx_ring_ptr_move_crq(crq);
+			continue;
+		}
+
+		switch (req->msg[0]) {
+		case HNS3_MBX_PF_VF_RESP:
+			resp->resp_status = hns3_resp_to_errno(req->msg[3]);
+
+			temp = (uint8_t *)&req->msg[4];
+			for (i = 0; i < HNS3_MBX_MAX_RESP_DATA_SIZE &&
+			     i < HNS3_REG_MSG_DATA_OFFSET; i++) {
+				resp->additional_info[i] = *temp;
+				temp++;
+			}
+			msg_data = (uint32_t)req->msg[1] << 16 | req->msg[2];
+			hns3_update_resp_position(hw, msg_data);
+			break;
+		case HNS3_MBX_LINK_STAT_CHANGE:
+		case HNS3_MBX_ASSERTING_RESET:
+			msg_q = hw->arq.msg_q[hw->arq.tail];
+			memcpy(&msg_q[0], req->msg,
+			       HNS3_MBX_MAX_ARQ_MSG_SIZE * sizeof(uint16_t));
+			hns3_mbx_tail_ptr_move_arq(hw->arq);
+
+			hns3_mbx_handler(hw);
+			break;
+		default:
+			hns3_err(hw,
+				 "VF received unsupported(%d) mbx msg from PF",
+				 req->msg[0]);
+			break;
+		}
+
+		crq->desc[crq->next_to_use].flag = 0;
+		hns3_mbx_ring_ptr_move_crq(crq);
+	}
+
+	/* Write back the CMDQ_RQ header pointer; IMP needs this pointer */
+	hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use);
+}
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
new file mode 100644
index 0000000..ee6e823
--- /dev/null
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_MBX_H_
+#define _HNS3_MBX_H_
+
+#define HNS3_MBX_VF_MSG_DATA_NUM	16
+
+enum HNS3_MBX_OPCODE {
+	HNS3_MBX_RESET = 0x01,          /* (VF -> PF) assert reset */
+	HNS3_MBX_ASSERTING_RESET,       /* (PF -> VF) PF is asserting reset */
+	HNS3_MBX_SET_UNICAST,           /* (VF -> PF) set UC addr */
+	HNS3_MBX_SET_MULTICAST,         /* (VF -> PF) set MC addr */
+	HNS3_MBX_SET_VLAN,              /* (VF -> PF) set VLAN */
+	HNS3_MBX_MAP_RING_TO_VECTOR,    /* (VF -> PF) map ring-to-vector */
+	HNS3_MBX_UNMAP_RING_TO_VECTOR,  /* (VF -> PF) unmap ring-to-vector */
+	HNS3_MBX_SET_PROMISC_MODE,      /* (VF -> PF) set promiscuous mode */
+	HNS3_MBX_SET_MACVLAN,           /* (VF -> PF) set unicast filter */
+	HNS3_MBX_API_NEGOTIATE,         /* (VF -> PF) negotiate API version */
+	HNS3_MBX_GET_QINFO,             /* (VF -> PF) get queue config */
+	HNS3_MBX_GET_QDEPTH,            /* (VF -> PF) get queue depth */
+	HNS3_MBX_GET_TCINFO,            /* (VF -> PF) get TC config */
+	HNS3_MBX_GET_RETA,              /* (VF -> PF) get RETA */
+	HNS3_MBX_GET_RSS_KEY,           /* (VF -> PF) get RSS key */
+	HNS3_MBX_GET_MAC_ADDR,          /* (VF -> PF) get MAC addr */
+	HNS3_MBX_PF_VF_RESP,            /* (PF -> VF) generate response to VF */
+	HNS3_MBX_GET_BDNUM,             /* (VF -> PF) get BD num */
+	HNS3_MBX_GET_BUFSIZE,           /* (VF -> PF) get buffer size */
+	HNS3_MBX_GET_STREAMID,          /* (VF -> PF) get stream id */
+	HNS3_MBX_SET_AESTART,           /* (VF -> PF) start ae */
+	HNS3_MBX_SET_TSOSTATS,          /* (VF -> PF) get tso stats */
+	HNS3_MBX_LINK_STAT_CHANGE,      /* (PF -> VF) link status has changed */
+	HNS3_MBX_GET_BASE_CONFIG,       /* (VF -> PF) get config */
+	HNS3_MBX_BIND_FUNC_QUEUE,       /* (VF -> PF) bind function and queue */
+	HNS3_MBX_GET_LINK_STATUS,       /* (VF -> PF) get link status */
+	HNS3_MBX_QUEUE_RESET,           /* (VF -> PF) reset queue */
+	HNS3_MBX_KEEP_ALIVE,            /* (VF -> PF) send keep alive cmd */
+	HNS3_MBX_SET_ALIVE,             /* (VF -> PF) set alive state */
+	HNS3_MBX_SET_MTU,               /* (VF -> PF) set mtu */
+	HNS3_MBX_GET_QID_IN_PF,         /* (VF -> PF) get queue id in pf */
+};
+
+/* below are per-VF mac-vlan subcodes */
+enum hns3_mbx_mac_vlan_subcode {
+	HNS3_MBX_MAC_VLAN_UC_MODIFY = 0,        /* modify UC mac addr */
+	HNS3_MBX_MAC_VLAN_UC_ADD,               /* add a new UC mac addr */
+	HNS3_MBX_MAC_VLAN_UC_REMOVE,            /* remove a new UC mac addr */
+	HNS3_MBX_MAC_VLAN_MC_MODIFY,            /* modify MC mac addr */
+	HNS3_MBX_MAC_VLAN_MC_ADD,               /* add new MC mac addr */
+	HNS3_MBX_MAC_VLAN_MC_REMOVE,            /* remove MC mac addr */
+};
+
+/* below are per-VF vlan cfg subcodes */
+enum hns3_mbx_vlan_cfg_subcode {
+	HNS3_MBX_VLAN_FILTER = 0,               /* set vlan filter */
+	HNS3_MBX_VLAN_TX_OFF_CFG,               /* set tx side vlan offload */
+	HNS3_MBX_VLAN_RX_OFF_CFG,               /* set rx side vlan offload */
+};
+
+#define HNS3_MBX_MAX_MSG_SIZE	16
+#define HNS3_MBX_MAX_RESP_DATA_SIZE	8
+#define HNS3_MBX_RING_MAP_BASIC_MSG_NUM	3
+#define HNS3_MBX_RING_NODE_VARIABLE_NUM	3
+
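+/*
+ * Bookkeeping for synchronous requests: 'head' counts requests sent that
+ * expect a response, 'tail' counts responses consumed, and 'lost' tracks
+ * requests that timed out so that a late response can be reconciled (see
+ * hns3_update_resp_position() in hns3_mbx.c).
+ */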
+struct hns3_mbx_resp_status {
+	rte_spinlock_t lock; /* protects against contending sync cmd resp */
+	uint32_t req_msg_data;
+	uint32_t head;
+	uint32_t tail;
+	uint32_t lost;
+	int resp_status;
+	uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE];
+};
+
+struct errno_respcode_map {
+	uint16_t resp_code;
+	int err_no;
+};
+
+#define HNS3_MBX_NEED_RESP_BIT                BIT(0)
+
+struct hns3_mbx_vf_to_pf_cmd {
+	uint8_t rsv;
+	uint8_t mbx_src_vfid;                   /* Auto filled by IMP */
+	uint8_t mbx_need_resp;
+	uint8_t rsv1;
+	uint8_t msg_len;
+	uint8_t rsv2[3];
+	uint8_t msg[HNS3_MBX_MAX_MSG_SIZE];
+};
+
+struct hns3_mbx_pf_to_vf_cmd {
+	uint8_t dest_vfid;
+	uint8_t rsv[3];
+	uint8_t msg_len;
+	uint8_t rsv1[3];
+	uint16_t msg[8];
+};
+
+struct hns3_vf_rst_cmd {
+	uint8_t dest_vfid;
+	uint8_t vf_rst;
+	uint8_t rsv[22];
+};
+
+struct hns3_pf_rst_done_cmd {
+	uint8_t pf_rst_done;
+	uint8_t rsv[23];
+};
+
+#define HNS3_PF_RESET_DONE_BIT		BIT(0)
+
+/* used by VF to store the received Async responses from PF */
+struct hns3_mbx_arq_ring {
+#define HNS3_MBX_MAX_ARQ_MSG_SIZE	8
+#define HNS3_MBX_MAX_ARQ_MSG_NUM	1024
+	uint32_t head;
+	uint32_t tail;
+	uint32_t count;
+	uint16_t msg_q[HNS3_MBX_MAX_ARQ_MSG_NUM][HNS3_MBX_MAX_ARQ_MSG_SIZE];
+};
+
+#define hns3_mbx_ring_ptr_move_crq(crq) \
+	((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num)
+#define hns3_mbx_tail_ptr_move_arq(arq) \
+	((arq).tail = ((arq).tail + 1) % HNS3_MBX_MAX_ARQ_MSG_SIZE)
+#define hns3_mbx_head_ptr_move_arq(arq) \
+		((arq).head = ((arq).head + 1) % HNS3_MBX_MAX_ARQ_MSG_SIZE)
+
+struct hns3_hw;
+void hns3_dev_handle_mbx_msg(struct hns3_hw *hw);
+int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
+		      const uint8_t *msg_data, uint8_t msg_len, bool need_resp,
+		      uint8_t *resp_data, uint16_t resp_len);
+#endif /* _HNS3_MBX_H_ */
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index b0cd161..c4ef11b 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -10,6 +10,7 @@
 #include <rte_spinlock.h>
 
 #include "hns3_cmd.h"
+#include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (12 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 13/22] net/hns3: add support for mailbox " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:11   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation Wei Hu (Xavier)
                   ` (8 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds support for the hns3 VF PMD driver.

In the current version, the VF device is only supported when it is bound
to vfio-pci or igb_uio and taken over by DPDK while the PF device is
taken over by the kernel mode hns3 ethdev driver; the VF is not
supported when the PF device is taken over by DPDK.
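
Every VF-side configuration path in this patch is carried to the PF over
the mailbox channel declared in hns3_mbx.h. As a rough usage sketch (not
part of the patch; locking and logging trimmed), a synchronous request
through hns3_send_mbx_msg() looks like this simplified form of
hns3vf_config_mtu():

    /* Sketch: ask the PF to program a new MTU and wait for its reply.
     * Passing need_resp = true makes the mailbox layer block until the
     * PF answers or the request times out.
     */
    static int sketch_vf_set_mtu(struct hns3_hw *hw, uint16_t mtu)
    {
            return hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0,
                                     (const uint8_t *)&mtu, sizeof(mtu),
                                     true, NULL, 0);
    }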

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev_vf.c | 1287 +++++++++++++++++++++++++++++++++++++
 1 file changed, 1287 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_ethdev_vf.c

diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
new file mode 100644
index 0000000..43b27ed
--- /dev/null
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -0,0 +1,1287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <arpa/inet.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <sys/time.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <arpa/inet.h>
+#include <rte_alarm.h>
+#include <rte_atomic.h>
+#include <rte_bus_pci.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+#include <rte_interrupts.h>
+#include <rte_io.h>
+#include <rte_log.h>
+#include <rte_pci.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+#include "hns3_regs.h"
+#include "hns3_dcb.h"
+
+#define HNS3VF_KEEP_ALIVE_INTERVAL	2000000 /* us */
+#define HNS3VF_SERVICE_INTERVAL		1000000 /* us */
+
+#define HNS3VF_RESET_WAIT_MS	20
+#define HNS3VF_RESET_WAIT_CNT	2000
+
+static int hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int hns3vf_dev_configure_vlan(struct rte_eth_dev *dev);
+
+static int
+hns3vf_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
+		    __rte_unused uint32_t idx,
+		    __rte_unused uint32_t pool)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
+				HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes,
+				RTE_ETHER_ADDR_LEN, false, NULL, 0);
+	rte_spinlock_unlock(&hw->lock);
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to add mac addr(%s) for vf: %d", mac_str,
+			 ret);
+	}
+
+	return ret;
+}
+
+static void
+hns3vf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t idx)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	/* index will be checked by upper level rte interface */
+	struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[idx];
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
+				HNS3_MBX_MAC_VLAN_UC_REMOVE,
+				mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false,
+				NULL, 0);
+	rte_spinlock_unlock(&hw->lock);
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to remove mac addr(%s) for vf: %d",
+			 mac_str, ret);
+	}
+}
+
+static int
+hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
+			    struct rte_ether_addr *mac_addr)
+{
+#define HNS3_TWO_ETHER_ADDR_LEN (RTE_ETHER_ADDR_LEN * 2)
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_ether_addr *old_addr;
+	uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret;
+
+	if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to set mac addr, addr(%s) invalid.",
+			 mac_str);
+		return -EINVAL;
+	}
+
+	old_addr = (struct rte_ether_addr *)hw->mac.mac_addr;
+	rte_spinlock_lock(&hw->lock);
+	memcpy(addr_bytes, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes,
+	       RTE_ETHER_ADDR_LEN);
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
+				HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes,
+				HNS3_TWO_ETHER_ADDR_LEN, false, NULL, 0);
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to set mac addr(%s) for vf: %d", mac_str,
+			 ret);
+	}
+
+	rte_ether_addr_copy(mac_addr,
+			    (struct rte_ether_addr *)hw->mac.mac_addr);
+	rte_spinlock_unlock(&hw->lock);
+
+	return ret;
+}
+
+static int
+hns3vf_configure_mac_addr(struct hns3_adapter *hns, bool del)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_ether_addr *addr;
+	enum hns3_mbx_mac_vlan_subcode opcode;
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int ret = 0;
+	int i;
+
+	if (del)
+		opcode = HNS3_MBX_MAC_VLAN_UC_REMOVE;
+	else
+		opcode = HNS3_MBX_MAC_VLAN_UC_ADD;
+	for (i = 0; i < HNS3_UC_MACADDR_NUM; i++) {
+		addr = &hw->data->mac_addrs[i];
+		if (!rte_is_valid_assigned_ether_addr(addr))
+			continue;
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		hns3_dbg(hw, "%s mac addr: %s", del ? "rm" : "add", mac_str);
+		ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, opcode,
+					addr->addr_bytes, RTE_ETHER_ADDR_LEN,
+					false, NULL, 0);
+		if (ret) {
+			hns3_err(hw, "Failed to %s mac addr for vf: %d",
+				 del ? "remove" : "add", ret);
+			break;
+		}
+	}
+	return ret;
+}
+
+static int
+hns3vf_add_mc_mac_addr(struct hns3_adapter *hns,
+		       struct rte_ether_addr *mac_addr)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST,
+				HNS3_MBX_MAC_VLAN_MC_ADD,
+				mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false,
+				NULL, 0);
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to add mc mac addr(%s) for vf: %d",
+			 mac_str, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3vf_remove_mc_mac_addr(struct hns3_adapter *hns,
+			  struct rte_ether_addr *mac_addr)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST,
+				HNS3_MBX_MAC_VLAN_MC_REMOVE,
+				mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false,
+				NULL, 0);
+	if (ret) {
+		rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+				      mac_addr);
+		hns3_err(hw, "Failed to remove mc mac addr(%s) for vf: %d",
+			 mac_str, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3vf_set_mc_mac_addr_list(struct rte_eth_dev *dev,
+			    struct rte_ether_addr *mc_addr_set,
+			    uint32_t nb_mc_addr)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_ether_addr *addr;
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	int cur_addr_num;
+	int set_addr_num;
+	int num;
+	int ret;
+	int i;
+
+	if (nb_mc_addr > HNS3_MC_MACADDR_NUM) {
+		hns3_err(hw, "Failed to set mc mac addr, nb_mc_addr(%u) "
+			 "invalid. valid range: 0~%d",
+			 nb_mc_addr, HNS3_MC_MACADDR_NUM);
+		return -EINVAL;
+	}
+
+	set_addr_num = (int)nb_mc_addr;
+	for (i = 0; i < set_addr_num; i++) {
+		addr = &mc_addr_set[i];
+		if (!rte_is_multicast_ether_addr(addr)) {
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      addr);
+			hns3_err(hw,
+				 "Failed to set mc mac addr, addr(%s) invalid.",
+				 mac_str);
+			return -EINVAL;
+		}
+	}
+	rte_spinlock_lock(&hw->lock);
+	cur_addr_num = hw->mc_addrs_num;
+	for (i = 0; i < cur_addr_num; i++) {
+		num = cur_addr_num - i - 1;
+		addr = &hw->mc_addrs[num];
+		ret = hns3vf_remove_mc_mac_addr(hns, addr);
+		if (ret) {
+			rte_spinlock_unlock(&hw->lock);
+			return ret;
+		}
+
+		hw->mc_addrs_num--;
+	}
+
+	for (i = 0; i < set_addr_num; i++) {
+		addr = &mc_addr_set[i];
+		ret = hns3vf_add_mc_mac_addr(hns, addr);
+		if (ret) {
+			rte_spinlock_unlock(&hw->lock);
+			return ret;
+		}
+
+		rte_ether_addr_copy(addr, &hw->mc_addrs[hw->mc_addrs_num]);
+		hw->mc_addrs_num++;
+	}
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static int
+hns3vf_configure_all_mc_mac_addr(struct hns3_adapter *hns, bool del)
+{
+	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_ether_addr *addr;
+	int err = 0;
+	int ret;
+	int i;
+
+	for (i = 0; i < hw->mc_addrs_num; i++) {
+		addr = &hw->mc_addrs[i];
+		if (!rte_is_multicast_ether_addr(addr))
+			continue;
+		if (del)
+			ret = hns3vf_remove_mc_mac_addr(hns, addr);
+		else
+			ret = hns3vf_add_mc_mac_addr(hns, addr);
+		if (ret) {
+			err = ret;
+			rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+					      addr);
+			hns3_err(hw, "Failed to %s mc mac addr: %s for vf: %d",
+				 del ? "Remove" : "Restore", mac_str, ret);
+		}
+	}
+	return err;
+}
+
+static int
+hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc)
+{
+	struct hns3_mbx_vf_to_pf_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
+	req->msg[0] = HNS3_MBX_SET_PROMISC_MODE;
+	req->msg[1] = en_bc_pmc ? 1 : 0;
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Set promisc mode fail, status is %d", ret);
+
+	return ret;
+}
+
+static int
+hns3vf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	enum rte_eth_rx_mq_mode mq_mode = conf->rxmode.mq_mode;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+	struct rte_eth_rss_conf rss_conf;
+	uint16_t mtu;
+	int ret;
+
+	/*
+	 * hip08 hardware does not support configurations in which the
+	 * number of rx queues differs from the number of tx queues.
+	 */
+	if (nb_rx_q != nb_tx_q) {
+		hns3_err(hw,
+			 "nb_rx_queues(%u) not equal with nb_tx_queues(%u)! "
+			 "Hardware does not support this configuration!",
+			 nb_rx_q, nb_tx_q);
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		hns3_err(hw, "setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	hw->adapter_state = HNS3_NIC_CONFIGURING;
+
+	/* When RSS is not configured, packets are directed to queue 0 */
+	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
+		rss_conf = conf->rx_adv_conf.rss_conf;
+		if (rss_conf.rss_key == NULL) {
+			rss_conf.rss_key = rss_cfg->key;
+			rss_conf.rss_key_len = HNS3_RSS_KEY_SIZE;
+		}
+
+		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
+		if (ret)
+			goto cfg_err;
+	}
+
+	/*
+	 * If jumbo frames are enabled, MTU needs to be refreshed
+	 * according to the maximum RX packet length.
+	 */
+	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		/*
+		 * The validity of max_rx_pkt_len is guaranteed by the DPDK
+		 * framework. Its maximum value is HNS3_MAX_FRAME_LEN, so it
+		 * can safely be assigned to a "uint16_t" variable.
+		 */
+		mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(conf->rxmode.max_rx_pkt_len);
+		ret = hns3vf_dev_mtu_set(dev, mtu);
+		if (ret)
+			goto cfg_err;
+		dev->data->mtu = mtu;
+	}
+
+	ret = hns3vf_dev_configure_vlan(dev);
+	if (ret)
+		goto cfg_err;
+
+	hw->adapter_state = HNS3_NIC_CONFIGURED;
+	return 0;
+
+cfg_err:
+	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	return ret;
+}
+
+static int
+hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu)
+{
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu,
+				sizeof(mtu), true, NULL, 0);
+	if (ret)
+		hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret);
+
+	return ret;
+}
+
+static int
+hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
+	int ret;
+
+	if (mtu < RTE_ETHER_MIN_MTU || frame_size > HNS3_MAX_FRAME_LEN) {
+		hns3_err(hw, "Failed to set mtu, mtu(%u) invalid. valid "
+			 "range: %d~%d", mtu, RTE_ETHER_MIN_MTU, HNS3_MAX_MTU);
+		return -EINVAL;
+	}
+
+	if (dev->data->dev_started) {
+		hns3_err(hw, "Failed to set mtu, port %u must be stopped "
+			 "before configuration", dev->data->port_id);
+		return -EBUSY;
+	}
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3vf_config_mtu(hw, mtu);
+	if (ret) {
+		rte_spinlock_unlock(&hw->lock);
+		return ret;
+	}
+	if (frame_size > RTE_ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+						DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static void
+hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	info->max_rx_queues = hw->tqps_num;
+	info->max_tx_queues = hw->tqps_num;
+	info->max_rx_pktlen = HNS3_MAX_FRAME_LEN; /* CRC included */
+	info->min_rx_bufsize = hw->rx_buf_len;
+	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
+	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
+	info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
+				 DEV_RX_OFFLOAD_UDP_CKSUM |
+				 DEV_RX_OFFLOAD_TCP_CKSUM |
+				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 DEV_RX_OFFLOAD_KEEP_CRC |
+				 DEV_RX_OFFLOAD_SCATTER |
+				 DEV_RX_OFFLOAD_VLAN_STRIP |
+				 DEV_RX_OFFLOAD_QINQ_STRIP |
+				 DEV_RX_OFFLOAD_VLAN_FILTER |
+				 DEV_RX_OFFLOAD_JUMBO_FRAME);
+	info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	info->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM |
+				 DEV_TX_OFFLOAD_UDP_CKSUM |
+				 DEV_TX_OFFLOAD_TCP_CKSUM |
+				 DEV_TX_OFFLOAD_VLAN_INSERT |
+				 DEV_TX_OFFLOAD_QINQ_INSERT |
+				 DEV_TX_OFFLOAD_MULTI_SEGS |
+				 info->tx_queue_offload_capa);
+
+	info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = HNS3_MAX_RING_DESC,
+		.nb_min = HNS3_MIN_RING_DESC,
+		.nb_align = HNS3_ALIGN_RING_DESC,
+	};
+
+	info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = HNS3_MAX_RING_DESC,
+		.nb_min = HNS3_MIN_RING_DESC,
+		.nb_align = HNS3_ALIGN_RING_DESC,
+	};
+
+	info->vmdq_queue_num = 0;
+
+	info->reta_size = HNS3_RSS_IND_TBL_SIZE;
+	info->hash_key_size = HNS3_RSS_KEY_SIZE;
+	info->flow_type_rss_offloads = HNS3_ETH_RSS_SUPPORT;
+	info->default_rxportconf.ring_size = HNS3_DEFAULT_RING_DESC;
+	info->default_txportconf.ring_size = HNS3_DEFAULT_RING_DESC;
+}
+
+static void
+hns3vf_clear_event_cause(struct hns3_hw *hw, uint32_t regclr)
+{
+	hns3_write_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG, regclr);
+}
+
+static void
+hns3vf_disable_irq0(struct hns3_hw *hw)
+{
+	hns3_write_dev(hw, HNS3_MISC_VECTOR_REG_BASE, 0);
+}
+
+static void
+hns3vf_enable_irq0(struct hns3_hw *hw)
+{
+	hns3_write_dev(hw, HNS3_MISC_VECTOR_REG_BASE, 1);
+}
+
+static enum hns3vf_evt_cause
+hns3vf_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
+{
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3vf_evt_cause ret;
+	uint32_t cmdq_stat_reg;
+	uint32_t rst_ing_reg;
+	uint32_t val;
+
+	/* Fetch the events from their corresponding regs */
+	cmdq_stat_reg = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_STAT_REG);
+
+	/* Check for vector0 mailbox(=CMDQ RX) event source */
+	if (BIT(HNS3_VECTOR0_RX_CMDQ_INT_B) & cmdq_stat_reg) {
+		val = cmdq_stat_reg & ~BIT(HNS3_VECTOR0_RX_CMDQ_INT_B);
+		ret = HNS3VF_VECTOR0_EVENT_MBX;
+		goto out;
+	}
+
+	val = 0;
+	ret = HNS3VF_VECTOR0_EVENT_OTHER;
+out:
+	if (clearval)
+		*clearval = val;
+	return ret;
+}
+
+static void
+hns3vf_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3vf_evt_cause event_cause;
+	uint32_t clearval;
+
+	/* Disable interrupt */
+	hns3vf_disable_irq0(hw);
+
+	/* Read out interrupt causes */
+	event_cause = hns3vf_check_event_cause(hns, &clearval);
+
+	switch (event_cause) {
+	case HNS3VF_VECTOR0_EVENT_MBX:
+		hns3_dev_handle_mbx_msg(hw);
+		break;
+	default:
+		break;
+	}
+
+	/* Clear interrupt causes */
+	hns3vf_clear_event_cause(hw, clearval);
+
+	/* Enable interrupt */
+	hns3vf_enable_irq0(hw);
+}
+
+static int
+hns3vf_check_tqp_info(struct hns3_hw *hw)
+{
+	uint16_t tqps_num;
+
+	tqps_num = hw->tqps_num;
+	if (tqps_num > HNS3_MAX_TQP_NUM_PER_FUNC || tqps_num == 0) {
+		PMD_INIT_LOG(ERR, "Get invalid tqps_num(%u) from PF. valid "
+				  "range: 1~%d",
+			     tqps_num, HNS3_MAX_TQP_NUM_PER_FUNC);
+		return -EINVAL;
+	}
+
+	if (hw->rx_buf_len == 0)
+		hw->rx_buf_len = HNS3_DEFAULT_RX_BUF_LEN;
+	hw->alloc_rss_size = RTE_MIN(hw->rss_size_max, hw->tqps_num);
+
+	return 0;
+}
+
+static int
+hns3vf_get_queue_info(struct hns3_hw *hw)
+{
+#define HNS3VF_TQPS_RSS_INFO_LEN	6
+	uint8_t resp_msg[HNS3VF_TQPS_RSS_INFO_LEN];
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true,
+				resp_msg, HNS3VF_TQPS_RSS_INFO_LEN);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret);
+		return ret;
+	}
+
+	memcpy(&hw->tqps_num, &resp_msg[0], sizeof(uint16_t));
+	memcpy(&hw->rss_size_max, &resp_msg[2], sizeof(uint16_t));
+	memcpy(&hw->rx_buf_len, &resp_msg[4], sizeof(uint16_t));
+
+	return hns3vf_check_tqp_info(hw);
+}
+
+static int
+hns3vf_get_queue_depth(struct hns3_hw *hw)
+{
+#define HNS3VF_TQPS_DEPTH_INFO_LEN	4
+	uint8_t resp_msg[HNS3VF_TQPS_DEPTH_INFO_LEN];
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QDEPTH, 0, NULL, 0, true,
+				resp_msg, HNS3VF_TQPS_DEPTH_INFO_LEN);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get tqp depth info from PF: %d",
+			     ret);
+		return ret;
+	}
+
+	memcpy(&hw->num_tx_desc, &resp_msg[0], sizeof(uint16_t));
+	memcpy(&hw->num_rx_desc, &resp_msg[2], sizeof(uint16_t));
+
+	return 0;
+}
+
+static int
+hns3vf_get_tc_info(struct hns3_hw *hw)
+{
+	uint8_t resp_msg;
+	int ret;
+
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_TCINFO, 0, NULL, 0,
+				true, &resp_msg, sizeof(resp_msg));
+	if (ret) {
+		hns3_err(hw, "VF request to get TC info from PF failed %d",
+			 ret);
+		return ret;
+	}
+
+	hw->hw_tc_map = resp_msg;
+
+	return 0;
+}
+
+static int
+hns3vf_get_configuration(struct hns3_hw *hw)
+{
+	int ret;
+
+	hw->mac.media_type = HNS3_MEDIA_TYPE_NONE;
+
+	/* Get queue configuration from PF */
+	ret = hns3vf_get_queue_info(hw);
+	if (ret)
+		return ret;
+
+	/* Get queue depth info from PF */
+	ret = hns3vf_get_queue_depth(hw);
+	if (ret)
+		return ret;
+
+	/* Get tc configuration from PF */
+	return hns3vf_get_tc_info(hw);
+}
+
+static void
+hns3vf_set_tc_info(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t nb_rx_q = hw->data->nb_rx_queues;
+	uint16_t new_tqps;
+	uint8_t i;
+
+	hw->num_tc = 0;
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++)
+		if (hw->hw_tc_map & BIT(i))
+			hw->num_tc++;
+
+	new_tqps = RTE_MIN(hw->tqps_num, nb_rx_q);
+	hw->alloc_rss_size = RTE_MIN(hw->rss_size_max, new_tqps / hw->num_tc);
+	hw->alloc_tqps = hw->alloc_rss_size * hw->num_tc;
+
+	hns3_tc_queue_mapping_cfg(hw);
+}
+
+static void
+hns3vf_request_link_info(struct hns3_hw *hw)
+{
+	uint8_t resp_msg;
+	int ret;
+
+	if (rte_atomic16_read(&hw->reset.resetting))
+		return;
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false,
+				&resp_msg, sizeof(resp_msg));
+	if (ret)
+		hns3_err(hw, "Failed to fetch link status from PF: %d", ret);
+}
+
+static int
+hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
+{
+#define HNS3VF_VLAN_MBX_MSG_LEN 5
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN];
+	uint16_t proto = htons(RTE_ETHER_TYPE_VLAN);
+	uint8_t is_kill = on ? 0 : 1;
+
+	msg_data[0] = is_kill;
+	memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id));
+	memcpy(&msg_data[3], &proto, sizeof(proto));
+
+	return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER,
+				 msg_data, HNS3VF_VLAN_MBX_MSG_LEN, false, NULL,
+				 0);
+}
+
+static int
+hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3vf_vlan_filter_configure(hns, vlan_id, on);
+	rte_spinlock_unlock(&hw->lock);
+	if (ret)
+		hns3_err(hw, "vf set vlan id failed, vlan_id = %u, ret = %d",
+			 vlan_id, ret);
+
+	return ret;
+}
+
+static int
+hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable)
+{
+	uint8_t msg_data;
+	int ret;
+
+	msg_data = enable ? 1 : 0;
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG,
+				&msg_data, sizeof(msg_data), false, NULL, 0);
+	if (ret)
+		hns3_err(hw, "vf enable strip failed, ret = %d", ret);
+
+	return ret;
+}
+
+static int
+hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	unsigned int tmp_mask;
+
+	tmp_mask = (unsigned int)mask;
+	/* Vlan stripping setting */
+	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+		rte_spinlock_lock(&hw->lock);
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			hns3vf_en_hw_strip_rxvtag(hw, true);
+		else
+			hns3vf_en_hw_strip_rxvtag(hw, false);
+		rte_spinlock_unlock(&hw->lock);
+	}
+
+	return 0;
+}
+
+static int
+hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct rte_eth_dev_data *data = dev->data;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	if (data->dev_conf.txmode.hw_vlan_reject_tagged ||
+	    data->dev_conf.txmode.hw_vlan_reject_untagged ||
+	    data->dev_conf.txmode.hw_vlan_insert_pvid) {
+		hns3_warn(hw, "hw_vlan_reject_tagged, hw_vlan_reject_untagged "
+			      "or hw_vlan_insert_pvid is not supported!");
+	}
+
+	/* Apply vlan offload setting */
+	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+	if (ret)
+		hns3_err(hw, "dev config vlan offload failed, ret = %d", ret);
+
+	return ret;
+}
+
+static int
+hns3vf_set_alive(struct hns3_hw *hw, bool alive)
+{
+	uint8_t msg_data;
+
+	msg_data = alive ? 1 : 0;
+	return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data,
+				 sizeof(msg_data), false, NULL, 0);
+}
+
+static void
+hns3vf_keep_alive_handler(void *param)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	uint8_t respmsg;
+	int ret;
+
+	if (!hns3vf_is_reset_pending(hns)) {
+		ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
+					false, &respmsg, sizeof(uint8_t));
+		if (ret)
+			hns3_err(hw, "VF failed to send keep-alive cmd: %d",
+				 ret);
+	} else
+		hns3_warn(hw, "Skip keep-alive cmd when reset is pending");
+
+	rte_eal_alarm_set(HNS3VF_KEEP_ALIVE_INTERVAL, hns3vf_keep_alive_handler,
+			  eth_dev);
+}
+
+static void
+hns3vf_service_handler(void *param)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	/*
+	 * The link status query and reset processing are both executed in
+	 * the interrupt thread. When an IMP reset occurs, the IMP does not
+	 * respond, and the query operation times out after 30ms. With
+	 * multiple PF/VFs, each query timeout keeps the IMP reset interrupt
+	 * from being serviced within 100ms. So before querying the link
+	 * status, check whether a reset is pending, and if so, abandon the
+	 * query.
+	 */
+	if (!hns3vf_is_reset_pending(hns))
+		hns3vf_request_link_info(hw);
+	else
+		hns3_warn(hw, "Abandon the query when reset is pending");
+
+	rte_eal_alarm_set(HNS3VF_SERVICE_INTERVAL, hns3vf_service_handler,
+			  eth_dev);
+}
+
+static int
+hns3vf_init_hardware(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t mtu = hw->data->mtu;
+	int ret;
+
+	ret = hns3vf_set_promisc_mode(hw, true);
+	if (ret)
+		return ret;
+
+	ret = hns3vf_config_mtu(hw, mtu);
+	if (ret)
+		goto err_init_hardware;
+
+	ret = hns3vf_vlan_filter_configure(hns, 0, 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize VLAN config: %d", ret);
+		goto err_init_hardware;
+	}
+
+	ret = hns3_config_gro(hw, false);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to config gro: %d", ret);
+		goto err_init_hardware;
+	}
+
+	ret = hns3vf_set_alive(hw, true);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to VF send alive to PF: %d", ret);
+		goto err_init_hardware;
+	}
+
+	hns3vf_request_link_info(hw);
+	return 0;
+
+err_init_hardware:
+	(void)hns3vf_set_promisc_mode(hw, false);
+	return ret;
+}
+
+static int
+hns3vf_init_vf(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Get hardware io base address from pcie BAR2 IO space */
+	hw->io_base = pci_dev->mem_resource[2].addr;
+
+	/* Firmware command queue initialize */
+	ret = hns3_cmd_init_queue(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init cmd queue: %d", ret);
+		goto err_cmd_init_queue;
+	}
+
+	/* Firmware command initialize */
+	ret = hns3_cmd_init(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init cmd: %d", ret);
+		goto err_cmd_init;
+	}
+
+	rte_spinlock_init(&hw->mbx_resp.lock);
+
+	hns3vf_clear_event_cause(hw, 0);
+
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 hns3vf_interrupt_handler, eth_dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
+		goto err_intr_callback_register;
+	}
+
+	/* Enable interrupt */
+	rte_intr_enable(&pci_dev->intr_handle);
+	hns3vf_enable_irq0(hw);
+
+	/* Get configuration from PF */
+	ret = hns3vf_get_configuration(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fetch configuration: %d", ret);
+		goto err_get_config;
+	}
+
+	rte_eth_random_addr(hw->mac.mac_addr); /* Generate a random mac addr */
+
+	ret = hns3vf_init_hardware(hns);
+	if (ret)
+		goto err_get_config;
+
+	hns3_set_default_rss_args(hw);
+
+	hns3_stats_reset(eth_dev);
+	return 0;
+
+err_get_config:
+	hns3vf_disable_irq0(hw);
+	rte_intr_disable(&pci_dev->intr_handle);
+	hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+			     eth_dev);
+err_intr_callback_register:
+	hns3_cmd_uninit(hw);
+
+err_cmd_init:
+	hns3_cmd_destroy_queue(hw);
+
+err_cmd_init_queue:
+	hw->io_base = NULL;
+
+	return ret;
+}
+
+static void
+hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hns3_rss_uninit(hns);
+	(void)hns3vf_set_alive(hw, false);
+	(void)hns3vf_set_promisc_mode(hw, false);
+	hns3vf_disable_irq0(hw);
+	rte_intr_disable(&pci_dev->intr_handle);
+	hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+			     eth_dev);
+	hns3_cmd_uninit(hw);
+	hns3_cmd_destroy_queue(hw);
+	hw->io_base = NULL;
+}
+
+static int
+hns3vf_do_stop(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	bool reset_queue;
+
+	hw->mac.link_status = ETH_LINK_DOWN;
+
+	hns3vf_configure_mac_addr(hns, true);
+	reset_queue = true;
+	return hns3_stop_queues(hns, reset_queue);
+}
+
+static void
+hns3vf_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_state = HNS3_NIC_STOPPING;
+	hns3_set_rxtx_function(eth_dev);
+	rte_wmb();
+	/* Disable datapath on secondary process. */
+	hns3_mp_req_stop_rxtx(eth_dev);
+	/* Prevent crashes when queues are still in use. */
+	rte_delay_ms(hw->tqps_num);
+
+	rte_spinlock_lock(&hw->lock);
+	hns3vf_do_stop(hns);
+	hns3_dev_release_mbufs(hns);
+	hw->adapter_state = HNS3_NIC_CONFIGURED;
+	rte_spinlock_unlock(&hw->lock);
+}
+
+static void
+hns3vf_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	if (hw->adapter_state == HNS3_NIC_STARTED)
+		hns3vf_dev_stop(eth_dev);
+
+	hw->adapter_state = HNS3_NIC_CLOSING;
+	rte_eal_alarm_cancel(hns3vf_keep_alive_handler, eth_dev);
+	rte_eal_alarm_cancel(hns3vf_service_handler, eth_dev);
+	hns3vf_configure_all_mc_mac_addr(hns, true);
+	hns3vf_uninit_vf(eth_dev);
+	hns3_free_all_queues(eth_dev);
+	hw->adapter_state = HNS3_NIC_CLOSED;
+}
+
+static int
+hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
+		       __rte_unused int wait_to_complete)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_mac *mac = &hw->mac;
+	struct rte_eth_link new_link;
+
+	hns3vf_request_link_info(hw);
+
+	memset(&new_link, 0, sizeof(new_link));
+	switch (mac->link_speed) {
+	case ETH_SPEED_NUM_10M:
+	case ETH_SPEED_NUM_100M:
+	case ETH_SPEED_NUM_1G:
+	case ETH_SPEED_NUM_10G:
+	case ETH_SPEED_NUM_25G:
+	case ETH_SPEED_NUM_40G:
+	case ETH_SPEED_NUM_50G:
+	case ETH_SPEED_NUM_100G:
+		new_link.link_speed = mac->link_speed;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	}
+
+	new_link.link_duplex = mac->link_duplex;
+	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_autoneg =
+	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(eth_dev, &new_link);
+}
+
+static int
+hns3vf_do_start(struct hns3_adapter *hns, bool reset_queue)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	hns3vf_set_tc_info(hns);
+
+	ret = hns3_start_queues(hns, reset_queue);
+	if (ret) {
+		hns3_err(hw, "Failed to start queues: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+hns3vf_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_spinlock_lock(&hw->lock);
+	hw->adapter_state = HNS3_NIC_STARTING;
+	ret = hns3vf_do_start(hns, true);
+	if (ret) {
+		hw->adapter_state = HNS3_NIC_CONFIGURED;
+		rte_spinlock_unlock(&hw->lock);
+		return ret;
+	}
+	hw->adapter_state = HNS3_NIC_STARTED;
+	rte_spinlock_unlock(&hw->lock);
+	return 0;
+}
+
+static const struct eth_dev_ops hns3vf_eth_dev_ops = {
+	.dev_start          = hns3vf_dev_start,
+	.dev_stop           = hns3vf_dev_stop,
+	.dev_close          = hns3vf_dev_close,
+	.mtu_set            = hns3vf_dev_mtu_set,
+	.dev_infos_get      = hns3vf_dev_infos_get,
+	.dev_configure      = hns3vf_dev_configure,
+	.mac_addr_add       = hns3vf_add_mac_addr,
+	.mac_addr_remove    = hns3vf_remove_mac_addr,
+	.mac_addr_set       = hns3vf_set_default_mac_addr,
+	.set_mc_addr_list   = hns3vf_set_mc_mac_addr_list,
+	.link_update        = hns3vf_dev_link_update,
+	.rss_hash_update    = hns3_dev_rss_hash_update,
+	.rss_hash_conf_get  = hns3_dev_rss_hash_conf_get,
+	.reta_update        = hns3_dev_rss_reta_update,
+	.reta_query         = hns3_dev_rss_reta_query,
+	.filter_ctrl        = hns3_dev_filter_ctrl,
+	.vlan_filter_set    = hns3vf_vlan_filter_set,
+	.vlan_offload_set   = hns3vf_vlan_offload_set,
+};
+
+static int
+hns3vf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->process_private = (struct hns3_process_private *)
+	    rte_zmalloc_socket("hns3_filter_list",
+			       sizeof(struct hns3_process_private),
+			       RTE_CACHE_LINE_SIZE, eth_dev->device->numa_node);
+	if (eth_dev->process_private == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to alloc memory for process private");
+		return -ENOMEM;
+	}
+
+	/* initialize flow filter lists */
+	hns3_filterlist_init(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	eth_dev->dev_ops = &hns3vf_eth_dev_ops;
+
+	hw->adapter_state = HNS3_NIC_UNINITIALIZED;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+	hns->is_vf = true;
+	hw->data = eth_dev->data;
+
+	ret = hns3vf_init_vf(eth_dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init vf: %d", ret);
+		goto err_init_vf;
+	}
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("hns3vf-mac",
+					       sizeof(struct rte_ether_addr) *
+					       HNS3_UC_MACADDR_NUM, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %zu bytes needed to "
+			     "store MAC addresses",
+			     sizeof(struct rte_ether_addr) *
+			     HNS3_UC_MACADDR_NUM);
+		ret = -ENOMEM;
+		goto err_rte_zmalloc;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.mac_addr,
+			    &eth_dev->data->mac_addrs[0]);
+	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	rte_eal_alarm_set(HNS3VF_KEEP_ALIVE_INTERVAL, hns3vf_keep_alive_handler,
+			  eth_dev);
+	rte_eal_alarm_set(HNS3VF_SERVICE_INTERVAL, hns3vf_service_handler,
+			  eth_dev);
+	return 0;
+
+err_rte_zmalloc:
+	hns3vf_uninit_vf(eth_dev);
+
+err_init_vf:
+	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
+	rte_free(eth_dev->process_private);
+	eth_dev->process_private = NULL;
+
+	return ret;
+}
+
+static int
+hns3vf_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
+
+	if (hw->adapter_state < HNS3_NIC_CLOSING)
+		hns3vf_dev_close(eth_dev);
+
+	rte_free(eth_dev->process_private);
+	eth_dev->process_private = NULL;
+	hw->adapter_state = HNS3_NIC_REMOVED;
+	return 0;
+}
+
+static int
+eth_hns3vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct hns3_adapter),
+					     hns3vf_dev_init);
+}
+
+static int
+eth_hns3vf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, hns3vf_dev_uninit);
+}
+
+static const struct rte_pci_id pci_id_hns3vf_map[] = {
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_VF) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_RDMA_PFC_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct rte_pci_driver rte_hns3vf_pmd = {
+	.id_table = pci_id_hns3vf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = eth_hns3vf_pci_probe,
+	.remove = eth_hns3vf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_hns3_vf, rte_hns3vf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_hns3_vf, pci_id_hns3vf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_hns3_vf, "* igb_uio | vfio-pci");
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (13 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-23 15:42   ` Aaron Conole
  2019-08-30 15:13   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 16/22] net/hns3: add start stop configure promiscuous ops Wei Hu (Xavier)
                   ` (7 subsequent siblings)
  22 siblings, 2 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the queue-related operations and the packet transmit
and receive function code.
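
The new .rx_queue_setup/.tx_queue_setup ops are reached through the
generic ethdev API. A minimal application-side sketch (assuming an
already created mempool "mb_pool"; hip08 requires equal rx and tx queue
counts, which .dev_configure enforces):

    /* Sketch: bring up one rx/tx queue pair on port 0. The descriptor
     * count must lie within [HNS3_MIN_RING_DESC, HNS3_MAX_RING_DESC]
     * and be a multiple of HNS3_ALIGN_RING_DESC, as checked in
     * hns3_rx_queue_setup().
     */
    static struct rte_eth_conf port_conf; /* all-zero default config */
    int ret = rte_eth_dev_configure(0, 1, 1, &port_conf);
    if (ret == 0)
            ret = rte_eth_rx_queue_setup(0, 0, 1024, rte_socket_id(),
                                         NULL, mb_pool);
    if (ret == 0)
            ret = rte_eth_tx_queue_setup(0, 0, 1024, rte_socket_id(),
                                         NULL);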

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_dcb.c       |    1 +
 drivers/net/hns3/hns3_ethdev.c    |    8 +
 drivers/net/hns3/hns3_ethdev_vf.c |    7 +
 drivers/net/hns3/hns3_rxtx.c      | 1343 +++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_rxtx.h      |  287 ++++++++
 5 files changed, 1646 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_rxtx.c
 create mode 100644 drivers/net/hns3/hns3_rxtx.h

diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 6fb97de..b86a4b0 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -13,6 +13,7 @@
 #include <rte_memcpy.h>
 #include <rte_spinlock.h>
 
+#include "hns3_rxtx.h"
 #include "hns3_logs.h"
 #include "hns3_cmd.h"
 #include "hns3_mbx.h"
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index df4749e..73b34d2 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -35,6 +35,7 @@
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
+#include "hns3_rxtx.h"
 #include "hns3_regs.h"
 #include "hns3_dcb.h"
 
@@ -3603,6 +3604,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.mtu_set            = hns3_dev_mtu_set,
 	.dev_infos_get          = hns3_dev_infos_get,
 	.fw_version_get         = hns3_fw_version_get,
+	.rx_queue_setup         = hns3_rx_queue_setup,
+	.tx_queue_setup         = hns3_tx_queue_setup,
+	.rx_queue_release       = hns3_dev_rx_queue_release,
+	.tx_queue_release       = hns3_dev_tx_queue_release,
 	.flow_ctrl_get          = hns3_flow_ctrl_get,
 	.flow_ctrl_set          = hns3_flow_ctrl_set,
 	.priority_flow_ctrl_set = hns3_priority_flow_ctrl_set,
@@ -3645,6 +3650,7 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow filter lists */
 	hns3_filterlist_init(eth_dev);
 
+	hns3_set_rxtx_function(eth_dev);
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
@@ -3698,6 +3704,8 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 
 err_init_pf:
 	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
 	rte_free(eth_dev->process_private);
 	eth_dev->process_private = NULL;
 	return ret;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 43b27ed..abab2b4 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -35,6 +35,7 @@
 #include "hns3_fdir.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
+#include "hns3_rxtx.h"
 #include "hns3_regs.h"
 #include "hns3_dcb.h"
 
@@ -1133,6 +1134,7 @@ hns3vf_dev_start(struct rte_eth_dev *eth_dev)
 	}
 	hw->adapter_state = HNS3_NIC_STARTED;
 	rte_spinlock_unlock(&hw->lock);
+	hns3_set_rxtx_function(eth_dev);
 	return 0;
 }
 
@@ -1142,6 +1144,10 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.dev_close          = hns3vf_dev_close,
 	.mtu_set            = hns3vf_dev_mtu_set,
 	.dev_infos_get      = hns3vf_dev_infos_get,
+	.rx_queue_setup     = hns3_rx_queue_setup,
+	.tx_queue_setup     = hns3_tx_queue_setup,
+	.rx_queue_release   = hns3_dev_rx_queue_release,
+	.tx_queue_release   = hns3_dev_tx_queue_release,
 	.dev_configure      = hns3vf_dev_configure,
 	.mac_addr_add       = hns3vf_add_mac_addr,
 	.mac_addr_remove    = hns3vf_remove_mac_addr,
@@ -1179,6 +1185,7 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	/* initialize flow filter lists */
 	hns3_filterlist_init(eth_dev);
 
+	hns3_set_rxtx_function(eth_dev);
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
new file mode 100644
index 0000000..8a3ca4f
--- /dev/null
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -0,0 +1,1343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdarg.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <rte_bus_pci.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_pci.h>
+#include <rte_spinlock.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_rxtx.h"
+#include "hns3_regs.h"
+#include "hns3_logs.h"
+
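+/* The BD-number registers are written as (desc count / 8) - 1. */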
+#define HNS3_CFG_DESC_NUM(num)	((num) / 8 - 1)
+#define DEFAULT_RX_FREE_THRESH	16
+
+static void
+hns3_rx_queue_release_mbufs(struct hns3_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (rxq->sw_ring) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i].mbuf) {
+				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+				rxq->sw_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+static void
+hns3_tx_queue_release_mbufs(struct hns3_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (txq->sw_ring) {
+		for (i = 0; i < txq->nb_tx_desc; i++) {
+			if (txq->sw_ring[i].mbuf) {
+				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+				txq->sw_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+static void
+hns3_rx_queue_release(void *queue)
+{
+	struct hns3_rx_queue *rxq = queue;
+	if (rxq) {
+		hns3_rx_queue_release_mbufs(rxq);
+		if (rxq->mz)
+			rte_memzone_free(rxq->mz);
+		if (rxq->sw_ring)
+			rte_free(rxq->sw_ring);
+		rte_free(rxq);
+	}
+}
+
+static void
+hns3_tx_queue_release(void *queue)
+{
+	struct hns3_tx_queue *txq = queue;
+	if (txq) {
+		hns3_tx_queue_release_mbufs(txq);
+		if (txq->mz)
+			rte_memzone_free(txq->mz);
+		if (txq->sw_ring)
+			rte_free(txq->sw_ring);
+		rte_free(txq);
+	}
+}
+
+void
+hns3_dev_rx_queue_release(void *queue)
+{
+	struct hns3_rx_queue *rxq = queue;
+	struct hns3_adapter *hns;
+
+	if (rxq == NULL)
+		return;
+
+	hns = rxq->hns;
+	rte_spinlock_lock(&hns->hw.lock);
+	hns3_rx_queue_release(queue);
+	rte_spinlock_unlock(&hns->hw.lock);
+}
+
+void
+hns3_dev_tx_queue_release(void *queue)
+{
+	struct hns3_tx_queue *txq = queue;
+	struct hns3_adapter *hns;
+
+	if (txq == NULL)
+		return;
+
+	hns = txq->hns;
+	rte_spinlock_lock(&hns->hw.lock);
+	hns3_tx_queue_release(queue);
+	rte_spinlock_unlock(&hns->hw.lock);
+}
+
+void
+hns3_free_all_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	if (dev->data->rx_queues)
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			hns3_rx_queue_release(dev->data->rx_queues[i]);
+			dev->data->rx_queues[i] = NULL;
+		}
+
+	if (dev->data->tx_queues)
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			hns3_tx_queue_release(dev->data->tx_queues[i]);
+			dev->data->tx_queues[i] = NULL;
+		}
+}
+
+static int
+hns3_alloc_rx_queue_mbufs(struct hns3_hw *hw, struct hns3_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (unlikely(mbuf == NULL)) {
+			hns3_err(hw, "Failed to allocate RXD[%d] for rx queue!",
+				 i);
+			hns3_rx_queue_release_mbufs(rxq);
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		rxq->sw_ring[i].mbuf = mbuf;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+		rxq->rx_ring[i].addr = dma_addr;
+		rxq->rx_ring[i].rx.bd_base_info = 0;
+	}
+
+	return 0;
+}
+
+static int
+hns3_buf_size2type(uint32_t buf_size)
+{
+	int bd_size_type;
+
+	switch (buf_size) {
+	case 512:
+		bd_size_type = HNS3_BD_SIZE_512_TYPE;
+		break;
+	case 1024:
+		bd_size_type = HNS3_BD_SIZE_1024_TYPE;
+		break;
+	case 4096:
+		bd_size_type = HNS3_BD_SIZE_4096_TYPE;
+		break;
+	default:
+		bd_size_type = HNS3_BD_SIZE_2048_TYPE;
+	}
+
+	return bd_size_type;
+}
+
+static void
+hns3_init_rx_queue_hw(struct hns3_rx_queue *rxq)
+{
+	uint32_t rx_buf_len = rxq->rx_buf_len;
+	uint64_t dma_addr = rxq->rx_ring_phys_addr;
+
+	hns3_write_dev(rxq, HNS3_RING_RX_BASEADDR_L_REG, (uint32_t)dma_addr);
+	hns3_write_dev(rxq, HNS3_RING_RX_BASEADDR_H_REG,
+		       (uint32_t)((dma_addr >> 31) >> 1));
+
+	hns3_write_dev(rxq, HNS3_RING_RX_BD_LEN_REG,
+		       hns3_buf_size2type(rx_buf_len));
+	hns3_write_dev(rxq, HNS3_RING_RX_BD_NUM_REG,
+		       HNS3_CFG_DESC_NUM(rxq->nb_rx_desc));
+}
+
+static void
+hns3_init_tx_queue_hw(struct hns3_tx_queue *txq)
+{
+	uint64_t dma_addr = txq->tx_ring_phys_addr;
+
+	hns3_write_dev(txq, HNS3_RING_TX_BASEADDR_L_REG, (uint32_t)dma_addr);
+	hns3_write_dev(txq, HNS3_RING_TX_BASEADDR_H_REG,
+		       (uint32_t)((dma_addr >> 31) >> 1));
+
+	hns3_write_dev(txq, HNS3_RING_TX_BD_NUM_REG,
+		       HNS3_CFG_DESC_NUM(txq->nb_tx_desc));
+}
+
+static void
+hns3_enable_all_queues(struct hns3_hw *hw, bool en)
+{
+	struct hns3_rx_queue *rxq;
+	struct hns3_tx_queue *txq;
+	uint32_t rcb_reg;
+	int i;
+
+	for (i = 0; i < hw->data->nb_rx_queues; i++) {
+		rxq = hw->data->rx_queues[i];
+		txq = hw->data->tx_queues[i];
+		if (rxq == NULL || txq == NULL ||
+		    (en && (rxq->rx_deferred_start || txq->tx_deferred_start)))
+			continue;
+		rcb_reg = hns3_read_dev(rxq, HNS3_RING_EN_REG);
+		if (en)
+			rcb_reg |= BIT(HNS3_RING_EN_B);
+		else
+			rcb_reg &= ~BIT(HNS3_RING_EN_B);
+		hns3_write_dev(rxq, HNS3_RING_EN_REG, rcb_reg);
+	}
+}
+
+static int
+hns3_tqp_enable(struct hns3_hw *hw, uint16_t queue_id, bool enable)
+{
+	struct hns3_cfg_com_tqp_queue_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	req = (struct hns3_cfg_com_tqp_queue_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_COM_TQP_QUEUE, false);
+	req->tqp_id = rte_cpu_to_le_16(queue_id & HNS3_RING_ID_MASK);
+	req->stream_id = 0;
+	hns3_set_bit(req->enable, HNS3_TQP_ENABLE_B, enable ? 1 : 0);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "TQP enable fail, ret = %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_send_reset_tqp_cmd(struct hns3_hw *hw, uint16_t queue_id, bool enable)
+{
+	struct hns3_reset_tqp_queue_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RESET_TQP_QUEUE, false);
+
+	req = (struct hns3_reset_tqp_queue_cmd *)desc.data;
+	req->tqp_id = rte_cpu_to_le_16(queue_id & HNS3_RING_ID_MASK);
+	hns3_set_bit(req->reset_req, HNS3_TQP_RESET_B, enable ? 1 : 0);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "Send tqp reset cmd error, ret = %d", ret);
+
+	return ret;
+}
+
+static int
+hns3_get_reset_status(struct hns3_hw *hw, uint16_t queue_id)
+{
+	struct hns3_reset_tqp_queue_cmd *req;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RESET_TQP_QUEUE, true);
+
+	req = (struct hns3_reset_tqp_queue_cmd *)desc.data;
+	req->tqp_id = rte_cpu_to_le_16(queue_id & HNS3_RING_ID_MASK);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Get reset status error, ret = %d", ret);
+		return ret;
+	}
+
+	return hns3_get_bit(req->ready_to_reset, HNS3_TQP_RESET_B);
+}
+
+static int
+hns3_reset_tqp(struct hns3_hw *hw, uint16_t queue_id)
+{
+#define HNS3_TQP_RESET_TRY_MS	200
+	uint64_t end;
+	int reset_status;
+	int ret;
+
+	ret = hns3_tqp_enable(hw, queue_id, false);
+	if (ret)
+		return ret;
+
+	/*
+	 * In the current version the VF is not supported when the PF is
+	 * taken over by DPDK: all task queue pairs are mapped to the PF
+	 * function, so the PF's queue id equals the global queue id within
+	 * the PF's range.
+	 */
+	ret = hns3_send_reset_tqp_cmd(hw, queue_id, true);
+	if (ret) {
+		hns3_err(hw, "Send reset tqp cmd fail, ret = %d", ret);
+		return ret;
+	}
+	ret = -ETIMEDOUT;
+	end = get_timeofday_ms() + HNS3_TQP_RESET_TRY_MS;
+	do {
+		/* Wait for tqp hw reset */
+		rte_delay_ms(HNS3_POLL_RESPONE_MS);
+		reset_status = hns3_get_reset_status(hw, queue_id);
+		if (reset_status) {
+			ret = 0;
+			break;
+		}
+	} while (get_timeofday_ms() < end);
+
+	if (ret) {
+		hns3_err(hw, "Reset TQP fail, ret = %d", ret);
+		return ret;
+	}
+
+	ret = hns3_send_reset_tqp_cmd(hw, queue_id, false);
+	if (ret)
+		hns3_err(hw, "Deassert the soft reset fail, ret = %d", ret);
+
+	return ret;
+}
+
+static int
+hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id)
+{
+	uint8_t msg_data[2];
+	int ret;
+
+	/* Disable VF's queue before send queue reset msg to PF */
+	ret = hns3_tqp_enable(hw, queue_id, false);
+	if (ret)
+		return ret;
+
+	memcpy(msg_data, &queue_id, sizeof(uint16_t));
+
+	return hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data,
+				 sizeof(msg_data), true, NULL, 0);
+}
+
+static int
+hns3_reset_queue(struct hns3_adapter *hns, uint16_t queue_id)
+{
+	struct hns3_hw *hw = &hns->hw;
+	if (hns->is_vf)
+		return hns3vf_reset_tqp(hw, queue_id);
+	else
+		return hns3_reset_tqp(hw, queue_id);
+}
+
+int
+hns3_reset_all_queues(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+	uint16_t i;
+
+	for (i = 0; i < hw->data->nb_rx_queues; i++) {
+		ret = hns3_reset_queue(hns, i);
+		if (ret) {
+			hns3_err(hw, "Failed to reset No.%d queue: %d", i, ret);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static int
+hns3_dev_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rx_queue *rxq;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	rxq = hw->data->rx_queues[idx];
+
+	ret = hns3_alloc_rx_queue_mbufs(hw, rxq);
+	if (ret) {
+		hns3_err(hw, "Failed to alloc mbuf for No.%d rx queue: %d",
+			    idx, ret);
+		return ret;
+	}
+
+	rxq->next_to_use = 0;
+	rxq->next_to_clean = 0;
+	hns3_init_rx_queue_hw(rxq);
+
+	return 0;
+}
+
+static void
+hns3_dev_tx_queue_start(struct hns3_adapter *hns, uint16_t idx)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_queue *txq;
+	struct hns3_desc *desc;
+	int i;
+
+	txq = hw->data->tx_queues[idx];
+
+	/* Clear tx bd */
+	desc = txq->tx_ring;
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		desc->tx.tp_fe_sc_vld_ra_ri = 0;
+		desc++;
+	}
+
+	txq->next_to_use = 0;
+	txq->next_to_clean = 0;
+	txq->tx_bd_ready   = txq->nb_tx_desc;
+	hns3_init_tx_queue_hw(txq);
+}
+
+static void
+hns3_init_tx_ring_tc(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_queue *txq;
+	int i, num;
+
+	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
+		struct hns3_tc_queue_info *tc_queue = &hw->tc_queue[i];
+		int j;
+
+		if (!tc_queue->enable)
+			continue;
+
+		for (j = 0; j < tc_queue->tqp_count; j++) {
+			num = tc_queue->tqp_offset + j;
+			txq = hw->data->tx_queues[num];
+			if (txq == NULL)
+				continue;
+
+			hns3_write_dev(txq, HNS3_RING_TX_TC_REG, tc_queue->tc);
+		}
+	}
+}
+
+int
+hns3_start_queues(struct hns3_adapter *hns, bool reset_queue)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_dev_data *dev_data = hw->data;
+	struct hns3_rx_queue *rxq;
+	struct hns3_tx_queue *txq;
+	int ret;
+	int i;
+	int j;
+
+	/* Initialize RSS for queues */
+	ret = hns3_config_rss(hns);
+	if (ret) {
+		hns3_err(hw, "Failed to configure rss %d", ret);
+		return ret;
+	}
+
+	if (reset_queue) {
+		ret = hns3_reset_all_queues(hns);
+		if (ret) {
+			hns3_err(hw, "Failed to reset all queues %d", ret);
+			return ret;
+		}
+	}
+
+	/*
+	 * hip08 hardware does not support configurations in which the
+	 * number of rx queues differs from the number of tx queues. The two
+	 * values have already been checked in the .dev_configure callback,
+	 * so here the numbers of rx and tx queues are assumed equal.
+	 */
+	for (i = 0; i < hw->data->nb_rx_queues; i++) {
+		rxq = dev_data->rx_queues[i];
+		txq = dev_data->tx_queues[i];
+		if (rxq == NULL || txq == NULL || rxq->rx_deferred_start ||
+		    txq->tx_deferred_start)
+			continue;
+
+		ret = hns3_dev_rx_queue_start(hns, i);
+		if (ret) {
+			hns3_err(hw, "Failed to start No.%d rx queue: %d", i,
+				 ret);
+			goto out;
+		}
+		hns3_dev_tx_queue_start(hns, i);
+	}
+	hns3_init_tx_ring_tc(hns);
+
+	hns3_enable_all_queues(hw, true);
+	return 0;
+
+out:
+	for (j = 0; j < i; j++) {
+		rxq = dev_data->rx_queues[j];
+		hns3_rx_queue_release_mbufs(rxq);
+	}
+
+	return ret;
+}
+
+int
+hns3_stop_queues(struct hns3_adapter *hns, bool reset_queue)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	hns3_enable_all_queues(hw, false);
+	if (reset_queue) {
+		ret = hns3_reset_all_queues(hns);
+		if (ret) {
+			hns3_err(hw, "Failed to reset all queues %d", ret);
+			return ret;
+		}
+	}
+	return 0;
+}
+
+void
+hns3_dev_release_mbufs(struct hns3_adapter *hns)
+{
+	struct rte_eth_dev_data *dev_data = hns->hw.data;
+	struct hns3_rx_queue *rxq;
+	struct hns3_tx_queue *txq;
+	int i;
+
+	if (dev_data->rx_queues)
+		for (i = 0; i < dev_data->nb_rx_queues; i++) {
+			rxq = dev_data->rx_queues[i];
+			if (rxq == NULL || rxq->rx_deferred_start)
+				continue;
+			hns3_rx_queue_release_mbufs(rxq);
+		}
+
+	if (dev_data->tx_queues)
+		for (i = 0; i < dev_data->nb_tx_queues; i++) {
+			txq = dev_data->tx_queues[i];
+			if (txq == NULL || txq->tx_deferred_start)
+				continue;
+			hns3_tx_queue_release_mbufs(txq);
+		}
+}
+
+int
+hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
+		    unsigned int socket_id, const struct rte_eth_rxconf *conf,
+		    struct rte_mempool *mp)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const struct rte_memzone *rx_mz;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_rx_queue *rxq;
+	unsigned int desc_size = sizeof(struct hns3_desc);
+	unsigned int rx_desc;
+	int rx_entry_len;
+
+	if (dev->data->dev_started) {
+		hns3_err(hw, "rx_queue_setup after dev_start is not supported");
+		return -EINVAL;
+	}
+
+	if (nb_desc > HNS3_MAX_RING_DESC || nb_desc < HNS3_MIN_RING_DESC ||
+	    nb_desc % HNS3_ALIGN_RING_DESC) {
+		hns3_err(hw, "Number (%u) of rx descriptors is invalid",
+			 nb_desc);
+		return -EINVAL;
+	}
+
+	if (dev->data->rx_queues[idx]) {
+		hns3_rx_queue_release(dev->data->rx_queues[idx]);
+		dev->data->rx_queues[idx] = NULL;
+	}
+
+	rxq = rte_zmalloc_socket("hns3 RX queue", sizeof(struct hns3_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		hns3_err(hw, "Failed to allocate memory for rx queue!");
+		return -ENOMEM;
+	}
+
+	rxq->hns = hns;
+	rxq->mb_pool = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->queue_id = idx;
+	if (conf->rx_free_thresh <= 0)
+		rxq->rx_free_thresh = DEFAULT_RX_FREE_THRESH;
+	else
+		rxq->rx_free_thresh = conf->rx_free_thresh;
+	rxq->rx_deferred_start = conf->rx_deferred_start;
+
+	rx_entry_len = sizeof(struct hns3_entry) * rxq->nb_rx_desc;
+	rxq->sw_ring = rte_zmalloc_socket("hns3 RX sw ring", rx_entry_len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq->sw_ring == NULL) {
+		hns3_err(hw, "Failed to allocate memory for rx sw ring!");
+		hns3_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate rx ring hardware descriptors. */
+	rx_desc = rxq->nb_rx_desc * desc_size;
+	rx_mz = rte_eth_dma_zone_reserve(dev, "rx_ring", idx, rx_desc,
+					 HNS3_RING_BASE_ALIGN, socket_id);
+	if (rx_mz == NULL) {
+		hns3_err(hw, "Failed to reserve DMA memory for No.%d rx ring!",
+			 idx);
+		hns3_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+	rxq->mz = rx_mz;
+	rxq->rx_ring = (struct hns3_desc *)rx_mz->addr;
+	rxq->rx_ring_phys_addr = rx_mz->iova;
+
+	hns3_dbg(hw, "No.%d rx descriptors iova 0x%" PRIx64, idx,
+		 rxq->rx_ring_phys_addr);
+
+	rxq->next_to_use = 0;
+	rxq->next_to_clean = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+	rxq->port_id = dev->data->port_id;
+	rxq->configured = true;
+	rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
+				idx * HNS3_TQP_REG_SIZE);
+	rxq->rx_buf_len = hw->rx_buf_len;
+	rxq->non_vld_descs = 0;
+	rxq->l2_errors = 0;
+	rxq->csum_erros = 0;
+	rxq->pkt_len_errors = 0;
+	rxq->errors = 0;
+
+	rte_spinlock_lock(&hw->lock);
+	dev->data->rx_queues[idx] = rxq;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static inline uint32_t
+rxd_pkt_info_to_pkt_type(uint32_t pkt_info)
+{
+#define HNS3_L2TBL_NUM	4
+#define HNS3_L3TBL_NUM	16
+#define HNS3_L4TBL_NUM	16
+	uint32_t pkt_type = 0;
+	uint32_t l2id, l3id, l4id;
+
+	static const uint32_t l2table[HNS3_L2TBL_NUM] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_VLAN,
+		RTE_PTYPE_L2_ETHER_QINQ,
+		0
+	};
+
+	static const uint32_t l3table[HNS3_L3TBL_NUM] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		0, 0, 0, 0, 0, 0, 0, 0, 0
+	};
+
+	static const uint32_t l4table[HNS3_L4TBL_NUM] = {
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_IGMP,
+		RTE_PTYPE_L4_ICMP,
+		0, 0, 0, 0, 0, 0, 0, 0, 0, 0
+	};
+
+	l2id = hns3_get_field(pkt_info, HNS3_RXD_STRP_TAGP_M,
+			      HNS3_RXD_STRP_TAGP_S);
+	l3id = hns3_get_field(pkt_info, HNS3_RXD_L3ID_M, HNS3_RXD_L3ID_S);
+	l4id = hns3_get_field(pkt_info, HNS3_RXD_L4ID_M, HNS3_RXD_L4ID_S);
+	pkt_type |= (l2table[l2id] | l3table[l3id] | l4table[l4id]);
+
+	return pkt_type;
+}
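+
+/*
+ * For example, a BD whose l234_info carries a stripped-tag field of 0,
+ * an L3ID of 0 and an L4ID of 1 decodes to
+ * RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP.
+ */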
+
+const uint32_t *
+hns3_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_VLAN,
+		RTE_PTYPE_L2_ETHER_QINQ,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_IGMP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	if (dev->rx_pkt_burst == hns3_recv_pkts)
+		return ptypes;
+
+	return NULL;
+}
+
+static void
+hns3_clean_rx_buffers(struct hns3_rx_queue *rxq, int count)
+{
+	rxq->next_to_use += count;
+	if (rxq->next_to_use >= rxq->nb_rx_desc)
+		rxq->next_to_use -= rxq->nb_rx_desc;
+
+	hns3_write_dev(rxq, HNS3_RING_RX_HEAD_REG, count);
+}
+
+static int
+hns3_handle_bdinfo(struct hns3_rx_queue *rxq, uint16_t pkt_len,
+		   uint32_t bd_base_info, uint32_t l234_info)
+{
+	if (unlikely(l234_info & BIT(HNS3_RXD_L2E_B))) {
+		rxq->l2_errors++;
+		return -EINVAL;
+	}
+
+	if (unlikely(pkt_len == 0 || (l234_info & BIT(HNS3_RXD_TRUNCAT_B)))) {
+		rxq->pkt_len_errors++;
+		return -EINVAL;
+	}
+
+	if ((bd_base_info & BIT(HNS3_RXD_L3L4P_B)) &&
+	      unlikely(l234_info & (BIT(HNS3_RXD_L3E_B) | BIT(HNS3_RXD_L4E_B) |
+		       BIT(HNS3_RXD_OL3E_B) | BIT(HNS3_RXD_OL4E_B)))) {
+		rxq->csum_errors++;
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+uint16_t
+hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct hns3_rx_queue *rxq;      /* RX queue */
+	struct hns3_desc *rx_ring;      /* RX ring (desc) */
+	struct hns3_entry *sw_ring;
+	struct hns3_entry *rxe;
+	struct hns3_desc *rxdp;         /* pointer of the current desc */
+	struct rte_mbuf *first_seg;
+	struct rte_mbuf *last_seg;
+	struct rte_mbuf *nmb;           /* pointer of the new mbuf */
+	struct rte_mbuf *rxm;
+	struct rte_eth_dev *dev;
+	struct hns3_hw *hw;
+	uint32_t bd_base_info;
+	uint32_t l234_info;
+	uint64_t dma_addr;
+	uint16_t data_len;
+	uint16_t nb_rx_bd;
+	uint16_t pkt_len;
+	uint16_t nb_rx;
+	uint16_t rx_id;
+	int num;                        /* num of desc in ring */
+	int ret;
+
+	nb_rx = 0;
+	nb_rx_bd = 0;
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+	hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	rx_id = rxq->next_to_clean;
+	rx_ring = rxq->rx_ring;
+	first_seg = rxq->pkt_first_seg;
+	last_seg = rxq->pkt_last_seg;
+	sw_ring = rxq->sw_ring;
+
+	/* Get num of packets in desc ring */
+	num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
+	while (nb_rx_bd < num && nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		bd_base_info = rte_le_to_cpu_32(rxdp->rx.bd_base_info);
+		rte_cio_rmb();
+		if (unlikely(!hns3_get_bit(bd_base_info, HNS3_RXD_VLD_B))) {
+			nb_rx_bd++;
+			rx_id++;
+			if (unlikely(rx_id == rxq->nb_rx_desc))
+				rx_id = 0;
+			rxq->non_vld_descs++;
+			break;
+		}
+
+		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (unlikely(nmb == NULL)) {
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		nb_rx_bd++;
+		rxe = &sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		rte_prefetch0(rxe->mbuf);
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->addr = dma_addr;
+		rxdp->rx.bd_base_info = 0;
+
+		data_len = (uint16_t)(rte_le_to_cpu_16(rxdp->rx.size));
+		l234_info = rte_le_to_cpu_32(rxdp->rx.l234_info);
+
+		if (first_seg == NULL) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+		} else {
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->data_len = data_len;
+
+		if (!hns3_get_bit(bd_base_info, HNS3_RXD_FE_B)) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* The last buffer of the received packet */
+		pkt_len = (uint16_t)(rte_le_to_cpu_16(rxdp->rx.pkt_len));
+		first_seg->pkt_len = pkt_len;
+		first_seg->port = rxq->port_id;
+		first_seg->hash.rss = rxdp->rx.rss_hash;
+		if (unlikely(hns3_get_bit(bd_base_info, HNS3_RXD_LUM_B))) {
+			first_seg->hash.fdir.hi =
+				rte_le_to_cpu_32(rxdp->rx.fd_id);
+			first_seg->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		}
+		rxm->next = NULL;
+
+		ret = hns3_handle_bdinfo(rxq, pkt_len, bd_base_info, l234_info);
+		if (unlikely(ret))
+			goto pkt_err;
+
+		first_seg->packet_type = rxd_pkt_info_to_pkt_type(l234_info);
+		first_seg->vlan_tci = rxdp->rx.vlan_tag;
+		first_seg->vlan_tci_outer = rxdp->rx.ot_vlan_tag;
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+		continue;
+pkt_err:
+		rte_pktmbuf_free(first_seg);
+		first_seg = NULL;
+		rxq->errors++;
+		hns3_dbg(hw, "Found pkt err in Port %d No.%d Rx queue, "
+			     "rx bd: l234_info 0x%x, bd_base_info 0x%x, "
+			     "addr 0x%x, 0x%x, pkt_len 0x%x, size 0x%x",
+			     rxq->port_id, rxq->queue_id, l234_info,
+			     bd_base_info, rxdp->addr0, rxdp->addr1,
+			     rxdp->rx.pkt_len, rxdp->rx.size);
+	}
+
+	rxq->next_to_clean = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+	hns3_clean_rx_buffers(rxq, nb_rx_bd);
+
+	return nb_rx;
+}
+
+int
+hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
+		    unsigned int socket_id, const struct rte_eth_txconf *conf)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const struct rte_memzone *tx_mz;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tx_queue *txq;
+	struct hns3_desc *desc;
+	unsigned int desc_size = sizeof(struct hns3_desc);
+	unsigned int tx_desc;
+	int tx_entry_len;
+	int i;
+
+	if (dev->data->dev_started) {
+		hns3_err(hw, "tx_queue_setup after dev_start not supported");
+		return -EINVAL;
+	}
+
+	if (nb_desc > HNS3_MAX_RING_DESC || nb_desc < HNS3_MIN_RING_DESC ||
+	    nb_desc % HNS3_ALIGN_RING_DESC) {
+		hns3_err(hw, "Number (%u) of tx descriptors is invalid",
+			    nb_desc);
+		return -EINVAL;
+	}
+
+	if (dev->data->tx_queues[idx] != NULL) {
+		hns3_tx_queue_release(dev->data->tx_queues[idx]);
+		dev->data->tx_queues[idx] = NULL;
+	}
+
+	txq = rte_zmalloc_socket("hns3 TX queue", sizeof(struct hns3_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		hns3_err(hw, "Failed to allocate memory for tx queue!");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->queue_id = idx;
+	txq->tx_deferred_start = conf->tx_deferred_start;
+
+	tx_entry_len = sizeof(struct hns3_entry) * txq->nb_tx_desc;
+	txq->sw_ring = rte_zmalloc_socket("hns3 TX sw ring", tx_entry_len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		hns3_err(hw, "Failed to allocate memory for tx sw ring!");
+		hns3_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate tx ring hardware descriptors. */
+	tx_desc = txq->nb_tx_desc * desc_size;
+	tx_mz = rte_eth_dma_zone_reserve(dev, "tx_ring", idx, tx_desc,
+					 HNS3_RING_BASE_ALIGN, socket_id);
+	if (tx_mz == NULL) {
+		hns3_err(hw, "Failed to reserve DMA memory for No.%d tx ring!",
+			 idx);
+		hns3_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+	txq->mz = tx_mz;
+	txq->tx_ring = (struct hns3_desc *)tx_mz->addr;
+	txq->tx_ring_phys_addr = tx_mz->iova;
+
+	hns3_dbg(hw, "No.%d tx descriptors iova 0x%" PRIx64, idx,
+		 txq->tx_ring_phys_addr);
+
+	/* Clear tx bd */
+	desc = txq->tx_ring;
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		desc->tx.tp_fe_sc_vld_ra_ri = 0;
+		desc++;
+	}
+
+	txq->hns = hns;
+	txq->next_to_use = 0;
+	txq->next_to_clean = 0;
+	txq->tx_bd_ready   = txq->nb_tx_desc;
+	txq->port_id = dev->data->port_id;
+	txq->pkt_len_errors = 0;
+	txq->configured = true;
+	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
+				idx * HNS3_TQP_REG_SIZE);
+	rte_spinlock_lock(&hw->lock);
+	dev->data->tx_queues[idx] = txq;
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static inline int
+tx_ring_dist(struct hns3_tx_queue *txq, int begin, int end)
+{
+	return (end - begin + txq->nb_tx_desc) % txq->nb_tx_desc;
+}
+
+static inline int
+tx_ring_space(struct hns3_tx_queue *txq)
+{
+	return txq->nb_tx_desc -
+		tx_ring_dist(txq, txq->next_to_clean, txq->next_to_use) - 1;
+}
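+
+/*
+ * For example, with nb_tx_desc = 512, next_to_clean = 10 and
+ * next_to_use = 500, tx_ring_dist() is (500 - 10 + 512) % 512 = 490 and
+ * tx_ring_space() is 512 - 490 - 1 = 21. One descriptor is always kept
+ * unused so that a full ring can be distinguished from an empty one.
+ */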
+
+static inline void
+hns3_queue_xmit(struct hns3_tx_queue *txq, uint32_t buf_num)
+{
+	hns3_write_dev(txq, HNS3_RING_TX_TAIL_REG, buf_num);
+}
+
+static void
+hns3_tx_free_useless_buffer(struct hns3_tx_queue *txq)
+{
+	uint16_t tx_next_clean = txq->next_to_clean;
+	uint16_t tx_next_use   = txq->next_to_use;
+	uint16_t tx_bd_ready   = txq->tx_bd_ready;
+	uint16_t tx_bd_max     = txq->nb_tx_desc;
+	struct hns3_entry *tx_bak_pkt = &txq->sw_ring[tx_next_clean];
+	struct hns3_desc *desc = &txq->tx_ring[tx_next_clean];
+	struct rte_mbuf *mbuf;
+
+	while ((!hns3_get_bit(desc->tx.tp_fe_sc_vld_ra_ri, HNS3_TXD_VLD_B)) &&
+		(tx_next_use != tx_next_clean || tx_bd_ready < tx_bd_max)) {
+		mbuf = tx_bak_pkt->mbuf;
+		if (mbuf) {
+			mbuf->next = NULL;
+			rte_pktmbuf_free(mbuf);
+			tx_bak_pkt->mbuf = NULL;
+		}
+
+		desc++;
+		tx_bak_pkt++;
+		tx_next_clean++;
+		tx_bd_ready++;
+
+		if (tx_next_clean >= tx_bd_max) {
+			tx_next_clean = 0;
+			desc = txq->tx_ring;
+			tx_bak_pkt = txq->sw_ring;
+		}
+	}
+
+	txq->next_to_clean = tx_next_clean;
+	txq->tx_bd_ready   = tx_bd_ready;
+}
+
+static void
+fill_desc(struct hns3_tx_queue *txq, uint16_t tx_desc_id, struct rte_mbuf *rxm,
+	  bool first, int offset)
+{
+	struct hns3_desc *tx_ring = txq->tx_ring;
+	struct hns3_desc *desc = &tx_ring[tx_desc_id];
+	uint8_t frag_end = rxm->next == NULL ? 1 : 0;
+	uint8_t ol_type_vlan_msec = 0;
+	uint8_t type_cs_vlan_tso = 0;
+	uint16_t size = rxm->data_len;
+	uint32_t paylen = 0;
+	uint16_t rrcfv = 0;
+	uint64_t ol_flags;
+
+	desc->addr = rte_mbuf_data_iova(rxm) + offset;
+	desc->tx.send_size = rte_cpu_to_le_16(size);
+	hns3_set_bit(rrcfv, HNS3_TXD_VLD_B, 1);
+
+	if (first) {
+		if (rxm->packet_type &
+		    (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
+			hns3_set_bit(type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
+			if (rxm->packet_type & RTE_PTYPE_L3_IPV6)
+				hns3_set_field(type_cs_vlan_tso, HNS3_TXD_L3T_M,
+					       HNS3_TXD_L3T_S, HNS3_L3T_IPV6);
+			else
+				hns3_set_field(type_cs_vlan_tso, HNS3_TXD_L3T_M,
+					       HNS3_TXD_L3T_S, HNS3_L3T_IPV4);
+		}
+		desc->tx.paylen = rte_cpu_to_le_32(paylen);
+		desc->tx.l4_len = rxm->l4_len;
+	}
+
+	hns3_set_bit(rrcfv, HNS3_TXD_FE_B, frag_end);
+	desc->tx.tp_fe_sc_vld_ra_ri = rrcfv;
+
+	if (frag_end) {
+		ol_flags = rxm->ol_flags;
+		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			hns3_set_bit(type_cs_vlan_tso, HNS3_TXD_VLAN_B, 1);
+			desc->tx.vlan_tag = rxm->vlan_tci;
+		}
+
+		if (ol_flags & PKT_TX_QINQ_PKT) {
+			hns3_set_bit(ol_type_vlan_msec, HNS3_TXD_OVLAN_B, 1);
+			desc->tx.outer_vlan_tag = rxm->vlan_tci_outer;
+		}
+	}
+
+	desc->tx.type_cs_vlan_tso = type_cs_vlan_tso;
+	desc->tx.ol_type_vlan_msec = ol_type_vlan_msec;
+}
+
+static int
+hns3_tx_alloc_mbufs(struct hns3_tx_queue *txq, struct rte_mempool *mb_pool,
+		    uint16_t nb_new_buf, struct rte_mbuf **alloc_mbuf)
+{
+	struct rte_mbuf *new_mbuf = NULL;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *temp;
+	struct hns3_hw *hw;
+	uint16_t i;
+
+	/* Allocate enough mbufs */
+	for (i = 0; i < nb_new_buf; i++) {
+		temp = rte_pktmbuf_alloc(mb_pool);
+		if (unlikely(temp == NULL)) {
+			dev = &rte_eth_devices[txq->port_id];
+			hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+			hns3_err(hw, "Failed to alloc TX mbuf port_id=%d, "
+				     "queue_id=%d in reassemble tx pkts.",
+				     txq->port_id, txq->queue_id);
+			rte_pktmbuf_free(new_mbuf);
+			return -ENOMEM;
+		}
+		temp->next = new_mbuf;
+		new_mbuf = temp;
+	}
+
+	if (new_mbuf == NULL)
+		return -ENOMEM;
+
+	new_mbuf->nb_segs = nb_new_buf;
+	*alloc_mbuf = new_mbuf;
+
+	return 0;
+}
+
+static int
+hns3_reassemble_tx_pkts(void *tx_queue, struct rte_mbuf *tx_pkt,
+			struct rte_mbuf **new_pkt)
+{
+	struct hns3_tx_queue *txq = tx_queue;
+	struct rte_mempool *mb_pool;
+	struct rte_mbuf *new_mbuf;
+	struct rte_mbuf *temp_new;
+	struct rte_mbuf *temp;
+	uint16_t last_buf_len;
+	uint16_t nb_new_buf;
+	uint16_t buf_size;
+	uint16_t buf_len;
+	uint16_t len_s;
+	uint16_t len_d;
+	uint16_t len;
+	uint16_t i;
+	int ret;
+	char *s;
+	char *d;
+
+	mb_pool = tx_pkt->pool;
+	buf_size = tx_pkt->buf_len - RTE_PKTMBUF_HEADROOM;
+	nb_new_buf = (tx_pkt->pkt_len - 1) / buf_size + 1;
+
+	last_buf_len = tx_pkt->pkt_len % buf_size;
+	if (last_buf_len == 0)
+		last_buf_len = buf_size;
+
+	/* Allocate enough mbufs */
+	ret = hns3_tx_alloc_mbufs(txq, mb_pool, nb_new_buf, &new_mbuf);
+	if (ret)
+		return ret;
+
+	/* Copy the original packet content to the new mbufs */
+	temp = tx_pkt;
+	s = rte_pktmbuf_mtod(temp, char *);
+	len_s = temp->data_len;
+	temp_new = new_mbuf;
+	for (i = 0; i < nb_new_buf; i++) {
+		d = rte_pktmbuf_mtod(temp_new, char *);
+		if (i < nb_new_buf - 1)
+			buf_len = buf_size;
+		else
+			buf_len = last_buf_len;
+		len_d = buf_len;
+
+		while (len_d) {
+			len = RTE_MIN(len_s, len_d);
+			memcpy(d, s, len);
+			s = s + len;
+			d = d + len;
+			len_d = len_d - len;
+			len_s = len_s - len;
+
+			if (len_s == 0) {
+				temp = temp->next;
+				if (temp == NULL)
+					break;
+				s = rte_pktmbuf_mtod(temp, char *);
+				len_s = temp->data_len;
+			}
+		}
+
+		temp_new->data_len = buf_len;
+		temp_new = temp_new->next;
+	}
+
+	/* free original mbufs */
+	rte_pktmbuf_free(tx_pkt);
+
+	*new_pkt = new_mbuf;
+
+	return 0;
+}
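+
+/*
+ * For example, a 9000 byte packet copied into mbufs that hold
+ * buf_size = 2048 bytes of data each is rebuilt as
+ * nb_new_buf = (9000 - 1) / 2048 + 1 = 5 segments, the last one
+ * carrying last_buf_len = 9000 % 2048 = 808 bytes.
+ */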
+
+uint16_t
+hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct hns3_tx_queue *txq = tx_queue;
+	struct hns3_entry *tx_bak_pkt;
+	struct rte_mbuf *new_pkt;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	struct rte_mbuf *temp;
+	uint32_t nb_hold = 0;
+	uint16_t tx_next_clean;
+	uint16_t tx_next_use;
+	uint16_t tx_bd_ready;
+	uint16_t tx_pkt_num;
+	uint16_t tx_bd_max;
+	uint16_t nb_buf;
+	uint16_t nb_tx;
+	uint16_t i;
+
+	/* free useless buffer */
+	hns3_tx_free_useless_buffer(txq);
+	tx_bd_ready = txq->tx_bd_ready;
+	if (tx_bd_ready == 0)
+		return 0;
+
+	tx_next_clean = txq->next_to_clean;
+	tx_next_use   = txq->next_to_use;
+	tx_bd_max     = txq->nb_tx_desc;
+	tx_bak_pkt = &txq->sw_ring[tx_next_clean];
+
+	tx_pkt_num = (tx_bd_ready < nb_pkts) ? tx_bd_ready : nb_pkts;
+
+	/* send packets */
+	tx_bak_pkt = &txq->sw_ring[tx_next_use];
+	for (nb_tx = 0; nb_tx < tx_pkt_num; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		nb_buf = tx_pkt->nb_segs;
+
+		if (nb_buf > tx_ring_space(txq)) {
+			if (nb_tx == 0)
+				return 0;
+
+			goto end_of_tx;
+		}
+
+		/*
+		 * If the packet length is zero or exceeds the maximum frame
+		 * length, the packet is ignored.
+		 */
+		if (unlikely(tx_pkt->pkt_len > HNS3_MAX_FRAME_LEN ||
+			     tx_pkt->pkt_len == 0)) {
+			txq->pkt_len_errors++;
+			continue;
+		}
+
+		m_seg = tx_pkt;
+		if (unlikely(nb_buf > HNS3_MAX_TX_BD_PER_PKT)) {
+			if (hns3_reassemble_tx_pkts(txq, tx_pkt, &new_pkt))
+				goto end_of_tx;
+			m_seg = new_pkt;
+			nb_buf = m_seg->nb_segs;
+		}
+
+		i = 0;
+		do {
+			fill_desc(txq, tx_next_use, m_seg, (i == 0), 0);
+			temp = m_seg->next;
+			tx_bak_pkt->mbuf = m_seg;
+			m_seg = temp;
+			tx_next_use++;
+			tx_bak_pkt++;
+			if (tx_next_use >= tx_bd_max) {
+				tx_next_use = 0;
+				tx_bak_pkt = txq->sw_ring;
+			}
+
+			i++;
+		} while (m_seg != NULL);
+
+		nb_hold += i;
+	}
+
+end_of_tx:
+
+	if (likely(nb_tx)) {
+		hns3_queue_xmit(txq, nb_hold);
+		txq->next_to_clean = tx_next_clean;
+		txq->next_to_use   = tx_next_use;
+		txq->tx_bd_ready   = tx_bd_ready - nb_hold;
+	}
+
+	return nb_tx;
+}
+
+static uint16_t
+hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
+		      struct rte_mbuf **pkts __rte_unused,
+		      uint16_t pkts_n __rte_unused)
+{
+	return 0;
+}
+
+void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+
+	if (hns->hw.adapter_state == HNS3_NIC_STARTED &&
+	    rte_atomic16_read(&hns->hw.reset.resetting) == 0) {
+		eth_dev->rx_pkt_burst = hns3_recv_pkts;
+		eth_dev->tx_pkt_burst = hns3_xmit_pkts;
+	} else {
+		eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
+		eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+	}
+}
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
new file mode 100644
index 0000000..6189f56
--- /dev/null
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_RXTX_H_
+#define _HNS3_RXTX_H_
+
+#define	HNS3_MIN_RING_DESC	32
+#define	HNS3_MAX_RING_DESC	32768
+#define HNS3_DEFAULT_RING_DESC  1024
+#define	HNS3_ALIGN_RING_DESC	32
+#define HNS3_RING_BASE_ALIGN	128
+
+#define HNS3_BD_SIZE_512_TYPE			0
+#define HNS3_BD_SIZE_1024_TYPE			1
+#define HNS3_BD_SIZE_2048_TYPE			2
+#define HNS3_BD_SIZE_4096_TYPE			3
+
+#define HNS3_RX_FLAG_VLAN_PRESENT		0x1
+#define HNS3_RX_FLAG_L3ID_IPV4			0x0
+#define HNS3_RX_FLAG_L3ID_IPV6			0x1
+#define HNS3_RX_FLAG_L4ID_UDP			0x0
+#define HNS3_RX_FLAG_L4ID_TCP			0x1
+
+#define HNS3_RXD_DMAC_S				0
+#define HNS3_RXD_DMAC_M				(0x3 << HNS3_RXD_DMAC_S)
+#define HNS3_RXD_VLAN_S				2
+#define HNS3_RXD_VLAN_M				(0x3 << HNS3_RXD_VLAN_S)
+#define HNS3_RXD_L3ID_S				4
+#define HNS3_RXD_L3ID_M				(0xf << HNS3_RXD_L3ID_S)
+#define HNS3_RXD_L4ID_S				8
+#define HNS3_RXD_L4ID_M				(0xf << HNS3_RXD_L4ID_S)
+#define HNS3_RXD_FRAG_B				12
+#define HNS3_RXD_STRP_TAGP_S			13
+#define HNS3_RXD_STRP_TAGP_M			(0x3 << HNS3_RXD_STRP_TAGP_S)
+
+#define HNS3_RXD_L2E_B				16
+#define HNS3_RXD_L3E_B				17
+#define HNS3_RXD_L4E_B				18
+#define HNS3_RXD_TRUNCAT_B			19
+#define HNS3_RXD_HOI_B				20
+#define HNS3_RXD_DOI_B				21
+#define HNS3_RXD_OL3E_B				22
+#define HNS3_RXD_OL4E_B				23
+#define HNS3_RXD_GRO_COUNT_S			24
+#define HNS3_RXD_GRO_COUNT_M			(0x3f << HNS3_RXD_GRO_COUNT_S)
+#define HNS3_RXD_GRO_FIXID_B			30
+#define HNS3_RXD_GRO_ECN_B			31
+
+#define HNS3_RXD_ODMAC_S			0
+#define HNS3_RXD_ODMAC_M			(0x3 << HNS3_RXD_ODMAC_S)
+#define HNS3_RXD_OVLAN_S			2
+#define HNS3_RXD_OVLAN_M			(0x3 << HNS3_RXD_OVLAN_S)
+#define HNS3_RXD_OL3ID_S			4
+#define HNS3_RXD_OL3ID_M			(0xf << HNS3_RXD_OL3ID_S)
+#define HNS3_RXD_OL4ID_S			8
+#define HNS3_RXD_OL4ID_M			(0xf << HNS3_RXD_OL4ID_S)
+#define HNS3_RXD_FBHI_S				12
+#define HNS3_RXD_FBHI_M				(0x3 << HNS3_RXD_FBHI_S)
+#define HNS3_RXD_FBLI_S				14
+#define HNS3_RXD_FBLI_M				(0x3 << HNS3_RXD_FBLI_S)
+
+#define HNS3_RXD_BDTYPE_S			0
+#define HNS3_RXD_BDTYPE_M			(0xf << HNS3_RXD_BDTYPE_S)
+#define HNS3_RXD_VLD_B				4
+#define HNS3_RXD_UDP0_B				5
+#define HNS3_RXD_EXTEND_B			7
+#define HNS3_RXD_FE_B				8
+#define HNS3_RXD_LUM_B				9
+#define HNS3_RXD_CRCP_B				10
+#define HNS3_RXD_L3L4P_B			11
+#define HNS3_RXD_TSIND_S			12
+#define HNS3_RXD_TSIND_M			(0x7 << HNS3_RXD_TSIND_S)
+#define HNS3_RXD_LKBK_B				15
+#define HNS3_RXD_GRO_SIZE_S			16
+#define HNS3_RXD_GRO_SIZE_M			(0x3ff << HNS3_RXD_GRO_SIZE_S)
+
+#define HNS3_TXD_L3T_S				0
+#define HNS3_TXD_L3T_M				(0x3 << HNS3_TXD_L3T_S)
+#define HNS3_TXD_L4T_S				2
+#define HNS3_TXD_L4T_M				(0x3 << HNS3_TXD_L4T_S)
+#define HNS3_TXD_L3CS_B				4
+#define HNS3_TXD_L4CS_B				5
+#define HNS3_TXD_VLAN_B				6
+#define HNS3_TXD_TSO_B				7
+
+#define HNS3_TXD_L2LEN_S			8
+#define HNS3_TXD_L2LEN_M			(0xff << HNS3_TXD_L2LEN_S)
+#define HNS3_TXD_L3LEN_S			16
+#define HNS3_TXD_L3LEN_M			(0xff << HNS3_TXD_L3LEN_S)
+#define HNS3_TXD_L4LEN_S			24
+#define HNS3_TXD_L4LEN_M			(0xff << HNS3_TXD_L4LEN_S)
+
+#define HNS3_TXD_OL3T_S				0
+#define HNS3_TXD_OL3T_M				(0x3 << HNS3_TXD_OL3T_S)
+#define HNS3_TXD_OVLAN_B			2
+#define HNS3_TXD_MACSEC_B			3
+#define HNS3_TXD_TUNTYPE_S			4
+#define HNS3_TXD_TUNTYPE_M			(0xf << HNS3_TXD_TUNTYPE_S)
+
+#define HNS3_TXD_BDTYPE_S			0
+#define HNS3_TXD_BDTYPE_M			(0xf << HNS3_TXD_BDTYPE_S)
+#define HNS3_TXD_FE_B				4
+#define HNS3_TXD_SC_S				5
+#define HNS3_TXD_SC_M				(0x3 << HNS3_TXD_SC_S)
+#define HNS3_TXD_EXTEND_B			7
+#define HNS3_TXD_VLD_B				8
+#define HNS3_TXD_RI_B				9
+#define HNS3_TXD_RA_B				10
+#define HNS3_TXD_TSYN_B				11
+#define HNS3_TXD_DECTTL_S			12
+#define HNS3_TXD_DECTTL_M			(0xf << HNS3_TXD_DECTTL_S)
+
+#define HNS3_TXD_MSS_S				0
+#define HNS3_TXD_MSS_M				(0x3fff << HNS3_TXD_MSS_S)
+
+enum hns3_pkt_l2t_type {
+	HNS3_L2_TYPE_UNICAST,
+	HNS3_L2_TYPE_MULTICAST,
+	HNS3_L2_TYPE_BROADCAST,
+	HNS3_L2_TYPE_INVALID,
+};
+
+enum hns3_pkt_l3t_type {
+	HNS3_L3T_NONE,
+	HNS3_L3T_IPV6,
+	HNS3_L3T_IPV4,
+	HNS3_L3T_RESERVED
+};
+
+enum hns3_pkt_l4t_type {
+	HNS3_L4T_UNKNOWN,
+	HNS3_L4T_TCP,
+	HNS3_L4T_UDP,
+	HNS3_L4T_SCTP
+};
+
+enum hns3_pkt_ol3t_type {
+	HNS3_OL3T_NONE,
+	HNS3_OL3T_IPV6,
+	HNS3_OL3T_IPV4_NO_CSUM,
+	HNS3_OL3T_IPV4_CSUM
+};
+
+enum hns3_pkt_tun_type {
+	HNS3_TUN_NONE,
+	HNS3_TUN_MAC_IN_UDP,
+	HNS3_TUN_NVGRE,
+	HNS3_TUN_OTHER
+};
+
+#define __packed __attribute__((packed))
+/* hardware spec ring buffer format */
+__packed struct hns3_desc {
+	union {
+		uint64_t addr;
+		struct {
+			uint32_t addr0;
+			uint32_t addr1;
+		};
+	};
+	union {
+		struct {
+			uint16_t vlan_tag;
+			uint16_t send_size;
+			union {
+				uint32_t type_cs_vlan_tso_len;
+				struct {
+					uint8_t type_cs_vlan_tso;
+					uint8_t l2_len;
+					uint8_t l3_len;
+					uint8_t l4_len;
+				};
+			};
+			uint16_t outer_vlan_tag;
+			uint16_t tv;
+			union {
+				uint32_t ol_type_vlan_len_msec;
+				struct {
+					uint8_t ol_type_vlan_msec;
+					uint8_t ol2_len;
+					uint8_t ol3_len;
+					uint8_t ol4_len;
+				};
+			};
+
+			uint32_t paylen;
+			uint16_t tp_fe_sc_vld_ra_ri;
+			uint16_t mss;
+		} tx;
+
+		struct {
+			uint32_t l234_info;
+			uint16_t pkt_len;
+			uint16_t size;
+			uint32_t rss_hash;
+			uint16_t fd_id;
+			uint16_t vlan_tag;
+			union {
+				uint32_t ol_info;
+				struct {
+					uint16_t o_dm_vlan_id_fb;
+					uint16_t ot_vlan_tag;
+				};
+			};
+			uint32_t bd_base_info;
+		} rx;
+	};
+};
+
+struct hns3_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct hns3_rx_queue {
+	void *io_base;
+	struct hns3_adapter *hns;
+	struct rte_mempool *mb_pool;
+	struct hns3_desc *rx_ring;
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	const struct rte_memzone *mz;
+	struct hns3_entry *sw_ring;
+
+	struct rte_mbuf *pkt_first_seg;
+	struct rte_mbuf *pkt_last_seg;
+
+	uint16_t queue_id;
+	uint16_t port_id;
+	uint16_t nb_rx_desc;
+	uint16_t nb_rx_hold;
+	uint16_t rx_tail;
+	uint16_t next_to_clean;
+	uint16_t next_to_use;
+	uint16_t rx_buf_len;
+	uint16_t rx_free_thresh;
+
+	bool rx_deferred_start; /* don't start this queue in dev start */
+	bool configured;        /* indicate if rx queue has been configured */
+
+	uint64_t non_vld_descs; /* num of non valid rx descriptors */
+	uint64_t l2_errors;
+	uint64_t csum_errors;
+	uint64_t pkt_len_errors;
+	uint64_t errors;        /* num of error rx packets recorded by driver */
+};
+
+struct hns3_tx_queue {
+	void *io_base;
+	struct hns3_adapter *hns;
+	struct hns3_desc *tx_ring;
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	const struct rte_memzone *mz;
+	struct hns3_entry *sw_ring;
+
+	uint16_t queue_id;
+	uint16_t port_id;
+	uint16_t nb_tx_desc;
+	uint16_t next_to_clean;
+	uint16_t next_to_use;
+	uint16_t tx_bd_ready;
+
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool configured;        /* indicate if tx queue has been configured */
+
+	uint64_t pkt_len_errors;
+};
+
+void hns3_dev_rx_queue_release(void *queue);
+void hns3_dev_tx_queue_release(void *queue);
+void hns3_free_all_queues(struct rte_eth_dev *dev);
+int hns3_reset_all_queues(struct hns3_adapter *hns);
+int hns3_start_queues(struct hns3_adapter *hns, bool reset_queue);
+int hns3_stop_queues(struct hns3_adapter *hns, bool reset_queue);
+void hns3_dev_release_mbufs(struct hns3_adapter *hns);
+int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
+			unsigned int socket, const struct rte_eth_rxconf *conf,
+			struct rte_mempool *mp);
+int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
+			unsigned int socket, const struct rte_eth_txconf *conf);
+uint16_t hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts);
+uint16_t hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
+
+const uint32_t *hns3_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
+#endif /* _HNS3_RXTX_H_ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 16/22] net/hns3: add start stop configure promiscuous ops
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (14 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:14   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 17/22] net/hns3: add dump register ops for hns3 PMD driver Wei Hu (Xavier)
                   ` (6 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the dev_start, dev_stop, dev_configure, promiscuous_enable,
promiscuous_disable, allmulticast_enable, allmulticast_disable and
dev_infos_get functions.
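
For reference, a minimal sketch of how an application reaches these ops
through the generic ethdev API (queue setup and error handling omitted;
port id 0 is assumed to be a bound hns3 device):

	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
	};

	if (rte_eth_dev_configure(0, 1, 1, &conf) != 0)	/* hns3_dev_configure */
		return -1;
	/* rx/tx queue setup goes here */
	if (rte_eth_dev_start(0) != 0)			/* hns3_dev_start */
		return -1;
	rte_eth_promiscuous_enable(0);		/* hns3_dev_promiscuous_enable */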

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    | 406 ++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_ethdev_vf.c |   1 +
 2 files changed, 407 insertions(+)

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 73b34d2..4a57474 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -42,6 +42,7 @@
 #define HNS3_DEFAULT_PORT_CONF_BURST_SIZE	32
 #define HNS3_DEFAULT_PORT_CONF_QUEUES_NUM	1
 
+#define HNS3_SERVICE_INTERVAL		1000000 /* us */
 #define HNS3_PORT_BASE_VLAN_DISABLE	0
 #define HNS3_PORT_BASE_VLAN_ENABLE	1
 #define HNS3_INVLID_PVID		0xFFFF
@@ -134,6 +135,41 @@ hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id,
 }
 
 static int
+hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
+{
+	struct hns3_pf *pf = &hns->pf;
+	bool written_to_tbl = false;
+	int ret = 0;
+
+	/*
+	 * When vlan filter is enabled, hardware regards vlan id 0 as the entry
+	 * for normal packets, so deleting vlan id 0 is not allowed.
+	 */
+	if (on == 0 && vlan_id == 0)
+		return 0;
+
+	/*
+	 * When port based vlan is enabled, it is used as the vlan filter
+	 * condition, and the vlan filter table is not updated when the user
+	 * adds or removes a vlan; only the vlan list is updated. The vlan
+	 * ids in the list are written into the vlan filter table once port
+	 * based vlan is disabled.
+	 */
+	if (pf->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_DISABLE) {
+		ret = hns3_set_port_vlan_filter(hns, vlan_id, on);
+		written_to_tbl = true;
+	}
+
+	if (ret == 0 && vlan_id) {
+		if (on)
+			hns3_add_dev_vlan_table(hns, vlan_id, written_to_tbl);
+		else
+			hns3_rm_dev_vlan_table(hns, vlan_id);
+	}
+	return ret;
+}
+
+static int
 hns3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
@@ -1632,6 +1668,178 @@ hns3_configure_all_mc_mac_addr(struct hns3_adapter *hns, bool del)
 }
 
 static int
+hns3_check_mq_mode(struct rte_eth_dev *dev)
+{
+	enum rte_eth_rx_mq_mode rx_mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+	enum rte_eth_tx_mq_mode tx_mq_mode = dev->data->dev_conf.txmode.mq_mode;
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	struct rte_eth_dcb_tx_conf *dcb_tx_conf;
+	uint8_t num_tc;
+	int max_tc = 0;
+	int i;
+
+	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
+
+	if (rx_mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		hns3_err(hw, "ETH_MQ_RX_VMDQ_DCB_RSS is not supported. "
+			 "rx_mq_mode = %d", rx_mq_mode);
+		return -EINVAL;
+	}
+
+	if (rx_mq_mode == ETH_MQ_RX_VMDQ_DCB ||
+	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		hns3_err(hw, "ETH_MQ_RX_VMDQ_DCB and ETH_MQ_TX_VMDQ_DCB "
+			 "is not supported. rx_mq_mode = %d, tx_mq_mode = %d",
+			 rx_mq_mode, tx_mq_mode);
+		return -EINVAL;
+	}
+
+	if (rx_mq_mode == ETH_MQ_RX_DCB_RSS) {
+		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
+			hns3_err(hw, "nb_tcs(%u) exceeds driver max_tc(%u).",
+				 dcb_rx_conf->nb_tcs, pf->tc_max);
+			return -EINVAL;
+		}
+
+		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
+		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
+			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+				 "nb_tcs(%d) != %d or %d in rx direction.",
+				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
+			return -EINVAL;
+		}
+
+		if (dcb_rx_conf->nb_tcs != dcb_tx_conf->nb_tcs) {
+			hns3_err(hw, "num_tcs(%d) of tx is not equal to rx(%d)",
+				 dcb_tx_conf->nb_tcs, dcb_rx_conf->nb_tcs);
+			return -EINVAL;
+		}
+
+		for (i = 0; i < HNS3_MAX_USER_PRIO; i++) {
+			if (dcb_rx_conf->dcb_tc[i] != dcb_tx_conf->dcb_tc[i]) {
+				hns3_err(hw, "dcb_tc[%d] = %d in rx direction "
+					 "is not equal to the one in tx direction.",
+					 i, dcb_rx_conf->dcb_tc[i]);
+				return -EINVAL;
+			}
+			if (dcb_rx_conf->dcb_tc[i] > max_tc)
+				max_tc = dcb_rx_conf->dcb_tc[i];
+		}
+
+		num_tc = max_tc + 1;
+		if (num_tc > dcb_rx_conf->nb_tcs) {
+			hns3_err(hw, "max num_tc(%u) mapped > nb_tcs(%u)",
+				 num_tc, dcb_rx_conf->nb_tcs);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
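+
+/*
+ * For example, assuming HNS3_MAX_USER_PRIO is 8, nb_tcs = HNS3_4_TCS
+ * with identical rx and tx dcb_tc maps {0, 1, 2, 3, 0, 1, 2, 3} gives
+ * max_tc = 3 and num_tc = 4 == nb_tcs, so the checks above pass.
+ */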
+
+static int
+hns3_check_dcb_cfg(struct rte_eth_dev *dev)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!hns3_dev_dcb_supported(hw)) {
+		hns3_err(hw, "this port does not support dcb configurations.");
+		return -EOPNOTSUPP;
+	}
+
+	if (hw->current_fc_status == HNS3_FC_STATUS_MAC_PAUSE) {
+		hns3_err(hw, "MAC pause enabled, cannot config dcb info.");
+		return -EOPNOTSUPP;
+	}
+
+	/* Check multiple queue mode */
+	return hns3_check_mq_mode(dev);
+}
+
+static int
+hns3_dev_configure(struct rte_eth_dev *dev)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct hns3_rss_conf *rss_cfg = &hw->rss_info;
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	enum rte_eth_rx_mq_mode mq_mode = conf->rxmode.mq_mode;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+	struct rte_eth_rss_conf rss_conf;
+	uint16_t mtu;
+	int ret;
+
+	/*
+	 * The hip08 hardware requires the number of rx queues to be equal
+	 * to the number of tx queues.
+	 */
+	if (nb_rx_q != nb_tx_q) {
+		hns3_err(hw,
+			 "nb_rx_queues(%u) not equal with nb_tx_queues(%u)! "
+			 "Hardware does not support this configuration!",
+			 nb_rx_q, nb_tx_q);
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		hns3_err(hw, "setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	hw->adapter_state = HNS3_NIC_CONFIGURING;
+	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+		ret = hns3_check_dcb_cfg(dev);
+		if (ret)
+			goto cfg_err;
+	}
+
+	/*
+	 * If RSS is enabled, configure the hash, falling back to the
+	 * default key when the user does not supply one.
+	 */
+	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
+		rss_conf = conf->rx_adv_conf.rss_conf;
+		if (rss_conf.rss_key == NULL) {
+			rss_conf.rss_key = rss_cfg->key;
+			rss_conf.rss_key_len = HNS3_RSS_KEY_SIZE;
+		}
+
+		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
+		if (ret)
+			goto cfg_err;
+	}
+
+	/*
+	 * If jumbo frames are enabled, MTU needs to be refreshed
+	 * according to the maximum RX packet length.
+	 */
+	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		/*
+		 * The validity of max_rx_pkt_len is guaranteed by the dpdk
+		 * framework. Its maximum value is HNS3_MAX_FRAME_LEN, so it
+		 * can safely be assigned to a "uint16_t" type variable.
+		 */
+		mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(conf->rxmode.max_rx_pkt_len);
+		ret = hns3_dev_mtu_set(dev, mtu);
+		if (ret)
+			goto cfg_err;
+		dev->data->mtu = mtu;
+	}
+
+	ret = hns3_dev_configure_vlan(dev);
+	if (ret)
+		goto cfg_err;
+
+	hw->adapter_state = HNS3_NIC_CONFIGURED;
+
+	return 0;
+
+cfg_err:
+	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	return ret;
+}
+
+static int
 hns3_set_mac_mtu(struct hns3_hw *hw, uint16_t new_mps)
 {
 	struct hns3_config_max_frm_size_cmd *req;
@@ -3080,6 +3288,71 @@ hns3_set_promisc_mode(struct hns3_hw *hw, bool en_uc_pmc, bool en_mc_pmc)
 	return 0;
 }
 
+static void
+hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	bool en_mc_pmc = (dev->data->all_multicast == 1) ? true : false;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_set_promisc_mode(hw, true, en_mc_pmc);
+	if (ret)
+		hns3_err(hw, "Failed to enable promiscuous mode: %d", ret);
+	rte_spinlock_unlock(&hw->lock);
+}
+
+static void
+hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	bool en_mc_pmc = (dev->data->all_multicast == 1) ? true : false;
+	int ret;
+
+	/* Leaving promiscuous mode must not change the all_multicast state. */
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_set_promisc_mode(hw, false, en_mc_pmc);
+	if (ret)
+		hns3_err(hw, "Failed to disable promiscuous mode: %d", ret);
+	rte_spinlock_unlock(&hw->lock);
+}
+
+static void
+hns3_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	bool en_uc_pmc = (dev->data->promiscuous == 1) ? true : false;
+	int ret;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_set_promisc_mode(hw, en_uc_pmc, true);
+	if (ret)
+		hns3_err(hw, "Failed to enable allmulticast mode: %d", ret);
+	rte_spinlock_unlock(&hw->lock);
+}
+
+static void
+hns3_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	bool en_uc_pmc = (dev->data->promiscuous == 1) ? true : false;
+	int ret;
+
+	/* In promiscuous mode all_multicast must remain enabled, so do nothing. */
+	if (dev->data->promiscuous == 1)
+		return;
+
+	rte_spinlock_lock(&hw->lock);
+	ret = hns3_set_promisc_mode(hw, en_uc_pmc, false);
+	if (ret)
+		hns3_err(hw, "Failed to disable allmulticast mode: %d", ret);
+	rte_spinlock_unlock(&hw->lock);
+}
+
 static int
 hns3_get_sfp_speed(struct hns3_hw *hw, uint32_t *speed)
 {
@@ -3345,6 +3618,18 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
 		goto err_cmd_init;
 	}
 
+	ret = rte_intr_callback_register(&pci_dev->intr_handle,
+					 hns3_interrupt_handler,
+					 eth_dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
+		goto err_intr_callback_register;
+	}
+
+	/* Enable interrupt */
+	rte_intr_enable(&pci_dev->intr_handle);
+	hns3_pf_enable_irq0(hw);
+
 	/* Get configuration */
 	ret = hns3_get_configuration(hw);
 	if (ret) {
@@ -3373,6 +3658,12 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
 	hns3_uninit_umv_space(hw);
 
 err_get_config:
+	hns3_pf_disable_irq0(hw);
+	rte_intr_disable(&pci_dev->intr_handle);
+	hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+			     eth_dev);
+
+err_intr_callback_register:
 	hns3_cmd_uninit(hw);
 
 err_cmd_init:
@@ -3397,19 +3688,126 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
 	hns3_rss_uninit(hns);
 	hns3_fdir_filter_uninit(hns);
 	hns3_uninit_umv_space(hw);
+	hns3_pf_disable_irq0(hw);
+	rte_intr_disable(&pci_dev->intr_handle);
+	hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+			     eth_dev);
 	hns3_cmd_uninit(hw);
 	hns3_cmd_destroy_queue(hw);
 	hw->io_base = NULL;
 }
 
+static int
+hns3_do_start(struct hns3_adapter *hns, bool reset_queue)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_dcb_cfg_update(hns);
+	if (ret)
+		return ret;
+
+	/* Enable queues */
+	ret = hns3_start_queues(hns, reset_queue);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start queues: %d", ret);
+		return ret;
+	}
+
+	/* Enable MAC */
+	ret = hns3_cfg_mac_mode(hw, true);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable MAC: %d", ret);
+		goto err_config_mac_mode;
+	}
+	return 0;
+
+err_config_mac_mode:
+	hns3_stop_queues(hns, true);
+	return ret;
+}
+
+static int
+hns3_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+	rte_spinlock_lock(&hw->lock);
+	hw->adapter_state = HNS3_NIC_STARTING;
+
+	ret = hns3_do_start(hns, true);
+	if (ret) {
+		hw->adapter_state = HNS3_NIC_CONFIGURED;
+		rte_spinlock_unlock(&hw->lock);
+		return ret;
+	}
+
+	hw->adapter_state = HNS3_NIC_STARTED;
+	rte_spinlock_unlock(&hw->lock);
+	hns3_set_rxtx_function(eth_dev);
+	hns3_mp_req_start_rxtx(eth_dev);
+
+	hns3_info(hw, "hns3 dev start successful!");
+	return 0;
+}
+
+static int
+hns3_do_stop(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	bool reset_queue;
+	int ret;
+
+	ret = hns3_cfg_mac_mode(hw, false);
+	if (ret)
+		return ret;
+	hw->mac.link_status = ETH_LINK_DOWN;
+
+	hns3_configure_all_mac_addr(hns, true);
+	reset_queue = true;
+	hw->mac.default_addr_setted = false;
+	return hns3_stop_queues(hns, reset_queue);
+}
+
+static void
+hns3_dev_stop(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_state = HNS3_NIC_STOPPING;
+	hns3_set_rxtx_function(eth_dev);
+	rte_wmb();
+
+	rte_spinlock_lock(&hw->lock);
+	hns3_do_stop(hns);
+	hns3_dev_release_mbufs(hns);
+	hw->adapter_state = HNS3_NIC_CONFIGURED;
+	rte_spinlock_unlock(&hw->lock);
+}
+
 static void
 hns3_dev_close(struct rte_eth_dev *eth_dev)
 {
 	struct hns3_adapter *hns = eth_dev->data->dev_private;
 	struct hns3_hw *hw = &hns->hw;
 
+	if (hw->adapter_state == HNS3_NIC_STARTED)
+		hns3_dev_stop(eth_dev);
+
 	hw->adapter_state = HNS3_NIC_CLOSING;
+	rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
+
+	hns3_configure_all_mc_mac_addr(hns, true);
+	hns3_remove_all_vlan_table(hns);
+	hns3_vlan_txvlan_cfg(hns, HNS3_PORT_BASE_VLAN_DISABLE, 0);
 	hns3_uninit_pf(eth_dev);
+	hns3_free_all_queues(eth_dev);
 	hw->adapter_state = HNS3_NIC_CLOSED;
 }
 
@@ -3600,7 +3998,13 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 }
 
 static const struct eth_dev_ops hns3_eth_dev_ops = {
+	.dev_start          = hns3_dev_start,
+	.dev_stop           = hns3_dev_stop,
 	.dev_close          = hns3_dev_close,
+	.promiscuous_enable = hns3_dev_promiscuous_enable,
+	.promiscuous_disable = hns3_dev_promiscuous_disable,
+	.allmulticast_enable  = hns3_dev_allmulticast_enable,
+	.allmulticast_disable = hns3_dev_allmulticast_disable,
 	.mtu_set            = hns3_dev_mtu_set,
 	.dev_infos_get          = hns3_dev_infos_get,
 	.fw_version_get         = hns3_fw_version_get,
@@ -3608,6 +4012,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.tx_queue_setup         = hns3_tx_queue_setup,
 	.rx_queue_release       = hns3_dev_rx_queue_release,
 	.tx_queue_release       = hns3_dev_tx_queue_release,
+	.dev_configure          = hns3_dev_configure,
 	.flow_ctrl_get          = hns3_flow_ctrl_get,
 	.flow_ctrl_set          = hns3_flow_ctrl_set,
 	.priority_flow_ctrl_set = hns3_priority_flow_ctrl_set,
@@ -3626,6 +4031,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.vlan_offload_set       = hns3_vlan_offload_set,
 	.vlan_pvid_set          = hns3_vlan_pvid_set,
 	.get_dcb_info           = hns3_get_dcb_info,
+	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
 static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index abab2b4..7e73845 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1161,6 +1161,7 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.filter_ctrl        = hns3_dev_filter_ctrl,
 	.vlan_filter_set    = hns3vf_vlan_filter_set,
 	.vlan_offload_set   = hns3vf_vlan_offload_set,
+	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
 static int
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 17/22] net/hns3: add dump register ops for hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (15 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 16/22] net/hns3: add start stop configure promiscuous ops Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 18/22] net/hns3: add abnormal interrupt process " Wei Hu (Xavier)
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the get_reg function for the hns3 PMD driver.
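
Applications reach this op through rte_eth_dev_get_reg_info(), typically
calling it twice: once with data set to NULL to learn the required buffer
size, then again with a buffer attached. A minimal sketch (error handling
omitted):

	struct rte_dev_reg_info info = { .data = NULL };

	rte_eth_dev_get_reg_info(port_id, &info);	/* fills info.length */
	info.data = malloc(info.length * info.width);
	rte_eth_dev_get_reg_info(port_id, &info);	/* dumps the registers */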

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    |   1 +
 drivers/net/hns3/hns3_ethdev_vf.c |   1 +
 drivers/net/hns3/hns3_regs.c      | 377 ++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_regs.h      |   1 +
 4 files changed, 380 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_regs.c

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 4a57474..340f92f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -4030,6 +4030,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.vlan_tpid_set          = hns3_vlan_tpid_set,
 	.vlan_offload_set       = hns3_vlan_offload_set,
 	.vlan_pvid_set          = hns3_vlan_pvid_set,
+	.get_reg                = hns3_get_regs,
 	.get_dcb_info           = hns3_get_dcb_info,
 	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 7e73845..32ba26c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1161,6 +1161,7 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.filter_ctrl        = hns3_dev_filter_ctrl,
 	.vlan_filter_set    = hns3vf_vlan_filter_set,
 	.vlan_offload_set   = hns3vf_vlan_offload_set,
+	.get_reg            = hns3_get_regs,
 	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
new file mode 100644
index 0000000..91cd7c1
--- /dev/null
+++ b/drivers/net/hns3/hns3_regs.c
@@ -0,0 +1,377 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <errno.h>
+#include <stdarg.h>
+#include <stdbool.h>
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <sys/queue.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_alarm.h>
+#include <rte_atomic.h>
+#include <rte_bus_pci.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+#include <rte_io.h>
+#include <rte_log.h>
+#include <rte_pci.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+#include "hns3_rxtx.h"
+#include "hns3_regs.h"
+
+#define MAX_SEPARATE_NUM	4
+#define SEPARATOR_VALUE		0xFFFFFFFF
+#define REG_NUM_PER_LINE	4
+#define REG_LEN_PER_LINE	(REG_NUM_PER_LINE * sizeof(uint32_t))
+
+static const uint32_t cmdq_reg_addrs[] = {HNS3_CMDQ_TX_ADDR_L_REG,
+					  HNS3_CMDQ_TX_ADDR_H_REG,
+					  HNS3_CMDQ_TX_DEPTH_REG,
+					  HNS3_CMDQ_TX_TAIL_REG,
+					  HNS3_CMDQ_TX_HEAD_REG,
+					  HNS3_CMDQ_RX_ADDR_L_REG,
+					  HNS3_CMDQ_RX_ADDR_H_REG,
+					  HNS3_CMDQ_RX_DEPTH_REG,
+					  HNS3_CMDQ_RX_TAIL_REG,
+					  HNS3_CMDQ_RX_HEAD_REG,
+					  HNS3_VECTOR0_CMDQ_SRC_REG,
+					  HNS3_CMDQ_INTR_STS_REG,
+					  HNS3_CMDQ_INTR_EN_REG,
+					  HNS3_CMDQ_INTR_GEN_REG};
+
+static const uint32_t common_reg_addrs[] = {HNS3_MISC_VECTOR_REG_BASE,
+					    HNS3_VECTOR0_OTER_EN_REG,
+					    HNS3_MISC_RESET_STS_REG,
+					    HNS3_VECTOR0_OTHER_INT_STS_REG,
+					    HNS3_GLOBAL_RESET_REG,
+					    HNS3_FUN_RST_ING,
+					    HNS3_GRO_EN_REG};
+
+static const uint32_t common_vf_reg_addrs[] = {HNS3_MISC_VECTOR_REG_BASE,
+					       HNS3_FUN_RST_ING,
+					       HNS3_GRO_EN_REG};
+
+static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BASEADDR_L_REG,
+					  HNS3_RING_RX_BASEADDR_H_REG,
+					  HNS3_RING_RX_BD_NUM_REG,
+					  HNS3_RING_RX_BD_LEN_REG,
+					  HNS3_RING_RX_MERGE_EN_REG,
+					  HNS3_RING_RX_TAIL_REG,
+					  HNS3_RING_RX_HEAD_REG,
+					  HNS3_RING_RX_FBDNUM_REG,
+					  HNS3_RING_RX_OFFSET_REG,
+					  HNS3_RING_RX_FBD_OFFSET_REG,
+					  HNS3_RING_RX_STASH_REG,
+					  HNS3_RING_RX_BD_ERR_REG,
+					  HNS3_RING_TX_BASEADDR_L_REG,
+					  HNS3_RING_TX_BASEADDR_H_REG,
+					  HNS3_RING_TX_BD_NUM_REG,
+					  HNS3_RING_TX_PRIORITY_REG,
+					  HNS3_RING_TX_TC_REG,
+					  HNS3_RING_TX_MERGE_EN_REG,
+					  HNS3_RING_TX_TAIL_REG,
+					  HNS3_RING_TX_HEAD_REG,
+					  HNS3_RING_TX_FBDNUM_REG,
+					  HNS3_RING_TX_OFFSET_REG,
+					  HNS3_RING_TX_EBD_NUM_REG,
+					  HNS3_RING_TX_EBD_OFFSET_REG,
+					  HNS3_RING_TX_BD_ERR_REG,
+					  HNS3_RING_EN_REG};
+
+static const uint32_t tqp_intr_reg_addrs[] = {HNS3_TQP_INTR_CTRL_REG,
+					      HNS3_TQP_INTR_GL0_REG,
+					      HNS3_TQP_INTR_GL1_REG,
+					      HNS3_TQP_INTR_GL2_REG,
+					      HNS3_TQP_INTR_RL_REG};
+
+static int
+hns3_get_regs_num(struct hns3_hw *hw, uint32_t *regs_num_32_bit,
+		  uint32_t *regs_num_64_bit)
+{
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_REG_NUM, true);
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret) {
+		hns3_err(hw, "Query register number cmd failed, ret = %d",
+			 ret);
+		return ret;
+	}
+
+	*regs_num_32_bit = rte_le_to_cpu_32(desc.data[0]);
+	*regs_num_64_bit = rte_le_to_cpu_32(desc.data[1]);
+
+	return 0;
+}
+
+static int
+hns3_get_regs_length(struct hns3_hw *hw, uint32_t *length)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	int cmdq_lines, common_lines, ring_lines, tqp_intr_lines;
+	uint32_t regs_num_32_bit, regs_num_64_bit;
+	int ret;
+
+	ret = hns3_get_regs_num(hw, &regs_num_32_bit, &regs_num_64_bit);
+	if (ret) {
+		hns3_err(hw, "Get register number failed, ret = %d.",
+			 ret);
+		return -ENOTSUP;
+	}
+
+	cmdq_lines = sizeof(cmdq_reg_addrs) / REG_LEN_PER_LINE + 1;
+	if (hns->is_vf)
+		common_lines =
+			sizeof(common_vf_reg_addrs) / REG_LEN_PER_LINE + 1;
+	else
+		common_lines = sizeof(common_reg_addrs) / REG_LEN_PER_LINE + 1;
+	ring_lines = sizeof(ring_reg_addrs) / REG_LEN_PER_LINE + 1;
+	tqp_intr_lines = sizeof(tqp_intr_reg_addrs) / REG_LEN_PER_LINE + 1;
+
+	*length = (cmdq_lines + common_lines + ring_lines * hw->tqps_num +
+		   tqp_intr_lines * hw->num_msi) * REG_LEN_PER_LINE +
+		  regs_num_32_bit * sizeof(uint32_t) +
+		  regs_num_64_bit * sizeof(uint64_t);
+
+	return 0;
+}
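+
+/*
+ * For example, the 14 command queue registers occupy 56 bytes, so
+ * cmdq_lines = 56 / REG_LEN_PER_LINE + 1 = 4 dump lines; the extra line
+ * accounts for the separator words that hns3_direct_access_regs()
+ * appends after each register group.
+ */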
+
+static int
+hns3_get_32_bit_regs(struct hns3_hw *hw, uint32_t regs_num, void *data)
+{
+#define HNS3_32_BIT_REG_RTN_DATANUM 8
+#define HNS3_32_BIT_DESC_NODATA_LEN 2
+	struct hns3_cmd_desc *desc;
+	uint32_t *reg_val = data;
+	uint32_t *desc_data;
+	int cmd_num;
+	int i, k, n;
+	int ret;
+
+	if (regs_num == 0)
+		return 0;
+
+	cmd_num = DIV_ROUND_UP(regs_num + HNS3_32_BIT_DESC_NODATA_LEN,
+			       HNS3_32_BIT_REG_RTN_DATANUM);
+	desc = rte_zmalloc("hns3-32bit-regs",
+			   sizeof(struct hns3_cmd_desc) * cmd_num, 0);
+	if (desc == NULL) {
+		hns3_err(hw, "Failed to allocate %zu bytes needed to "
+			 "store 32bit regs",
+			 sizeof(struct hns3_cmd_desc) * cmd_num);
+		return -ENOMEM;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc[0], HNS3_OPC_QUERY_32_BIT_REG, true);
+	ret = hns3_cmd_send(hw, desc, cmd_num);
+	if (ret) {
+		hns3_err(hw, "Query 32 bit register cmd failed, ret = %d",
+			 ret);
+		rte_free(desc);
+		return ret;
+	}
+
+	for (i = 0; i < cmd_num; i++) {
+		if (i == 0) {
+			desc_data = &desc[i].data[0];
+			n = HNS3_32_BIT_REG_RTN_DATANUM -
+			    HNS3_32_BIT_DESC_NODATA_LEN;
+		} else {
+			desc_data = (uint32_t *)(&desc[i]);
+			n = HNS3_32_BIT_REG_RTN_DATANUM;
+		}
+		for (k = 0; k < n; k++) {
+			*reg_val++ = rte_le_to_cpu_32(*desc_data++);
+
+			regs_num--;
+			if (regs_num == 0)
+				break;
+		}
+	}
+
+	rte_free(desc);
+	return 0;
+}
+
+static int
+hns3_get_64_bit_regs(struct hns3_hw *hw, uint32_t regs_num, void *data)
+{
+#define HNS3_64_BIT_REG_RTN_DATANUM 4
+#define HNS3_64_BIT_DESC_NODATA_LEN 1
+	struct hns3_cmd_desc *desc;
+	uint64_t *reg_val = data;
+	uint64_t *desc_data;
+	int cmd_num;
+	int i, k, n;
+	int ret;
+
+	if (regs_num == 0)
+		return 0;
+
+	cmd_num = DIV_ROUND_UP(regs_num + HNS3_64_BIT_DESC_NODATA_LEN,
+			       HNS3_64_BIT_REG_RTN_DATANUM);
+	desc = rte_zmalloc("hns3-64bit-regs",
+			   sizeof(struct hns3_cmd_desc) * cmd_num, 0);
+	if (desc == NULL) {
+		hns3_err(hw, "Failed to allocate %zu bytes needed to "
+			 "store 64bit regs",
+			 sizeof(struct hns3_cmd_desc) * cmd_num);
+		return -ENOMEM;
+	}
+
+	hns3_cmd_setup_basic_desc(&desc[0], HNS3_OPC_QUERY_64_BIT_REG, true);
+	ret = hns3_cmd_send(hw, desc, cmd_num);
+	if (ret) {
+		hns3_err(hw, "Query 64 bit register cmd failed, ret = %d",
+			 ret);
+		rte_free(desc);
+		return ret;
+	}
+
+	for (i = 0; i < cmd_num; i++) {
+		if (i == 0) {
+			desc_data = (uint64_t *)(&desc[i].data[0]);
+			n = HNS3_64_BIT_REG_RTN_DATANUM -
+			    HNS3_64_BIT_DESC_NODATA_LEN;
+		} else {
+			desc_data = (uint64_t *)(&desc[i]);
+			n = HNS3_64_BIT_REG_RTN_DATANUM;
+		}
+		for (k = 0; k < n; k++) {
+			*reg_val++ = rte_le_to_cpu_64(*desc_data++);
+
+			regs_num--;
+			if (!regs_num)
+				break;
+		}
+	}
+
+	rte_free(desc);
+	return 0;
+}
+
+static void
+hns3_direct_access_regs(struct hns3_hw *hw, uint32_t *data)
+{
+	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
+	uint32_t reg_offset;
+	int separator_num;
+	int reg_num;
+	int i, j;
+
+	/* fetching per-PF register values from PF PCIe register space */
+	reg_num = sizeof(cmdq_reg_addrs) / sizeof(uint32_t);
+	separator_num = MAX_SEPARATE_NUM - reg_num % REG_NUM_PER_LINE;
+	for (i = 0; i < reg_num; i++)
+		*data++ = hns3_read_dev(hw, cmdq_reg_addrs[i]);
+	for (i = 0; i < separator_num; i++)
+		*data++ = SEPARATOR_VALUE;
+
+	if (hns->is_vf)
+		reg_num = sizeof(common_vf_reg_addrs) / sizeof(uint32_t);
+	else
+		reg_num = sizeof(common_reg_addrs) / sizeof(uint32_t);
+	separator_num = MAX_SEPARATE_NUM - reg_num % REG_NUM_PER_LINE;
+	for (i = 0; i < reg_num; i++)
+		if (hns->is_vf)
+			*data++ = hns3_read_dev(hw, common_vf_reg_addrs[i]);
+		else
+			*data++ = hns3_read_dev(hw, common_reg_addrs[i]);
+	for (i = 0; i < separator_num; i++)
+		*data++ = SEPARATOR_VALUE;
+
+	reg_num = sizeof(ring_reg_addrs) / sizeof(uint32_t);
+	separator_num = MAX_SEPARATE_NUM - reg_num % REG_NUM_PER_LINE;
+	for (j = 0; j < hw->tqps_num; j++) {
+		reg_offset = HNS3_TQP_REG_OFFSET + HNS3_TQP_REG_SIZE * j;
+		for (i = 0; i < reg_num; i++)
+			*data++ = hns3_read_dev(hw,
+						ring_reg_addrs[i] + reg_offset);
+		for (i = 0; i < separator_num; i++)
+			*data++ = SEPARATOR_VALUE;
+	}
+
+	reg_num = sizeof(tqp_intr_reg_addrs) / sizeof(uint32_t);
+	separator_num = MAX_SEPARATE_NUM - reg_num % REG_NUM_PER_LINE;
+	for (j = 0; j < hw->num_msi; j++) {
+		reg_offset = HNS3_TQP_INTR_REG_SIZE * j;
+		for (i = 0; i < reg_num; i++)
+			*data++ = hns3_read_dev(hw,
+						tqp_intr_reg_addrs[i] +
+						reg_offset);
+		for (i = 0; i < separator_num; i++)
+			*data++ = SEPARATOR_VALUE;
+	}
+}
+
+int
+hns3_get_regs(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	uint32_t regs_num_32_bit;
+	uint32_t regs_num_64_bit;
+	uint32_t length;
+	uint32_t *data;
+	int ret;
+
+	if (regs == NULL) {
+		hns3_err(hw, "the input parameter regs is NULL!");
+		return -EINVAL;
+	}
+
+	ret = hns3_get_regs_length(hw, &length);
+	if (ret)
+		return ret;
+
+	data = regs->data;
+	if (data == NULL) {
+		regs->length = length;
+		regs->width = sizeof(uint32_t);
+		return 0;
+	}
+
+	/* Only full register dump is supported */
+	if (regs->length && regs->length != length)
+		return -ENOTSUP;
+
+	/* fetching per-PF register values from PF PCIe register space */
+	hns3_direct_access_regs(hw, data);
+
+	ret = hns3_get_regs_num(hw, &regs_num_32_bit, &regs_num_64_bit);
+	if (ret) {
+		hns3_err(hw, "Get register number failed, ret = %d", ret);
+		return ret;
+	}
+
+	/* fetching PF common registers values from firmware */
+	ret = hns3_get_32_bit_regs(hw, regs_num_32_bit, data);
+	if (ret) {
+		hns3_err(hw, "Get 32 bit register failed, ret = %d", ret);
+		return ret;
+	}
+
+	data += regs_num_32_bit;
+	ret = hns3_get_64_bit_regs(hw, regs_num_64_bit, data);
+	if (ret)
+		hns3_err(hw, "Get 64 bit register failed, ret = %d", ret);
+
+	return ret;
+}
diff --git a/drivers/net/hns3/hns3_regs.h b/drivers/net/hns3/hns3_regs.h
index 5a4f315..2f5faaf 100644
--- a/drivers/net/hns3/hns3_regs.h
+++ b/drivers/net/hns3/hns3_regs.h
@@ -95,4 +95,5 @@
 
 #define HNS3_TQP_INTR_REG_SIZE			4
 
+int hns3_get_regs(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs);
 #endif /* _HNS3_REGS_H_ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 18/22] net/hns3: add abnormal interrupt process for hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (16 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 17/22] net/hns3: add dump register ops for hns3 PMD driver Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 19/22] net/hns3: add stats related ops " Wei Hu (Xavier)
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds processing for the abnormal interrupts reported by the
NIC hardware to the hns3 PMD driver.
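
The new hns3_interrupt_handler() is the callback registered against the
device's misc interrupt vector in hns3_init_pf() (added by the earlier
start/stop patch), condensed here:

	ret = rte_intr_callback_register(&pci_dev->intr_handle,
					 hns3_interrupt_handler, eth_dev);
	...
	rte_intr_enable(&pci_dev->intr_handle);
	hns3_pf_enable_irq0(hw);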

Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    | 135 ++++++++
 drivers/net/hns3/hns3_ethdev.h    |   1 +
 drivers/net/hns3/hns3_ethdev_vf.c |  10 +
 drivers/net/hns3/hns3_intr.c      | 657 ++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_intr.h      |  68 ++++
 drivers/net/hns3/hns3_mbx.c       |  13 +-
 6 files changed, 883 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/hns3/hns3_intr.c
 create mode 100644 drivers/net/hns3/hns3_intr.h

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 340f92f..17acfc5 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -36,6 +36,7 @@
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_rxtx.h"
+#include "hns3_intr.h"
 #include "hns3_regs.h"
 #include "hns3_dcb.h"
 
@@ -62,10 +63,134 @@
 int hns3_logtype_init;
 int hns3_logtype_driver;
 
+enum hns3_evt_cause {
+	HNS3_VECTOR0_EVENT_RST,
+	HNS3_VECTOR0_EVENT_MBX,
+	HNS3_VECTOR0_EVENT_ERR,
+	HNS3_VECTOR0_EVENT_OTHER,
+};
+
 static int hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid,
 				    int on);
 
+static void
+hns3_pf_disable_irq0(struct hns3_hw *hw)
+{
+	hns3_write_dev(hw, HNS3_MISC_VECTOR_REG_BASE, 0);
+}
+
+static void
+hns3_pf_enable_irq0(struct hns3_hw *hw)
+{
+	hns3_write_dev(hw, HNS3_MISC_VECTOR_REG_BASE, 1);
+}
+
+static enum hns3_evt_cause
+hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
+{
+	struct hns3_hw *hw = &hns->hw;
+	uint32_t vector0_int_stats;
+	uint32_t cmdq_src_val;
+	uint32_t val;
+	enum hns3_evt_cause ret;
+
+	/* fetch the events from their corresponding regs */
+	vector0_int_stats = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
+	cmdq_src_val = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG);
+
+	/*
+	 * Assumption: If by any chance reset and mailbox events are reported
+	 * together then we will only process reset event and defer the
+	 * processing of the mailbox events. Since we have not cleared the
+	 * RX CMDQ event this time, we will receive another interrupt from
+	 * the hardware just for the mailbox.
+	 */
+	if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_int_stats) { /* IMP */
+		ret = HNS3_VECTOR0_EVENT_RST;
+		goto out;
+	}
+
+	/* Global reset */
+	if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_int_stats) {
+		ret = HNS3_VECTOR0_EVENT_RST;
+		goto out;
+	}
+
+	/* check for vector0 msix event source */
+	if (vector0_int_stats & HNS3_VECTOR0_REG_MSIX_MASK) {
+		val = vector0_int_stats;
+		ret = HNS3_VECTOR0_EVENT_ERR;
+		goto out;
+	}
+
+	/* check for vector0 mailbox(=CMDQ RX) event source */
+	if (BIT(HNS3_VECTOR0_RX_CMDQ_INT_B) & cmdq_src_val) {
+		cmdq_src_val &= ~BIT(HNS3_VECTOR0_RX_CMDQ_INT_B);
+		val = cmdq_src_val;
+		ret = HNS3_VECTOR0_EVENT_MBX;
+		goto out;
+	}
+
+	if (clearval && (vector0_int_stats || cmdq_src_val))
+		hns3_warn(hw, "surprise irq ector0_int_stats:0x%x cmdq_src_val:0x%x",
+			  vector0_int_stats, cmdq_src_val);
+	val = vector0_int_stats;
+	ret = HNS3_VECTOR0_EVENT_OTHER;
+out:
+
+	if (clearval)
+		*clearval = val;
+	return ret;
+}
+
+static void
+hns3_clear_event_cause(struct hns3_hw *hw, uint32_t event_type, uint32_t regclr)
+{
+	if (event_type == HNS3_VECTOR0_EVENT_RST)
+		hns3_write_dev(hw, HNS3_MISC_RESET_STS_REG, regclr);
+	else if (event_type == HNS3_VECTOR0_EVENT_MBX)
+		hns3_write_dev(hw, HNS3_VECTOR0_CMDQ_SRC_REG, regclr);
+}
+
+static void
+hns3_clear_all_event_cause(struct hns3_hw *hw)
+{
+	uint32_t vector0_int_stats;
+	vector0_int_stats = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
+
+	if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_int_stats)
+		hns3_warn(hw, "Probe during IMP reset interrupt");
+
+	if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_int_stats)
+		hns3_warn(hw, "Probe during Global reset interrupt");
+
+	hns3_clear_event_cause(hw, HNS3_VECTOR0_EVENT_RST,
+			       BIT(HNS3_VECTOR0_IMPRESET_INT_B) |
+			       BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) |
+			       BIT(HNS3_VECTOR0_CORERESET_INT_B));
+	hns3_clear_event_cause(hw, HNS3_VECTOR0_EVENT_MBX, 0);
+}
+
+static void
+hns3_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_evt_cause event_cause;
+	uint32_t clearval = 0;
+
+	/* Disable interrupt */
+	hns3_pf_disable_irq0(hw);
+
+	event_cause = hns3_check_event_cause(hns, &clearval);
+
+	hns3_clear_event_cause(hw, event_cause, clearval);
+	/* Enable interrupt if it is not caused by reset */
+	hns3_pf_enable_irq0(hw);
+}
+
 static int
 hns3_set_port_vlan_filter(struct hns3_adapter *hns, uint16_t vlan_id, int on)
 {
@@ -3652,8 +3777,17 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
 
 	hns3_set_default_rss_args(hw);
 
+	ret = hns3_enable_hw_error_intr(hns, true);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "fail to enable hw error interrupts: %d",
+			     ret);
+		goto err_fdir;
+	}
+
 	return 0;
 
+err_fdir:
+	hns3_fdir_filter_uninit(hns);
 err_hw_init:
 	hns3_uninit_umv_space(hw);
 
@@ -3685,6 +3819,7 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	hns3_enable_hw_error_intr(hns, false);
 	hns3_rss_uninit(hns);
 	hns3_fdir_filter_uninit(hns);
 	hns3_uninit_umv_space(hw);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 413db04..83bcb34 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -329,6 +329,7 @@ struct hns3_hw {
 	struct hns3_cmq cmq;
 	struct hns3_mbx_resp_status mbx_resp; /* mailbox response */
 	struct hns3_mbx_arq_ring arq;         /* mailbox async rx queue */
+	pthread_t irq_thread_id;
 	struct hns3_mac mac;
 	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
 	uint32_t fw_version;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 32ba26c..a473a35 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -37,6 +37,7 @@
 #include "hns3_logs.h"
 #include "hns3_rxtx.h"
 #include "hns3_regs.h"
+#include "hns3_intr.h"
 #include "hns3_dcb.h"
 
 #define HNS3VF_KEEP_ALIVE_INTERVAL	2000000 /* us */
@@ -45,6 +46,12 @@
 #define HNS3VF_RESET_WAIT_MS	20
 #define HNS3VF_RESET_WAIT_CNT	2000
 
+enum hns3vf_evt_cause {
+	HNS3VF_VECTOR0_EVENT_RST,
+	HNS3VF_VECTOR0_EVENT_MBX,
+	HNS3VF_VECTOR0_EVENT_OTHER,
+};
+
 static int hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int hns3vf_dev_configure_vlan(struct rte_eth_dev *dev);
 
@@ -561,6 +568,9 @@ hns3vf_interrupt_handler(void *param)
 	enum hns3vf_evt_cause event_cause;
 	uint32_t clearval;
 
+	if (hw->irq_thread_id == 0)
+		hw->irq_thread_id = pthread_self();
+
 	/* Disable interrupt */
 	hns3vf_disable_irq0(hw);
 
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
new file mode 100644
index 0000000..b695914
--- /dev/null
+++ b/drivers/net/hns3/hns3_intr.c
@@ -0,0 +1,657 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdbool.h>
+#include <sys/time.h>
+#include <rte_atomic.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+#include "hns3_intr.h"
+#include "hns3_regs.h"
+#include "hns3_rxtx.h"
+
+/* offset in MSIX bd */
+#define MAC_ERROR_OFFSET	1
+#define PPP_PF_ERROR_OFFSET	2
+#define PPU_PF_ERROR_OFFSET	3
+#define RCB_ERROR_OFFSET	5
+#define RCB_ERROR_STATUS_OFFSET	2
+
+#define HNS3_CHECK_MERGE_CNT(val)			\
+	do {						\
+		if (val)				\
+			hw->reset.stats.merge_cnt++;	\
+	} while (0)
+
+const struct hns3_hw_error mac_afifo_tnl_int[] = {
+	{ .int_msk = BIT(0), .msg = "egu_cge_afifo_ecc_1bit_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(1), .msg = "egu_cge_afifo_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(2), .msg = "egu_lge_afifo_ecc_1bit_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(3), .msg = "egu_lge_afifo_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(4), .msg = "cge_igu_afifo_ecc_1bit_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(5), .msg = "cge_igu_afifo_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(6), .msg = "lge_igu_afifo_ecc_1bit_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(7), .msg = "lge_igu_afifo_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(8), .msg = "cge_igu_afifo_overflow_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(9), .msg = "lge_igu_afifo_overflow_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(10), .msg = "egu_cge_afifo_underrun_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(11), .msg = "egu_lge_afifo_underrun_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(12), .msg = "egu_ge_afifo_underrun_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(13), .msg = "ge_igu_afifo_overflow_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = 0, .msg = NULL,
+	  .reset_level = HNS3_NONE_RESET}
+};
+
+const struct hns3_hw_error ppu_mpf_abnormal_int_st2[] = {
+	{ .int_msk = BIT(13), .msg = "rpu_rx_pkt_bit32_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(14), .msg = "rpu_rx_pkt_bit33_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(15), .msg = "rpu_rx_pkt_bit34_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(16), .msg = "rpu_rx_pkt_bit35_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(17), .msg = "rcb_tx_ring_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(18), .msg = "rcb_rx_ring_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(19), .msg = "rcb_tx_fbd_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(20), .msg = "rcb_rx_ebd_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(21), .msg = "rcb_tso_info_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(22), .msg = "rcb_tx_int_info_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(23), .msg = "rcb_rx_int_info_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(24), .msg = "tpu_tx_pkt_0_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(25), .msg = "tpu_tx_pkt_1_ecc_mbit_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(26), .msg = "rd_bus_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(27), .msg = "wr_bus_err",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(28), .msg = "reg_search_miss",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(29), .msg = "rx_q_search_miss",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(30), .msg = "ooo_ecc_err_detect",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(31), .msg = "ooo_ecc_err_multpl",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = 0, .msg = NULL,
+	  .reset_level = HNS3_NONE_RESET}
+};
+
+const struct hns3_hw_error ssu_port_based_pf_int[] = {
+	{ .int_msk = BIT(0), .msg = "roc_pkt_without_key_port",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = BIT(9), .msg = "low_water_line_err_port",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(10), .msg = "hi_water_line_err_port",
+	  .reset_level = HNS3_GLOBAL_RESET },
+	{ .int_msk = 0, .msg = NULL,
+	  .reset_level = HNS3_NONE_RESET}
+};
+
+const struct hns3_hw_error ppp_pf_abnormal_int[] = {
+	{ .int_msk = BIT(0), .msg = "tx_vlan_tag_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(1), .msg = "rss_list_tc_unassigned_queue_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = 0, .msg = NULL,
+	  .reset_level = HNS3_NONE_RESET}
+};
+
+const struct hns3_hw_error ppu_pf_abnormal_int[] = {
+	{ .int_msk = BIT(0), .msg = "over_8bd_no_fe",
+	  .reset_level = HNS3_FUNC_RESET },
+	{ .int_msk = BIT(1), .msg = "tso_mss_cmp_min_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(2), .msg = "tso_mss_cmp_max_err",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = BIT(3), .msg = "tx_rd_fbd_poison",
+	  .reset_level = HNS3_FUNC_RESET },
+	{ .int_msk = BIT(4), .msg = "rx_rd_ebd_poison",
+	  .reset_level = HNS3_FUNC_RESET },
+	{ .int_msk = BIT(5), .msg = "buf_wait_timeout",
+	  .reset_level = HNS3_NONE_RESET },
+	{ .int_msk = 0, .msg = NULL,
+	  .reset_level = HNS3_NONE_RESET}
+};
+
+static int
+config_ppp_err_intr(struct hns3_adapter *hns, uint32_t cmd, bool en)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc[2];
+	int ret;
+
+	/* configure PPP error interrupts */
+	hns3_cmd_setup_basic_desc(&desc[0], cmd, false);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+	hns3_cmd_setup_basic_desc(&desc[1], cmd, false);
+
+	if (cmd == HNS3_PPP_CMD0_INT_CMD) {
+		if (en) {
+			desc[0].data[0] =
+				rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT0_EN);
+			desc[0].data[1] =
+				rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT1_EN);
+			desc[0].data[4] =
+				rte_cpu_to_le_32(HNS3_PPP_PF_ERR_INT_EN);
+		}
+
+		desc[1].data[0] =
+			rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT0_EN_MASK);
+		desc[1].data[1] =
+			rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT1_EN_MASK);
+		desc[1].data[2] =
+			rte_cpu_to_le_32(HNS3_PPP_PF_ERR_INT_EN_MASK);
+	} else if (cmd == HNS3_PPP_CMD1_INT_CMD) {
+		if (en) {
+			desc[0].data[0] =
+				rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT2_EN);
+			desc[0].data[1] =
+				rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT3_EN);
+		}
+
+		desc[1].data[0] =
+			rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT2_EN_MASK);
+		desc[1].data[1] =
+			rte_cpu_to_le_32(HNS3_PPP_MPF_ECC_ERR_INT3_EN_MASK);
+	}
+
+	ret = hns3_cmd_send(hw, &desc[0], 2);
+	if (ret)
+		hns3_err(hw, "fail to configure PPP error int: %d", ret);
+
+	return ret;
+}
+
+static int
+enable_ppp_err_intr(struct hns3_adapter *hns, bool en)
+{
+	int ret;
+
+	ret = config_ppp_err_intr(hns, HNS3_PPP_CMD0_INT_CMD, en);
+	if (ret)
+		return ret;
+
+	return config_ppp_err_intr(hns, HNS3_PPP_CMD1_INT_CMD, en);
+}
+
+static int
+enable_ssu_err_intr(struct hns3_adapter *hns, bool en)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc[2];
+	int ret;
+
+	/* configure SSU ecc error interrupts */
+	hns3_cmd_setup_basic_desc(&desc[0], HNS3_SSU_ECC_INT_CMD, false);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+	hns3_cmd_setup_basic_desc(&desc[1], HNS3_SSU_ECC_INT_CMD, false);
+	if (en) {
+		desc[0].data[0] =
+			rte_cpu_to_le_32(HNS3_SSU_1BIT_ECC_ERR_INT_EN);
+		desc[0].data[1] =
+			rte_cpu_to_le_32(HNS3_SSU_MULTI_BIT_ECC_ERR_INT_EN);
+		desc[0].data[4] =
+			rte_cpu_to_le_32(HNS3_SSU_BIT32_ECC_ERR_INT_EN);
+	}
+
+	desc[1].data[0] = rte_cpu_to_le_32(HNS3_SSU_1BIT_ECC_ERR_INT_EN_MASK);
+	desc[1].data[1] =
+		rte_cpu_to_le_32(HNS3_SSU_MULTI_BIT_ECC_ERR_INT_EN_MASK);
+	desc[1].data[2] = rte_cpu_to_le_32(HNS3_SSU_BIT32_ECC_ERR_INT_EN_MASK);
+
+	ret = hns3_cmd_send(hw, &desc[0], 2);
+	if (ret) {
+		hns3_err(hw, "fail to configure SSU ECC error interrupt: %d",
+			 ret);
+		return ret;
+	}
+
+	/* configure SSU common error interrupts */
+	hns3_cmd_setup_basic_desc(&desc[0], HNS3_SSU_COMMON_INT_CMD, false);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+	hns3_cmd_setup_basic_desc(&desc[1], HNS3_SSU_COMMON_INT_CMD, false);
+
+	if (en) {
+		desc[0].data[0] = rte_cpu_to_le_32(HNS3_SSU_COMMON_INT_EN);
+		desc[0].data[1] =
+			rte_cpu_to_le_32(HNS3_SSU_PORT_BASED_ERR_INT_EN);
+		desc[0].data[2] =
+			rte_cpu_to_le_32(HNS3_SSU_FIFO_OVERFLOW_ERR_INT_EN);
+	}
+
+	desc[1].data[0] = rte_cpu_to_le_32(HNS3_SSU_COMMON_INT_EN_MASK |
+					   HNS3_SSU_PORT_BASED_ERR_INT_EN_MASK);
+	desc[1].data[1] =
+		rte_cpu_to_le_32(HNS3_SSU_FIFO_OVERFLOW_ERR_INT_EN_MASK);
+
+	ret = hns3_cmd_send(hw, &desc[0], 2);
+	if (ret)
+		hns3_err(hw, "fail to configure SSU COMMON error intr: %d",
+			 ret);
+
+	return ret;
+}
+
+static int
+config_ppu_err_intrs(struct hns3_adapter *hns, uint32_t cmd, bool en)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc[2];
+	int num = 1;
+
+	/* configure PPU error interrupts */
+	switch (cmd) {
+	case HNS3_PPU_MPF_ECC_INT_CMD:
+		hns3_cmd_setup_basic_desc(&desc[0], cmd, false);
+		desc[0].flag |= HNS3_CMD_FLAG_NEXT;
+		hns3_cmd_setup_basic_desc(&desc[1], cmd, false);
+		if (en) {
+			desc[0].data[0] = HNS3_PPU_MPF_ABNORMAL_INT0_EN;
+			desc[0].data[1] = HNS3_PPU_MPF_ABNORMAL_INT1_EN;
+			desc[1].data[3] = HNS3_PPU_MPF_ABNORMAL_INT3_EN;
+			desc[1].data[4] = HNS3_PPU_MPF_ABNORMAL_INT2_EN;
+		}
+
+		desc[1].data[0] = HNS3_PPU_MPF_ABNORMAL_INT0_EN_MASK;
+		desc[1].data[1] = HNS3_PPU_MPF_ABNORMAL_INT1_EN_MASK;
+		desc[1].data[2] = HNS3_PPU_MPF_ABNORMAL_INT2_EN_MASK;
+		desc[1].data[3] |= HNS3_PPU_MPF_ABNORMAL_INT3_EN_MASK;
+		num = 2;
+		break;
+	case HNS3_PPU_MPF_OTHER_INT_CMD:
+		hns3_cmd_setup_basic_desc(&desc[0], cmd, false);
+		if (en)
+			desc[0].data[0] = HNS3_PPU_MPF_ABNORMAL_INT2_EN2;
+
+		desc[0].data[2] = HNS3_PPU_MPF_ABNORMAL_INT2_EN2_MASK;
+		break;
+	case HNS3_PPU_PF_OTHER_INT_CMD:
+		hns3_cmd_setup_basic_desc(&desc[0], cmd, false);
+		if (en)
+			desc[0].data[0] = HNS3_PPU_PF_ABNORMAL_INT_EN;
+
+		desc[0].data[2] = HNS3_PPU_PF_ABNORMAL_INT_EN_MASK;
+		break;
+	default:
+		hns3_err(hw,
+			 "Invalid cmd(%u) to configure PPU error interrupts.",
+			 cmd);
+		return -EINVAL;
+	}
+
+	return hns3_cmd_send(hw, &desc[0], num);
+}
+
+static int
+enable_ppu_err_intr(struct hns3_adapter *hns, bool en)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = config_ppu_err_intrs(hns, HNS3_PPU_MPF_ECC_INT_CMD, en);
+	if (ret) {
+		hns3_err(hw, "fail to configure PPU MPF ECC error intr: %d",
+			 ret);
+		return ret;
+	}
+
+	ret = config_ppu_err_intrs(hns, HNS3_PPU_MPF_OTHER_INT_CMD, en);
+	if (ret) {
+		hns3_err(hw, "fail to configure PPU MPF other intr: %d",
+			 ret);
+		return ret;
+	}
+
+	ret = config_ppu_err_intrs(hns, HNS3_PPU_PF_OTHER_INT_CMD, en);
+	if (ret)
+		hns3_err(hw, "fail to configure PPU PF error interrupts: %d",
+			 ret);
+	return ret;
+}
+
+static int
+enable_mac_err_intr(struct hns3_adapter *hns, bool en)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc;
+	int ret;
+
+	/* configure MAC common error interrupts */
+	hns3_cmd_setup_basic_desc(&desc, HNS3_MAC_COMMON_INT_EN, false);
+	if (en)
+		desc.data[0] = rte_cpu_to_le_32(HNS3_MAC_COMMON_ERR_INT_EN);
+
+	desc.data[1] = rte_cpu_to_le_32(HNS3_MAC_COMMON_ERR_INT_EN_MASK);
+
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		hns3_err(hw, "fail to configure MAC COMMON error intr: %d",
+			 ret);
+
+	return ret;
+}
+
+static const struct hns3_hw_blk hw_blk[] = {
+	{
+		.name = "PPP",
+		.enable_err_intr = enable_ppp_err_intr,
+	},
+	{
+		.name = "SSU",
+		.enable_err_intr = enable_ssu_err_intr,
+	},
+	{
+		.name = "PPU",
+		.enable_err_intr = enable_ppu_err_intr,
+	},
+	{
+		.name = "MAC",
+		.enable_err_intr = enable_mac_err_intr,
+	},
+	{
+		.name = NULL,
+		.enable_err_intr = NULL,
+	}
+};
+
+int
+hns3_enable_hw_error_intr(struct hns3_adapter *hns, bool en)
+{
+	const struct hns3_hw_blk *module = hw_blk;
+	int ret = 0;
+
+	while (module->enable_err_intr) {
+		ret = module->enable_err_intr(hns, en);
+		if (ret)
+			return ret;
+
+		module++;
+	}
+
+	return ret;
+}
+
+static enum hns3_reset_level
+hns3_find_highest_level(struct hns3_adapter *hns, const char *reg,
+			const struct hns3_hw_error *err, uint32_t err_sts)
+{
+	enum hns3_reset_level reset_level = HNS3_FUNC_RESET;
+	struct hns3_hw *hw = &hns->hw;
+	bool need_reset = false;
+
+	while (err->msg) {
+		if (err->int_msk & err_sts) {
+			hns3_warn(hw, "%s %s found [error status=0x%x]",
+				  reg, err->msg, err_sts);
+			if (err->reset_level != HNS3_NONE_RESET &&
+			    err->reset_level >= reset_level) {
+				reset_level = err->reset_level;
+				need_reset = true;
+			}
+		}
+		err++;
+	}
+	if (need_reset)
+		return reset_level;
+	else
+		return HNS3_NONE_RESET;
+}
+
+static int
+query_num_bds_in_msix(struct hns3_hw *hw, struct hns3_cmd_desc *desc_bd)
+{
+	int ret;
+
+	hns3_cmd_setup_basic_desc(desc_bd, HNS3_QUERY_MSIX_INT_STS_BD_NUM,
+				  true);
+	ret = hns3_cmd_send(hw, desc_bd, 1);
+	if (ret)
+		hns3_err(hw, "query num bds in msix failed: %d", ret);
+
+	return ret;
+}
+
+static int
+query_all_mpf_msix_err(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
+		       uint32_t mpf_bd_num)
+{
+	int ret;
+
+	hns3_cmd_setup_basic_desc(desc, HNS3_QUERY_CLEAR_ALL_MPF_MSIX_INT,
+				  true);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+
+	ret = hns3_cmd_send(hw, &desc[0], mpf_bd_num);
+	if (ret)
+		hns3_err(hw, "query all mpf msix err failed: %d", ret);
+
+	return ret;
+}
+
+static int
+clear_all_mpf_msix_err(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
+		       uint32_t mpf_bd_num)
+{
+	int ret;
+
+	hns3_cmd_reuse_desc(desc, false);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+
+	ret = hns3_cmd_send(hw, desc, mpf_bd_num);
+	if (ret)
+		hns3_err(hw, "clear all mpf msix err failed: %d", ret);
+
+	return ret;
+}
+
+static int
+query_all_pf_msix_err(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
+		      uint32_t pf_bd_num)
+{
+	int ret;
+
+	hns3_cmd_setup_basic_desc(desc, HNS3_QUERY_CLEAR_ALL_PF_MSIX_INT, true);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+
+	ret = hns3_cmd_send(hw, desc, pf_bd_num);
+	if (ret)
+		hns3_err(hw, "query all pf msix int cmd failed: %d", ret);
+
+	return ret;
+}
+
+static int
+clear_all_pf_msix_err(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
+		      uint32_t pf_bd_num)
+{
+	int ret;
+
+	hns3_cmd_reuse_desc(desc, false);
+	desc[0].flag |= rte_cpu_to_le_16(HNS3_CMD_FLAG_NEXT);
+
+	ret = hns3_cmd_send(hw, desc, pf_bd_num);
+	if (ret)
+		hns3_err(hw, "clear all pf msix err failed: %d", ret);
+
+	return ret;
+}
+
+void
+hns3_intr_unregister(const struct rte_intr_handle *hdl,
+		     rte_intr_callback_fn cb_fn, void *cb_arg)
+{
+	int retry_cnt = 0;
+	int ret;
+
+	do {
+		ret = rte_intr_callback_unregister(hdl, cb_fn, cb_arg);
+		if (ret >= 0) {
+			break;
+		} else if (ret != -EAGAIN) {
+			PMD_INIT_LOG(ERR, "Failed to unregister intr: %d", ret);
+			break;
+		}
+		rte_delay_ms(HNS3_INTR_UNREG_FAIL_DELAY_MS);
+	} while (retry_cnt++ < HNS3_INTR_UNREG_FAIL_RETRY_CNT);
+}
+
+void
+hns3_handle_msix_error(struct hns3_adapter *hns, uint64_t *levels)
+{
+	uint32_t mpf_bd_num, pf_bd_num, bd_num;
+	enum hns3_reset_level req_level;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_cmd_desc desc_bd;
+	struct hns3_cmd_desc *desc;
+	uint32_t *desc_data;
+	uint32_t status;
+	int ret;
+
+	/* query the number of bds for the MSIx int status */
+	ret = query_num_bds_in_msix(hw, &desc_bd);
+	if (ret) {
+		hns3_err(hw, "fail to query msix int status bd num: %d", ret);
+		return;
+	}
+
+	mpf_bd_num = rte_le_to_cpu_32(desc_bd.data[0]);
+	pf_bd_num = rte_le_to_cpu_32(desc_bd.data[1]);
+	bd_num = max_t(uint32_t, mpf_bd_num, pf_bd_num);
+	if (bd_num < RCB_ERROR_OFFSET) {
+		hns3_err(hw, "bd_num is less than RCB_ERROR_OFFSET: %u",
+			 bd_num);
+		return;
+	}
+
+	desc = rte_zmalloc(NULL, bd_num * sizeof(struct hns3_cmd_desc), 0);
+	if (desc == NULL) {
+		hns3_err(hw, "fail to zmalloc desc");
+		return;
+	}
+
+	/* query all main PF MSIx errors */
+	ret = query_all_mpf_msix_err(hw, &desc[0], mpf_bd_num);
+	if (ret) {
+		hns3_err(hw, "query all mpf msix int cmd failed: %d", ret);
+		goto out;
+	}
+
+	/* log MAC errors */
+	desc_data = (uint32_t *)&desc[MAC_ERROR_OFFSET];
+	status = rte_le_to_cpu_32(*desc_data);
+	if (status) {
+		req_level = hns3_find_highest_level(hns, "MAC_AFIFO_TNL_INT_R",
+						    mac_afifo_tnl_int,
+						    status);
+		hns3_atomic_set_bit(req_level, levels);
+		pf->abn_int_stats.mac_afifo_tnl_intr_cnt++;
+	}
+
+	/* log PPU(RCB) errors */
+	desc_data = (uint32_t *)&desc[RCB_ERROR_OFFSET];
+	status = rte_le_to_cpu_32(*(desc_data + RCB_ERROR_STATUS_OFFSET)) &
+			HNS3_PPU_MPF_INT_ST2_MSIX_MASK;
+	if (status) {
+		req_level = hns3_find_highest_level(hns,
+						    "PPU_MPF_ABNORMAL_INT_ST2",
+						    ppu_mpf_abnormal_int_st2,
+						    status);
+		hns3_atomic_set_bit(req_level, levels);
+		pf->abn_int_stats.ppu_mpf_abnormal_intr_st2_cnt++;
+	}
+
+	/* clear all main PF MSIx errors */
+	ret = clear_all_mpf_msix_err(hw, desc, mpf_bd_num);
+	if (ret) {
+		hns3_err(hw, "clear all mpf msix int cmd failed: %d", ret);
+		goto out;
+	}
+
+	/* query all PF MSIx errors */
+	memset(desc, 0, bd_num * sizeof(struct hns3_cmd_desc));
+	ret = query_all_pf_msix_err(hw, &desc[0], pf_bd_num);
+	if (ret) {
+		hns3_err(hw, "query all pf msix int cmd failed (%d)", ret);
+		goto out;
+	}
+
+	/* log SSU PF errors */
+	status = rte_le_to_cpu_32(desc[0].data[0]) &
+		 HNS3_SSU_PORT_INT_MSIX_MASK;
+	if (status) {
+		req_level = hns3_find_highest_level(hns,
+						    "SSU_PORT_BASED_ERR_INT",
+						    ssu_port_based_pf_int,
+						    status);
+		hns3_atomic_set_bit(req_level, levels);
+		pf->abn_int_stats.ssu_port_based_pf_intr_cnt++;
+	}
+
+	/* log PPP PF errors */
+	desc_data = (uint32_t *)&desc[PPP_PF_ERROR_OFFSET];
+	status = rte_le_to_cpu_32(*desc_data);
+	if (status) {
+		req_level = hns3_find_highest_level(hns,
+						    "PPP_PF_ABNORMAL_INT_ST0",
+						    ppp_pf_abnormal_int,
+						    status);
+		hns3_atomic_set_bit(req_level, levels);
+		pf->abn_int_stats.ppp_pf_abnormal_intr_cnt++;
+	}
+
+	/* log PPU(RCB) PF errors */
+	desc_data = (uint32_t *)&desc[PPU_PF_ERROR_OFFSET];
+	status = rte_le_to_cpu_32(*desc_data) & HNS3_PPU_PF_INT_MSIX_MASK;
+	if (status) {
+		req_level = hns3_find_highest_level(hns,
+						    "PPU_PF_ABNORMAL_INT_ST",
+						    ppu_pf_abnormal_int,
+						    status);
+		hns3_atomic_set_bit(req_level, levels);
+		pf->abn_int_stats.ppu_pf_abnormal_intr_cnt++;
+	}
+
+	/* clear all PF MSIx errors */
+	ret = clear_all_pf_msix_err(hw, desc, pf_bd_num);
+	if (ret)
+		hns3_err(hw, "clear all pf msix int cmd failed: %d", ret);
+out:
+	rte_free(desc);
+}
diff --git a/drivers/net/hns3/hns3_intr.h b/drivers/net/hns3/hns3_intr.h
new file mode 100644
index 0000000..b57b4ac
--- /dev/null
+++ b/drivers/net/hns3/hns3_intr.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_INTR_H_
+#define _HNS3_INTR_H_
+
+#define HNS3_PPP_MPF_ECC_ERR_INT0_EN		0xFFFFFFFF
+#define HNS3_PPP_MPF_ECC_ERR_INT0_EN_MASK	0xFFFFFFFF
+#define HNS3_PPP_MPF_ECC_ERR_INT1_EN		0xFFFFFFFF
+#define HNS3_PPP_MPF_ECC_ERR_INT1_EN_MASK	0xFFFFFFFF
+#define HNS3_PPP_PF_ERR_INT_EN			0x0003
+#define HNS3_PPP_PF_ERR_INT_EN_MASK		0x0003
+#define HNS3_PPP_MPF_ECC_ERR_INT2_EN		0x003F
+#define HNS3_PPP_MPF_ECC_ERR_INT2_EN_MASK	0x003F
+#define HNS3_PPP_MPF_ECC_ERR_INT3_EN		0x003F
+#define HNS3_PPP_MPF_ECC_ERR_INT3_EN_MASK	0x003F
+
+#define HNS3_MAC_COMMON_ERR_INT_EN		0x107FF
+#define HNS3_MAC_COMMON_ERR_INT_EN_MASK		0x107FF
+
+#define HNS3_PPU_MPF_ABNORMAL_INT0_EN		GENMASK(31, 0)
+#define HNS3_PPU_MPF_ABNORMAL_INT0_EN_MASK	GENMASK(31, 0)
+#define HNS3_PPU_MPF_ABNORMAL_INT1_EN		GENMASK(31, 0)
+#define HNS3_PPU_MPF_ABNORMAL_INT1_EN_MASK	GENMASK(31, 0)
+#define HNS3_PPU_MPF_ABNORMAL_INT2_EN		0x3FFF3FFF
+#define HNS3_PPU_MPF_ABNORMAL_INT2_EN_MASK	0x3FFF3FFF
+#define HNS3_PPU_MPF_ABNORMAL_INT2_EN2		0xB
+#define HNS3_PPU_MPF_ABNORMAL_INT2_EN2_MASK	0xB
+#define HNS3_PPU_MPF_ABNORMAL_INT3_EN		GENMASK(7, 0)
+#define HNS3_PPU_MPF_ABNORMAL_INT3_EN_MASK	GENMASK(23, 16)
+#define HNS3_PPU_PF_ABNORMAL_INT_EN		GENMASK(5, 0)
+#define HNS3_PPU_PF_ABNORMAL_INT_EN_MASK	GENMASK(5, 0)
+#define HNS3_PPU_PF_INT_MSIX_MASK		0x27
+#define HNS3_PPU_MPF_INT_ST2_MSIX_MASK		GENMASK(29, 28)
+
+#define HNS3_SSU_1BIT_ECC_ERR_INT_EN		GENMASK(31, 0)
+#define HNS3_SSU_1BIT_ECC_ERR_INT_EN_MASK	GENMASK(31, 0)
+#define HNS3_SSU_MULTI_BIT_ECC_ERR_INT_EN	GENMASK(31, 0)
+#define HNS3_SSU_MULTI_BIT_ECC_ERR_INT_EN_MASK	GENMASK(31, 0)
+#define HNS3_SSU_BIT32_ECC_ERR_INT_EN		0x0101
+#define HNS3_SSU_BIT32_ECC_ERR_INT_EN_MASK	0x0101
+#define HNS3_SSU_COMMON_INT_EN			GENMASK(9, 0)
+#define HNS3_SSU_COMMON_INT_EN_MASK		GENMASK(9, 0)
+#define HNS3_SSU_PORT_BASED_ERR_INT_EN		0x0BFF
+#define HNS3_SSU_PORT_BASED_ERR_INT_EN_MASK	0x0BFF0000
+#define HNS3_SSU_FIFO_OVERFLOW_ERR_INT_EN	GENMASK(23, 0)
+#define HNS3_SSU_FIFO_OVERFLOW_ERR_INT_EN_MASK	GENMASK(23, 0)
+#define HNS3_SSU_COMMON_ERR_INT_MASK		GENMASK(9, 0)
+#define HNS3_SSU_PORT_INT_MSIX_MASK		0x7BFF
+
+struct hns3_hw_blk {
+	const char *name;
+	int (*enable_err_intr)(struct hns3_adapter *hns, bool en);
+};
+
+struct hns3_hw_error {
+	uint32_t int_msk;
+	const char *msg;
+	enum hns3_reset_level reset_level;
+};
+
+int hns3_enable_hw_error_intr(struct hns3_adapter *hns, bool state);
+void hns3_handle_msix_error(struct hns3_adapter *hns, uint64_t *levels);
+void hns3_intr_unregister(const struct rte_intr_handle *hdl,
+			  rte_intr_callback_fn cb_fn, void *cb_arg);
+
+#endif /* _HNS3_INTR_H_ */
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 485d810..de16cbe 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -31,6 +31,7 @@
 #include "hns3_ethdev.h"
 #include "hns3_regs.h"
 #include "hns3_logs.h"
+#include "hns3_intr.h"
 
 #define HNS3_REG_MSG_DATA_OFFSET	4
 #define HNS3_CMD_CODE_OFFSET		2
@@ -105,7 +106,17 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code0, uint16_t code1,
 	end = now + HNS3_MAX_RETRY_MS;
 	while ((hw->mbx_resp.head != hw->mbx_resp.tail + hw->mbx_resp.lost) &&
 	       (now < end)) {
-		rte_delay_ms(HNS3_POLL_RESPONE_MS);
+		/*
+		 * Mailbox responses are handled on the interrupt thread. A
+		 * mailbox sent from the interrupt thread itself cannot sleep
+		 * waiting for the response, so poll for it in place instead.
+		 */
+		if (pthread_equal(hw->irq_thread_id, pthread_self())) {
+			in_irq = true;
+			hns3_poll_all_sync_msg();
+		} else {
+			rte_delay_ms(HNS3_POLL_RESPONE_MS);
+		}
 		now = get_timeofday_ms();
 	}
 	hw->mbx_resp.req_msg_data = 0;
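
The pthread_equal() check above is what prevents a deadlock: the interrupt
thread is the very thread that would deliver the response, so it must drain
the mailbox itself rather than sleep. A standalone sketch of the same
detection pattern, with irq_thread_id and the poll routine as illustrative
stand-ins for the driver's own state:

	#include <pthread.h>
	#include <stdbool.h>
	#include <rte_cycles.h>

	/* Illustrative: thread id recorded on first entry to the handler */
	static pthread_t irq_thread_id;

	static void poll_mailbox_once(void)
	{
		/* drain pending mailbox events; placeholder body */
	}

	static void
	wait_for_response(volatile bool *done)
	{
		while (!*done) {
			if (pthread_equal(irq_thread_id, pthread_self()))
				poll_mailbox_once(); /* irq thread: poll */
			else
				rte_delay_ms(1); /* other threads may sleep */
		}
	}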
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 19/22] net/hns3: add stats related ops for hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (17 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 18/22] net/hns3: add abnormal interrupt process " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:20   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 20/22] net/hns3: add reset related process " Wei Hu (Xavier)
                   ` (3 subsequent siblings)
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the stats_get, stats_reset, xstats_get, xstats_get_names,
xstats_reset, xstats_get_by_id and xstats_get_names_by_id related
functions.
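
These ops plug into the generic ethdev statistics API. As a minimal usage
sketch (not part of this patch), an application could dump every extended
statistic exposed here roughly as follows, assuming port_id refers to a
started hns3 port and relying on the driver's convention that xstats ids
equal array indexes:

	#include <stdio.h>
	#include <stdlib.h>
	#include <inttypes.h>
	#include <rte_ethdev.h>

	static void
	example_dump_xstats(uint16_t port_id)
	{
		struct rte_eth_xstat_name *names;
		struct rte_eth_xstat *vals;
		int i, n;

		/* A NULL array asks only for the number of entries */
		n = rte_eth_xstats_get_names(port_id, NULL, 0);
		if (n <= 0)
			return;

		names = malloc(n * sizeof(*names));
		vals = malloc(n * sizeof(*vals));
		if (names != NULL && vals != NULL &&
		    rte_eth_xstats_get_names(port_id, names, n) == n &&
		    rte_eth_xstats_get(port_id, vals, n) == n) {
			for (i = 0; i < n; i++)
				printf("%s: %" PRIu64 "\n",
				       names[vals[i].id].name, vals[i].value);
		}
		free(names);
		free(vals);
	}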

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c       |   1 +
 drivers/net/hns3/hns3_dcb.c       |   1 +
 drivers/net/hns3/hns3_ethdev.c    |   8 +
 drivers/net/hns3/hns3_ethdev.h    |   3 +
 drivers/net/hns3/hns3_ethdev_vf.c |   8 +
 drivers/net/hns3/hns3_fdir.c      |   1 +
 drivers/net/hns3/hns3_flow.c      |   1 +
 drivers/net/hns3/hns3_intr.c      |   1 +
 drivers/net/hns3/hns3_mbx.c       |   1 +
 drivers/net/hns3/hns3_regs.c      |   1 +
 drivers/net/hns3/hns3_rss.c       |   1 +
 drivers/net/hns3/hns3_rxtx.c      |   1 +
 drivers/net/hns3/hns3_stats.c     | 847 ++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_stats.h     | 146 +++++++
 14 files changed, 1021 insertions(+)
 create mode 100644 drivers/net/hns3/hns3_stats.c
 create mode 100644 drivers/net/hns3/hns3_stats.h

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 2a58f95..8c0bf8d 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -30,6 +30,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_regs.h"
 #include "hns3_logs.h"
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index b86a4b0..1ba88b1 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -18,6 +18,7 @@
 #include "hns3_cmd.h"
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
+#include "hns3_stats.h"
 #include "hns3_fdir.h"
 #include "hns3_regs.h"
 #include "hns3_ethdev.h"
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 17acfc5..22d7e61 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -33,6 +33,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_rxtx.h"
@@ -4141,6 +4142,13 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.allmulticast_enable  = hns3_dev_allmulticast_enable,
 	.allmulticast_disable = hns3_dev_allmulticast_disable,
 	.mtu_set            = hns3_dev_mtu_set,
+	.stats_get          = hns3_stats_get,
+	.stats_reset        = hns3_stats_reset,
+	.xstats_get         = hns3_dev_xstats_get,
+	.xstats_get_names   = hns3_dev_xstats_get_names,
+	.xstats_reset       = hns3_dev_xstats_reset,
+	.xstats_get_by_id   = hns3_dev_xstats_get_by_id,
+	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
 	.dev_infos_get          = hns3_dev_infos_get,
 	.fw_version_get         = hns3_fw_version_get,
 	.rx_queue_setup         = hns3_rx_queue_setup,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 83bcb34..986314c 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -332,6 +332,9 @@ struct hns3_hw {
 	pthread_t irq_thread_id;
 	struct hns3_mac mac;
 	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
+	struct hns3_tqp_stats tqp_stats;
+	/* Include Mac stats | Rx stats | Tx stats */
+	struct hns3_mac_stats mac_stats;
 	uint32_t fw_version;
 
 	uint16_t num_msi;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index a473a35..d941969 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -33,6 +33,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_rxtx.h"
@@ -1153,6 +1154,13 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.dev_stop           = hns3vf_dev_stop,
 	.dev_close          = hns3vf_dev_close,
 	.mtu_set            = hns3vf_dev_mtu_set,
+	.stats_get          = hns3_stats_get,
+	.stats_reset        = hns3_stats_reset,
+	.xstats_get         = hns3_dev_xstats_get,
+	.xstats_get_names   = hns3_dev_xstats_get_names,
+	.xstats_reset       = hns3_dev_xstats_reset,
+	.xstats_get_by_id   = hns3_dev_xstats_get_by_id,
+	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
 	.dev_infos_get      = hns3vf_dev_infos_get,
 	.rx_queue_setup     = hns3_rx_queue_setup,
 	.tx_queue_setup     = hns3_tx_queue_setup,
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index 544ac7e..dfa038f 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -14,6 +14,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 47c9b3a..16e703b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -12,6 +12,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index b695914..2d2051e 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -17,6 +17,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_intr.h"
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index de16cbe..44d8275 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -28,6 +28,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_regs.h"
 #include "hns3_logs.h"
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index 91cd7c1..3538d64 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -30,6 +30,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 #include "hns3_rxtx.h"
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index c4ef11b..ba01d71 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -13,6 +13,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_logs.h"
 
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 8a3ca4f..135da53 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -26,6 +26,7 @@
 #include "hns3_mbx.h"
 #include "hns3_rss.h"
 #include "hns3_fdir.h"
+#include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_rxtx.h"
 #include "hns3_regs.h"
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
new file mode 100644
index 0000000..723887a
--- /dev/null
+++ b/drivers/net/hns3/hns3_stats.c
@@ -0,0 +1,847 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <rte_common.h>
+#include <rte_ethdev.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_stats.h"
+#include "hns3_ethdev.h"
+#include "hns3_rxtx.h"
+#include "hns3_logs.h"
+
+/* MAC statistics */
+static const struct hns3_xstats_name_offset hns3_mac_strings[] = {
+	{"mac_tx_mac_pause_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_mac_pause_num)},
+	{"mac_rx_mac_pause_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_mac_pause_num)},
+	{"mac_tx_control_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_ctrl_pkt_num)},
+	{"mac_rx_control_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_ctrl_pkt_num)},
+	{"mac_tx_pfc_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pause_pkt_num)},
+	{"mac_tx_pfc_pri0_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri0_pkt_num)},
+	{"mac_tx_pfc_pri1_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri1_pkt_num)},
+	{"mac_tx_pfc_pri2_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri2_pkt_num)},
+	{"mac_tx_pfc_pri3_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri3_pkt_num)},
+	{"mac_tx_pfc_pri4_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri4_pkt_num)},
+	{"mac_tx_pfc_pri5_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri5_pkt_num)},
+	{"mac_tx_pfc_pri6_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri6_pkt_num)},
+	{"mac_tx_pfc_pri7_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_pfc_pri7_pkt_num)},
+	{"mac_rx_pfc_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pause_pkt_num)},
+	{"mac_rx_pfc_pri0_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri0_pkt_num)},
+	{"mac_rx_pfc_pri1_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri1_pkt_num)},
+	{"mac_rx_pfc_pri2_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri2_pkt_num)},
+	{"mac_rx_pfc_pri3_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri3_pkt_num)},
+	{"mac_rx_pfc_pri4_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri4_pkt_num)},
+	{"mac_rx_pfc_pri5_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri5_pkt_num)},
+	{"mac_rx_pfc_pri6_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri6_pkt_num)},
+	{"mac_rx_pfc_pri7_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_pfc_pri7_pkt_num)},
+	{"mac_tx_total_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_total_pkt_num)},
+	{"mac_tx_total_oct_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_total_oct_num)},
+	{"mac_tx_good_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_good_pkt_num)},
+	{"mac_tx_bad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_bad_pkt_num)},
+	{"mac_tx_good_oct_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_good_oct_num)},
+	{"mac_tx_bad_oct_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_bad_oct_num)},
+	{"mac_tx_uni_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_uni_pkt_num)},
+	{"mac_tx_multi_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_multi_pkt_num)},
+	{"mac_tx_broad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_broad_pkt_num)},
+	{"mac_tx_undersize_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_undersize_pkt_num)},
+	{"mac_tx_oversize_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_oversize_pkt_num)},
+	{"mac_tx_64_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_64_oct_pkt_num)},
+	{"mac_tx_65_127_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_65_127_oct_pkt_num)},
+	{"mac_tx_128_255_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_128_255_oct_pkt_num)},
+	{"mac_tx_256_511_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_256_511_oct_pkt_num)},
+	{"mac_tx_512_1023_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_512_1023_oct_pkt_num)},
+	{"mac_tx_1024_1518_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_1024_1518_oct_pkt_num)},
+	{"mac_tx_1519_2047_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_1519_2047_oct_pkt_num)},
+	{"mac_tx_2048_4095_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_2048_4095_oct_pkt_num)},
+	{"mac_tx_4096_8191_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_4096_8191_oct_pkt_num)},
+	{"mac_tx_8192_9216_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_8192_9216_oct_pkt_num)},
+	{"mac_tx_9217_12287_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_9217_12287_oct_pkt_num)},
+	{"mac_tx_12288_16383_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_12288_16383_oct_pkt_num)},
+	{"mac_tx_1519_max_good_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_1519_max_good_oct_pkt_num)},
+	{"mac_tx_1519_max_bad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_1519_max_bad_oct_pkt_num)},
+	{"mac_rx_total_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_total_pkt_num)},
+	{"mac_rx_total_oct_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_total_oct_num)},
+	{"mac_rx_good_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_good_pkt_num)},
+	{"mac_rx_bad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_bad_pkt_num)},
+	{"mac_rx_good_oct_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_good_oct_num)},
+	{"mac_rx_bad_oct_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_bad_oct_num)},
+	{"mac_rx_uni_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_uni_pkt_num)},
+	{"mac_rx_multi_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_multi_pkt_num)},
+	{"mac_rx_broad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_broad_pkt_num)},
+	{"mac_rx_undersize_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_undersize_pkt_num)},
+	{"mac_rx_oversize_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_oversize_pkt_num)},
+	{"mac_rx_64_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_64_oct_pkt_num)},
+	{"mac_rx_65_127_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_65_127_oct_pkt_num)},
+	{"mac_rx_128_255_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_128_255_oct_pkt_num)},
+	{"mac_rx_256_511_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_256_511_oct_pkt_num)},
+	{"mac_rx_512_1023_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_512_1023_oct_pkt_num)},
+	{"mac_rx_1024_1518_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_1024_1518_oct_pkt_num)},
+	{"mac_rx_1519_2047_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_1519_2047_oct_pkt_num)},
+	{"mac_rx_2048_4095_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_2048_4095_oct_pkt_num)},
+	{"mac_rx_4096_8191_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_4096_8191_oct_pkt_num)},
+	{"mac_rx_8192_9216_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_8192_9216_oct_pkt_num)},
+	{"mac_rx_9217_12287_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_9217_12287_oct_pkt_num)},
+	{"mac_rx_12288_16383_oct_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_12288_16383_oct_pkt_num)},
+	{"mac_rx_1519_max_good_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_1519_max_good_oct_pkt_num)},
+	{"mac_rx_1519_max_bad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_1519_max_bad_oct_pkt_num)},
+	{"mac_tx_fragment_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_fragment_pkt_num)},
+	{"mac_tx_undermin_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_undermin_pkt_num)},
+	{"mac_tx_jabber_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_jabber_pkt_num)},
+	{"mac_tx_err_all_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_err_all_pkt_num)},
+	{"mac_tx_from_app_good_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_from_app_good_pkt_num)},
+	{"mac_tx_from_app_bad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_tx_from_app_bad_pkt_num)},
+	{"mac_rx_fragment_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_fragment_pkt_num)},
+	{"mac_rx_undermin_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_undermin_pkt_num)},
+	{"mac_rx_jabber_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_jabber_pkt_num)},
+	{"mac_rx_fcs_err_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_fcs_err_pkt_num)},
+	{"mac_rx_send_app_good_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_send_app_good_pkt_num)},
+	{"mac_rx_send_app_bad_pkt_num",
+		HNS3_MAC_STATS_OFFSET(mac_rx_send_app_bad_pkt_num)}
+};
+
+static const struct hns3_xstats_name_offset hns3_error_int_stats_strings[] = {
+	{"MAC_AFIFO_TNL_INT_R",
+		HNS3_ERR_INT_STATS_FIELD_OFFSET(mac_afifo_tnl_intr_cnt)},
+	{"PPU_MPF_ABNORMAL_INT_ST2",
+		HNS3_ERR_INT_STATS_FIELD_OFFSET(ppu_mpf_abnormal_intr_st2_cnt)},
+	{"SSU_PORT_BASED_ERR_INT",
+		HNS3_ERR_INT_STATS_FIELD_OFFSET(ssu_port_based_pf_intr_cnt)},
+	{"PPP_PF_ABNORMAL_INT_ST0",
+		HNS3_ERR_INT_STATS_FIELD_OFFSET(ppp_pf_abnormal_intr_cnt)},
+	{"PPU_PF_ABNORMAL_INT_ST",
+		HNS3_ERR_INT_STATS_FIELD_OFFSET(ppu_pf_abnormal_intr_cnt)}
+};
+
+/* The statistic of reset */
+static const struct hns3_xstats_name_offset hns3_reset_stats_strings[] = {
+	{"REQ_RESET_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(request_cnt)},
+	{"GLOBAL_RESET_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(global_cnt)},
+	{"IMP_RESET_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(imp_cnt)},
+	{"RESET_EXEC_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(exec_cnt)},
+	{"RESET_SUCCESS_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(success_cnt)},
+	{"RESET_FAIL_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(fail_cnt)},
+	{"RESET_MERGE_CNT",
+		HNS3_RESET_STATS_FIELD_OFFSET(merge_cnt)}
+};
+
+#define HNS3_NUM_MAC_STATS (sizeof(hns3_mac_strings) / \
+	sizeof(hns3_mac_strings[0]))
+
+#define HNS3_NUM_ERROR_INT_XSTATS (sizeof(hns3_error_int_stats_strings) / \
+	sizeof(hns3_error_int_stats_strings[0]))
+
+#define HNS3_NUM_RESET_XSTATS (sizeof(hns3_reset_stats_strings) / \
+	sizeof(hns3_reset_stats_strings[0]))
+
+#define HNS3_MAX_NUM_STATS (HNS3_NUM_MAC_STATS + HNS3_NUM_ERROR_INT_XSTATS + \
+			    HNS3_NUM_RESET_XSTATS)
+
+/*
+ * Query all MAC statistics of the network interface, opcode id: 0x0034.
+ * The number of descriptors sent must match the reg_num obtained in
+ * advance from the 'query MAC reg num' command (opcode id: 0x0033).
+ * @param hw
+ *   Pointer to structure hns3_hw.
+ * @param desc_num
+ *   Number of command descriptors to send.
+ * @return
+ *   0 on success.
+ */
+static int
+hns3_update_mac_stats(struct hns3_hw *hw, const uint32_t desc_num)
+{
+	uint64_t *data = (uint64_t *)(&hw->mac_stats);
+	struct hns3_cmd_desc *desc;
+	uint64_t *desc_data;
+	uint16_t i, k, n;
+	int ret;
+
+	desc = rte_malloc("hns3_mac_desc",
+			  desc_num * sizeof(struct hns3_cmd_desc), 0);
+	if (desc == NULL) {
+		hns3_err(hw, "Mac_update_stats alloced desc malloc fail");
+		return -ENOMEM;
+	}
+
+	hns3_cmd_setup_basic_desc(desc, HNS3_OPC_STATS_MAC_ALL, true);
+	ret = hns3_cmd_send(hw, desc, desc_num);
+	if (ret) {
+		hns3_err(hw, "Update complete MAC pkt stats fail : %d", ret);
+		rte_free(desc);
+		return ret;
+	}
+
+	for (i = 0; i < desc_num; i++) {
+		/* For special opcode 0034, only the first desc has the head */
+		if (i == 0) {
+			desc_data = (uint64_t *)(&desc[i].data[0]);
+			n = HNS3_RD_FIRST_STATS_NUM;
+		} else {
+			desc_data = (uint64_t *)(&desc[i]);
+			n = HNS3_RD_OTHER_STATS_NUM;
+		}
+
+		for (k = 0; k < n; k++) {
+			*data += rte_le_to_cpu_64(*desc_data);
+			data++;
+			desc_data++;
+		}
+	}
+	rte_free(desc);
+
+	return 0;
+}
+
+/*
+ * Query the number of MAC statistics registers, opcode id: 0x0033.
+ * This command is used before sending the 'query MAC stats' command; the
+ * descriptor number of that command must match the reg_num queried here.
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param desc_num
+ *   Output: descriptor number for the 'query MAC stats' command.
+ * @return
+ *   0 on success.
+ */
+static int
+hns3_mac_query_reg_num(struct rte_eth_dev *dev, uint32_t *desc_num)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_cmd_desc desc;
+	uint32_t *desc_data;
+	uint32_t reg_num;
+	int ret;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_MAC_REG_NUM, true);
+	ret = hns3_cmd_send(hw, &desc, 1);
+	if (ret)
+		return ret;
+
+	/*
+	 * The number of MAC statistics registers provided by the IMP in
+	 * this version.
+	 */
+	desc_data = (uint32_t *)(&desc.data[0]);
+	reg_num = rte_le_to_cpu_32(*desc_data);
+	/*
+	 * The descriptor number of the 'query additional MAC stats' command
+	 * is '1 + (reg_num - 3) / 4 + ((reg_num - 3) % 4 != 0)': the first
+	 * descriptor carries three statistics and each subsequent one four.
+	 * This value is 83 in this version (e.g. reg_num = 331 gives
+	 * 1 + 82 + 0 = 83).
+	 */
+	*desc_num = 1 + ((reg_num - 3) >> 2) +
+		    (uint32_t)(((reg_num - 3) & 0x3) ? 1 : 0);
+
+	return 0;
+}
+
+static int
+hns3_query_update_mac_stats(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	uint32_t desc_num;
+	int ret;
+
+	ret = hns3_mac_query_reg_num(dev, &desc_num);
+	if (ret == 0)
+		ret = hns3_update_mac_stats(hw, desc_num);
+	else
+		hns3_err(hw, "Query mac reg num fail : %d", ret);
+	return ret;
+}
+
+/* Get tqp stats from register */
+static int
+hns3_update_tqp_stats(struct hns3_hw *hw)
+{
+	struct hns3_tqp_stats *stats = &hw->tqp_stats;
+	struct hns3_cmd_desc desc;
+	uint64_t cnt;
+	uint16_t i;
+	int ret;
+
+	for (i = 0; i < hw->tqps_num; i++) {
+		hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_RX_STATUS,
+					  true);
+
+		desc.data[0] = rte_cpu_to_le_32((uint32_t)i &
+						HNS3_QUEUE_ID_MASK);
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			hns3_err(hw, "Failed to query RX No.%d queue stat: %d",
+				 i, ret);
+			return ret;
+		}
+		cnt = rte_le_to_cpu_32(desc.data[1]);
+		stats->rcb_rx_ring_pktnum_rcd += cnt;
+		stats->rcb_rx_ring_pktnum[i] += cnt;
+
+		hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_QUERY_TX_STATUS,
+					  true);
+
+		desc.data[0] = rte_cpu_to_le_32((uint32_t)i &
+						HNS3_QUEUE_ID_MASK);
+		ret = hns3_cmd_send(hw, &desc, 1);
+		if (ret) {
+			hns3_err(hw, "Failed to query TX No.%d queue stat: %d",
+				 i, ret);
+			return ret;
+		}
+		cnt = rte_le_to_cpu_32(desc.data[1]);
+		stats->rcb_tx_ring_pktnum_rcd += cnt;
+		stats->rcb_tx_ring_pktnum[i] += cnt;
+	}
+
+	return 0;
+}
+
+/*
+ * Query tqp tx queue statistics, opcode id: 0x0B03.
+ * Query tqp rx queue statistics, opcode id: 0x0B13.
+ * Get all statistics of a port.
+ * @param eth_dev
+ *   Pointer to Ethernet device.
+ * @param rte_stats
+ *   Pointer to structure rte_eth_stats.
+ * @return
+ *   0 on success.
+ */
+int
+hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tqp_stats *stats = &hw->tqp_stats;
+	struct hns3_rx_queue *rxq;
+	uint64_t cnt;
+	uint64_t num;
+	uint16_t i;
+	int ret;
+
+	/* Update tqp stats by read register */
+	ret = hns3_update_tqp_stats(hw);
+	if (ret) {
+		hns3_err(hw, "Update tqp stats fail : %d", ret);
+		return ret;
+	}
+
+	rte_stats->ipackets  = stats->rcb_rx_ring_pktnum_rcd;
+	rte_stats->opackets  = stats->rcb_tx_ring_pktnum_rcd;
+	rte_stats->rx_nombuf = eth_dev->data->rx_mbuf_alloc_failed;
+
+	num = RTE_MIN(RTE_ETHDEV_QUEUE_STAT_CNTRS, hw->tqps_num);
+	for (i = 0; i < num; i++) {
+		rte_stats->q_ipackets[i] = stats->rcb_rx_ring_pktnum[i];
+		rte_stats->q_opackets[i] = stats->rcb_tx_ring_pktnum[i];
+	}
+
+	num = RTE_MIN(RTE_ETHDEV_QUEUE_STAT_CNTRS, eth_dev->data->nb_rx_queues);
+	for (i = 0; i != num; ++i) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (rxq) {
+			cnt = rxq->errors;
+			rte_stats->q_errors[i] = cnt;
+			rte_stats->ierrors += cnt;
+		}
+	}
+
+	return 0;
+}
+
+void
+hns3_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct hns3_adapter *hns = eth_dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_tqp_stats *stats = &hw->tqp_stats;
+	struct hns3_cmd_desc desc_reset;
+	struct hns3_rx_queue *rxq;
+	struct hns3_tx_queue *txq;
+	uint16_t i;
+	int ret;
+
+	/*
+	 * The per-queue statistics registers are read-clear, so querying
+	 * them here resets the hardware counters.
+	 */
+	for (i = 0; i < hw->tqps_num; i++) {
+		hns3_cmd_setup_basic_desc(&desc_reset, HNS3_OPC_QUERY_RX_STATUS,
+					  true);
+		desc_reset.data[0] = rte_cpu_to_le_32((uint32_t)i &
+						      HNS3_QUEUE_ID_MASK);
+		ret = hns3_cmd_send(hw, &desc_reset, 1);
+		if (ret) {
+			hns3_err(hw, "Failed to reset RX No.%d queue stat: %d",
+				 i, ret);
+		}
+
+		hns3_cmd_setup_basic_desc(&desc_reset, HNS3_OPC_QUERY_TX_STATUS,
+					  true);
+		desc_reset.data[0] = rte_cpu_to_le_32((uint32_t)i &
+						      HNS3_QUEUE_ID_MASK);
+		ret = hns3_cmd_send(hw, &desc_reset, 1);
+		if (ret) {
+			hns3_err(hw, "Failed to reset TX No.%d queue stat: %d",
+				 i, ret);
+		}
+	}
+
+	for (i = 0; i != eth_dev->data->nb_rx_queues; ++i) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (rxq) {
+			rxq->non_vld_descs = 0;
+			rxq->l2_errors = 0;
+			rxq->csum_erros = 0;
+			rxq->pkt_len_errors = 0;
+			rxq->errors = 0;
+		}
+
+		txq = eth_dev->data->tx_queues[i];
+		if (txq)
+			txq->pkt_len_errors = 0;
+	}
+
+	memset(stats, 0, sizeof(struct hns3_tqp_stats));
+}
+
+static void
+hns3_mac_stats_reset(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_mac_stats *mac_stats = &hw->mac_stats;
+	int ret;
+
+	ret = hns3_query_update_mac_stats(dev);
+	if (ret)
+		hns3_err(hw, "Clear Mac stats fail : %d", ret);
+
+	memset(mac_stats, 0, sizeof(struct hns3_mac_stats));
+}
+
+/* This function calculates the number of xstats based on the current config */
+static int
+hns3_xstats_calc_num(struct hns3_adapter *hns3)
+{
+	if (hns3->is_vf)
+		return HNS3_NUM_RESET_XSTATS;
+	else
+		return HNS3_MAX_NUM_STATS;
+}
+
+/*
+ * Retrieve extended (tqp | MAC) statistics of an Ethernet device.
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param xstats
+ *   A pointer to a table of structure of type *rte_eth_xstat*
+ *   to be filled with device statistics ids and values.
+ *   This parameter can be set to NULL if n is 0.
+ * @param n
+ *   The size of the xstats array (number of elements).
+ * @return
+ *   0 on failure, otherwise the number of statistics elements filled in.
+ */
+int
+hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		    unsigned int n)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_mac_stats *mac_stats = &hw->mac_stats;
+	struct hns3_reset_stats *reset_stats = &hw->reset.stats;
+	int count;
+	uint16_t i;
+	char *addr;
+	int ret;
+
+	if (xstats == NULL)
+		return 0;
+
+	count = hns3_xstats_calc_num(hns);
+	if ((int)n < count)
+		return count;
+
+	count = 0;
+
+	if (!hns->is_vf) {
+		/* Update Mac stats */
+		ret = hns3_query_update_mac_stats(dev);
+		if (ret) {
+			hns3_err(hw, "Update Mac stats fail : %d", ret);
+			return 0;
+		}
+
+		/* Get MAC stats from the hw->mac_stats struct */
+		for (i = 0; i < HNS3_NUM_MAC_STATS; i++) {
+			addr = (char *)mac_stats + hns3_mac_strings[i].offset;
+			xstats[count].value = *(uint64_t *)addr;
+			xstats[count].id = count;
+			count++;
+		}
+
+		for (i = 0; i < HNS3_NUM_ERROR_INT_XSTATS; i++) {
+			addr = (char *)&pf->abn_int_stats +
+			       hns3_error_int_stats_strings[i].offset;
+			xstats[count].value = *(uint64_t *)addr;
+			xstats[count].id = count;
+			count++;
+		}
+	}
+
+	/* Get the reset stat */
+	for (i = 0; i < HNS3_NUM_RESET_XSTATS; i++) {
+		addr = (char *)reset_stats + hns3_reset_stats_strings[i].offset;
+		xstats[count].value = *(uint64_t *)addr;
+		xstats[count].id = count;
+		count++;
+	}
+	return count;
+}
+
+/*
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * There is an assumption that 'xstat_names' and 'xstats' arrays are matched
+ * by array index:
+ *  xstats_names[i].name => xstats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_eth_xstat':
+ *  xstats[i].id == i
+ *
+ * This assumption makes key-value pair matching less flexible but simpler.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param xstats_names
+ *   An rte_eth_xstat_name array of at least *size* elements to
+ *   be filled. If set to NULL, the function returns the required number
+ *   of elements.
+ * @param size
+ *   The size of the xstats_names array (number of elements).
+ * @return
+ *   - A positive value lower or equal to size: success. The return value
+ *     is the number of entries filled in the stats table.
+ */
+int
+hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat_name *xstats_names,
+			  __rte_unused unsigned int size)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	int cnt_stats = hns3_xstats_calc_num(hns);
+	uint32_t count = 0;
+	uint32_t i;
+
+	if (xstats_names == NULL)
+		return cnt_stats;
+
+	/* Note: the size limit is checked in rte_eth_xstats_get_names() */
+	if (!hns->is_vf) {
+		/* Get MAC statistics names */
+		for (i = 0; i < HNS3_NUM_MAC_STATS; i++) {
+			snprintf(xstats_names[count].name,
+				 sizeof(xstats_names[count].name),
+				 "%s", hns3_mac_strings[i].name);
+			count++;
+		}
+
+		for (i = 0; i < HNS3_NUM_ERROR_INT_XSTATS; i++) {
+			snprintf(xstats_names[count].name,
+				 sizeof(xstats_names[count].name),
+				 "%s", hns3_error_int_stats_strings[i].name);
+			count++;
+		}
+	}
+	for (i = 0; i < HNS3_NUM_RESET_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", hns3_reset_stats_strings[i].name);
+		count++;
+	}
+
+	return cnt_stats;
+}
+
+/*
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param ids
+ *   A pointer to an ids array passed by the application. It tells which
+ *   statistics values the function should retrieve. This parameter can be
+ *   set to NULL; in this case the function retrieves all available
+ *   statistics, provided size is large enough.
+ * @param values
+ *   A pointer to a table to be filled with device statistics values.
+ * @param size
+ *   The size of the ids array (number of elements).
+ * @return
+ *   - A positive value lower or equal to size: success. The return value
+ *     is the number of entries filled in the stats table.
+ *   - A positive value higher than size: error, the given statistics table
+ *     is too small. The return value corresponds to the size that should
+ *     be given to succeed. The entries in the table are not valid and
+ *     shall not be used by the caller.
+ *   - 0 if both ids and values are NULL.
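+ *
+ * For illustration, a hypothetical caller fetching two specific counters
+ * by id (ids index the same order used by the names array):
+ *
+ *   uint64_t ids[2] = { 0, 3 };
+ *   uint64_t vals[2];
+ *   hns3_dev_xstats_get_by_id(dev, ids, vals, 2);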
+ */
+int
+hns3_dev_xstats_get_by_id(struct rte_eth_dev *dev,
+			  const uint64_t *ids,
+			  uint64_t *values, uint32_t size)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_mac_stats *mac_stats = &hw->mac_stats;
+	struct hns3_reset_stats *reset_stats = &hw->reset.stats;
+	const uint32_t cnt_stats = hns3_xstats_calc_num(hns);
+	uint64_t values_copy[HNS3_MAX_NUM_STATS];
+	uint32_t count = 0;
+	uint16_t i;
+	char *addr;
+	int ret;
+
+	if (ids == NULL && values == NULL)
+		return 0;
+
+	if (ids == NULL && size < cnt_stats)
+		return cnt_stats;
+
+	if (ids == NULL) {
+		/* Update TQP stats by reading registers */
+		ret = hns3_update_tqp_stats(hw);
+		if (ret) {
+			hns3_err(hw, "Update tqp stats fail : %d", ret);
+			return ret;
+		}
+
+		if (!hns->is_vf) {
+			/* Get MAC stats values from hw->mac_stats */
+			for (i = 0; i < HNS3_NUM_MAC_STATS; i++) {
+				addr = (char *)mac_stats +
+				       hns3_mac_strings[i].offset;
+				values[count] = *(uint64_t *)addr;
+				count++;
+			}
+
+			for (i = 0; i < HNS3_NUM_ERROR_INT_XSTATS; i++) {
+				addr = (char *)&pf->abn_int_stats +
+				       hns3_error_int_stats_strings[i].offset;
+				values[count] = *(uint64_t *)addr;
+				count++;
+			}
+		}
+		for (i = 0; i < HNS3_NUM_RESET_XSTATS; i++) {
+			addr = (char *)reset_stats +
+			       hns3_reset_stats_strings[i].offset;
+			values[count] = *(uint64_t *)addr;
+			count++;
+		}
+
+		return count;
+	}
+
+	/* Pass ids as NULL to retrieve all values, then pick the requested ones */
+	(void)hns3_dev_xstats_get_by_id(dev, NULL, values_copy, cnt_stats);
+	for (i = 0; i < size; i++) {
+		if (ids[i] >= cnt_stats) {
+			hns3_err(hw, "id value is invalid");
+			return -EINVAL;
+		}
+		values[i] = values_copy[ids[i]];
+	}
+	return size;
+}
+
+/*
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param xstats_names
+ *   An rte_eth_xstat_name array of at least *size* elements to
+ *   be filled. If set to NULL, the function returns the required number
+ *   of elements.
+ * @param ids
+ *   IDs array given by app to retrieve specific statistics
+ * @param size
+ *   The size of the xstats_names array (number of elements).
+ * @return
+ *   - A positive value lower or equal to size: success. The return value
+ *     is the number of entries filled in the stats table.
+ *   - A positive value higher than size: error, the given statistics table
+ *     is too small. The return value corresponds to the size that should
+ *     be given to succeed. The entries in the table are not valid and
+ *     shall not be used by the caller.
+ */
+int
+hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				const uint64_t *ids, uint32_t size)
+{
+	struct rte_eth_xstat_name xstats_names_copy[HNS3_MAX_NUM_STATS];
+	struct hns3_adapter *hns = dev->data->dev_private;
+	const uint32_t cnt_stats = hns3_xstats_calc_num(hns);
+	uint32_t i, count_name = 0;
+
+	if (xstats_names == NULL)
+		return cnt_stats;
+
+	if (ids == NULL) {
+		/* Note: size >= cnt_stats checked upstream
+		 * in rte_eth_xstats_get_names()
+		 */
+		if (!hns->is_vf) {
+			for (i = 0; i < HNS3_NUM_MAC_STATS; i++) {
+				snprintf(xstats_names[count_name].name,
+					 sizeof(xstats_names[count_name].name),
+					 "%s", hns3_mac_strings[i].name);
+				count_name++;
+			}
+			for (i = 0; i < HNS3_NUM_ERROR_INT_XSTATS; i++) {
+				snprintf(xstats_names[count_name].name,
+					 sizeof(xstats_names[count_name].name),
+					 "%s",
+					 hns3_error_int_stats_strings[i].name);
+				count_name++;
+			}
+		}
+		for (i = 0; i < HNS3_NUM_RESET_XSTATS; i++) {
+			snprintf(xstats_names[count_name].name,
+				 sizeof(xstats_names[count_name].name),
+				 "%s", hns3_reset_stats_strings[i].name);
+			count_name++;
+		}
+
+		return cnt_stats;
+	}
+
+	/* Pass ids as NULL to retrieve all names, then pick the requested ones */
+	(void)hns3_dev_xstats_get_names_by_id(dev, xstats_names_copy, NULL,
+					      cnt_stats);
+
+	for (i = 0; i < size; i++) {
+		if (ids[i] >= cnt_stats) {
+			PMD_INIT_LOG(ERR, "id value is invalid");
+			return -EINVAL;
+		}
+		snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
+			 "%s", xstats_names_copy[ids[i]].name);
+	}
+	return size;
+}
+
+void
+hns3_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct hns3_pf *pf = &hns->pf;
+
+	/* The VF does not support MAC and error interrupt stats */
+	hns3_stats_reset(dev);
+	memset(&hns->hw.reset.stats, 0, sizeof(struct hns3_reset_stats));
+
+	if (hns->is_vf)
+		return;
+
+	/* HW registers are cleared on read */
+	hns3_mac_stats_reset(dev);
+	memset(&pf->abn_int_stats, 0, sizeof(struct hns3_err_msix_intr_stats));
+}
diff --git a/drivers/net/hns3/hns3_stats.h b/drivers/net/hns3/hns3_stats.h
new file mode 100644
index 0000000..3eac44c
--- /dev/null
+++ b/drivers/net/hns3/hns3_stats.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_STATS_H_
+#define _HNS3_STATS_H_
+
+/* stats macro */
+#define HNS3_MAC_CMD_NUM		21
+#define HNS3_RD_FIRST_STATS_NUM		2
+#define HNS3_RD_OTHER_STATS_NUM		4
+#define HNS3_MAX_TQP_NUM_PER_FUNC	64
+
+/* TQP stats */
+struct hns3_tqp_stats {
+	uint64_t rcb_tx_ring_pktnum_rcd; /* Total num of transmitted packets */
+	uint64_t rcb_rx_ring_pktnum_rcd; /* Total num of received packets */
+	uint64_t rcb_tx_ring_pktnum[HNS3_MAX_TQP_NUM_PER_FUNC];
+	uint64_t rcb_rx_ring_pktnum[HNS3_MAX_TQP_NUM_PER_FUNC];
+};
+
+/* mac stats, Statistics counters collected by the MAC, opcode id: 0x0032 */
+struct hns3_mac_stats {
+	uint64_t mac_tx_mac_pause_num;
+	uint64_t mac_rx_mac_pause_num;
+	uint64_t mac_tx_pfc_pri0_pkt_num;
+	uint64_t mac_tx_pfc_pri1_pkt_num;
+	uint64_t mac_tx_pfc_pri2_pkt_num;
+	uint64_t mac_tx_pfc_pri3_pkt_num;
+	uint64_t mac_tx_pfc_pri4_pkt_num;
+	uint64_t mac_tx_pfc_pri5_pkt_num;
+	uint64_t mac_tx_pfc_pri6_pkt_num;
+	uint64_t mac_tx_pfc_pri7_pkt_num;
+	uint64_t mac_rx_pfc_pri0_pkt_num;
+	uint64_t mac_rx_pfc_pri1_pkt_num;
+	uint64_t mac_rx_pfc_pri2_pkt_num;
+	uint64_t mac_rx_pfc_pri3_pkt_num;
+	uint64_t mac_rx_pfc_pri4_pkt_num;
+	uint64_t mac_rx_pfc_pri5_pkt_num;
+	uint64_t mac_rx_pfc_pri6_pkt_num;
+	uint64_t mac_rx_pfc_pri7_pkt_num;
+	uint64_t mac_tx_total_pkt_num;
+	uint64_t mac_tx_total_oct_num;
+	uint64_t mac_tx_good_pkt_num;
+	uint64_t mac_tx_bad_pkt_num;
+	uint64_t mac_tx_good_oct_num;
+	uint64_t mac_tx_bad_oct_num;
+	uint64_t mac_tx_uni_pkt_num;
+	uint64_t mac_tx_multi_pkt_num;
+	uint64_t mac_tx_broad_pkt_num;
+	uint64_t mac_tx_undersize_pkt_num;
+	uint64_t mac_tx_oversize_pkt_num;
+	uint64_t mac_tx_64_oct_pkt_num;
+	uint64_t mac_tx_65_127_oct_pkt_num;
+	uint64_t mac_tx_128_255_oct_pkt_num;
+	uint64_t mac_tx_256_511_oct_pkt_num;
+	uint64_t mac_tx_512_1023_oct_pkt_num;
+	uint64_t mac_tx_1024_1518_oct_pkt_num;
+	uint64_t mac_tx_1519_2047_oct_pkt_num;
+	uint64_t mac_tx_2048_4095_oct_pkt_num;
+	uint64_t mac_tx_4096_8191_oct_pkt_num;
+	uint64_t rsv0;
+	uint64_t mac_tx_8192_9216_oct_pkt_num;
+	uint64_t mac_tx_9217_12287_oct_pkt_num;
+	uint64_t mac_tx_12288_16383_oct_pkt_num;
+	uint64_t mac_tx_1519_max_good_oct_pkt_num;
+	uint64_t mac_tx_1519_max_bad_oct_pkt_num;
+
+	uint64_t mac_rx_total_pkt_num;
+	uint64_t mac_rx_total_oct_num;
+	uint64_t mac_rx_good_pkt_num;
+	uint64_t mac_rx_bad_pkt_num;
+	uint64_t mac_rx_good_oct_num;
+	uint64_t mac_rx_bad_oct_num;
+	uint64_t mac_rx_uni_pkt_num;
+	uint64_t mac_rx_multi_pkt_num;
+	uint64_t mac_rx_broad_pkt_num;
+	uint64_t mac_rx_undersize_pkt_num;
+	uint64_t mac_rx_oversize_pkt_num;
+	uint64_t mac_rx_64_oct_pkt_num;
+	uint64_t mac_rx_65_127_oct_pkt_num;
+	uint64_t mac_rx_128_255_oct_pkt_num;
+	uint64_t mac_rx_256_511_oct_pkt_num;
+	uint64_t mac_rx_512_1023_oct_pkt_num;
+	uint64_t mac_rx_1024_1518_oct_pkt_num;
+	uint64_t mac_rx_1519_2047_oct_pkt_num;
+	uint64_t mac_rx_2048_4095_oct_pkt_num;
+	uint64_t mac_rx_4096_8191_oct_pkt_num;
+	uint64_t rsv1;
+	uint64_t mac_rx_8192_9216_oct_pkt_num;
+	uint64_t mac_rx_9217_12287_oct_pkt_num;
+	uint64_t mac_rx_12288_16383_oct_pkt_num;
+	uint64_t mac_rx_1519_max_good_oct_pkt_num;
+	uint64_t mac_rx_1519_max_bad_oct_pkt_num;
+
+	uint64_t mac_tx_fragment_pkt_num;
+	uint64_t mac_tx_undermin_pkt_num;
+	uint64_t mac_tx_jabber_pkt_num;
+	uint64_t mac_tx_err_all_pkt_num;
+	uint64_t mac_tx_from_app_good_pkt_num;
+	uint64_t mac_tx_from_app_bad_pkt_num;
+	uint64_t mac_rx_fragment_pkt_num;
+	uint64_t mac_rx_undermin_pkt_num;
+	uint64_t mac_rx_jabber_pkt_num;
+	uint64_t mac_rx_fcs_err_pkt_num;
+	uint64_t mac_rx_send_app_good_pkt_num;
+	uint64_t mac_rx_send_app_bad_pkt_num;
+	uint64_t mac_tx_pfc_pause_pkt_num;
+	uint64_t mac_rx_pfc_pause_pkt_num;
+	uint64_t mac_tx_ctrl_pkt_num;
+	uint64_t mac_rx_ctrl_pkt_num;
+};
+
+/* store statistics names and its offset in stats structure */
+struct hns3_xstats_name_offset {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	uint32_t offset;
+};
+
+#define HNS3_MAC_STATS_OFFSET(f) \
+	(offsetof(struct hns3_mac_stats, f))
+
+#define HNS3_ERR_INT_STATS_FIELD_OFFSET(f) \
+	(offsetof(struct hns3_err_msix_intr_stats, f))
+
+struct hns3_reset_stats;
+#define HNS3_RESET_STATS_FIELD_OFFSET(f) \
+	(offsetof(struct hns3_reset_stats, f))
+
+int hns3_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *rte_stats);
+int hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+			unsigned int n);
+void hns3_dev_xstats_reset(struct rte_eth_dev *dev);
+int hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
+			      struct rte_eth_xstat_name *xstats_names,
+			      unsigned int size);
+int hns3_dev_xstats_get_by_id(struct rte_eth_dev *dev,
+			      const uint64_t *ids,
+			      uint64_t *values,
+			      uint32_t size);
+int hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
+				    struct rte_eth_xstat_name *xstats_names,
+				    const uint64_t *ids,
+				    uint32_t size);
+void hns3_stats_reset(struct rte_eth_dev *dev);
+#endif /* _HNS3_STATS_H_ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 20/22] net/hns3: add reset related process for hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (18 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 19/22] net/hns3: add stats related ops " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 21/22] net/hns3: add multiple process support " Wei Hu (Xavier)
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds the reset process for the hns3 PMD driver.
Any of the following three scenarios triggers the reset process,
and the driver settings are restored after the reset completes
successfully (a sketch of the common recovery flow follows the list):
1. Receive a reset interrupt
2. PF receives a hardware error interrupt
3. VF is notified by PF to reset
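
For illustration only, a minimal sketch of how the per-function hooks
registered in hns3_reset_ops below are expected to be driven by the
common reset service (the driver loop name is hypothetical; only the
ops callbacks are taken from this patch):

    static int reset_flow_sketch(struct hns3_adapter *hns)
    {
    	const struct hns3_reset_ops *ops = hns->hw.reset.ops;
    	int ret;

    	ops->stop_service(hns);		/* quiesce datapath and alarms */
    	ret = ops->prepare_reset(hns);	/* request HW reset, disable cmd */
    	if (ret)
    		return ret;
    	ret = ops->wait_hardware_ready(hns); /* poll until HW is ready */
    	if (ret)
    		return ret;		/* -EAGAIN reschedules the service */
    	ret = ops->reinit_dev(hns);	/* re-init cmd queue and hardware */
    	if (ret)
    		return ret;
    	ret = ops->restore_conf(hns);	/* replay MAC/VLAN/promisc/fdir */
    	if (ret)
    		return ret;
    	return ops->start_service(hns);	/* resume datapath and service */
    }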

Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_cmd.c       |  31 ++
 drivers/net/hns3/hns3_ethdev.c    | 625 +++++++++++++++++++++++++++++++++++++-
 drivers/net/hns3/hns3_ethdev.h    |  13 +
 drivers/net/hns3/hns3_ethdev_vf.c | 426 +++++++++++++++++++++++++-
 drivers/net/hns3/hns3_intr.c      | 510 +++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_intr.h      |  11 +
 drivers/net/hns3/hns3_mbx.c       |  14 +
 7 files changed, 1602 insertions(+), 28 deletions(-)

diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 8c0bf8d..82eedf1 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -33,6 +33,7 @@
 #include "hns3_stats.h"
 #include "hns3_ethdev.h"
 #include "hns3_regs.h"
+#include "hns3_intr.h"
 #include "hns3_logs.h"
 
 #define hns3_is_csq(ring) ((ring)->flag & HNS3_TYPE_CSQ)
@@ -231,6 +232,22 @@ hns3_cmd_csq_clean(struct hns3_hw *hw)
 		hns3_err(hw, "wrong cmd head (%d, %d-%d)", head,
 			    csq->next_to_use, csq->next_to_clean);
 		rte_atomic16_set(&hw->reset.disable_cmd, 1);
+		if (hns->is_vf) {
+			global = hns3_read_dev(hw, HNS3_VF_RST_ING);
+			fun_rst = hns3_read_dev(hw, HNS3_FUN_RST_ING);
+			hns3_err(hw, "Delayed VF reset global: %x fun_rst: %x",
+				 global, fun_rst);
+			hns3_atomic_set_bit(HNS3_VF_RESET, &hw->reset.pending);
+		} else {
+			global = hns3_read_dev(hw, HNS3_GLOBAL_RESET_REG);
+			fun_rst = hns3_read_dev(hw, HNS3_FUN_RST_ING);
+			hns3_err(hw, "Delayed IMP reset global: %x fun_rst: %x",
+				 global, fun_rst);
+			hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
+		}
+
+		hns3_schedule_delayed_reset(hns);
+
 		return -EIO;
 	}
 
@@ -327,6 +344,11 @@ static int hns3_cmd_poll_reply(struct hns3_hw *hw)
 			return -EBUSY;
 		}
 
+		if (is_reset_pending(hns)) {
+			hns3_err(hw, "Don't wait for reply because of reset pending");
+			return -EIO;
+		}
+
 		rte_delay_us(1);
 		timeout++;
 	} while (timeout < hw->cmq.tx_timeout);
@@ -482,6 +504,15 @@ hns3_cmd_init(struct hns3_hw *hw)
 	rte_spinlock_unlock(&hw->cmq.crq.lock);
 	rte_spinlock_unlock(&hw->cmq.csq.lock);
 
+	/*
+	 * Check whether a new reset is pending, because a higher level
+	 * reset may be triggered while a lower level reset is in progress.
+	 */
+	if (is_reset_pending(HNS3_DEV_HW_TO_ADAPTER(hw))) {
+		PMD_INIT_LOG(ERR, "New reset pending, keep disable cmd");
+		ret = -EBUSY;
+		goto err_cmd_init;
+	}
 	rte_atomic16_clear(&hw->reset.disable_cmd);
 
 	ret = hns3_cmd_query_firmware_version(hw, &hw->fw_version);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 22d7e61..9a4c560 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -61,6 +61,17 @@
 #define HNS3_FILTER_FE_INGRESS		(HNS3_FILTER_FE_NIC_INGRESS_B \
 					| HNS3_FILTER_FE_ROCE_INGRESS_B)
 
+/* Reset related Registers */
+#define HNS3_GLOBAL_RESET_BIT		0
+#define HNS3_CORE_RESET_BIT		1
+#define HNS3_IMP_RESET_BIT		2
+#define HNS3_FUN_RST_ING_B		0
+
+#define HNS3_VECTOR0_IMP_RESET_INT_B	1
+
+#define HNS3_RESET_WAIT_MS	100
+#define HNS3_RESET_WAIT_CNT	200
+
 int hns3_logtype_init;
 int hns3_logtype_driver;
 
@@ -71,6 +82,8 @@ enum hns3_evt_cause {
 	HNS3_VECTOR0_EVENT_OTHER,
 };
 
+static enum hns3_reset_level hns3_get_reset_level(struct hns3_adapter *hns,
+						 uint64_t *levels);
 static int hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int hns3_vlan_pvid_configure(struct hns3_adapter *hns, uint16_t pvid,
 				    int on);
@@ -108,12 +121,34 @@ hns3_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
 	 * from H/W just for the mailbox.
 	 */
 	if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_int_stats) { /* IMP */
+		rte_atomic16_set(&hw->reset.disable_cmd, 1);
+		hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
+		val = BIT(HNS3_VECTOR0_IMPRESET_INT_B);
+		if (clearval) {
+			hw->reset.stats.imp_cnt++;
+			hns3_warn(hw, "IMP reset detected, clear reset status");
+		} else {
+			hns3_schedule_delayed_reset(hns);
+			hns3_warn(hw, "IMP reset detected, don't clear reset status");
+		}
+
 		ret = HNS3_VECTOR0_EVENT_RST;
 		goto out;
 	}
 
 	/* Global reset */
 	if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_int_stats) {
+		rte_atomic16_set(&hw->reset.disable_cmd, 1);
+		hns3_atomic_set_bit(HNS3_GLOBAL_RESET, &hw->reset.pending);
+		val = BIT(HNS3_VECTOR0_GLOBALRESET_INT_B);
+		if (clearval) {
+			hw->reset.stats.global_cnt++;
+			hns3_warn(hw, "Global reset detected, clear reset status");
+		} else {
+			hns3_schedule_delayed_reset(hns);
+			hns3_warn(hw, "Global reset detected, don't clear reset status");
+		}
+
 		ret = HNS3_VECTOR0_EVENT_RST;
 		goto out;
 	}
@@ -187,6 +222,15 @@ hns3_interrupt_handler(void *param)
 
 	event_cause = hns3_check_event_cause(hns, &clearval);
 
+	/* vector 0 interrupt is shared with reset and mailbox source events. */
+	if (event_cause == HNS3_VECTOR0_EVENT_ERR) {
+		hns3_handle_msix_error(hns, &hw->reset.request);
+		hns3_schedule_reset(hns);
+	} else if (event_cause == HNS3_VECTOR0_EVENT_RST)
+		hns3_schedule_reset(hns);
+	else
+		hns3_err(hw, "Received unknown event");
+
 	hns3_clear_event_cause(hw, event_cause, clearval);
 	/* Enable interrupt if it is not cause by reset */
 	hns3_pf_enable_irq0(hw);
@@ -261,6 +305,32 @@ hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id,
 }
 
 static int
+hns3_restore_vlan_table(struct hns3_adapter *hns)
+{
+	struct hns3_user_vlan_table *vlan_entry;
+	struct hns3_pf *pf = &hns->pf;
+	uint16_t vlan_id;
+	int ret = 0;
+
+	if (pf->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_ENABLE) {
+		ret = hns3_vlan_pvid_configure(hns, pf->port_base_vlan_cfg.pvid,
+					       1);
+		return ret;
+	}
+
+	LIST_FOREACH(vlan_entry, &pf->vlan_list, next) {
+		if (vlan_entry->hd_tbl_status) {
+			vlan_id = vlan_entry->vlan_id;
+			ret = hns3_set_port_vlan_filter(hns, vlan_id, 1);
+			if (ret)
+				break;
+		}
+	}
+
+	return ret;
+}
+
+static int
 hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
 {
 	struct hns3_pf *pf = &hns->pf;
@@ -837,7 +907,15 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	init_port_base_vlan_info(hw);
+	/*
+	 * This function can be called in both the initialization and the
+	 * reset process. When called in the reset process, the hardware has
+	 * been reset successfully and we need to restore the configuration
+	 * to ensure it remains unchanged before and after the reset.
+	 */
+	if (rte_atomic16_read(&hw->reset.resetting) == 0)
+		init_port_base_vlan_info(hw);
 
 	ret = hns3_enable_vlan_filter(hns, true);
 	if (ret) {
@@ -852,22 +930,85 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_pvid_configure(hns, HNS3_INVLID_PVID, 0);
+	/*
+	 * When in the reinit dev stage of the reset process, the following
+	 * vlan-related configurations may differ from those at initialization,
+	 * we will restore configurations to hardware in hns3_restore_vlan_table
+	 * and hns3_restore_vlan_conf later.
+	 */
+	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
+		ret = hns3_vlan_pvid_configure(hns, HNS3_INVLID_PVID, 0);
+		if (ret) {
+			hns3_err(hw, "pvid set fail in pf, ret =%d", ret);
+			return ret;
+		}
+
+		ret = hns3_en_hw_strip_rxvtag(hns, false);
+		if (ret) {
+			hns3_err(hw, "rx strip configure fail in pf, ret =%d",
+				 ret);
+			return ret;
+		}
+	}
+
+	return hns3_default_vlan_config(hns);
+}
+
+static int
+hns3_restore_vlan_conf(struct hns3_adapter *hns)
+{
+	struct hns3_pf *pf = &hns->pf;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_set_vlan_rx_offload_cfg(hns, &pf->vtag_config.rx_vcfg);
 	if (ret) {
-		hns3_err(hw, "pvid set fail in pf, ret =%d", ret);
+		hns3_err(hw, "hns3 restore vlan rx conf fail, ret =%d", ret);
 		return ret;
 	}
 
-	ret = hns3_en_hw_strip_rxvtag(hns, false);
+	ret = hns3_set_vlan_tx_offload_cfg(hns, &pf->vtag_config.tx_vcfg);
+	if (ret)
+		hns3_err(hw, "hns3 restore vlan tx conf fail, ret =%d", ret);
+
+	return ret;
+}
+
+static int
+hns3_dev_configure_vlan(struct rte_eth_dev *dev)
+{
+	struct hns3_adapter *hns = dev->data->dev_private;
+	struct rte_eth_dev_data *data = dev->data;
+	struct rte_eth_txmode *txmode;
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	txmode = &data->dev_conf.txmode;
+	if (txmode->hw_vlan_reject_tagged || txmode->hw_vlan_reject_untagged)
+		hns3_warn(hw,
+			  "hw_vlan_reject_tagged or hw_vlan_reject_untagged "
+			  "configuration is not supported! Ignore these two "
+			  "parameters: hw_vlan_reject_tagged(%d), "
+			  "hw_vlan_reject_untagged(%d)",
+			  txmode->hw_vlan_reject_tagged,
+			  txmode->hw_vlan_reject_untagged);
+
+	/* Apply vlan offload setting */
+	ret = hns3_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
 	if (ret) {
-		hns3_err(hw, "rx strip configure fail in pf, ret =%d",
-			 ret);
+		hns3_err(hw, "dev config vlan Strip failed, ret =%d", ret);
 		return ret;
 	}
 
-	return hns3_default_vlan_config(hns);
-}
+	/* Apply pvid setting */
+	ret = hns3_vlan_pvid_set(dev, txmode->pvid,
+				 txmode->hw_vlan_insert_pvid);
+	if (ret)
+		hns3_err(hw, "dev config vlan pvid(%d) failed, ret =%d",
+			 txmode->pvid, ret);
 
+	return ret;
+}
 
 static int
 hns3_config_tso(struct hns3_hw *hw, unsigned int tso_mss_min,
@@ -3480,6 +3621,19 @@ hns3_dev_allmulticast_disable(struct rte_eth_dev *dev)
 }
 
 static int
+hns3_dev_promisc_restore(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	bool en_mc_pmc;
+	bool en_uc_pmc;
+
+	en_uc_pmc = hw->data->promiscuous == 1;
+	en_mc_pmc = hw->data->all_multicast == 1;
+
+	return hns3_set_promisc_mode(hw, en_uc_pmc, en_mc_pmc);
+}
+
+static int
 hns3_get_sfp_speed(struct hns3_hw *hw, uint32_t *speed)
 {
 	struct hns3_sfp_speed_cmd *resp;
@@ -3871,6 +4025,8 @@ hns3_dev_start(struct rte_eth_dev *eth_dev)
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+	if (rte_atomic16_read(&hw->reset.resetting))
+		return -EBUSY;
 	rte_spinlock_lock(&hw->lock);
 	hw->adapter_state = HNS3_NIC_STARTING;
 
@@ -3902,8 +4058,11 @@ hns3_do_stop(struct hns3_adapter *hns)
 		return ret;
 	hw->mac.link_status = ETH_LINK_DOWN;
 
-	hns3_configure_all_mac_addr(hns, true);
-	reset_queue = true;
+	if (rte_atomic16_read(&hw->reset.disable_cmd) == 0) {
+		hns3_configure_all_mac_addr(hns, true);
+		reset_queue = true;
+	} else
+		reset_queue = false;
 	hw->mac.default_addr_setted = false;
 	return hns3_stop_queues(hns, reset_queue);
 }
@@ -3921,9 +4080,11 @@ hns3_dev_stop(struct rte_eth_dev *eth_dev)
 	rte_wmb();
 
 	rte_spinlock_lock(&hw->lock);
-	hns3_do_stop(hns);
-	hns3_dev_release_mbufs(hns);
-	hw->adapter_state = HNS3_NIC_CONFIGURED;
+	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
+		hns3_do_stop(hns);
+		hns3_dev_release_mbufs(hns);
+		hw->adapter_state = HNS3_NIC_CONFIGURED;
+	}
 	rte_spinlock_unlock(&hw->lock);
 }
 
@@ -3937,6 +4098,8 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 		hns3_dev_stop(eth_dev);
 
 	hw->adapter_state = HNS3_NIC_CLOSING;
+	hns3_reset_abort(hns);
+	hw->adapter_state = HNS3_NIC_CLOSED;
 	rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 
 	hns3_configure_all_mc_mac_addr(hns, true);
@@ -3944,7 +4107,8 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 	hns3_vlan_txvlan_cfg(hns, HNS3_PORT_BASE_VLAN_DISABLE, 0);
 	hns3_uninit_pf(eth_dev);
 	hns3_free_all_queues(eth_dev);
-	hw->adapter_state = HNS3_NIC_CLOSED;
+	rte_free(hw->reset.wait_data);
+	hns3_warn(hw, "Close port %d finished", hw->data->port_id);
 }
 
 static int
@@ -4133,6 +4297,414 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	return 0;
 }
 
+static int
+hns3_reinit_dev(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_cmd_init(hw);
+	if (ret) {
+		hns3_err(hw, "Failed to init cmd: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_reset_all_queues(hns);
+	if (ret) {
+		hns3_err(hw, "Failed to reset all queues: %d", ret);
+		goto err_init;
+	}
+
+	ret = hns3_init_hardware(hns);
+	if (ret) {
+		hns3_err(hw, "Failed to init hardware: %d", ret);
+		goto err_init;
+	}
+
+	ret = hns3_enable_hw_error_intr(hns, true);
+	if (ret) {
+		hns3_err(hw, "fail to enable hw error interrupts: %d",
+			     ret);
+		goto err_mac_init;
+	}
+	hns3_info(hw, "Reset done, driver initialization finished.");
+
+	return 0;
+
+err_mac_init:
+	hns3_uninit_umv_space(hw);
+err_init:
+	hns3_cmd_uninit(hw);
+
+	return ret;
+}
+
+static bool
+is_pf_reset_done(struct hns3_hw *hw)
+{
+	uint32_t val, reg, reg_bit;
+
+	switch (hw->reset.level) {
+	case HNS3_IMP_RESET:
+		reg = HNS3_GLOBAL_RESET_REG;
+		reg_bit = HNS3_IMP_RESET_BIT;
+		break;
+	case HNS3_GLOBAL_RESET:
+		reg = HNS3_GLOBAL_RESET_REG;
+		reg_bit = HNS3_GLOBAL_RESET_BIT;
+		break;
+	case HNS3_FUNC_RESET:
+		reg = HNS3_FUN_RST_ING;
+		reg_bit = HNS3_FUN_RST_ING_B;
+		break;
+	case HNS3_FLR_RESET:
+	default:
+		hns3_err(hw, "Wait for unsupported reset level: %d",
+			 hw->reset.level);
+		return true;
+	}
+	val = hns3_read_dev(hw, reg);
+	if (hns3_get_bit(val, reg_bit))
+		return false;
+	else
+		return true;
+}
+
+bool
+hns3_is_reset_pending(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_reset_level reset;
+
+	hns3_check_event_cause(hns, NULL);
+	reset = hns3_get_reset_level(hns, &hw->reset.pending);
+	if (hw->reset.level != HNS3_NONE_RESET && hw->reset.level < reset) {
+		hns3_warn(hw, "High level reset %d is pending", reset);
+		return true;
+	}
+	reset = hns3_get_reset_level(hns, &hw->reset.request);
+	if (hw->reset.level != HNS3_NONE_RESET && hw->reset.level < reset) {
+		hns3_warn(hw, "High level reset %d is request", reset);
+		return true;
+	}
+	return false;
+}
+
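+/*
+ * Wait for the hardware to finish resetting. The first call arms an EAL
+ * alarm and returns -EAGAIN; hns3_wait_callback() later records
+ * HNS3_WAIT_SUCCESS or HNS3_WAIT_TIMEOUT in wait_data and reschedules the
+ * reset service, which calls this function again to pick up the result.
+ */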
+static int
+hns3_wait_hardware_ready(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_wait_data *wait_data = hw->reset.wait_data;
+	struct timeval tv;
+
+	if (wait_data->result == HNS3_WAIT_SUCCESS)
+		return 0;
+	else if (wait_data->result == HNS3_WAIT_TIMEOUT) {
+		gettimeofday(&tv, NULL);
+		hns3_warn(hw, "Reset step4 hardware not ready after reset time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		return -ETIME;
+	} else if (wait_data->result == HNS3_WAIT_REQUEST)
+		return -EAGAIN;
+
+	wait_data->hns = hns;
+	wait_data->check_completion = is_pf_reset_done;
+	wait_data->end_ms = (uint64_t)HNS3_RESET_WAIT_CNT *
+				      HNS3_RESET_WAIT_MS + get_timeofday_ms();
+	wait_data->interval = HNS3_RESET_WAIT_MS * USEC_PER_MSEC;
+	wait_data->count = HNS3_RESET_WAIT_CNT;
+	wait_data->result = HNS3_WAIT_REQUEST;
+	rte_eal_alarm_set(wait_data->interval, hns3_wait_callback, wait_data);
+	return -EAGAIN;
+}
+
+static int
+hns3_func_reset_cmd(struct hns3_hw *hw, int func_id)
+{
+	struct hns3_cmd_desc desc;
+	struct hns3_reset_cmd *req = (struct hns3_reset_cmd *)desc.data;
+
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_RST_TRIGGER, false);
+	hns3_set_bit(req->mac_func_reset, HNS3_CFG_RESET_FUNC_B, 1);
+	req->fun_reset_vfid = func_id;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_imp_reset_cmd(struct hns3_hw *hw)
+{
+	struct hns3_cmd_desc desc;
+
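+	/*
+	 * The raw opcode 0xFFFE with magic payload 0xeedd forms the firmware
+	 * command used here to request an IMP reset.
+	 */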
+	hns3_cmd_setup_basic_desc(&desc, 0xFFFE, false);
+	desc.data[0] = 0xeedd;
+
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static void
+hns3_msix_process(struct hns3_adapter *hns, enum hns3_reset_level reset_level)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct timeval tv;
+	uint32_t val;
+
+	gettimeofday(&tv, NULL);
+	if (hns3_read_dev(hw, HNS3_GLOBAL_RESET_REG) ||
+	    hns3_read_dev(hw, HNS3_FUN_RST_ING)) {
+		hns3_warn(hw, "Don't process msix during resetting time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		return;
+	}
+
+	switch (reset_level) {
+	case HNS3_IMP_RESET:
+		hns3_imp_reset_cmd(hw);
+		hns3_warn(hw, "IMP Reset requested time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		break;
+	case HNS3_GLOBAL_RESET:
+		val = hns3_read_dev(hw, HNS3_GLOBAL_RESET_REG);
+		hns3_set_bit(val, HNS3_GLOBAL_RESET_BIT, 1);
+		hns3_write_dev(hw, HNS3_GLOBAL_RESET_REG, val);
+		hns3_warn(hw, "Global Reset requested time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		break;
+	case HNS3_FUNC_RESET:
+		hns3_warn(hw, "PF Reset requested time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		/* schedule again to check later */
+		hns3_atomic_set_bit(HNS3_FUNC_RESET, &hw->reset.pending);
+		hns3_schedule_reset(hns);
+		break;
+	default:
+		hns3_warn(hw, "Unsupported reset level: %d", reset_level);
+		return;
+	}
+	hns3_atomic_clear_bit(reset_level, &hw->reset.request);
+}
+
+static enum hns3_reset_level
+hns3_get_reset_level(struct hns3_adapter *hns, uint64_t *levels)
+{
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_reset_level reset_level = HNS3_NONE_RESET;
+
+	/* Return the highest priority reset level amongst all */
+	if (hns3_atomic_test_bit(HNS3_IMP_RESET, levels))
+		reset_level = HNS3_IMP_RESET;
+	else if (hns3_atomic_test_bit(HNS3_GLOBAL_RESET, levels))
+		reset_level = HNS3_GLOBAL_RESET;
+	else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
+		reset_level = HNS3_FUNC_RESET;
+	else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
+		reset_level = HNS3_FLR_RESET;
+
+	if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
+		return HNS3_NONE_RESET;
+
+	return reset_level;
+}
+
+static int
+hns3_prepare_reset(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	uint32_t reg_val;
+	int ret;
+
+	switch (hw->reset.level) {
+	case HNS3_FUNC_RESET:
+		ret = hns3_func_reset_cmd(hw, 0);
+		if (ret)
+			return ret;
+
+		/*
+		 * After performing a PF reset, there is no need to do any
+		 * mailbox handling or send any command to firmware, because
+		 * mailbox handling and firmware commands are only valid
+		 * after hns3_cmd_init is called again.
+		 */
+		rte_atomic16_set(&hw->reset.disable_cmd, 1);
+		hw->reset.stats.request_cnt++;
+		break;
+	case HNS3_IMP_RESET:
+		reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
+		hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val |
+			       BIT(HNS3_VECTOR0_IMP_RESET_INT_B));
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int
+hns3_set_rst_done(struct hns3_hw *hw)
+{
+	struct hns3_pf_rst_done_cmd *req;
+	struct hns3_cmd_desc desc;
+
+	req = (struct hns3_pf_rst_done_cmd *)desc.data;
+	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_PF_RST_DONE, false);
+	req->pf_rst_done |= HNS3_PF_RESET_DONE_BIT;
+	return hns3_cmd_send(hw, &desc, 1);
+}
+
+static int
+hns3_stop_service(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = &rte_eth_devices[hw->data->port_id];
+	rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
+	hw->mac.link_status = ETH_LINK_DOWN;
+
+	hns3_set_rxtx_function(eth_dev);
+	rte_wmb();
+	/* Disable datapath on secondary process. */
+	hns3_mp_req_stop_rxtx(eth_dev);
+	rte_delay_ms(hw->tqps_num);
+
+	rte_spinlock_lock(&hw->lock);
+	if (hns->hw.adapter_state == HNS3_NIC_STARTED ||
+	    hw->adapter_state == HNS3_NIC_STOPPING) {
+		hns3_do_stop(hns);
+		hw->reset.mbuf_deferred_free = true;
+	} else
+		hw->reset.mbuf_deferred_free = false;
+
+	/*
+	 * It is cumbersome for hardware to pick-and-choose entries for deletion
+	 * from table space. Hence, for function reset software intervention is
+	 * required to delete the entries.
+	 */
+	if (rte_atomic16_read(&hw->reset.disable_cmd) == 0)
+		hns3_configure_all_mc_mac_addr(hns, true);
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static int
+hns3_start_service(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_dev *eth_dev;
+
+	if (hw->reset.level == HNS3_IMP_RESET ||
+	    hw->reset.level == HNS3_GLOBAL_RESET)
+		hns3_set_rst_done(hw);
+	eth_dev = &rte_eth_devices[hw->data->port_id];
+	hns3_set_rxtx_function(eth_dev);
+	hns3_mp_req_start_rxtx(eth_dev);
+	hns3_service_handler(eth_dev);
+	return 0;
+}
+
+static int
+hns3_restore_conf(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3_configure_all_mac_addr(hns, false);
+	if (ret)
+		return ret;
+
+	ret = hns3_configure_all_mc_mac_addr(hns, false);
+	if (ret)
+		goto err_mc_mac;
+
+	ret = hns3_dev_promisc_restore(hns);
+	if (ret)
+		goto err_promisc;
+
+	ret = hns3_restore_vlan_table(hns);
+	if (ret)
+		goto err_promisc;
+
+	ret = hns3_restore_vlan_conf(hns);
+	if (ret)
+		goto err_promisc;
+
+	ret = hns3_restore_all_fdir_filter(hns);
+	if (ret)
+		goto err_promisc;
+
+	if (hns->hw.adapter_state == HNS3_NIC_STARTED) {
+		ret = hns3_do_start(hns, false);
+		if (ret)
+			goto err_promisc;
+		hns3_info(hw, "hns3 dev restart successful!");
+	} else if (hw->adapter_state == HNS3_NIC_STOPPING)
+		hw->adapter_state = HNS3_NIC_CONFIGURED;
+	return 0;
+
+err_promisc:
+	hns3_configure_all_mc_mac_addr(hns, true);
+err_mc_mac:
+	hns3_configure_all_mac_addr(hns, true);
+	return ret;
+}
+
+static void
+hns3_reset_service(void *param)
+{
+	struct hns3_adapter *hns = (struct hns3_adapter *)param;
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_reset_level reset_level;
+	struct timeval tv_delta;
+	struct timeval tv_start;
+	struct timeval tv;
+	uint64_t msec;
+	int ret;
+
+	/*
+	 * If the interrupt was not triggered within the deferred delay
+	 * time, it may have been lost, so handle it here to recover from
+	 * the error.
+	 */
+	if (rte_atomic16_read(&hns->hw.reset.schedule) == SCHEDULE_DEFERRED) {
+		rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_REQUESTED);
+		hns3_err(hw, "Handling interrupts in delayed tasks");
+		hns3_interrupt_handler(&rte_eth_devices[hw->data->port_id]);
+	}
+	rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_NONE);
+
+	/*
+	 * Check whether there is any ongoing reset in the hardware; this
+	 * status can be read from reset.pending. If so, we need to wait
+	 * for the hardware to complete the reset:
+	 *    a. if we can determine within a reasonable time that the
+	 *       hardware has fully reset, proceed with the driver and
+	 *       client reset;
+	 *    b. otherwise, come back later to check the status, so
+	 *       reschedule now.
+	 */
+	reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
+	if (reset_level != HNS3_NONE_RESET) {
+		gettimeofday(&tv_start, NULL);
+		ret = hns3_reset_process(hns, reset_level);
+		gettimeofday(&tv, NULL);
+		timersub(&tv, &tv_start, &tv_delta);
+		msec = tv_delta.tv_sec * MSEC_PER_SEC +
+		       tv_delta.tv_usec / USEC_PER_MSEC;
+		if (msec > HNS3_RESET_PROCESS_MS)
+			hns3_err(hw, "%d handle long time delta %ld ms time=%ld.%.6ld",
+				 hw->reset.level, msec,
+				 tv.tv_sec, tv.tv_usec);
+		if (ret == -EAGAIN)
+			return;
+	}
+
+	/* Check if we got any *new* reset requests to be honored */
+	reset_level = hns3_get_reset_level(hns, &hw->reset.request);
+	if (reset_level != HNS3_NONE_RESET)
+		hns3_msix_process(hns, reset_level);
+}
+
 static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.dev_start          = hns3_dev_start,
 	.dev_stop           = hns3_dev_stop,
@@ -4178,6 +4750,16 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
+static const struct hns3_reset_ops hns3_reset_ops = {
+	.reset_service       = hns3_reset_service,
+	.stop_service        = hns3_stop_service,
+	.prepare_reset       = hns3_prepare_reset,
+	.wait_hardware_ready = hns3_wait_hardware_ready,
+	.reinit_dev          = hns3_reinit_dev,
+	.restore_conf	     = hns3_restore_conf,
+	.start_service       = hns3_start_service,
+};
+
 static int
 hns3_dev_init(struct rte_eth_dev *eth_dev)
 {
@@ -4221,6 +4803,11 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 	 */
 	hns->pf.mps = hw->data->mtu + HNS3_ETH_OVERHEAD;
 
+	ret = hns3_reset_init(hw);
+	if (ret)
+		goto err_init_reset;
+	hw->reset.ops = &hns3_reset_ops;
+
 	ret = hns3_init_pf(eth_dev);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to init pf: %d", ret);
@@ -4244,6 +4831,14 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 			    &eth_dev->data->mac_addrs[0]);
 
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	if (rte_atomic16_read(&hns->hw.reset.schedule) == SCHEDULE_PENDING) {
+		hns3_err(hw, "Reschedule reset service after dev_init");
+		hns3_schedule_reset(hns);
+	} else {
+		/* IMP will wait ready flag before reset */
+		hns3_notify_reset_ready(hw, false);
+	}
+
 	rte_eal_alarm_set(HNS3_SERVICE_INTERVAL, hns3_service_handler, eth_dev);
 	hns3_info(hw, "hns3 dev initialization successful!");
 
@@ -4253,6 +4848,8 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 	hns3_uninit_pf(eth_dev);
 
 err_init_pf:
+	rte_free(hw->reset.wait_data);
+err_init_reset:
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
 	eth_dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 986314c..97f9637 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -620,5 +620,18 @@ hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
 
 int hns3_buffer_alloc(struct hns3_hw *hw);
 int hns3_config_gro(struct hns3_hw *hw, bool en);
+bool hns3_is_reset_pending(struct hns3_adapter *hns);
+bool hns3vf_is_reset_pending(struct hns3_adapter *hns);
+
+static inline bool
+is_reset_pending(struct hns3_adapter *hns)
+{
+	bool ret;
+
+	if (hns->is_vf)
+		ret = hns3vf_is_reset_pending(hns);
+	else
+		ret = hns3_is_reset_pending(hns);
+	return ret;
+}
 
 #endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index d941969..45360c4 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -47,12 +47,20 @@
 #define HNS3VF_RESET_WAIT_MS	20
 #define HNS3VF_RESET_WAIT_CNT	2000
 
+/* Reset related Registers */
+#define HNS3_GLOBAL_RESET_BIT		0
+#define HNS3_CORE_RESET_BIT		1
+#define HNS3_IMP_RESET_BIT		2
+#define HNS3_FUN_RST_ING_B		0
+
 enum hns3vf_evt_cause {
 	HNS3VF_VECTOR0_EVENT_RST,
 	HNS3VF_VECTOR0_EVENT_MBX,
 	HNS3VF_VECTOR0_EVENT_OTHER,
 };
 
+static enum hns3_reset_level hns3vf_get_reset_level(struct hns3_hw *hw,
+						    uint64_t *levels);
 static int hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int hns3vf_dev_configure_vlan(struct rte_eth_dev *dev);
 
@@ -442,6 +450,11 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EBUSY;
 	}
 
+	if (rte_atomic16_read(&hw->reset.resetting)) {
+		hns3_err(hw, "Failed to set mtu during resetting");
+		return -EIO;
+	}
+
 	rte_spinlock_lock(&hw->lock);
 	ret = hns3vf_config_mtu(hw, mtu);
 	if (ret) {
@@ -545,6 +558,26 @@ hns3vf_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
 	/* Fetch the events from their corresponding regs */
 	cmdq_stat_reg = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_STAT_REG);
 
+	if (BIT(HNS3_VECTOR0_RST_INT_B) & cmdq_stat_reg) {
+		rst_ing_reg = hns3_read_dev(hw, HNS3_FUN_RST_ING);
+		hns3_warn(hw, "resetting reg: 0x%x", rst_ing_reg);
+		hns3_atomic_set_bit(HNS3_VF_RESET, &hw->reset.pending);
+		rte_atomic16_set(&hw->reset.disable_cmd, 1);
+		val = hns3_read_dev(hw, HNS3_VF_RST_ING);
+		hns3_write_dev(hw, HNS3_VF_RST_ING, val | HNS3_VF_RST_ING_BIT);
+		val = cmdq_stat_reg & ~BIT(HNS3_VECTOR0_RST_INT_B);
+		if (clearval) {
+			hw->reset.stats.global_cnt++;
+			hns3_warn(hw, "Global reset detected, clear reset status");
+		} else {
+			hns3_schedule_delayed_reset(hns);
+			hns3_warn(hw, "Global reset detected, don't clear reset status");
+		}
+
+		ret = HNS3VF_VECTOR0_EVENT_RST;
+		goto out;
+	}
+
 	/* Check for vector0 mailbox(=CMDQ RX) event source */
 	if (BIT(HNS3_VECTOR0_RX_CMDQ_INT_B) & cmdq_stat_reg) {
 		val = cmdq_stat_reg & ~BIT(HNS3_VECTOR0_RX_CMDQ_INT_B);
@@ -579,6 +612,9 @@ hns3vf_interrupt_handler(void *param)
 	event_cause = hns3vf_check_event_cause(hns, &clearval);
 
 	switch (event_cause) {
+	case HNS3VF_VECTOR0_EVENT_RST:
+		hns3_schedule_reset(hns);
+		break;
 	case HNS3VF_VECTOR0_EVENT_MBX:
 		hns3_dev_handle_mbx_msg(hw);
 		break;
@@ -802,6 +838,67 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 }
 
 static int
+hns3vf_restore_vlan_table(struct hns3_adapter *hns)
+{
+	struct rte_vlan_filter_conf *vfc;
+	struct hns3_hw *hw = &hns->hw;
+	uint16_t vlan_id;
+	uint64_t vbit;
+	uint64_t ids;
+	int ret = 0;
+	uint32_t i;
+
+	vfc = &hw->data->vlan_filter_conf;
+	for (i = 0; i < RTE_DIM(vfc->ids); i++) {
+		if (vfc->ids[i] == 0)
+			continue;
+		ids = vfc->ids[i];
+		while (ids) {
+			/*
+			 * Each uint64_t element of ids holds 64 bits, and
+			 * every set bit corresponds to one VLAN id, e.g. bit
+			 * 4 of ids[i] stands for VLAN id 64 * i + 4.
+			 */
+			vlan_id = 64 * i;
+			/* Mask of trailing zeros below the lowest set bit */
+			vbit = ~ids & (ids - 1);
+			/* Clear the lowest set bit of ids */
+			ids ^= (ids ^ (ids - 1)) ^ vbit;
+			/* Count the trailing zeros to get the bit position */
+			for (; vbit;) {
+				vbit >>= 1;
+				vlan_id++;
+			}
+			ret = hns3vf_vlan_filter_configure(hns, vlan_id, 1);
+			if (ret) {
+				hns3_err(hw,
+					 "VF restore vlan table failed, ret =%d",
+					 ret);
+				return ret;
+			}
+		}
+	}
+
+	return ret;
+}
+
+static int
+hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_conf *dev_conf;
+	bool en;
+	int ret;
+
+	dev_conf = &hw->data->dev_conf;
+	en = (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) != 0;
+	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
+	if (ret)
+		hns3_err(hw, "VF restore vlan conf fail, en =%d, ret =%d", en,
+			 ret);
+	return ret;
+}
+
+static int
 hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
@@ -843,14 +940,11 @@ hns3vf_keep_alive_handler(void *param)
 	uint8_t respmsg;
 	int ret;
 
-	if (!hns3vf_is_reset_pending(hns)) {
-		ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
-					false, &respmsg, sizeof(uint8_t));
-		if (ret)
-			hns3_err(hw, "VF sends keeping alive cmd failed(=%d)",
-				 ret);
-	} else
-		hns3_warn(hw, "Cancel keeping alive when reset is pending");
+	ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
+				false, &respmsg, sizeof(uint8_t));
+	if (ret)
+		hns3_err(hw, "VF sends keeping alive cmd failed(=%d)",
+			 ret);
 
 	rte_eal_alarm_set(HNS3VF_KEEP_ALIVE_INTERVAL, hns3vf_keep_alive_handler,
 			  eth_dev);
@@ -1028,8 +1122,11 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 
 	hw->mac.link_status = ETH_LINK_DOWN;
 
-	hns3vf_configure_mac_addr(hns, true);
-	reset_queue = true;
+	if (rte_atomic16_read(&hw->reset.disable_cmd) == 0) {
+		hns3vf_configure_mac_addr(hns, true);
+		reset_queue = true;
+	} else
+		reset_queue = false;
 	return hns3_stop_queues(hns, reset_queue);
 }
 
@@ -1050,9 +1147,11 @@ hns3vf_dev_stop(struct rte_eth_dev *eth_dev)
 	rte_delay_ms(hw->tqps_num);
 
 	rte_spinlock_lock(&hw->lock);
-	hns3vf_do_stop(hns);
-	hns3_dev_release_mbufs(hns);
-	hw->adapter_state = HNS3_NIC_CONFIGURED;
+	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
+		hns3vf_do_stop(hns);
+		hns3_dev_release_mbufs(hns);
+		hw->adapter_state = HNS3_NIC_CONFIGURED;
+	}
 	rte_spinlock_unlock(&hw->lock);
 }
 
@@ -1066,12 +1165,16 @@ hns3vf_dev_close(struct rte_eth_dev *eth_dev)
 		hns3vf_dev_stop(eth_dev);
 
 	hw->adapter_state = HNS3_NIC_CLOSING;
+	hns3_reset_abort(hns);
+	hw->adapter_state = HNS3_NIC_CLOSED;
 	rte_eal_alarm_cancel(hns3vf_keep_alive_handler, eth_dev);
 	rte_eal_alarm_cancel(hns3vf_service_handler, eth_dev);
+
 	hns3vf_configure_all_mc_mac_addr(hns, true);
 	hns3vf_uninit_vf(eth_dev);
 	hns3_free_all_queues(eth_dev);
-	hw->adapter_state = HNS3_NIC_CLOSED;
+	rte_free(hw->reset.wait_data);
+	hns3_warn(hw, "Close port %d finished", hw->data->port_id);
 }
 
 static int
@@ -1135,6 +1238,8 @@ hns3vf_dev_start(struct rte_eth_dev *eth_dev)
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+	if (rte_atomic16_read(&hw->reset.resetting))
+		return -EBUSY;
 	rte_spinlock_lock(&hw->lock);
 	hw->adapter_state = HNS3_NIC_STARTING;
 	ret = hns3vf_do_start(hns, true);
@@ -1149,6 +1254,274 @@ hns3vf_dev_start(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static bool
+is_vf_reset_done(struct hns3_hw *hw)
+{
+#define HNS3_FUN_RST_ING_BITS \
+	(BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) | \
+	 BIT(HNS3_VECTOR0_CORERESET_INT_B) | \
+	 BIT(HNS3_VECTOR0_IMPRESET_INT_B) | \
+	 BIT(HNS3_VECTOR0_FUNCRESET_INT_B))
+
+	uint32_t val;
+
+	if (hw->reset.level == HNS3_VF_RESET) {
+		val = hns3_read_dev(hw, HNS3_VF_RST_ING);
+		if (val & HNS3_VF_RST_ING_BIT)
+			return false;
+	} else {
+		val = hns3_read_dev(hw, HNS3_FUN_RST_ING);
+		if (val & HNS3_FUN_RST_ING_BITS)
+			return false;
+	}
+	return true;
+}
+
+bool
+hns3vf_is_reset_pending(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_reset_level reset;
+
+	hns3vf_check_event_cause(hns, NULL);
+	reset = hns3vf_get_reset_level(hw, &hw->reset.pending);
+	if (hw->reset.level != HNS3_NONE_RESET && hw->reset.level < reset) {
+		hns3_warn(hw, "High level reset %d is pending", reset);
+		return true;
+	}
+	return false;
+}
+
+static int
+hns3vf_wait_hardware_ready(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct hns3_wait_data *wait_data = hw->reset.wait_data;
+	struct timeval tv;
+
+	if (wait_data->result == HNS3_WAIT_SUCCESS)
+		return 0;
+	else if (wait_data->result == HNS3_WAIT_TIMEOUT) {
+		gettimeofday(&tv, NULL);
+		hns3_warn(hw, "Reset step4 hardware not ready after reset time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		return -ETIME;
+	} else if (wait_data->result == HNS3_WAIT_REQUEST)
+		return -EAGAIN;
+
+	wait_data->hns = hns;
+	wait_data->check_completion = is_vf_reset_done;
+	wait_data->end_ms = (uint64_t)HNS3VF_RESET_WAIT_CNT *
+				      HNS3VF_RESET_WAIT_MS + get_timeofday_ms();
+	wait_data->interval = HNS3VF_RESET_WAIT_MS * USEC_PER_MSEC;
+	wait_data->count = HNS3VF_RESET_WAIT_CNT;
+	wait_data->result = HNS3_WAIT_REQUEST;
+	rte_eal_alarm_set(wait_data->interval, hns3_wait_callback, wait_data);
+	return -EAGAIN;
+}
+
+static int
+hns3vf_prepare_reset(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret = 0;
+
+	if (hw->reset.level == HNS3_VF_FUNC_RESET) {
+		ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL,
+					0, true, NULL, 0);
+	}
+	rte_atomic16_set(&hw->reset.disable_cmd, 1);
+
+	return ret;
+}
+
+static int
+hns3vf_stop_service(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = &rte_eth_devices[hw->data->port_id];
+	rte_eal_alarm_cancel(hns3vf_service_handler, eth_dev);
+	hw->mac.link_status = ETH_LINK_DOWN;
+
+	hns3_set_rxtx_function(eth_dev);
+	rte_wmb();
+	/* Disable datapath on secondary process. */
+	hns3_mp_req_stop_rxtx(eth_dev);
+	rte_delay_ms(hw->tqps_num);
+
+	rte_spinlock_lock(&hw->lock);
+	if (hw->adapter_state == HNS3_NIC_STARTED ||
+	    hw->adapter_state == HNS3_NIC_STOPPING) {
+		hns3vf_do_stop(hns);
+		hw->reset.mbuf_deferred_free = true;
+	} else
+		hw->reset.mbuf_deferred_free = false;
+
+	/*
+	 * It is cumbersome for hardware to pick-and-choose entries for deletion
+	 * from table space. Hence, for function reset software intervention is
+	 * required to delete the entries.
+	 */
+	if (rte_atomic16_read(&hw->reset.disable_cmd) == 0)
+		hns3vf_configure_all_mc_mac_addr(hns, true);
+	rte_spinlock_unlock(&hw->lock);
+
+	return 0;
+}
+
+static int
+hns3vf_start_service(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct rte_eth_dev *eth_dev;
+
+	eth_dev = &rte_eth_devices[hw->data->port_id];
+	hns3_set_rxtx_function(eth_dev);
+	hns3_mp_req_start_rxtx(eth_dev);
+
+	hns3vf_service_handler(eth_dev);
+	return 0;
+}
+
+static int
+hns3vf_restore_conf(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	ret = hns3vf_configure_mac_addr(hns, false);
+	if (ret)
+		return ret;
+
+	ret = hns3vf_configure_all_mc_mac_addr(hns, false);
+	if (ret)
+		goto err_mc_mac;
+
+	ret = hns3vf_restore_vlan_table(hns);
+	if (ret)
+		goto err_vlan_table;
+
+	ret = hns3vf_restore_vlan_conf(hns);
+	if (ret)
+		goto err_vlan_table;
+
+	if (hw->adapter_state == HNS3_NIC_STARTED) {
+		ret = hns3vf_do_start(hns, false);
+		if (ret)
+			goto err_vlan_table;
+		hns3_info(hw, "hns3vf dev restart successful!");
+	} else if (hw->adapter_state == HNS3_NIC_STOPPING)
+		hw->adapter_state = HNS3_NIC_CONFIGURED;
+	return 0;
+
+err_vlan_table:
+	hns3vf_configure_all_mc_mac_addr(hns, true);
+err_mc_mac:
+	hns3vf_configure_mac_addr(hns, true);
+	return ret;
+}
+
+static enum hns3_reset_level
+hns3vf_get_reset_level(struct hns3_hw *hw, uint64_t *levels)
+{
+	enum hns3_reset_level reset_level;
+
+	/* return the highest priority reset level amongst all */
+	if (hns3_atomic_test_bit(HNS3_VF_RESET, levels))
+		reset_level = HNS3_VF_RESET;
+	else if (hns3_atomic_test_bit(HNS3_VF_FULL_RESET, levels))
+		reset_level = HNS3_VF_FULL_RESET;
+	else if (hns3_atomic_test_bit(HNS3_VF_PF_FUNC_RESET, levels))
+		reset_level = HNS3_VF_PF_FUNC_RESET;
+	else if (hns3_atomic_test_bit(HNS3_VF_FUNC_RESET, levels))
+		reset_level = HNS3_VF_FUNC_RESET;
+	else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
+		reset_level = HNS3_FLR_RESET;
+	else
+		reset_level = HNS3_NONE_RESET;
+
+	if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
+		return HNS3_NONE_RESET;
+
+	return reset_level;
+}
+
+static void
+hns3vf_reset_service(void *param)
+{
+	struct hns3_adapter *hns = (struct hns3_adapter *)param;
+	struct hns3_hw *hw = &hns->hw;
+	enum hns3_reset_level reset_level;
+	struct timeval tv_delta;
+	struct timeval tv_start;
+	struct timeval tv;
+	uint64_t msec;
+
+	/*
+	 * If the interrupt was not triggered within the deferred delay
+	 * time, it may have been lost, so handle it here to recover from
+	 * the error.
+	 */
+	if (rte_atomic16_read(&hns->hw.reset.schedule) == SCHEDULE_DEFERRED) {
+		rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_REQUESTED);
+		hns3_err(hw, "Handling interrupts in delayed tasks");
+		hns3vf_interrupt_handler(&rte_eth_devices[hw->data->port_id]);
+	}
+	rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_NONE);
+
+	/*
+	 * Hardware reset has been notified, we now have to poll & check if
+	 * hardware has actually completed the reset sequence.
+	 */
+	reset_level = hns3vf_get_reset_level(hw, &hw->reset.pending);
+	if (reset_level != HNS3_NONE_RESET) {
+		gettimeofday(&tv_start, NULL);
+		hns3_reset_process(hns, reset_level);
+		gettimeofday(&tv, NULL);
+		timersub(&tv, &tv_start, &tv_delta);
+		msec = tv_delta.tv_sec * MSEC_PER_SEC +
+		       tv_delta.tv_usec / USEC_PER_MSEC;
+		if (msec > HNS3_RESET_PROCESS_MS)
+			hns3_err(hw, "%d handle long time delta %ld ms time=%ld.%.6ld",
+				 hw->reset.level, msec,
+				 tv.tv_sec, tv.tv_usec);
+	}
+}
+
+static int
+hns3vf_reinit_dev(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	int ret;
+
+	/* Firmware command initialize */
+	ret = hns3_cmd_init(hw);
+	if (ret) {
+		hns3_err(hw, "Failed to init cmd: %d", ret);
+		return ret;
+	}
+
+	ret = hns3_reset_all_queues(hns);
+	if (ret) {
+		hns3_err(hw, "Failed to reset all queues: %d", ret);
+		goto err_init;
+	}
+
+	ret = hns3vf_init_hardware(hns);
+	if (ret) {
+		hns3_err(hw, "Failed to init hardware: %d", ret);
+		goto err_init;
+	}
+
+	return 0;
+
+err_init:
+	hns3_cmd_uninit(hw);
+	return ret;
+}
+
 static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.dev_start          = hns3vf_dev_start,
 	.dev_stop           = hns3vf_dev_stop,
@@ -1183,6 +1556,16 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
+static const struct hns3_reset_ops hns3vf_reset_ops = {
+	.reset_service       = hns3vf_reset_service,
+	.stop_service        = hns3vf_stop_service,
+	.prepare_reset       = hns3vf_prepare_reset,
+	.wait_hardware_ready = hns3vf_wait_hardware_ready,
+	.reinit_dev          = hns3vf_reinit_dev,
+	.restore_conf        = hns3vf_restore_conf,
+	.start_service       = hns3vf_start_service,
+};
+
 static int
 hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 {
@@ -1216,6 +1599,11 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	hns->is_vf = true;
 	hw->data = eth_dev->data;
 
+	ret = hns3_reset_init(hw);
+	if (ret)
+		goto err_init_reset;
+	hw->reset.ops = &hns3vf_reset_ops;
+
 	ret = hns3vf_init_vf(eth_dev);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to init vf: %d", ret);
@@ -1238,6 +1626,13 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.mac_addr,
 			    &eth_dev->data->mac_addrs[0]);
 	hw->adapter_state = HNS3_NIC_INITIALIZED;
+	if (rte_atomic16_read(&hns->hw.reset.schedule) == SCHEDULE_PENDING) {
+		hns3_err(hw, "Reschedule reset service after dev_init");
+		hns3_schedule_reset(hns);
+	} else {
+		/* IMP will wait ready flag before reset */
+		hns3_notify_reset_ready(hw, false);
+	}
 	rte_eal_alarm_set(HNS3VF_KEEP_ALIVE_INTERVAL, hns3vf_keep_alive_handler,
 			  eth_dev);
 	rte_eal_alarm_set(HNS3VF_SERVICE_INTERVAL, hns3vf_service_handler,
@@ -1248,6 +1643,9 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	hns3vf_uninit_vf(eth_dev);
 
 err_init_vf:
+	rte_free(hw->reset.wait_data);
+
+err_init_reset:
 	eth_dev->dev_ops = NULL;
 	eth_dev->rx_pkt_burst = NULL;
 	eth_dev->tx_pkt_burst = NULL;
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 2d2051e..1728311 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -24,6 +24,8 @@
 #include "hns3_regs.h"
 #include "hns3_rxtx.h"
 
+#define SWITCH_CONTEXT_US	10
+
 /* offset in MSIX bd */
 #define MAC_ERROR_OFFSET	1
 #define PPP_PF_ERROR_OFFSET	2
@@ -37,6 +39,11 @@
 			hw->reset.stats.merge_cnt++;	\
 	} while (0)
 
+static const char *reset_string[HNS3_MAX_RESET] = {
+	"none",	"vf_func", "vf_pf_func", "vf_full", "flr",
+	"vf_global", "pf_func", "global", "IMP",
+};
+
 const struct hns3_hw_error mac_afifo_tnl_int[] = {
 	{ .int_msk = BIT(0), .msg = "egu_cge_afifo_ecc_1bit_err",
 	  .reset_level = HNS3_NONE_RESET },
@@ -656,3 +663,506 @@ hns3_handle_msix_error(struct hns3_adapter *hns, uint64_t *levels)
 out:
 	rte_free(desc);
 }
+
+int
+hns3_reset_init(struct hns3_hw *hw)
+{
+	rte_spinlock_init(&hw->lock);
+	hw->reset.level = HNS3_NONE_RESET;
+	hw->reset.stage = RESET_STAGE_NONE;
+	hw->reset.request = 0;
+	hw->reset.pending = 0;
+	rte_atomic16_init(&hw->reset.resetting);
+	rte_atomic16_init(&hw->reset.disable_cmd);
+	hw->reset.wait_data = rte_zmalloc("wait_data",
+					  sizeof(struct hns3_wait_data), 0);
+	if (!hw->reset.wait_data) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for wait_data");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
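+/*
+ * Reset scheduling states used by the service below: SCHEDULE_NONE means
+ * no reset service is queued; SCHEDULE_DEFERRED means a delayed alarm is
+ * armed (see hns3_schedule_delayed_reset); SCHEDULE_REQUESTED means the
+ * service alarm is armed for immediate execution; SCHEDULE_PENDING records
+ * a reset that arrived before dev_init finished and is rescheduled after
+ * initialization completes.
+ */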
+void
+hns3_schedule_reset(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+
+	/* Reschedule the reset process after successful initialization */
+	if (hw->adapter_state == HNS3_NIC_UNINITIALIZED) {
+		rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_PENDING);
+		return;
+	}
+
+	if (hw->adapter_state >= HNS3_NIC_CLOSED)
+		return;
+
+	/* Schedule restart alarm if it is not scheduled yet */
+	if (rte_atomic16_read(&hns->hw.reset.schedule) == SCHEDULE_REQUESTED)
+		return;
+	if (rte_atomic16_read(&hns->hw.reset.schedule) == SCHEDULE_DEFERRED)
+		rte_eal_alarm_cancel(hw->reset.ops->reset_service, hns);
+	rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_REQUESTED);
+
+	rte_eal_alarm_set(SWITCH_CONTEXT_US, hw->reset.ops->reset_service, hns);
+}
+
+void
+hns3_schedule_delayed_reset(struct hns3_adapter *hns)
+{
+#define DEFERRED_SCHED_US (3 * MSEC_PER_SEC * USEC_PER_MSEC)
+	struct hns3_hw *hw = &hns->hw;
+
+	/* Do nothing if the adapter is uninitialized or closed */
+	if (hw->adapter_state == HNS3_NIC_UNINITIALIZED ||
+	    hw->adapter_state >= HNS3_NIC_CLOSED) {
+		return;
+	}
+
+	if (rte_atomic16_read(&hns->hw.reset.schedule) != SCHEDULE_NONE)
+		return;
+	rte_atomic16_set(&hns->hw.reset.schedule, SCHEDULE_DEFERRED);
+	rte_eal_alarm_set(DEFERRED_SCHED_US, hw->reset.ops->reset_service, hns);
+}
+
+void
+hns3_wait_callback(void *param)
+{
+	struct hns3_wait_data *data = (struct hns3_wait_data *)param;
+	struct hns3_adapter *hns = data->hns;
+	struct hns3_hw *hw = &hns->hw;
+	uint64_t msec;
+	bool done;
+
+	data->count--;
+	if (data->check_completion) {
+		/*
+		 * Stop waiting if the current time exceeds the deadline,
+		 * a new reset is pending, or the port is being closed.
+		 */
+		msec = get_timeofday_ms();
+		if (msec > data->end_ms || is_reset_pending(hns) ||
+		    hw->adapter_state == HNS3_NIC_CLOSING) {
+			done = false;
+			data->count = 0;
+		} else
+			done = data->check_completion(hw);
+	} else
+		done = true;
+
+	if (!done && data->count > 0) {
+		rte_eal_alarm_set(data->interval, hns3_wait_callback, data);
+		return;
+	}
+	if (done)
+		data->result = HNS3_WAIT_SUCCESS;
+	else {
+		hns3_err(hw, "%s wait timeout at stage %d",
+			 reset_string[hw->reset.level], hw->reset.stage);
+		data->result = HNS3_WAIT_TIMEOUT;
+	}
+	hns3_schedule_reset(hns);
+}
+
+void
+hns3_notify_reset_ready(struct hns3_hw *hw, bool enable)
+{
+	uint32_t reg_val;
+
+	reg_val = hns3_read_dev(hw, HNS3_CMDQ_TX_DEPTH_REG);
+	if (enable)
+		reg_val |= HNS3_NIC_SW_RST_RDY;
+	else
+		reg_val &= ~HNS3_NIC_SW_RST_RDY;
+
+	hns3_write_dev(hw, HNS3_CMDQ_TX_DEPTH_REG, reg_val);
+}
+
+int
+hns3_reset_req_hw_reset(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+
+	if (hw->reset.wait_data->result == HNS3_WAIT_UNKNOWN) {
+		hw->reset.wait_data->hns = hns;
+		hw->reset.wait_data->check_completion = NULL;
+		hw->reset.wait_data->interval = HNS3_RESET_SYNC_US;
+		hw->reset.wait_data->count = 1;
+		hw->reset.wait_data->result = HNS3_WAIT_REQUEST;
+		rte_eal_alarm_set(hw->reset.wait_data->interval,
+				  hns3_wait_callback, hw->reset.wait_data);
+		return -EAGAIN;
+	} else if (hw->reset.wait_data->result == HNS3_WAIT_REQUEST)
+		return -EAGAIN;
+
+	/* inform hardware that preparatory work is done */
+	hns3_notify_reset_ready(hw, true);
+	return 0;
+}
+
+static void
+hns3_clear_reset_level(struct hns3_hw *hw, uint64_t *levels)
+{
+	uint64_t merge_cnt = hw->reset.stats.merge_cnt;
+	int64_t tmp;
+
+	switch (hw->reset.level) {
+	case HNS3_IMP_RESET:
+		hns3_atomic_clear_bit(HNS3_IMP_RESET, levels);
+		tmp = hns3_test_and_clear_bit(HNS3_GLOBAL_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		tmp = hns3_test_and_clear_bit(HNS3_FUNC_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		break;
+	case HNS3_GLOBAL_RESET:
+		hns3_atomic_clear_bit(HNS3_GLOBAL_RESET, levels);
+		tmp = hns3_test_and_clear_bit(HNS3_FUNC_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		break;
+	case HNS3_FUNC_RESET:
+		hns3_atomic_clear_bit(HNS3_FUNC_RESET, levels);
+		break;
+	case HNS3_VF_RESET:
+		hns3_atomic_clear_bit(HNS3_VF_RESET, levels);
+		tmp = hns3_test_and_clear_bit(HNS3_VF_PF_FUNC_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		tmp = hns3_test_and_clear_bit(HNS3_VF_FUNC_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		break;
+	case HNS3_VF_FULL_RESET:
+		hns3_atomic_clear_bit(HNS3_VF_FULL_RESET, levels);
+		tmp = hns3_test_and_clear_bit(HNS3_VF_FUNC_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		break;
+	case HNS3_VF_PF_FUNC_RESET:
+		hns3_atomic_clear_bit(HNS3_VF_PF_FUNC_RESET, levels);
+		tmp = hns3_test_and_clear_bit(HNS3_VF_FUNC_RESET, levels);
+		HNS3_CHECK_MERGE_CNT(tmp);
+		break;
+	case HNS3_VF_FUNC_RESET:
+		hns3_atomic_clear_bit(HNS3_VF_FUNC_RESET, levels);
+		break;
+	case HNS3_FLR_RESET:
+		hns3_atomic_clear_bit(HNS3_FLR_RESET, levels);
+		break;
+	case HNS3_NONE_RESET:
+	default:
+		return;
+	};
+	if (merge_cnt != hw->reset.stats.merge_cnt)
+		hns3_warn(hw, "No need to do low-level reset after %s reset. "
+			      "merge cnt: %ld total merge_cnt: %ld",
+			  reset_string[hw->reset.level],
+			  hw->reset.stats.merge_cnt - merge_cnt,
+			  hw->reset.stats.merge_cnt);
+}
+
+static bool
+hns3_reset_err_handle(struct hns3_adapter *hns)
+{
+#define MAX_RESET_FAIL_CNT 5
+
+	struct hns3_hw *hw = &hns->hw;
+
+	if (hw->adapter_state == HNS3_NIC_CLOSING)
+		goto reset_fail;
+
+	if (is_reset_pending(hns)) {
+		hw->reset.attempts = 0;
+		hw->reset.stats.fail_cnt++;
+		hns3_warn(hw, "%s reset fail because a new reset is pending, attempts:%lu",
+			  reset_string[hw->reset.level],
+			  hw->reset.stats.fail_cnt);
+		hw->reset.level = HNS3_NONE_RESET;
+		return true;
+	}
+
+	hw->reset.attempts++;
+	if (hw->reset.attempts < MAX_RESET_FAIL_CNT) {
+		hns3_atomic_set_bit(hw->reset.level, &hw->reset.pending);
+		hns3_warn(hw, "%s reset will be retried, attempts: %d",
+			  reset_string[hw->reset.level],
+			  hw->reset.attempts);
+		return true;
+	}
+
+	if (rte_atomic16_read(&hw->reset.disable_cmd))
+		hns3_cmd_init(hw);
+reset_fail:
+	hw->reset.attempts = 0;
+	hw->reset.stats.fail_cnt++;
+	hns3_warn(hw, "%s reset fail fail_cnt:%lu success_cnt:%lu "
+		  "global_cnt:%lu imp_cnt:%lu request_cnt:%lu exec_cnt:%lu "
+		  "merge_cnt:%lu",
+		  reset_string[hw->reset.level], hw->reset.stats.fail_cnt,
+		  hw->reset.stats.success_cnt, hw->reset.stats.global_cnt,
+		  hw->reset.stats.imp_cnt, hw->reset.stats.request_cnt,
+		  hw->reset.stats.exec_cnt, hw->reset.stats.merge_cnt);
+
+	/* IMP is no longer waiting for the ready flag */
+	hns3_notify_reset_ready(hw, true);
+	return false;
+}
+
+static int
+hns3_reset_pre(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct timeval tv;
+	int ret;
+
+	if (hw->reset.stage == RESET_STAGE_NONE) {
+		rte_atomic16_set(&hns->hw.reset.resetting, 1);
+		hw->reset.stage = RESET_STAGE_DOWN;
+		ret = hw->reset.ops->stop_service(hns);
+		gettimeofday(&tv, NULL);
+		if (ret) {
+			hns3_warn(hw, "Reset step1 down fail=%d time=%ld.%.6ld",
+				  ret, tv.tv_sec, tv.tv_usec);
+			return ret;
+		}
+		hns3_warn(hw, "Reset step1 down success time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.stage = RESET_STAGE_PREWAIT;
+	}
+	if (hw->reset.stage == RESET_STAGE_PREWAIT) {
+		ret = hw->reset.ops->prepare_reset(hns);
+		gettimeofday(&tv, NULL);
+		if (ret) {
+			hns3_warn(hw,
+				  "Reset step2 prepare wait fail=%d time=%ld.%.6ld",
+				  ret, tv.tv_sec, tv.tv_usec);
+			return ret;
+		}
+		hns3_warn(hw, "Reset step2 prepare wait success time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.stage = RESET_STAGE_REQ_HW_RESET;
+		hw->reset.wait_data->result = HNS3_WAIT_UNKNOWN;
+	}
+	return 0;
+}
+
+static int
+hns3_reset_post(struct hns3_adapter *hns)
+{
+#define TIMEOUT_RETRIES_CNT	5
+	struct hns3_hw *hw = &hns->hw;
+	struct timeval tv_delta;
+	struct timeval tv;
+	int ret = 0;
+
+	if (hw->adapter_state == HNS3_NIC_CLOSING) {
+		hns3_warn(hw, "Don't do reset_post during closing, just uninit cmd");
+		hns3_cmd_uninit(hw);
+		return -EPERM;
+	}
+
+	if (hw->reset.stage == RESET_STAGE_DEV_INIT) {
+		rte_spinlock_lock(&hw->lock);
+		if (hw->reset.mbuf_deferred_free) {
+			hns3_dev_release_mbufs(hns);
+			hw->reset.mbuf_deferred_free = false;
+		}
+		ret = hw->reset.ops->reinit_dev(hns);
+		rte_spinlock_unlock(&hw->lock);
+		gettimeofday(&tv, NULL);
+		if (ret) {
+			hns3_warn(hw, "Reset step5 devinit fail=%d retries=%d",
+				  ret, hw->reset.retries);
+			goto err;
+		}
+		hns3_warn(hw, "Reset step5 devinit success time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.retries = 0;
+		hw->reset.stage = RESET_STAGE_RESTORE;
+		rte_eal_alarm_set(SWITCH_CONTEXT_US,
+				  hw->reset.ops->reset_service, hns);
+		return -EAGAIN;
+	}
+	if (hw->reset.stage == RESET_STAGE_RESTORE) {
+		rte_spinlock_lock(&hw->lock);
+		ret = hw->reset.ops->restore_conf(hns);
+		rte_spinlock_unlock(&hw->lock);
+		gettimeofday(&tv, NULL);
+		if (ret) {
+			hns3_warn(hw,
+				  "Reset step6 restore fail=%d retries=%d",
+				  ret, hw->reset.retries);
+			goto err;
+		}
+		hns3_warn(hw, "Reset step6 restore success time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.retries = 0;
+		hw->reset.stage = RESET_STAGE_DONE;
+	}
+	if (hw->reset.stage == RESET_STAGE_DONE) {
+		/* IMP will wait for the ready flag before reset */
+		hns3_notify_reset_ready(hw, false);
+		hns3_clear_reset_level(hw, &hw->reset.pending);
+		rte_atomic16_clear(&hns->hw.reset.resetting);
+		hw->reset.attempts = 0;
+		hw->reset.stats.success_cnt++;
+		hw->reset.stage = RESET_STAGE_NONE;
+		hw->reset.ops->start_service(hns);
+		gettimeofday(&tv, NULL);
+		timersub(&tv, &hw->reset.start_time, &tv_delta);
+		hns3_warn(hw, "%s reset done fail_cnt:%lu success_cnt:%lu "
+			  "global_cnt:%lu imp_cnt:%lu request_cnt:%lu exec_cnt:%lu "
+			  "merge_cnt:%lu",
+			  reset_string[hw->reset.level],
+			  hw->reset.stats.fail_cnt, hw->reset.stats.success_cnt,
+			  hw->reset.stats.global_cnt, hw->reset.stats.imp_cnt,
+			  hw->reset.stats.request_cnt, hw->reset.stats.exec_cnt,
+			  hw->reset.stats.merge_cnt);
+		hns3_warn(hw,
+			  "%s reset done delta %ld ms time=%ld.%.6ld",
+			  reset_string[hw->reset.level],
+			  tv_delta.tv_sec * MSEC_PER_SEC +
+			  tv_delta.tv_usec / USEC_PER_MSEC,
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.level = HNS3_NONE_RESET;
+	}
+	return 0;
+
+err:
+	if (ret == -ETIME) {
+		hw->reset.retries++;
+		if (hw->reset.retries < TIMEOUT_RETRIES_CNT) {
+			rte_eal_alarm_set(HNS3_RESET_SYNC_US,
+					  hw->reset.ops->reset_service, hns);
+			return -EAGAIN;
+		}
+	}
+	hw->reset.retries = 0;
+	return -EIO;
+}
+
+/*
+ * There are three scenarios as follows:
+ * When the reset is not in progress, the reset process starts.
+ * During the reset process, if the reset level has not changed,
+ * the reset process continues; otherwise, the reset process is aborted.
+ *	hw->reset.level   new_level          action
+ *	HNS3_NONE_RESET	 HNS3_XXXX_RESET    start reset
+ *	HNS3_XXXX_RESET  HNS3_XXXX_RESET    continue reset
+ *	HNS3_LOW_RESET   HNS3_HIGH_RESET    abort
+ */
+int
+hns3_reset_process(struct hns3_adapter *hns, enum hns3_reset_level new_level)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct timeval tv_delta;
+	struct timeval tv;
+	int ret;
+
+	if (hw->reset.level == HNS3_NONE_RESET) {
+		hw->reset.level = new_level;
+		hw->reset.stats.exec_cnt++;
+		gettimeofday(&hw->reset.start_time, NULL);
+		hns3_warn(hw, "Start %s reset time=%ld.%.6ld",
+			  reset_string[hw->reset.level],
+			  hw->reset.start_time.tv_sec,
+			  hw->reset.start_time.tv_usec);
+	}
+
+	if (is_reset_pending(hns)) {
+		gettimeofday(&tv, NULL);
+		hns3_warn(hw,
+			  "%s reset is aborted by high level time=%ld.%.6ld",
+			  reset_string[hw->reset.level], tv.tv_sec, tv.tv_usec);
+		if (hw->reset.wait_data->result == HNS3_WAIT_REQUEST)
+			rte_eal_alarm_cancel(hns3_wait_callback,
+					     hw->reset.wait_data);
+		ret = -EBUSY;
+		goto err;
+	}
+
+	ret = hns3_reset_pre(hns);
+	if (ret)
+		goto err;
+
+	if (hw->reset.stage == RESET_STAGE_REQ_HW_RESET) {
+		ret = hns3_reset_req_hw_reset(hns);
+		if (ret == -EAGAIN)
+			return ret;
+		gettimeofday(&tv, NULL);
+		hns3_warn(hw,
+			  "Reset step3 request IMP reset success time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.stage = RESET_STAGE_WAIT;
+		hw->reset.wait_data->result = HNS3_WAIT_UNKNOWN;
+	}
+	if (hw->reset.stage == RESET_STAGE_WAIT) {
+		ret = hw->reset.ops->wait_hardware_ready(hns);
+		if (ret)
+			goto retry;
+		gettimeofday(&tv, NULL);
+		hns3_warn(hw, "Reset step4 reset wait success time=%ld.%.6ld",
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.stage = RESET_STAGE_DEV_INIT;
+	}
+
+	ret = hns3_reset_post(hns);
+	if (ret)
+		goto retry;
+
+	return 0;
+retry:
+	if (ret == -EAGAIN)
+		return ret;
+err:
+	hns3_clear_reset_level(hw, &hw->reset.pending);
+	if (hns3_reset_err_handle(hns)) {
+		hw->reset.stage = RESET_STAGE_PREWAIT;
+		hns3_schedule_reset(hns);
+	} else {
+		rte_spinlock_lock(&hw->lock);
+		if (hw->reset.mbuf_deferred_free) {
+			hns3_dev_release_mbufs(hns);
+			hw->reset.mbuf_deferred_free = false;
+		}
+		rte_spinlock_unlock(&hw->lock);
+		rte_atomic16_clear(&hns->hw.reset.resetting);
+		hw->reset.stage = RESET_STAGE_NONE;
+		gettimeofday(&tv, NULL);
+		timersub(&tv, &hw->reset.start_time, &tv_delta);
+		hns3_warn(hw, "%s reset fail delta %ld ms time=%ld.%.6ld",
+			  reset_string[hw->reset.level],
+			  tv_delta.tv_sec * MSEC_PER_SEC +
+			  tv_delta.tv_usec / USEC_PER_MSEC,
+			  tv.tv_sec, tv.tv_usec);
+		hw->reset.level = HNS3_NONE_RESET;
+	}
+
+	return -EIO;
+}
+
+/*
+ * The reset process can only be terminated after the handshake with IMP (step3),
+ * so that IMP can complete the reset process normally.
+ */
+void
+hns3_reset_abort(struct hns3_adapter *hns)
+{
+	struct hns3_hw *hw = &hns->hw;
+	struct timeval tv;
+	int i;
+
+	for (i = 0; i < HNS3_QUIT_RESET_CNT; i++) {
+		if (hw->reset.level == HNS3_NONE_RESET)
+			break;
+		rte_delay_ms(HNS3_QUIT_RESET_DELAY_MS);
+	}
+
+	/* IMP is no longer waiting for the ready flag */
+	hns3_notify_reset_ready(hw, true);
+
+	rte_eal_alarm_cancel(hw->reset.ops->reset_service, hns);
+	rte_eal_alarm_cancel(hns3_wait_callback, hw->reset.wait_data);
+
+	if (hw->reset.level != HNS3_NONE_RESET) {
+		gettimeofday(&tv, NULL);
+		hns3_err(hw, "Failed to terminate reset: %s time=%ld.%.6ld",
+			 reset_string[hw->reset.level], tv.tv_sec, tv.tv_usec);
+	}
+}
diff --git a/drivers/net/hns3/hns3_intr.h b/drivers/net/hns3/hns3_intr.h
index b57b4ac..d0af16c 100644
--- a/drivers/net/hns3/hns3_intr.h
+++ b/drivers/net/hns3/hns3_intr.h
@@ -49,6 +49,8 @@
 #define HNS3_SSU_COMMON_ERR_INT_MASK		GENMASK(9, 0)
 #define HNS3_SSU_PORT_INT_MSIX_MASK		0x7BFF
 
+#define HNS3_RESET_PROCESS_MS			200
+
 struct hns3_hw_blk {
 	const char *name;
 	int (*enable_err_intr)(struct hns3_adapter *hns, bool en);
@@ -64,5 +66,14 @@ int hns3_enable_hw_error_intr(struct hns3_adapter *hns, bool state);
 void hns3_handle_msix_error(struct hns3_adapter *hns, uint64_t *levels);
 void hns3_intr_unregister(const struct rte_intr_handle *hdl,
 			  rte_intr_callback_fn cb_fn, void *cb_arg);
+void hns3_notify_reset_ready(struct hns3_hw *hw, bool enable);
+int hns3_reset_init(struct hns3_hw *hw);
+void hns3_wait_callback(void *param);
+void hns3_schedule_reset(struct hns3_adapter *hns);
+void hns3_schedule_delayed_reset(struct hns3_adapter *hns);
+int hns3_reset_req_hw_reset(struct hns3_adapter *hns);
+int hns3_reset_process(struct hns3_adapter *hns,
+		       enum hns3_reset_level reset_level);
+void hns3_reset_abort(struct hns3_adapter *hns);
 
 #endif /* _HNS3_INTR_H_ */
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 44d8275..3ac78b1 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -107,6 +107,19 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code0, uint16_t code1,
 	end = now + HNS3_MAX_RETRY_MS;
 	while ((hw->mbx_resp.head != hw->mbx_resp.tail + hw->mbx_resp.lost) &&
 	       (now < end)) {
+		if (rte_atomic16_read(&hw->reset.disable_cmd)) {
+			hns3_err(hw, "Don't wait for mbx response because of "
+				 "disable_cmd");
+			return -EBUSY;
+		}
+
+		if (is_reset_pending(hns)) {
+			hw->mbx_resp.req_msg_data = 0;
+			hns3_err(hw, "Don't wait for mbx response because of "
+				 "reset pending");
+			return -EIO;
+		}
+
 		/*
 		 * The mbox response is running on the interrupt thread.
 		 * Sending mbox in the interrupt thread cannot wait for the
@@ -235,6 +248,7 @@ hns3_mbx_handler(struct hns3_hw *hw)
 
 			hns3_warn(hw, "PF inform reset level %d", reset_level);
 			hw->reset.stats.request_cnt++;
+			hns3_schedule_reset(HNS3_DEV_HW_TO_ADAPTER(hw));
 			break;
 		default:
 			hns3_err(hw, "Fetched unsupported(%d) message from arq",
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 21/22] net/hns3: add multiple process support for hns3 PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (19 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 20/22] net/hns3: add reset related process " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-30 15:14   ` Ferruh Yigit
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
  2019-08-30 15:23 ` [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Ferruh Yigit
  22 siblings, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds multiple process support for the hns3 PMD driver.
In multi-process mode, the queue a process receives on is selected
by configuring RSS or the flow director. The primary process supports
the full set of management ops, while a secondary process supports
query ops only. The primary process notifies the secondary processes
to start or stop the Rx/Tx datapath.
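
For illustration, a minimal sketch (not part of this patch) of the kind
of secondary-process poll loop this support enables; the port id, queue
id and burst size below are assumptions:

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/*
 * Hypothetical app started with --proc-type=secondary after the primary
 * process has configured and started the hns3 port. The PMD registers
 * its IPC action during dev init, so the primary can later tell this
 * process to stop or start its Rx/Tx burst functions.
 */
int
main(int argc, char **argv)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, port_id = 0;	/* assumed port id */

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	for (;;) {
		/* Poll queue 0; Rx works because the primary shares the
		 * device data and the secondary ops table is installed. */
		nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
		while (nb > 0)
			rte_pktmbuf_free(pkts[--nb]);
	}
	return 0;
}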

Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 drivers/net/hns3/hns3_ethdev.c    |  36 ++++++-
 drivers/net/hns3/hns3_ethdev_vf.c |  29 ++++-
 drivers/net/hns3/hns3_mp.c        | 219 ++++++++++++++++++++++++++++++++++++++
 drivers/net/hns3/hns3_mp.h        |  14 +++
 4 files changed, 296 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/hns3/hns3_mp.c
 create mode 100644 drivers/net/hns3/hns3_mp.h

diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9a4c560..28fa9a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -40,6 +40,7 @@
 #include "hns3_intr.h"
 #include "hns3_regs.h"
 #include "hns3_dcb.h"
+#include "hns3_mp.h"
 
 #define HNS3_DEFAULT_PORT_CONF_BURST_SIZE	32
 #define HNS3_DEFAULT_PORT_CONF_QUEUES_NUM	1
@@ -4078,6 +4079,10 @@ hns3_dev_stop(struct rte_eth_dev *eth_dev)
 	hw->adapter_state = HNS3_NIC_STOPPING;
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
+	/* Disable datapath on secondary process. */
+	hns3_mp_req_stop_rxtx(eth_dev);
+	/* Prevent crashes when queues are still in use. */
+	rte_delay_ms(hw->tqps_num);
 
 	rte_spinlock_lock(&hw->lock);
 	if (rte_atomic16_read(&hw->reset.resetting) == 0) {
@@ -4108,6 +4113,7 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
 	hns3_uninit_pf(eth_dev);
 	hns3_free_all_queues(eth_dev);
 	rte_free(hw->reset.wait_data);
+	hns3_mp_uninit_primary();
 	hns3_warn(hw, "Close port %d finished", hw->data->port_id);
 }
 
@@ -4750,6 +4756,28 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
 	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
+static const struct eth_dev_ops hns3_eth_dev_secondary_ops = {
+	.stats_get          = hns3_stats_get,
+	.stats_reset        = hns3_stats_reset,
+	.xstats_get         = hns3_dev_xstats_get,
+	.xstats_get_names   = hns3_dev_xstats_get_names,
+	.xstats_reset       = hns3_dev_xstats_reset,
+	.xstats_get_by_id   = hns3_dev_xstats_get_by_id,
+	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
+	.dev_infos_get          = hns3_dev_infos_get,
+	.fw_version_get         = hns3_fw_version_get,
+	.flow_ctrl_get          = hns3_flow_ctrl_get,
+	.link_update            = hns3_dev_link_update,
+	.rss_hash_update        = hns3_dev_rss_hash_update,
+	.rss_hash_conf_get      = hns3_dev_rss_hash_conf_get,
+	.reta_update            = hns3_dev_rss_reta_update,
+	.reta_query             = hns3_dev_rss_reta_query,
+	.filter_ctrl            = hns3_dev_filter_ctrl,
+	.get_reg                = hns3_get_regs,
+	.get_dcb_info           = hns3_get_dcb_info,
+	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
+};
+
 static const struct hns3_reset_ops hns3_reset_ops = {
 	.reset_service       = hns3_reset_service,
 	.stop_service        = hns3_stop_service,
@@ -4783,10 +4811,16 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
 	hns3_filterlist_init(eth_dev);
 
 	hns3_set_rxtx_function(eth_dev);
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev->dev_ops = &hns3_eth_dev_secondary_ops;
+		hns3_mp_init_secondary();
+		hw->secondary_cnt++;
 		return 0;
+	}
 
 	eth_dev->dev_ops = &hns3_eth_dev_ops;
+	hns3_mp_init_primary();
+	hw->adapter_state = HNS3_NIC_UNINITIALIZED;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	if (device_id == HNS3_DEV_ID_25GE_RDMA ||
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 45360c4..1d33779 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -40,6 +40,7 @@
 #include "hns3_regs.h"
 #include "hns3_intr.h"
 #include "hns3_dcb.h"
+#include "hns3_mp.h"
 
 #define HNS3VF_KEEP_ALIVE_INTERVAL	2000000 /* us */
 #define HNS3VF_SERVICE_INTERVAL		1000000 /* us */
@@ -1174,6 +1175,7 @@ hns3vf_dev_close(struct rte_eth_dev *eth_dev)
 	hns3vf_uninit_vf(eth_dev);
 	hns3_free_all_queues(eth_dev);
 	rte_free(hw->reset.wait_data);
+	hns3_mp_uninit_primary();
 	hns3_warn(hw, "Close port %d finished", hw->data->port_id);
 }
 
@@ -1251,6 +1253,7 @@ hns3vf_dev_start(struct rte_eth_dev *eth_dev)
 	hw->adapter_state = HNS3_NIC_STARTED;
 	rte_spinlock_unlock(&hw->lock);
 	hns3_set_rxtx_function(eth_dev);
+	hns3_mp_req_start_rxtx(eth_dev);
 	return 0;
 }
 
@@ -1556,6 +1559,25 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
 	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
 };
 
+static const struct eth_dev_ops hns3vf_eth_dev_secondary_ops = {
+	.stats_get          = hns3_stats_get,
+	.stats_reset        = hns3_stats_reset,
+	.xstats_get         = hns3_dev_xstats_get,
+	.xstats_get_names   = hns3_dev_xstats_get_names,
+	.xstats_reset	    = hns3_dev_xstats_reset,
+	.xstats_get_by_id   = hns3_dev_xstats_get_by_id,
+	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
+	.dev_infos_get      = hns3vf_dev_infos_get,
+	.link_update        = hns3vf_dev_link_update,
+	.rss_hash_update    = hns3_dev_rss_hash_update,
+	.rss_hash_conf_get  = hns3_dev_rss_hash_conf_get,
+	.reta_update        = hns3_dev_rss_reta_update,
+	.reta_query         = hns3_dev_rss_reta_query,
+	.filter_ctrl        = hns3_dev_filter_ctrl,
+	.get_reg            = hns3_get_regs,
+	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
+};
+
 static const struct hns3_reset_ops hns3vf_reset_ops = {
 	.reset_service       = hns3vf_reset_service,
 	.stop_service        = hns3vf_stop_service,
@@ -1589,10 +1611,15 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	hns3_filterlist_init(eth_dev);
 
 	hns3_set_rxtx_function(eth_dev);
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		eth_dev->dev_ops = &hns3vf_eth_dev_secondary_ops;
+		hns3_mp_init_secondary();
+		hw->secondary_cnt++;
 		return 0;
+	}
 
 	eth_dev->dev_ops = &hns3vf_eth_dev_ops;
+	hns3_mp_init_primary();
 
 	hw->adapter_state = HNS3_NIC_UNINITIALIZED;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
diff --git a/drivers/net/hns3/hns3_mp.c b/drivers/net/hns3/hns3_mp.c
new file mode 100644
index 0000000..2f56d8b
--- /dev/null
+++ b/drivers/net/hns3/hns3_mp.c
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_ethdev_driver.h>
+#include <rte_string_fns.h>
+#include <rte_io.h>
+
+#include "hns3_cmd.h"
+#include "hns3_mbx.h"
+#include "hns3_rss.h"
+#include "hns3_fdir.h"
+#include "hns3_stats.h"
+#include "hns3_ethdev.h"
+#include "hns3_logs.h"
+#include "hns3_rxtx.h"
+#include "hns3_mp.h"
+
+/*
+ * Initialize IPC message.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet structure.
+ * @param[out] msg
+ *   Pointer to message to fill in.
+ * @param[in] type
+ *   Message type.
+ */
+static inline void
+mp_init_msg(struct rte_eth_dev *dev, struct rte_mp_msg *msg,
+	    enum hns3_mp_req_type type)
+{
+	struct hns3_mp_param *param = (struct hns3_mp_param *)msg->param;
+
+	memset(msg, 0, sizeof(*msg));
+	strlcpy(msg->name, HNS3_MP_NAME, sizeof(msg->name));
+	msg->len_param = sizeof(*param);
+	param->type = type;
+	param->port_id = dev->data->port_id;
+}
+
+/*
+ * IPC message handler of primary process.
+ *
+ * @param[in] mp_msg
+ *   Pointer to the received IPC message.
+ * @param[in] peer
+ *   Pointer to the peer socket path.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mp_primary_handle(const struct rte_mp_msg *mp_msg __rte_unused,
+		  const void *peer __rte_unused)
+{
+	return 0;
+}
+
+/*
+ * IPC message handler of a secondary process.
+ *
+ * @param[in] mp_msg
+ *   Pointer to the received IPC message.
+ * @param[in] peer
+ *   Pointer to the peer socket path.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
+{
+	struct rte_mp_msg mp_res;
+	struct hns3_mp_param *res = (struct hns3_mp_param *)mp_res.param;
+	const struct hns3_mp_param *param =
+		(const struct hns3_mp_param *)mp_msg->param;
+	struct rte_eth_dev *dev;
+	int ret;
+
+	if (!rte_eth_dev_is_valid_port(param->port_id)) {
+		rte_errno = ENODEV;
+		PMD_INIT_LOG(ERR, "port %u invalid port ID", param->port_id);
+		return -rte_errno;
+	}
+	dev = &rte_eth_devices[param->port_id];
+	switch (param->type) {
+	case HNS3_MP_REQ_START_RXTX:
+		PMD_INIT_LOG(INFO, "port %u starting datapath",
+			     dev->data->port_id);
+		rte_mb();
+		hns3_set_rxtx_function(dev);
+		mp_init_msg(dev, &mp_res, param->type);
+		res->result = 0;
+		ret = rte_mp_reply(&mp_res, peer);
+		break;
+	case HNS3_MP_REQ_STOP_RXTX:
+		PMD_INIT_LOG(INFO, "port %u stopping datapath",
+			     dev->data->port_id);
+		hns3_set_rxtx_function(dev);
+		rte_mb();
+		mp_init_msg(dev, &mp_res, param->type);
+		res->result = 0;
+		ret = rte_mp_reply(&mp_res, peer);
+		break;
+	default:
+		rte_errno = EINVAL;
+		PMD_INIT_LOG(ERR, "port %u invalid mp request type",
+			     dev->data->port_id);
+		return -rte_errno;
+	}
+	return ret;
+}
+
+/*
+ * Broadcast request of stopping/starting data-path to secondary processes.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet structure.
+ * @param[in] type
+ *   Request type.
+ */
+static void
+mp_req_on_rxtx(struct rte_eth_dev *dev, enum hns3_mp_req_type type)
+{
+	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_mp_msg mp_req;
+	struct rte_mp_msg *mp_res;
+	struct rte_mp_reply mp_rep;
+	struct hns3_mp_param *res;
+	struct timespec ts;
+	int ret;
+	int i;
+
+	if (!hw->secondary_cnt)
+		return;
+	if (type != HNS3_MP_REQ_START_RXTX && type != HNS3_MP_REQ_STOP_RXTX) {
+		hns3_err(hw, "port %u unknown request (req_type %d)",
+			 dev->data->port_id, type);
+		return;
+	}
+	mp_init_msg(dev, &mp_req, type);
+	ts.tv_sec = HNS3_MP_REQ_TIMEOUT_SEC;
+	ts.tv_nsec = 0;
+	ret = rte_mp_request_sync(&mp_req, &mp_rep, &ts);
+	if (ret) {
+		hns3_err(hw, "port %u failed to request stop/start Rx/Tx (%d)",
+			 dev->data->port_id, type);
+		goto exit;
+	}
+	if (mp_rep.nb_sent != mp_rep.nb_received) {
+		PMD_INIT_LOG(ERR,
+			"port %u not all secondaries responded (req_type %d)",
+			dev->data->port_id, type);
+		goto exit;
+	}
+	for (i = 0; i < mp_rep.nb_received; i++) {
+		mp_res = &mp_rep.msgs[i];
+		res = (struct hns3_mp_param *)mp_res->param;
+		if (res->result) {
+			hns3_err(hw, "port %u request failed on secondary #%d",
+				 dev->data->port_id, i);
+			goto exit;
+		}
+	}
+exit:
+	free(mp_rep.msgs);
+}
+
+/*
+ * Broadcast request of starting data-path to secondary processes. The request
+ * is synchronous.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet structure.
+ */
+void hns3_mp_req_start_rxtx(struct rte_eth_dev *dev)
+{
+	mp_req_on_rxtx(dev, HNS3_MP_REQ_START_RXTX);
+}
+
+/*
+ * Broadcast request of stopping data-path to secondary processes. The request
+ * is synchronous.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet structure.
+ */
+void hns3_mp_req_stop_rxtx(struct rte_eth_dev *dev)
+{
+	mp_req_on_rxtx(dev, HNS3_MP_REQ_STOP_RXTX);
+}
+
+/*
+ * Initialize by primary process.
+ */
+void hns3_mp_init_primary(void)
+{
+	rte_mp_action_register(HNS3_MP_NAME, mp_primary_handle);
+}
+
+/*
+ * Un-initialize by primary process.
+ */
+void hns3_mp_uninit_primary(void)
+{
+	rte_mp_action_unregister(HNS3_MP_NAME);
+}
+
+/*
+ * Initialize by secondary process.
+ */
+void hns3_mp_init_secondary(void)
+{
+	rte_mp_action_register(HNS3_MP_NAME, mp_secondary_handle);
+}
diff --git a/drivers/net/hns3/hns3_mp.h b/drivers/net/hns3/hns3_mp.h
new file mode 100644
index 0000000..aefbeb1
--- /dev/null
+++ b/drivers/net/hns3/hns3_mp.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Hisilicon Limited.
+ */
+
+#ifndef _HNS3_MP_H_
+#define _HNS3_MP_H_
+
+void hns3_mp_req_start_rxtx(struct rte_eth_dev *dev);
+void hns3_mp_req_stop_rxtx(struct rte_eth_dev *dev);
+void hns3_mp_init_primary(void);
+void hns3_mp_uninit_primary(void);
+void hns3_mp_init_secondary(void);
+
+#endif /* _HNS3_MP_H_ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (20 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 21/22] net/hns3: add multiple process support " Wei Hu (Xavier)
@ 2019-08-23 13:47 ` Wei Hu (Xavier)
  2019-08-23 14:08   ` Jerin Jacob Kollanukkaran
                     ` (5 more replies)
  2019-08-30 15:23 ` [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Ferruh Yigit
  22 siblings, 6 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-23 13:47 UTC (permalink / raw)
  To: dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

This patch adds build-related files for the hns3 PMD driver.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 MAINTAINERS                                  |  7 ++++
 config/common_armv8a_linux                   |  5 +++
 config/common_base                           |  5 +++
 config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
 doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++
 doc/guides/nics/hns3.rst                     | 55 ++++++++++++++++++++++++++++
 drivers/net/Makefile                         |  1 +
 drivers/net/hns3/Makefile                    | 43 ++++++++++++++++++++++
 drivers/net/hns3/meson.build                 | 19 ++++++++++
 drivers/net/hns3/rte_pmd_hns3_version.map    |  3 ++
 drivers/net/meson.build                      |  1 +
 mk/rte.app.mk                                |  1 +
 12 files changed, 180 insertions(+)
 create mode 100644 doc/guides/nics/features/hns3.ini
 create mode 100644 doc/guides/nics/hns3.rst
 create mode 100644 drivers/net/hns3/Makefile
 create mode 100644 drivers/net/hns3/meson.build
 create mode 100644 drivers/net/hns3/rte_pmd_hns3_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 4100260..1794923 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -616,6 +616,13 @@ F: drivers/net/hinic/
 F: doc/guides/nics/hinic.rst
 F: doc/guides/nics/features/hinic.ini
 
+Hisilicon hns3
+M: Wei Hu (Xavier) <xavier.huwei@huawei.com>
+M: Min Hu (Connor) <humin29@huawei.com>
+F: drivers/net/hns3/
+F: doc/guides/nics/hns3.rst
+F: doc/guides/nics/features/hns3.ini
+
 Intel e1000
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
diff --git a/config/common_armv8a_linux b/config/common_armv8a_linux
index 481712e..bf455c5 100644
--- a/config/common_armv8a_linux
+++ b/config/common_armv8a_linux
@@ -37,3 +37,8 @@ CONFIG_RTE_LIBRTE_AVP_PMD=n
 CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV=n
 
 CONFIG_RTE_SCHED_VECTOR=n
+
+#
+# Hisilicon HNS3 PMD driver
+#
+CONFIG_RTE_LIBRTE_HNS3_PMD=y
diff --git a/config/common_base b/config/common_base
index 8ef75c2..71a2c33 100644
--- a/config/common_base
+++ b/config/common_base
@@ -282,6 +282,11 @@ CONFIG_RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC=n
 CONFIG_RTE_LIBRTE_HINIC_PMD=n
 
 #
+# Compile burst-oriented HNS3 PMD driver
+#
+CONFIG_RTE_LIBRTE_HNS3_PMD=n
+
+#
 # Compile burst-oriented IXGBE PMD driver
 #
 CONFIG_RTE_LIBRTE_IXGBE_PMD=y
diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang b/config/defconfig_arm64-armv8a-linuxapp-clang
index d3b4dad..c73f5fb 100644
--- a/config/defconfig_arm64-armv8a-linuxapp-clang
+++ b/config/defconfig_arm64-armv8a-linuxapp-clang
@@ -6,3 +6,5 @@
 
 CONFIG_RTE_TOOLCHAIN="clang"
 CONFIG_RTE_TOOLCHAIN_CLANG=y
+
+CONFIG_RTE_LIBRTE_HNS3_PMD=n
diff --git a/doc/guides/nics/features/hns3.ini b/doc/guides/nics/features/hns3.ini
new file mode 100644
index 0000000..d38d35e
--- /dev/null
+++ b/doc/guides/nics/features/hns3.ini
@@ -0,0 +1,38 @@
+;
+; Supported features of the 'hns3' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Link status          = Y
+MTU update           = Y
+Jumbo frame          = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+DCB                  = Y
+VLAN filter          = Y
+Flow director        = Y
+Flow control         = Y
+Flow API             = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum    = Y
+Inner L4 checksum    = Y
+Basic stats          = Y
+Extended stats       = Y
+Stats per queue      = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+BSD nic_uio          = N
+x86-64               = N
+ARMv8                = Y
+ARMv7                = N
+x86-32               = N
+Power8               = N
diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
new file mode 100644
index 0000000..c9d0253
--- /dev/null
+++ b/doc/guides/nics/hns3.rst
@@ -0,0 +1,55 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018-2019 Hisilicon Limited.
+
+HNS3 Poll Mode Driver
+=====================
+
+The Hisilicon Network Subsystem is a long term evolution IP which is
+supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
+
+The HNS3 PMD (librte_pmd_hns3) provides poll mode driver support
+for the hns3 (Hisilicon Network Subsystem 3) network engine.
+
+Features
+--------
+
+Features of the HNS3 PMD are:
+
+- Arch support: ARMv8.
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- VLAN stripping
+- NUMA support
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_HNS3_PMD`` (default ``y``)
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+Build with clang is not supported yet.
+Currently, only ARMv8 architecture is supported.
\ No newline at end of file
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 5767fdf..1770d8b 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_HINIC_PMD) += hinic
+DIRS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
 DIRS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf
 DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
diff --git a/drivers/net/hns3/Makefile b/drivers/net/hns3/Makefile
new file mode 100644
index 0000000..a2e6502
--- /dev/null
+++ b/drivers/net/hns3/Makefile
@@ -0,0 +1,43 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2019 Hisilicon Limited.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_hns3.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API -fsigned-char
+
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
+LDLIBS += -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_hns3_version.map
+
+LIBABIVER := 2
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_ethdev_vf.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_cmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_mbx.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_rss.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_flow.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_fdir.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_intr.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_stats.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_regs.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_dcb.c
+SRCS-$(CONFIG_RTE_LIBRTE_HNS3_PMD) += hns3_mp.c
+
+# install this header file
+SYMLINK-$(CONFIG_RTE_LIBRTE_HNS3_PMD)-include := hns3_ethdev.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/hns3/meson.build b/drivers/net/hns3/meson.build
new file mode 100644
index 0000000..ad301a5
--- /dev/null
+++ b/drivers/net/hns3/meson.build
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2019 Hisilicon Limited
+
+sources = files('hns3_cmd.c',
+	'hns3_dcb.c',
+	'hns3_intr.c',
+	'hns3_ethdev.c',
+	'hns3_ethdev_vf.c',
+	'hns3_fdir.c',
+	'hns3_flow.c',
+	'hns3_mbx.c',
+	'hns3_regs.c',
+	'hns3_rss.c',
+	'hns3_rxtx.c',
+	'hns3_stats.c',
+	'hns3_mp.c')
+deps += ['hash']
+
+cflags += '-DALLOW_EXPERIMENTAL_API'
diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
new file mode 100644
index 0000000..3aef967
--- /dev/null
+++ b/drivers/net/hns3/rte_pmd_hns3_version.map
@@ -0,0 +1,3 @@
+DPDK_19.08 {
+	 local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 513f19b..eb1c6b6 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -18,6 +18,7 @@ drivers = ['af_packet',
 	'failsafe',
 	'fm10k', 'i40e',
 	'hinic',
+	'hns3',
 	'iavf',
 	'ice',
 	'ifc',
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index ba5c39e..17b9916 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -172,6 +172,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_HINIC_PMD)      += -lrte_pmd_hinic
+_LDLIBS-$(CONFIG_RTE_LIBRTE_HNS3_PMD)       += -lrte_pmd_hns3
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IAVF_PMD)       += -lrte_pmd_iavf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
@ 2019-08-23 14:08   ` Jerin Jacob Kollanukkaran
  2019-08-30  3:22     ` Wei Hu (Xavier)
  2019-08-30 14:57     ` Ferruh Yigit
  2019-08-30  6:16   ` Stephen Hemminger
                     ` (4 subsequent siblings)
  5 siblings, 2 replies; 75+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-08-23 14:08 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Wei Hu (Xavier)
> Sent: Friday, August 23, 2019 7:17 PM
> To: dev@dpdk.org
> Cc: linuxarm@huawei.com; xavier_huwei@163.com;
> liudongdong3@huawei.com; forest.zhouchang@huawei.com
> Subject: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
> 
> This patch add build related files for hns3 PMD driver.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
> +# Hisilicon HNS3 PMD driver
> +#
> +CONFIG_RTE_LIBRTE_HNS3_PMD=y

# Please add meson support
# Move build infra to the first patch
# See git log drivers/net/octeontx2 as example


> diff --git a/config/common_base b/config/common_base
> index 8ef75c2..71a2c33 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -282,6 +282,11 @@
> CONFIG_RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC=n
>  CONFIG_RTE_LIBRTE_HINIC_PMD=n
> 
>  #
> +# Compile burst-oriented HNS3 PMD driver
> +#
> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
> +
> +#
>  # Compile burst-oriented IXGBE PMD driver
>  #
>  CONFIG_RTE_LIBRTE_IXGBE_PMD=y
> diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang
> b/config/defconfig_arm64-armv8a-linuxapp-clang
> index d3b4dad..c73f5fb 100644
> --- a/config/defconfig_arm64-armv8a-linuxapp-clang
> +++ b/config/defconfig_arm64-armv8a-linuxapp-clang
> @@ -6,3 +6,5 @@
> 
>  CONFIG_RTE_TOOLCHAIN="clang"
>  CONFIG_RTE_TOOLCHAIN_CLANG=y
> +
> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
> diff --git a/doc/guides/nics/features/hns3.ini
> b/doc/guides/nics/features/hns3.ini
> new file mode 100644
> index 0000000..d38d35e
> --- /dev/null
> +++ b/doc/guides/nics/features/hns3.ini
> @@ -0,0 +1,38 @@
> +;
> +; Supported features of the 'hns3' network poll mode driver.

Add doc changes when driver feature gets added.
# See git log drivers/net/octeontx2 as example

> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Link status          = Y
> +MTU update           = Y
> +Jumbo frame          = Y
> +Promiscuous mode     = Y
> +Allmulticast mode    = Y
> diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
> new file mode 100644
> index 0000000..c9d0253
> --- /dev/null
> +++ b/doc/guides/nics/hns3.rst
> @@ -0,0 +1,55 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018-2019 Hisilicon Limited.
> +
> +HNS3 Poll Mode Driver
> +===============================
> +
> +The Hisilicon Network Subsystem is a long term evolution IP which is
> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
> +
> +The HNS3 PMD (librte_pmd_hns3) provides poll mode driver support
> +for hns3(Hisilicon Network Subsystem 3) network engine.
> +
> +Features
> +--------
> +
> +Features of the HNS3 PMD are:
> +
> +- Arch support: ARMv8.

Is it an integrated NIC controller? Why is it supported only on ARMv8?
I am asking because enabling CONFIG_RTE_LIBRTE_HNS3_PMD=y only on arm64
creates a case where the build fails for arm64 but passes for x86. I would
like to avoid such disparity. If the build passes on x86, enable it in the
common config, not in the arm64 config.
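
To make the disparity concrete, a hypothetical snippet (not from this
series): with the PMD enabled only in the arm64 config, even plain
portable C like this is never compiled by an x86 build, so a breakage
in it fails only the arm64 CI:

#include <rte_ethdev_driver.h>

static inline uint16_t
hns3_example_port(const struct rte_eth_dev *dev)
{
	/* Any bug introduced here passes x86 builds unnoticed, because
	 * the whole driver is skipped unless CONFIG_RTE_LIBRTE_HNS3_PMD
	 * is enabled for that target. */
	return dev->data->port_id;
}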


> +- Multiple queues for TX and RX
> +- Receive Side Scaling (RSS)
> +- Packet type information
> +- Checksum offload
> +- Promiscuous mode
> +- Multicast mode
> +- Port hardware statistics
> +- Jumbo frames
> +- Link state information
> +- VLAN stripping


> +cflags += '-DALLOW_EXPERIMENTAL_API'
> diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map
> b/drivers/net/hns3/rte_pmd_hns3_version.map
> new file mode 100644
> index 0000000..3aef967
> --- /dev/null
> +++ b/drivers/net/hns3/rte_pmd_hns3_version.map
> @@ -0,0 +1,3 @@
> +DPDK_19.08 {

Change to 19.11


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation Wei Hu (Xavier)
@ 2019-08-23 15:42   ` Aaron Conole
  2019-08-30 15:13   ` Ferruh Yigit
  1 sibling, 0 replies; 75+ messages in thread
From: Aaron Conole @ 2019-08-23 15:42 UTC (permalink / raw)
  To: Wei Hu (Xavier)
  Cc: dev, linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

"Wei Hu (Xavier)" <xavier.huwei@huawei.com> writes:

> This patch adds queue related operation, package sending and
> receiving function codes.
>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
>  drivers/net/hns3/hns3_dcb.c       |    1 +
>  drivers/net/hns3/hns3_ethdev.c    |    8 +
>  drivers/net/hns3/hns3_ethdev_vf.c |    7 +
>  drivers/net/hns3/hns3_rxtx.c      | 1343 +++++++++++++++++++++++++++++++++++++
>  drivers/net/hns3/hns3_rxtx.h      |  287 ++++++++
>  5 files changed, 1646 insertions(+)
>  create mode 100644 drivers/net/hns3/hns3_rxtx.c
>  create mode 100644 drivers/net/hns3/hns3_rxtx.h
>
> diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
> index 6fb97de..b86a4b0 100644
> --- a/drivers/net/hns3/hns3_dcb.c
> +++ b/drivers/net/hns3/hns3_dcb.c
> @@ -13,6 +13,7 @@
>  #include <rte_memcpy.h>
>  #include <rte_spinlock.h>
>  
> +#include "hns3_rxtx.h"
>  #include "hns3_logs.h"
>  #include "hns3_cmd.h"
>  #include "hns3_mbx.h"
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index df4749e..73b34d2 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -35,6 +35,7 @@
>  #include "hns3_fdir.h"
>  #include "hns3_ethdev.h"
>  #include "hns3_logs.h"
> +#include "hns3_rxtx.h"
>  #include "hns3_regs.h"
>  #include "hns3_dcb.h"
>  
> @@ -3603,6 +3604,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.mtu_set            = hns3_dev_mtu_set,
>  	.dev_infos_get          = hns3_dev_infos_get,
>  	.fw_version_get         = hns3_fw_version_get,
> +	.rx_queue_setup         = hns3_rx_queue_setup,
> +	.tx_queue_setup         = hns3_tx_queue_setup,
> +	.rx_queue_release       = hns3_dev_rx_queue_release,
> +	.tx_queue_release       = hns3_dev_tx_queue_release,
>  	.flow_ctrl_get          = hns3_flow_ctrl_get,
>  	.flow_ctrl_set          = hns3_flow_ctrl_set,
>  	.priority_flow_ctrl_set = hns3_priority_flow_ctrl_set,
> @@ -3645,6 +3650,7 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
>  	/* initialize flow filter lists */
>  	hns3_filterlist_init(eth_dev);
>  
> +	hns3_set_rxtx_function(eth_dev);
>  	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>  		return 0;
>  
> @@ -3698,6 +3704,8 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
>  
>  err_init_pf:
>  	eth_dev->dev_ops = NULL;
> +	eth_dev->rx_pkt_burst = NULL;
> +	eth_dev->tx_pkt_burst = NULL;
>  	rte_free(eth_dev->process_private);
>  	eth_dev->process_private = NULL;
>  	return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index 43b27ed..abab2b4 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -35,6 +35,7 @@
>  #include "hns3_fdir.h"
>  #include "hns3_ethdev.h"
>  #include "hns3_logs.h"
> +#include "hns3_rxtx.h"
>  #include "hns3_regs.h"
>  #include "hns3_dcb.h"
>  
> @@ -1133,6 +1134,7 @@ hns3vf_dev_start(struct rte_eth_dev *eth_dev)
>  	}
>  	hw->adapter_state = HNS3_NIC_STARTED;
>  	rte_spinlock_unlock(&hw->lock);
> +	hns3_set_rxtx_function(eth_dev);
>  	return 0;
>  }
>  
> @@ -1142,6 +1144,10 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
>  	.dev_close          = hns3vf_dev_close,
>  	.mtu_set            = hns3vf_dev_mtu_set,
>  	.dev_infos_get      = hns3vf_dev_infos_get,
> +	.rx_queue_setup     = hns3_rx_queue_setup,
> +	.tx_queue_setup     = hns3_tx_queue_setup,
> +	.rx_queue_release   = hns3_dev_rx_queue_release,
> +	.tx_queue_release   = hns3_dev_tx_queue_release,
>  	.dev_configure      = hns3vf_dev_configure,
>  	.mac_addr_add       = hns3vf_add_mac_addr,
>  	.mac_addr_remove    = hns3vf_remove_mac_addr,
> @@ -1179,6 +1185,7 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
>  	/* initialize flow filter lists */
>  	hns3_filterlist_init(eth_dev);
>  
> +	hns3_set_rxtx_function(eth_dev);
>  	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>  		return 0;
>  
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> new file mode 100644
> index 0000000..8a3ca4f
> --- /dev/null
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -0,0 +1,1343 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2019 Hisilicon Limited.
> + */
> +
> +#include <stdarg.h>
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <unistd.h>
> +#include <inttypes.h>
> +#include <rte_bus_pci.h>
> +#include <rte_byteorder.h>
> +#include <rte_common.h>
> +#include <rte_cycles.h>
> +#include <rte_debug.h>
> +#include <rte_dev.h>
> +#include <rte_eal.h>
> +#include <rte_ether.h>
> +#include <rte_ethdev_driver.h>
> +#include <rte_io.h>
> +#include <rte_malloc.h>
> +#include <rte_pci.h>
> +#include <rte_spinlock.h>
> +
> +#include "hns3_cmd.h"
> +#include "hns3_mbx.h"
> +#include "hns3_rss.h"
> +#include "hns3_fdir.h"
> +#include "hns3_ethdev.h"
> +#include "hns3_rxtx.h"
> +#include "hns3_regs.h"
> +#include "hns3_logs.h"
> +
> +#define HNS3_CFG_DESC_NUM(num)	((num) / 8 - 1)
> +#define DEFAULT_RX_FREE_THRESH	16
> +
> +static void
> +hns3_rx_queue_release_mbufs(struct hns3_rx_queue *rxq)
> +{
> +	uint16_t i;
> +
> +	if (rxq->sw_ring) {
> +		for (i = 0; i < rxq->nb_rx_desc; i++) {
> +			if (rxq->sw_ring[i].mbuf) {
> +				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
> +				rxq->sw_ring[i].mbuf = NULL;
> +			}
> +		}
> +	}
> +}
> +
> +static void
> +hns3_tx_queue_release_mbufs(struct hns3_tx_queue *txq)
> +{
> +	uint16_t i;
> +
> +	if (txq->sw_ring) {
> +		for (i = 0; i < txq->nb_tx_desc; i++) {
> +			if (txq->sw_ring[i].mbuf) {
> +				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
> +				txq->sw_ring[i].mbuf = NULL;
> +			}
> +		}
> +	}
> +}
> +
> +static void
> +hns3_rx_queue_release(void *queue)
> +{
> +	struct hns3_rx_queue *rxq = queue;
> +	if (rxq) {
> +		hns3_rx_queue_release_mbufs(rxq);
> +		if (rxq->mz)
> +			rte_memzone_free(rxq->mz);
> +		if (rxq->sw_ring)
> +			rte_free(rxq->sw_ring);
> +		rte_free(rxq);
> +	}
> +}
> +
> +static void
> +hns3_tx_queue_release(void *queue)
> +{
> +	struct hns3_tx_queue *txq = queue;
> +	if (txq) {
> +		hns3_tx_queue_release_mbufs(txq);
> +		if (txq->mz)
> +			rte_memzone_free(txq->mz);
> +		if (txq->sw_ring)
> +			rte_free(txq->sw_ring);
> +		rte_free(txq);
> +	}
> +}
> +
> +void
> +hns3_dev_rx_queue_release(void *queue)
> +{
> +	struct hns3_rx_queue *rxq = queue;
> +	struct hns3_adapter *hns;
> +
> +	if (rxq == NULL)
> +		return;
> +
> +	hns = rxq->hns;
> +	rte_spinlock_lock(&hns->hw.lock);
> +	hns3_rx_queue_release(queue);
> +	rte_spinlock_unlock(&hns->hw.lock);
> +}
> +
> +void
> +hns3_dev_tx_queue_release(void *queue)
> +{
> +	struct hns3_tx_queue *txq = queue;
> +	struct hns3_adapter *hns;
> +
> +	if (txq == NULL)
> +		return;
> +
> +	hns = txq->hns;
> +	rte_spinlock_lock(&hns->hw.lock);
> +	hns3_tx_queue_release(queue);
> +	rte_spinlock_unlock(&hns->hw.lock);
> +}
> +
> +void
> +hns3_free_all_queues(struct rte_eth_dev *dev)
> +{
> +	uint16_t i;
> +
> +	if (dev->data->rx_queues)
> +		for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +			hns3_rx_queue_release(dev->data->rx_queues[i]);
> +			dev->data->rx_queues[i] = NULL;
> +		}
> +
> +	if (dev->data->tx_queues)
> +		for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +			hns3_tx_queue_release(dev->data->tx_queues[i]);
> +			dev->data->tx_queues[i] = NULL;
> +		}
> +}
> +
> +static int
> +hns3_alloc_rx_queue_mbufs(struct hns3_hw *hw, struct hns3_rx_queue *rxq)
> +{
> +	struct rte_mbuf *mbuf;
> +	uint64_t dma_addr;
> +	uint16_t i;
> +
> +	for (i = 0; i < rxq->nb_rx_desc; i++) {
> +		mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
> +		if (unlikely(mbuf == NULL)) {
> +			hns3_err(hw, "Failed to allocate RXD[%d] for rx queue!",
> +				 i);
> +			hns3_rx_queue_release_mbufs(rxq);
> +			return -ENOMEM;
> +		}
> +
> +		rte_mbuf_refcnt_set(mbuf, 1);
> +		mbuf->next = NULL;
> +		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
> +		mbuf->nb_segs = 1;
> +		mbuf->port = rxq->port_id;
> +
> +		rxq->sw_ring[i].mbuf = mbuf;
> +		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
> +		rxq->rx_ring[i].addr = dma_addr;
> +		rxq->rx_ring[i].rx.bd_base_info = 0;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +hns3_buf_size2type(uint32_t buf_size)
> +{
> +	int bd_size_type;
> +
> +	switch (buf_size) {
> +	case 512:
> +		bd_size_type = HNS3_BD_SIZE_512_TYPE;
> +		break;
> +	case 1024:
> +		bd_size_type = HNS3_BD_SIZE_1024_TYPE;
> +		break;
> +	case 4096:
> +		bd_size_type = HNS3_BD_SIZE_4096_TYPE;
> +		break;
> +	default:
> +		bd_size_type = HNS3_BD_SIZE_2048_TYPE;
> +	}
> +
> +	return bd_size_type;
> +}
> +
> +static void
> +hns3_init_rx_queue_hw(struct hns3_rx_queue *rxq)
> +{
> +	uint32_t rx_buf_len = rxq->rx_buf_len;
> +	uint64_t dma_addr = rxq->rx_ring_phys_addr;
> +
> +	hns3_write_dev(rxq, HNS3_RING_RX_BASEADDR_L_REG, (uint32_t)dma_addr);
> +	hns3_write_dev(rxq, HNS3_RING_RX_BASEADDR_H_REG,
> +		       (uint32_t)((dma_addr >> 31) >> 1));
> +
> +	hns3_write_dev(rxq, HNS3_RING_RX_BD_LEN_REG,
> +		       hns3_buf_size2type(rx_buf_len));
> +	hns3_write_dev(rxq, HNS3_RING_RX_BD_NUM_REG,
> +		       HNS3_CFG_DESC_NUM(rxq->nb_rx_desc));
> +}
> +
> +static void
> +hns3_init_tx_queue_hw(struct hns3_tx_queue *txq)
> +{
> +	uint64_t dma_addr = txq->tx_ring_phys_addr;
> +
> +	hns3_write_dev(txq, HNS3_RING_TX_BASEADDR_L_REG, (uint32_t)dma_addr);
> +	hns3_write_dev(txq, HNS3_RING_TX_BASEADDR_H_REG,
> +		       (uint32_t)((dma_addr >> 31) >> 1));
> +
> +	hns3_write_dev(txq, HNS3_RING_TX_BD_NUM_REG,
> +		       HNS3_CFG_DESC_NUM(txq->nb_tx_desc));
> +}
> +
> +static void
> +hns3_enable_all_queues(struct hns3_hw *hw, bool en)
> +{
> +	struct hns3_rx_queue *rxq;
> +	struct hns3_tx_queue *txq;
> +	uint32_t rcb_reg;
> +	int i;
> +
> +	for (i = 0; i < hw->data->nb_rx_queues; i++) {
> +		rxq = hw->data->rx_queues[i];
> +		txq = hw->data->tx_queues[i];
> +		if (rxq == NULL || txq == NULL ||
> +		    (en && (rxq->rx_deferred_start || txq->tx_deferred_start)))
> +			continue;
> +		rcb_reg = hns3_read_dev(rxq, HNS3_RING_EN_REG);
> +		if (en)
> +			rcb_reg |= BIT(HNS3_RING_EN_B);
> +		else
> +			rcb_reg &= ~BIT(HNS3_RING_EN_B);
> +		hns3_write_dev(rxq, HNS3_RING_EN_REG, rcb_reg);
> +	}
> +}
> +
> +static int
> +hns3_tqp_enable(struct hns3_hw *hw, uint16_t queue_id, bool enable)
> +{
> +	struct hns3_cfg_com_tqp_queue_cmd *req;
> +	struct hns3_cmd_desc desc;
> +	int ret;
> +
> +	req = (struct hns3_cfg_com_tqp_queue_cmd *)desc.data;
> +
> +	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CFG_COM_TQP_QUEUE, false);
> +	req->tqp_id = rte_cpu_to_le_16(queue_id & HNS3_RING_ID_MASK);
> +	req->stream_id = 0;
> +	hns3_set_bit(req->enable, HNS3_TQP_ENABLE_B, enable ? 1 : 0);
> +
> +	ret = hns3_cmd_send(hw, &desc, 1);
> +	if (ret)
> +		hns3_err(hw, "TQP enable fail, ret = %d", ret);
> +
> +	return ret;
> +}
> +
> +static int
> +hns3_send_reset_tqp_cmd(struct hns3_hw *hw, uint16_t queue_id, bool enable)
> +{
> +	struct hns3_reset_tqp_queue_cmd *req;
> +	struct hns3_cmd_desc desc;
> +	int ret;
> +
> +	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RESET_TQP_QUEUE, false);
> +
> +	req = (struct hns3_reset_tqp_queue_cmd *)desc.data;
> +	req->tqp_id = rte_cpu_to_le_16(queue_id & HNS3_RING_ID_MASK);
> +	hns3_set_bit(req->reset_req, HNS3_TQP_RESET_B, enable ? 1 : 0);
> +
> +	ret = hns3_cmd_send(hw, &desc, 1);
> +	if (ret)
> +		hns3_err(hw, "Send tqp reset cmd error, ret = %d", ret);
> +
> +	return ret;
> +}
> +
> +static int
> +hns3_get_reset_status(struct hns3_hw *hw, uint16_t queue_id)
> +{
> +	struct hns3_reset_tqp_queue_cmd *req;
> +	struct hns3_cmd_desc desc;
> +	int ret;
> +
> +	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RESET_TQP_QUEUE, true);
> +
> +	req = (struct hns3_reset_tqp_queue_cmd *)desc.data;
> +	req->tqp_id = rte_cpu_to_le_16(queue_id & HNS3_RING_ID_MASK);
> +
> +	ret = hns3_cmd_send(hw, &desc, 1);
> +	if (ret) {
> +		hns3_err(hw, "Get reset status error, ret =%d", ret);
> +		return ret;
> +	}
> +
> +	return hns3_get_bit(req->ready_to_reset, HNS3_TQP_RESET_B);
> +}
> +
> +static int
> +hns3_reset_tqp(struct hns3_hw *hw, uint16_t queue_id)
> +{
> +#define HNS3_TQP_RESET_TRY_MS	200
> +	uint64_t end;
> +	int reset_status;
> +	int ret;
> +
> +	ret = hns3_tqp_enable(hw, queue_id, false);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In current version VF is not supported when PF is taken over by DPDK,
> +	 * all task queue pairs are mapped to PF function, so PF's queue id is
> +	 * equals to the global queue id in PF range.
> +	 */
> +	ret = hns3_send_reset_tqp_cmd(hw, queue_id, true);
> +	if (ret) {
> +		hns3_err(hw, "Send reset tqp cmd fail, ret = %d", ret);
> +		return ret;
> +	}
> +	ret = -ETIMEDOUT;
> +	end = get_timeofday_ms() + HNS3_TQP_RESET_TRY_MS;
> +	do {
> +		/* Wait for tqp hw reset */
> +		rte_delay_ms(HNS3_POLL_RESPONE_MS);
> +		reset_status = hns3_get_reset_status(hw, queue_id);
> +		if (reset_status) {
> +			ret = 0;
> +			break;
> +		}
> +	} while (get_timeofday_ms() < end);
> +
> +	if (ret) {
> +		hns3_err(hw, "Reset TQP fail, ret = %d", ret);
> +		return ret;
> +	}
> +
> +	ret = hns3_send_reset_tqp_cmd(hw, queue_id, false);
> +	if (ret)
> +		hns3_err(hw, "Failed to deassert soft reset, ret = %d", ret);
> +
> +	return ret;
> +}
> +
> +static int
> +hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id)
> +{
> +	uint8_t msg_data[2];
> +	int ret;
> +
> +	/* Disable the VF's queue before sending the queue reset msg to PF */
> +	ret = hns3_tqp_enable(hw, queue_id, false);
> +	if (ret)
> +		return ret;
> +
> +	memcpy(msg_data, &queue_id, sizeof(uint16_t));
> +
> +	return hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data,
> +				 sizeof(msg_data), true, NULL, 0);
> +}
> +
> +static int
> +hns3_reset_queue(struct hns3_adapter *hns, uint16_t queue_id)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	if (hns->is_vf)
> +		return hns3vf_reset_tqp(hw, queue_id);
> +	else
> +		return hns3_reset_tqp(hw, queue_id);
> +}
> +
> +int
> +hns3_reset_all_queues(struct hns3_adapter *hns)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	int ret;
> +	uint16_t i;
> +
> +	for (i = 0; i < hw->data->nb_rx_queues; i++) {
> +		ret = hns3_reset_queue(hns, i);
> +		if (ret) {
> +			hns3_err(hw, "Failed to reset No.%d queue: %d", i, ret);
> +			return ret;
> +		}
> +	}
> +	return 0;
> +}
> +
> +static int
> +hns3_dev_rx_queue_start(struct hns3_adapter *hns, uint16_t idx)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	struct hns3_rx_queue *rxq;
> +	int ret;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	rxq = hw->data->rx_queues[idx];
> +
> +	ret = hns3_alloc_rx_queue_mbufs(hw, rxq);
> +	if (ret) {
> +		hns3_err(hw, "Failed to alloc mbuf for No.%d rx queue: %d",
> +			    idx, ret);
> +		return ret;
> +	}
> +
> +	rxq->next_to_use = 0;
> +	rxq->next_to_clean = 0;
> +	hns3_init_rx_queue_hw(rxq);
> +
> +	return 0;
> +}
> +
> +static void
> +hns3_dev_tx_queue_start(struct hns3_adapter *hns, uint16_t idx)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	struct hns3_tx_queue *txq;
> +	struct hns3_desc *desc;
> +	int i;
> +
> +	txq = hw->data->tx_queues[idx];
> +
> +	/* Clear tx bd */
> +	desc = txq->tx_ring;
> +	for (i = 0; i < txq->nb_tx_desc; i++) {
> +		desc->tx.tp_fe_sc_vld_ra_ri = 0;
> +		desc++;
> +	}
> +
> +	txq->next_to_use = 0;
> +	txq->next_to_clean = 0;
> +	txq->tx_bd_ready   = txq->nb_tx_desc;
> +	hns3_init_tx_queue_hw(txq);
> +}
> +
> +static void
> +hns3_init_tx_ring_tc(struct hns3_adapter *hns)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	struct hns3_tx_queue *txq;
> +	int i, num;
> +
> +	for (i = 0; i < HNS3_MAX_TC_NUM; i++) {
> +		struct hns3_tc_queue_info *tc_queue = &hw->tc_queue[i];
> +		int j;
> +
> +		if (!tc_queue->enable)
> +			continue;
> +
> +		for (j = 0; j < tc_queue->tqp_count; j++) {
> +			num = tc_queue->tqp_offset + j;
> +			txq = hw->data->tx_queues[num];
> +			if (txq == NULL)
> +				continue;
> +
> +			hns3_write_dev(txq, HNS3_RING_TX_TC_REG, tc_queue->tc);
> +		}
> +	}
> +}
> +
> +int
> +hns3_start_queues(struct hns3_adapter *hns, bool reset_queue)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	struct rte_eth_dev_data *dev_data = hw->data;
> +	struct hns3_rx_queue *rxq;
> +	struct hns3_tx_queue *txq;
> +	int ret;
> +	int i;
> +	int j;
> +
> +	/* Initialize RSS for queues */
> +	ret = hns3_config_rss(hns);
> +	if (ret) {
> +		hns3_err(hw, "Failed to configure rss %d", ret);
> +		return ret;
> +	}
> +
> +	if (reset_queue) {
> +		ret = hns3_reset_all_queues(hns);
> +		if (ret) {
> +			hns3_err(hw, "Failed to reset all queues %d", ret);
> +			return ret;
> +		}
> +	}
> +
> +	/*
> +	 * hip08 hardware requires the number of rx and tx queues to be equal.
> +	 * The two values are checked in the .dev_configure callback, so here
> +	 * we can assume that they are equal.
> +	 */
> +	for (i = 0; i < hw->data->nb_rx_queues; i++) {
> +		rxq = dev_data->rx_queues[i];
> +		txq = dev_data->tx_queues[i];
> +		if (rxq == NULL || txq == NULL || rxq->rx_deferred_start ||
> +		    txq->tx_deferred_start)
> +			continue;
> +
> +		ret = hns3_dev_rx_queue_start(hns, i);
> +		if (ret) {
> +			hns3_err(hw, "Failed to start No.%d rx queue: %d", i,
> +				 ret);
> +			goto out;
> +		}
> +		hns3_dev_tx_queue_start(hns, i);
> +	}
> +	hns3_init_tx_ring_tc(hns);
> +
> +	hns3_enable_all_queues(hw, true);
> +	return 0;
> +
> +out:
> +	for (j = 0; j < i; j++) {
> +		rxq = dev_data->rx_queues[j];
> +		hns3_rx_queue_release_mbufs(rxq);
> +	}
> +
> +	return ret;
> +}
> +
> +int
> +hns3_stop_queues(struct hns3_adapter *hns, bool reset_queue)
> +{
> +	struct hns3_hw *hw = &hns->hw;
> +	int ret;
> +
> +	hns3_enable_all_queues(hw, false);
> +	if (reset_queue) {
> +		ret = hns3_reset_all_queues(hns);
> +		if (ret) {
> +			hns3_err(hw, "Failed to reset all queues %d", ret);
> +			return ret;
> +		}
> +	}
> +	return 0;
> +}
> +
> +void
> +hns3_dev_release_mbufs(struct hns3_adapter *hns)
> +{
> +	struct rte_eth_dev_data *dev_data = hns->hw.data;
> +	struct hns3_rx_queue *rxq;
> +	struct hns3_tx_queue *txq;
> +	int i;
> +
> +	if (dev_data->rx_queues)
> +		for (i = 0; i < dev_data->nb_rx_queues; i++) {
> +			rxq = dev_data->rx_queues[i];
> +			if (rxq == NULL || rxq->rx_deferred_start)
> +				continue;
> +			hns3_rx_queue_release_mbufs(rxq);
> +		}
> +
> +	if (dev_data->tx_queues)
> +		for (i = 0; i < dev_data->nb_tx_queues; i++) {
> +			txq = dev_data->tx_queues[i];
> +			if (txq == NULL || txq->tx_deferred_start)
> +				continue;
> +			hns3_tx_queue_release_mbufs(txq);
> +		}
> +}
> +
> +int
> +hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
> +		    unsigned int socket_id, const struct rte_eth_rxconf *conf,
> +		    struct rte_mempool *mp)
> +{
> +	struct hns3_adapter *hns = dev->data->dev_private;
> +	const struct rte_memzone *rx_mz;
> +	struct hns3_hw *hw = &hns->hw;
> +	struct hns3_rx_queue *rxq;
> +	unsigned int desc_size = sizeof(struct hns3_desc);
> +	unsigned int rx_desc;
> +	int rx_entry_len;
> +
> +	if (dev->data->dev_started) {
> +		hns3_err(hw, "rx_queue_setup after dev_start not supported");
> +		return -EINVAL;
> +	}
> +
> +	if (nb_desc > HNS3_MAX_RING_DESC || nb_desc < HNS3_MIN_RING_DESC ||
> +	    nb_desc % HNS3_ALIGN_RING_DESC) {
> +		hns3_err(hw, "Number (%u) of rx descriptors is invalid",
> +			 nb_desc);
> +		return -EINVAL;
> +	}
> +
> +	if (dev->data->rx_queues[idx]) {
> +		hns3_rx_queue_release(dev->data->rx_queues[idx]);
> +		dev->data->rx_queues[idx] = NULL;
> +	}
> +
> +	rxq = rte_zmalloc_socket("hns3 RX queue", sizeof(struct hns3_rx_queue),
> +				 RTE_CACHE_LINE_SIZE, socket_id);
> +	if (rxq == NULL) {
> +		hns3_err(hw, "Failed to allocate memory for rx queue!");
> +		return -ENOMEM;
> +	}
> +
> +	rxq->hns = hns;
> +	rxq->mb_pool = mp;
> +	rxq->nb_rx_desc = nb_desc;
> +	rxq->queue_id = idx;
> +	if (conf->rx_free_thresh <= 0)
> +		rxq->rx_free_thresh = DEFAULT_RX_FREE_THRESH;
> +	else
> +		rxq->rx_free_thresh = conf->rx_free_thresh;
> +	rxq->rx_deferred_start = conf->rx_deferred_start;
> +
> +	rx_entry_len = sizeof(struct hns3_entry) * rxq->nb_rx_desc;
> +	rxq->sw_ring = rte_zmalloc_socket("hns3 RX sw ring", rx_entry_len,
> +					  RTE_CACHE_LINE_SIZE, socket_id);
> +	if (rxq->sw_ring == NULL) {
> +		hns3_err(hw, "Failed to allocate memory for rx sw ring!");
> +		hns3_rx_queue_release(rxq);
> +		return -ENOMEM;
> +	}
> +
> +	/* Allocate rx ring hardware descriptors. */
> +	rx_desc = rxq->nb_rx_desc * desc_size;
> +	rx_mz = rte_eth_dma_zone_reserve(dev, "rx_ring", idx, rx_desc,
> +					 HNS3_RING_BASE_ALIGN, socket_id);
> +	if (rx_mz == NULL) {
> +		hns3_err(hw, "Failed to reserve DMA memory for No.%d rx ring!",
> +			 idx);
> +		hns3_rx_queue_release(rxq);
> +		return -ENOMEM;
> +	}
> +	rxq->mz = rx_mz;
> +	rxq->rx_ring = (struct hns3_desc *)rx_mz->addr;
> +	rxq->rx_ring_phys_addr = rx_mz->iova;
> +
> +	hns3_dbg(hw, "No.%d rx descriptors iova 0x%lx", idx,
> +		 rxq->rx_ring_phys_addr);
> +
> +	rxq->next_to_use = 0;
> +	rxq->next_to_clean = 0;
> +	rxq->nb_rx_hold = 0;
> +	rxq->pkt_first_seg = NULL;
> +	rxq->pkt_last_seg = NULL;
> +	rxq->port_id = dev->data->port_id;
> +	rxq->configured = true;
> +	rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
> +				idx * HNS3_TQP_REG_SIZE);
> +	rxq->rx_buf_len = hw->rx_buf_len;
> +	rxq->non_vld_descs = 0;
> +	rxq->l2_errors = 0;
> +	rxq->csum_erros = 0;
> +	rxq->pkt_len_errors = 0;
> +	rxq->errors = 0;
> +
> +	rte_spinlock_lock(&hw->lock);
> +	dev->data->rx_queues[idx] = rxq;
> +	rte_spinlock_unlock(&hw->lock);
> +
> +	return 0;
> +}
> +
> +static inline uint32_t
> +rxd_pkt_info_to_pkt_type(uint32_t pkt_info)
> +{
> +#define HNS3_L2TBL_NUM	4
> +#define HNS3_L3TBL_NUM	16
> +#define HNS3_L4TBL_NUM	16
> +	uint32_t pkt_type = 0;
> +	uint32_t l2id, l3id, l4id;
> +
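> +	/*
> +	 * Lookup tables translating the L2/L3/L4 id fields reported by
> +	 * hardware in the BD into DPDK packet type flags.
> +	 */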
> +	static const uint32_t l2table[HNS3_L2TBL_NUM] = {
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L2_ETHER_VLAN,
> +		RTE_PTYPE_L2_ETHER_QINQ,
> +		0
> +	};
> +
> +	static const uint32_t l3table[HNS3_L3TBL_NUM] = {
> +		RTE_PTYPE_L3_IPV4,
> +		RTE_PTYPE_L3_IPV6,
> +		RTE_PTYPE_L2_ETHER_ARP,
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L3_IPV4_EXT,
> +		RTE_PTYPE_L3_IPV6_EXT,
> +		RTE_PTYPE_L2_ETHER_LLDP,
> +		0, 0, 0, 0, 0, 0, 0, 0, 0
> +	};
> +
> +	static const uint32_t l4table[HNS3_L4TBL_NUM] = {
> +		RTE_PTYPE_L4_UDP,
> +		RTE_PTYPE_L4_TCP,
> +		RTE_PTYPE_TUNNEL_GRE,
> +		RTE_PTYPE_L4_SCTP,
> +		RTE_PTYPE_L4_IGMP,
> +		RTE_PTYPE_L4_ICMP,
> +		0, 0, 0, 0, 0, 0, 0, 0, 0, 0
> +	};
> +
> +	l2id = hns3_get_field(pkt_info, HNS3_RXD_STRP_TAGP_M,
> +			      HNS3_RXD_STRP_TAGP_S);
> +	l3id = hns3_get_field(pkt_info, HNS3_RXD_L3ID_M, HNS3_RXD_L3ID_S);
> +	l4id = hns3_get_field(pkt_info, HNS3_RXD_L4ID_M, HNS3_RXD_L4ID_S);
> +	pkt_type |= (l2table[l2id] | l3table[l3id] | l4table[l4id]);
> +
> +	return pkt_type;
> +}
> +
> +const uint32_t *
> +hns3_dev_supported_ptypes_get(struct rte_eth_dev *dev)
> +{
> +	static const uint32_t ptypes[] = {
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L2_ETHER_VLAN,
> +		RTE_PTYPE_L2_ETHER_QINQ,
> +		RTE_PTYPE_L2_ETHER_LLDP,
> +		RTE_PTYPE_L2_ETHER_ARP,
> +		RTE_PTYPE_L3_IPV4,
> +		RTE_PTYPE_L3_IPV4_EXT,
> +		RTE_PTYPE_L3_IPV6,
> +		RTE_PTYPE_L3_IPV6_EXT,
> +		RTE_PTYPE_L4_IGMP,
> +		RTE_PTYPE_L4_ICMP,
> +		RTE_PTYPE_L4_SCTP,
> +		RTE_PTYPE_L4_TCP,
> +		RTE_PTYPE_L4_UDP,
> +		RTE_PTYPE_TUNNEL_GRE,
> +		RTE_PTYPE_UNKNOWN
> +	};
> +
> +	if (dev->rx_pkt_burst == hns3_recv_pkts)
> +		return ptypes;
> +
> +	return NULL;
> +}
> +
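> +/* Advance next_to_use and return 'count' processed BDs to hardware. */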
> +static void
> +hns3_clean_rx_buffers(struct hns3_rx_queue *rxq, int count)
> +{
> +	rxq->next_to_use += count;
> +	if (rxq->next_to_use >= rxq->nb_rx_desc)
> +		rxq->next_to_use -= rxq->nb_rx_desc;
> +
> +	hns3_write_dev(rxq, HNS3_RING_RX_HEAD_REG, count);
> +}
> +
> +static int
> +hns3_handle_bdinfo(struct hns3_rx_queue *rxq, uint16_t pkt_len,
> +		   uint32_t bd_base_info, uint32_t l234_info)
> +{
> +	if (unlikely(l234_info & BIT(HNS3_RXD_L2E_B))) {
> +		rxq->l2_errors++;
> +		return -EINVAL;
> +	}
> +
> +	if (unlikely(pkt_len == 0 || (l234_info & BIT(HNS3_RXD_TRUNCAT_B)))) {
> +		rxq->pkt_len_errors++;
> +		return -EINVAL;
> +	}
> +
> +	if ((bd_base_info & BIT(HNS3_RXD_L3L4P_B)) &&
> +	      unlikely(l234_info & (BIT(HNS3_RXD_L3E_B) | BIT(HNS3_RXD_L4E_B) |
> +		       BIT(HNS3_RXD_OL3E_B) | BIT(HNS3_RXD_OL4E_B)))) {
> +		rxq->csum_erros++;
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +uint16_t
> +hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> +{
> +	struct hns3_rx_queue *rxq;      /* RX queue */
> +	struct hns3_desc *rx_ring;      /* RX ring (desc) */
> +	struct hns3_entry *sw_ring;
> +	struct hns3_entry *rxe;
> +	struct hns3_desc *rxdp;         /* pointer of the current desc */
> +	struct rte_mbuf *first_seg;
> +	struct rte_mbuf *last_seg;
> +	struct rte_mbuf *nmb;           /* pointer of the new mbuf */
> +	struct rte_mbuf *rxm;
> +	struct rte_eth_dev *dev;
> +	struct hns3_hw *hw;
> +	uint32_t bd_base_info;
> +	uint32_t l234_info;
> +	uint64_t dma_addr;
> +	uint16_t data_len;
> +	uint16_t nb_rx_bd;
> +	uint16_t pkt_len;
> +	uint16_t nb_rx;
> +	uint16_t rx_id;
> +	int num;                        /* num of desc in ring */
> +	int ret;
> +
> +	nb_rx = 0;
> +	nb_rx_bd = 0;
> +	rxq = rx_queue;
> +	dev = &rte_eth_devices[rxq->port_id];
> +	hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> +	rx_id = rxq->next_to_clean;
> +	rx_ring = rxq->rx_ring;
> +	first_seg = rxq->pkt_first_seg;
> +	last_seg = rxq->pkt_last_seg;
> +	sw_ring = rxq->sw_ring;
> +
> +	/* Get num of BDs to be processed in the desc ring */
> +	num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
> +	while (nb_rx_bd < num && nb_rx < nb_pkts) {
> +		rxdp = &rx_ring[rx_id];
> +		bd_base_info = rte_le_to_cpu_32(rxdp->rx.bd_base_info);
> +		rte_cio_rmb();
> +		if (unlikely(!hns3_get_bit(bd_base_info, HNS3_RXD_VLD_B))) {
> +			nb_rx_bd++;
> +			rx_id++;
> +			if (unlikely(rx_id == rxq->nb_rx_desc))
> +				rx_id = 0;
> +			rxq->non_vld_descs++;
> +			break;
> +		}
> +
> +		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
> +		if (unlikely(nmb == NULL)) {
> +			dev->data->rx_mbuf_alloc_failed++;
> +			break;
> +		}
> +
> +		nb_rx_bd++;
> +		rxe = &sw_ring[rx_id];
> +		rx_id++;
> +		if (rx_id == rxq->nb_rx_desc)
> +			rx_id = 0;
> +
> +		rte_prefetch0(rxe->mbuf);
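> +		/* Prefetch the descriptor and sw_ring cache lines every 4 BDs */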
> +		if ((rx_id & 0x3) == 0) {
> +			rte_prefetch0(&rx_ring[rx_id]);
> +			rte_prefetch0(&sw_ring[rx_id]);
> +		}
> +
> +		rxm = rxe->mbuf;
> +		rxe->mbuf = nmb;
> +
> +		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> +		rxdp->addr = dma_addr;
> +		rxdp->rx.bd_base_info = 0;
> +
> +		data_len = (uint16_t)(rte_le_to_cpu_16(rxdp->rx.size));
> +		l234_info = rte_le_to_cpu_32(rxdp->rx.l234_info);
> +
> +		if (first_seg == NULL) {
> +			first_seg = rxm;
> +			first_seg->nb_segs = 1;
> +		} else {
> +			first_seg->nb_segs++;
> +			last_seg->next = rxm;
> +		}
> +
> +		rxm->data_off = RTE_PKTMBUF_HEADROOM;
> +		rxm->data_len = data_len;
> +
> +		if (!hns3_get_bit(bd_base_info, HNS3_RXD_FE_B)) {
> +			last_seg = rxm;
> +			continue;
> +		}
> +
> +		/* The last buffer of the received packet */
> +		pkt_len = (uint16_t)(rte_le_to_cpu_16(rxdp->rx.pkt_len));
> +		first_seg->pkt_len = pkt_len;
> +		first_seg->port = rxq->port_id;
> +		first_seg->hash.rss = rxdp->rx.rss_hash;
> +		if (unlikely(hns3_get_bit(bd_base_info, HNS3_RXD_LUM_B))) {
> +			first_seg->hash.fdir.hi =
> +				rte_le_to_cpu_32(rxdp->rx.fd_id);
> +			first_seg->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
> +		}
> +		rxm->next = NULL;
> +
> +		ret = hns3_handle_bdinfo(rxq, pkt_len, bd_base_info, l234_info);
> +		if (unlikely(ret))
> +			goto pkt_err;
> +
> +		first_seg->packet_type = rxd_pkt_info_to_pkt_type(l234_info);
> +		first_seg->vlan_tci = rxdp->rx.vlan_tag;
> +		first_seg->vlan_tci_outer = rxdp->rx.ot_vlan_tag;
> +		rx_pkts[nb_rx++] = first_seg;
> +		first_seg = NULL;
> +		continue;
> +pkt_err:
> +		rte_pktmbuf_free(first_seg);
> +		first_seg = NULL;
> +		rxq->errors++;
> +		hns3_dbg(hw, "Found pkt err in Port %d No.%d Rx queue, "
> +			     "rx bd: l234_info 0x%x, bd_base_info 0x%x, "
> +			     "addr 0x%x, 0x%x, pkt_len 0x%x, size 0x%x",
> +			     rxq->port_id, rxq->queue_id, l234_info,
> +			     bd_base_info, rxdp->addr0, rxdp->addr1,
> +			     rxdp->rx.pkt_len, rxdp->rx.size);
> +	}
> +
> +	rxq->next_to_clean = rx_id;
> +	rxq->pkt_first_seg = first_seg;
> +	rxq->pkt_last_seg = last_seg;
> +	hns3_clean_rx_buffers(rxq, nb_rx_bd);
> +
> +	return nb_rx;
> +}
> +
> +int
> +hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
> +		    unsigned int socket_id, const struct rte_eth_txconf *conf)
> +{
> +	struct hns3_adapter *hns = dev->data->dev_private;
> +	const struct rte_memzone *tx_mz;
> +	struct hns3_hw *hw = &hns->hw;
> +	struct hns3_tx_queue *txq;
> +	struct hns3_desc *desc;
> +	unsigned int desc_size = sizeof(struct hns3_desc);
> +	unsigned int tx_desc;
> +	int tx_entry_len;
> +	int i;
> +
> +	if (dev->data->dev_started) {
> +		hns3_err(hw, "tx_queue_setup after dev_start not supported");
> +		return -EINVAL;
> +	}
> +
> +	if (nb_desc > HNS3_MAX_RING_DESC || nb_desc < HNS3_MIN_RING_DESC ||
> +	    nb_desc % HNS3_ALIGN_RING_DESC) {
> +		hns3_err(hw, "Number (%u) of tx descriptors is invalid",
> +			    nb_desc);
> +		return -EINVAL;
> +	}
> +
> +	if (dev->data->tx_queues[idx] != NULL) {
> +		hns3_tx_queue_release(dev->data->tx_queues[idx]);
> +		dev->data->tx_queues[idx] = NULL;
> +	}
> +
> +	txq = rte_zmalloc_socket("hns3 TX queue", sizeof(struct hns3_tx_queue),
> +				 RTE_CACHE_LINE_SIZE, socket_id);
> +	if (txq == NULL) {
> +		hns3_err(hw, "Failed to allocate memory for tx queue!");
> +		return -ENOMEM;
> +	}
> +
> +	txq->nb_tx_desc = nb_desc;
> +	txq->queue_id = idx;
> +	txq->tx_deferred_start = conf->tx_deferred_start;
> +
> +	tx_entry_len = sizeof(struct hns3_entry) * txq->nb_tx_desc;
> +	txq->sw_ring = rte_zmalloc_socket("hns3 TX sw ring", tx_entry_len,
> +					  RTE_CACHE_LINE_SIZE, socket_id);
> +	if (txq->sw_ring == NULL) {
> +		hns3_err(hw, "Failed to allocate memory for tx sw ring!");
> +		hns3_tx_queue_release(txq);
> +		return -ENOMEM;
> +	}
> +
> +	/* Allocate tx ring hardware descriptors. */
> +	tx_desc = txq->nb_tx_desc * desc_size;
> +	tx_mz = rte_eth_dma_zone_reserve(dev, "tx_ring", idx, tx_desc,
> +					 HNS3_RING_BASE_ALIGN, socket_id);
> +	if (tx_mz == NULL) {
> +		hns3_err(hw, "Failed to reserve DMA memory for No.%d tx ring!",
> +			 idx);
> +		hns3_tx_queue_release(txq);
> +		return -ENOMEM;
> +	}
> +	txq->mz = tx_mz;
> +	txq->tx_ring = (struct hns3_desc *)tx_mz->addr;
> +	txq->tx_ring_phys_addr = tx_mz->iova;
> +
> +	hns3_dbg(hw, "No.%d tx descriptors iova 0x%lx", idx,
> +		 txq->tx_ring_phys_addr);
> +
> +	/* Clear tx bd */
> +	desc = txq->tx_ring;
> +	for (i = 0; i < txq->nb_tx_desc; i++) {
> +		desc->tx.tp_fe_sc_vld_ra_ri = 0;
> +		desc++;
> +	}
> +
> +	txq->hns = hns;
> +	txq->next_to_use = 0;
> +	txq->next_to_clean = 0;
> +	txq->tx_bd_ready   = txq->nb_tx_desc;
> +	txq->port_id = dev->data->port_id;
> +	txq->pkt_len_errors = 0;
> +	txq->configured = true;
> +	txq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET +
> +				idx * HNS3_TQP_REG_SIZE);
> +	rte_spinlock_lock(&hw->lock);
> +	dev->data->tx_queues[idx] = txq;
> +	rte_spinlock_unlock(&hw->lock);
> +
> +	return 0;
> +}
> +
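> +/* Distance from 'begin' to 'end' in the circular TX descriptor ring. */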
> +static inline int
> +tx_ring_dist(struct hns3_tx_queue *txq, int begin, int end)
> +{
> +	return (end - begin + txq->nb_tx_desc) % txq->nb_tx_desc;
> +}
> +
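> +/*
> + * Number of free BDs in the TX ring; one slot is kept unused so that a
> + * full ring can be distinguished from an empty one.
> + */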
> +static inline int
> +tx_ring_space(struct hns3_tx_queue *txq)
> +{
> +	return txq->nb_tx_desc -
> +		tx_ring_dist(txq, txq->next_to_clean, txq->next_to_use) - 1;
> +}
> +
> +static inline void
> +hns3_queue_xmit(struct hns3_tx_queue *txq, uint32_t buf_num)
> +{
> +	hns3_write_dev(txq, HNS3_RING_TX_TAIL_REG, buf_num);
> +}
> +
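> +/*
> + * Reclaim completed TX descriptors: starting from next_to_clean, free
> + * the mbufs attached to BDs whose valid bit has been cleared (i.e.
> + * already handled by hardware), stopping at the first still-pending BD
> + * or once the ring is empty.
> + */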
> +static void
> +hns3_tx_free_useless_buffer(struct hns3_tx_queue *txq)
> +{
> +	uint16_t tx_next_clean = txq->next_to_clean;
> +	uint16_t tx_next_use   = txq->next_to_use;
> +	uint16_t tx_bd_ready   = txq->tx_bd_ready;
> +	uint16_t tx_bd_max     = txq->nb_tx_desc;
> +	struct hns3_entry *tx_bak_pkt = &txq->sw_ring[tx_next_clean];
> +	struct hns3_desc *desc = &txq->tx_ring[tx_next_clean];
> +	struct rte_mbuf *mbuf;
> +
> +	while ((!hns3_get_bit(desc->tx.tp_fe_sc_vld_ra_ri, HNS3_TXD_VLD_B)) &&
> +		(tx_next_use != tx_next_clean || tx_bd_ready < tx_bd_max)) {
> +		mbuf = tx_bak_pkt->mbuf;
> +		if (mbuf) {
> +			mbuf->next = NULL;
> +			rte_pktmbuf_free(mbuf);
> +			tx_bak_pkt->mbuf = NULL;
> +		}
> +
> +		desc++;
> +		tx_bak_pkt++;
> +		tx_next_clean++;
> +		tx_bd_ready++;
> +
> +		if (tx_next_clean >= tx_bd_max) {
> +			tx_next_clean = 0;
> +			desc = txq->tx_ring;
> +			tx_bak_pkt = txq->sw_ring;
> +		}
> +	}
> +
> +	txq->next_to_clean = tx_next_clean;
> +	txq->tx_bd_ready   = tx_bd_ready;
> +}
> +
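> +/*
> + * Fill one TX BD for the given mbuf segment. 'first' marks the BD
> + * carrying the per-packet fields (checksum enable bits, paylen, L4
> + * length); the FE bit is set on the last segment of the packet.
> + */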
> +static void
> +fill_desc(struct hns3_tx_queue *txq, uint16_t tx_desc_id, struct rte_mbuf *rxm,
> +	  bool first, int offset)
> +{
> +	struct hns3_desc *tx_ring = txq->tx_ring;
> +	struct hns3_desc *desc = &tx_ring[tx_desc_id];
> +	uint8_t frag_end = rxm->next == NULL ? 1 : 0;
> +	uint8_t ol_type_vlan_msec = 0;
> +	uint8_t type_cs_vlan_tso = 0;
> +	uint16_t size = rxm->data_len;
> +	uint16_t paylen = 0;
> +	uint16_t rrcfv = 0;
> +	uint64_t ol_flags;
> +
> +	desc->addr = rte_mbuf_data_iova(rxm) + offset;
> +	desc->tx.send_size = rte_cpu_to_le_16(size);
> +	hns3_set_bit(rrcfv, HNS3_TXD_VLD_B, 1);
> +
> +	if (first) {
> +		if (rxm->packet_type &
> +		    (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
> +			hns3_set_bit(type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
> +			if (rxm->packet_type & RTE_PTYPE_L3_IPV6)
> +				hns3_set_field(type_cs_vlan_tso, HNS3_TXD_L3T_M,
> +					       HNS3_TXD_L3T_S, HNS3_L3T_IPV6);
> +			else
> +				hns3_set_field(type_cs_vlan_tso, HNS3_TXD_L3T_M,
> +					       HNS3_TXD_L3T_S, HNS3_L3T_IPV4);
> +		}
> +		desc->tx.paylen = rte_cpu_to_le_16(paylen);
> +		desc->tx.l4_len = rxm->l4_len;
> +	}
> +
> +	hns3_set_bit(rrcfv, HNS3_TXD_FE_B, frag_end);
> +	desc->tx.tp_fe_sc_vld_ra_ri = rrcfv;
> +
> +	if (frag_end) {
> +		ol_flags = rxm->ol_flags;
> +		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
> +			hns3_set_bit(type_cs_vlan_tso, HNS3_TXD_VLAN_B, 1);
> +			desc->tx.vlan_tag = rxm->vlan_tci;
> +		}
> +
> +		if (ol_flags & PKT_TX_QINQ_PKT) {
> +			hns3_set_bit(ol_type_vlan_msec, HNS3_TXD_OVLAN_B, 1);
> +			desc->tx.outer_vlan_tag = rxm->vlan_tci_outer;
> +		}
> +	}
> +
> +	desc->tx.type_cs_vlan_tso = type_cs_vlan_tso;
> +	desc->tx.ol_type_vlan_msec = ol_type_vlan_msec;
> +}
> +
> +static int
> +hns3_tx_alloc_mbufs(struct hns3_tx_queue *txq, struct rte_mempool *mb_pool,
> +		    uint16_t nb_new_buf, struct rte_mbuf **alloc_mbuf)
> +{
> +	struct rte_mbuf *new_mbuf = NULL;
> +	struct rte_eth_dev *dev;
> +	struct rte_mbuf *temp;
> +	struct hns3_hw *hw;
> +	uint16_t i;
> +
> +	/* Allocate enough mbufs */
> +	for (i = 0; i < nb_new_buf; i++) {
> +		temp = rte_pktmbuf_alloc(mb_pool);
> +		if (unlikely(temp == NULL)) {
> +			dev = &rte_eth_devices[txq->port_id];
> +			hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +			hns3_err(hw, "Failed to alloc TX mbuf port_id=%d, "
> +				     "queue_id=%d in reassemble tx pkts.",
> +				     txq->port_id, txq->queue_id);
> +			rte_pktmbuf_free(new_mbuf);
> +			return -ENOMEM;
> +		}
> +		temp->next = new_mbuf;
> +		new_mbuf = temp;
> +	}
> +
> +	if (new_mbuf == NULL)
> +		return -ENOMEM;
> +
> +	new_mbuf->nb_segs = nb_new_buf;
> +	*alloc_mbuf = new_mbuf;
> +
> +	return 0;
> +}
> +
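> +/*
> + * Linearize a packet into a freshly allocated mbuf chain using the
> + * minimum number of full-size buffers; used when the segment count
> + * exceeds HNS3_MAX_TX_BD_PER_PKT. The original packet is freed.
> + */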
> +static int
> +hns3_reassemble_tx_pkts(void *tx_queue, struct rte_mbuf *tx_pkt,
> +			struct rte_mbuf **new_pkt)
> +{
> +	struct hns3_tx_queue *txq = tx_queue;
> +	struct rte_mempool *mb_pool;
> +	struct rte_mbuf *new_mbuf;
> +	struct rte_mbuf *temp_new;
> +	struct rte_mbuf *temp;
> +	uint16_t last_buf_len;
> +	uint16_t nb_new_buf;
> +	uint16_t buf_size;
> +	uint16_t buf_len;
> +	uint16_t len_s;
> +	uint16_t len_d;
> +	uint16_t len;
> +	uint16_t i;
> +	int ret;
> +	char *s;
> +	char *d;
> +
> +	mb_pool = tx_pkt->pool;
> +	buf_size = tx_pkt->buf_len - RTE_PKTMBUF_HEADROOM;
> +	nb_new_buf = (tx_pkt->pkt_len - 1) / buf_size + 1;
> +
> +	last_buf_len = tx_pkt->pkt_len % buf_size;
> +	if (last_buf_len == 0)
> +		last_buf_len = buf_size;
> +
> +	/* Allocate enough mbufs */
> +	ret = hns3_tx_alloc_mbufs(txq, mb_pool, nb_new_buf, &new_mbuf);
> +	if (ret)
> +		return ret;
> +
> +	/* Copy the original packet content to the new mbufs */
> +	temp = tx_pkt;
> +	s = rte_pktmbuf_mtod(temp, char *);
> +	len_s = temp->data_len;
> +	temp_new = new_mbuf;
> +	for (i = 0; i < nb_new_buf; i++) {
> +		d = rte_pktmbuf_mtod(temp_new, char *);
> +		if (i < nb_new_buf - 1)
> +			buf_len = buf_size;
> +		else
> +			buf_len = last_buf_len;
> +		len_d = buf_len;
> +
> +		while (len_d) {
> +			len = RTE_MIN(len_s, len_d);
> +			memcpy(d, s, len);
> +			s = s + len;
> +			d = d + len;
> +			len_d = len_d - len;
> +			len_s = len_s - len;
> +
> +			if (len_s == 0) {
> +				temp = temp->next;
> +				if (temp == NULL)
> +					break;
> +				s = rte_pktmbuf_mtod(temp, char *);
> +				len_s = temp->data_len;
> +			}
> +		}
> +
> +		temp_new->data_len = buf_len;
> +		temp_new = temp_new->next;
> +	}
> +
> +	/* free original mbufs */
> +	rte_pktmbuf_free(tx_pkt);
> +
> +	*new_pkt = new_mbuf;
> +
> +	return 0;
> +}
> +
> +uint16_t
> +hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> +{
> +	struct hns3_tx_queue *txq = tx_queue;
> +	struct hns3_entry *tx_bak_pkt;
> +	struct rte_mbuf *new_pkt;
> +	struct rte_mbuf *tx_pkt;
> +	struct rte_mbuf *m_seg;
> +	struct rte_mbuf *temp;
> +	uint32_t nb_hold = 0;
> +	uint16_t tx_next_clean;
> +	uint16_t tx_next_use;
> +	uint16_t tx_bd_ready;
> +	uint16_t tx_pkt_num;
> +	uint16_t tx_bd_max;
> +	uint16_t nb_buf;
> +	uint16_t nb_tx;
> +	uint16_t i;
> +
> +	/* free useless buffer */
> +	hns3_tx_free_useless_buffer(txq);
> +	tx_bd_ready = txq->tx_bd_ready;
> +	if (tx_bd_ready == 0)
> +		return 0;
> +
> +	tx_next_clean = txq->next_to_clean;
> +	tx_next_use   = txq->next_to_use;
> +	tx_bd_max     = txq->nb_tx_desc;
> +	tx_bak_pkt = &txq->sw_ring[tx_next_clean];
> +
> +	tx_pkt_num = (tx_bd_ready < nb_pkts) ? tx_bd_ready : nb_pkts;
> +
> +	/* send packets */
> +	tx_bak_pkt = &txq->sw_ring[tx_next_use];
> +	for (nb_tx = 0; nb_tx < tx_pkt_num; nb_tx++) {
> +		tx_pkt = *tx_pkts++;
> +
> +		nb_buf = tx_pkt->nb_segs;
> +
> +		if (nb_buf > tx_ring_space(txq)) {
> +			if (nb_tx == 0)
> +				return 0;
> +
> +			goto end_of_tx;
> +		}
> +
> +		/*
> +		 * If the packet length is zero or exceeds the maximum frame
> +		 * length, the packet will be dropped.
> +		 */
> +		if (unlikely(tx_pkt->pkt_len > HNS3_MAX_FRAME_LEN ||
> +			     tx_pkt->pkt_len == 0)) {
> +			txq->pkt_len_errors++;
> +			continue;
> +		}
> +
> +		m_seg = tx_pkt;
> +		if (unlikely(nb_buf > HNS3_MAX_TX_BD_PER_PKT)) {
> +			if (hns3_reassemble_tx_pkts(txq, tx_pkt, &new_pkt))
> +				goto end_of_tx;
> +			m_seg = new_pkt;
> +			nb_buf = m_seg->nb_segs;
> +		}
> +
> +		i = 0;
> +		do {
> +			fill_desc(txq, tx_next_use, m_seg, (i == 0), 0);
> +			temp = m_seg->next;
> +			tx_bak_pkt->mbuf = m_seg;
> +			m_seg = temp;
> +			tx_next_use++;
> +			tx_bak_pkt++;
> +			if (tx_next_use >= tx_bd_max) {
> +				tx_next_use = 0;
> +				tx_bak_pkt = txq->sw_ring;
> +			}
> +
> +			i++;
> +		} while (m_seg != NULL);
> +
> +		nb_hold += i;
> +	}
> +
> +end_of_tx:
> +
> +	if (likely(nb_tx)) {
> +		hns3_queue_xmit(txq, nb_hold);
> +		txq->next_to_clean = tx_next_clean;
> +		txq->next_to_use   = tx_next_use;
> +		txq->tx_bd_ready   = tx_bd_ready - nb_hold;
> +	}
> +
> +	return nb_tx;
> +}
> +
> +static uint16_t
> +hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
> +		      struct rte_mbuf **pkts __rte_unused,
> +		      uint16_t pkts_n __rte_unused)
> +{
> +	return 0;
> +}
> +
> +void
> +hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
> +{
> +	struct hns3_adapter *hns = eth_dev->data->dev_private;
> +
> +	if (hns->hw.adapter_state == HNS3_NIC_STARTED &&
> +	    rte_atomic16_read(&hns->hw.reset.resetting) == 0) {
> +		eth_dev->rx_pkt_burst = hns3_recv_pkts;
> +		eth_dev->tx_pkt_burst = hns3_xmit_pkts;
> +	} else {
> +		eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
> +		eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
> +	}
> +}
> diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
> new file mode 100644
> index 0000000..6189f56
> --- /dev/null
> +++ b/drivers/net/hns3/hns3_rxtx.h
> @@ -0,0 +1,287 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2019 Hisilicon Limited.
> + */
> +
> +#ifndef _HNS3_RXTX_H_
> +#define _HNS3_RXTX_H_
> +
> +#define	HNS3_MIN_RING_DESC	32
> +#define	HNS3_MAX_RING_DESC	32768
> +#define HNS3_DEFAULT_RING_DESC  1024
> +#define	HNS3_ALIGN_RING_DESC	32
> +#define HNS3_RING_BASE_ALIGN	128
> +
> +#define HNS3_BD_SIZE_512_TYPE			0
> +#define HNS3_BD_SIZE_1024_TYPE			1
> +#define HNS3_BD_SIZE_2048_TYPE			2
> +#define HNS3_BD_SIZE_4096_TYPE			3
> +
> +#define HNS3_RX_FLAG_VLAN_PRESENT		0x1
> +#define HNS3_RX_FLAG_L3ID_IPV4			0x0
> +#define HNS3_RX_FLAG_L3ID_IPV6			0x1
> +#define HNS3_RX_FLAG_L4ID_UDP			0x0
> +#define HNS3_RX_FLAG_L4ID_TCP			0x1
> +
> +#define HNS3_RXD_DMAC_S				0
> +#define HNS3_RXD_DMAC_M				(0x3 << HNS3_RXD_DMAC_S)
> +#define HNS3_RXD_VLAN_S				2
> +#define HNS3_RXD_VLAN_M				(0x3 << HNS3_RXD_VLAN_S)
> +#define HNS3_RXD_L3ID_S				4
> +#define HNS3_RXD_L3ID_M				(0xf << HNS3_RXD_L3ID_S)
> +#define HNS3_RXD_L4ID_S				8
> +#define HNS3_RXD_L4ID_M				(0xf << HNS3_RXD_L4ID_S)
> +#define HNS3_RXD_FRAG_B				12
> +#define HNS3_RXD_STRP_TAGP_S			13
> +#define HNS3_RXD_STRP_TAGP_M			(0x3 << HNS3_RXD_STRP_TAGP_S)
> +
> +#define HNS3_RXD_L2E_B				16
> +#define HNS3_RXD_L3E_B				17
> +#define HNS3_RXD_L4E_B				18
> +#define HNS3_RXD_TRUNCAT_B			19
> +#define HNS3_RXD_HOI_B				20
> +#define HNS3_RXD_DOI_B				21
> +#define HNS3_RXD_OL3E_B				22
> +#define HNS3_RXD_OL4E_B				23
> +#define HNS3_RXD_GRO_COUNT_S			24
> +#define HNS3_RXD_GRO_COUNT_M			(0x3f << HNS3_RXD_GRO_COUNT_S)
> +#define HNS3_RXD_GRO_FIXID_B			30
> +#define HNS3_RXD_GRO_ECN_B			31
> +
> +#define HNS3_RXD_ODMAC_S			0
> +#define HNS3_RXD_ODMAC_M			(0x3 << HNS3_RXD_ODMAC_S)
> +#define HNS3_RXD_OVLAN_S			2
> +#define HNS3_RXD_OVLAN_M			(0x3 << HNS3_RXD_OVLAN_S)
> +#define HNS3_RXD_OL3ID_S			4
> +#define HNS3_RXD_OL3ID_M			(0xf << HNS3_RXD_OL3ID_S)
> +#define HNS3_RXD_OL4ID_S			8
> +#define HNS3_RXD_OL4ID_M			(0xf << HNS3_RXD_OL4ID_S)
> +#define HNS3_RXD_FBHI_S				12
> +#define HNS3_RXD_FBHI_M				(0x3 << HNS3_RXD_FBHI_S)
> +#define HNS3_RXD_FBLI_S				14
> +#define HNS3_RXD_FBLI_M				(0x3 << HNS3_RXD_FBLI_S)
> +
> +#define HNS3_RXD_BDTYPE_S			0
> +#define HNS3_RXD_BDTYPE_M			(0xf << HNS3_RXD_BDTYPE_S)
> +#define HNS3_RXD_VLD_B				4
> +#define HNS3_RXD_UDP0_B				5
> +#define HNS3_RXD_EXTEND_B			7
> +#define HNS3_RXD_FE_B				8
> +#define HNS3_RXD_LUM_B				9
> +#define HNS3_RXD_CRCP_B				10
> +#define HNS3_RXD_L3L4P_B			11
> +#define HNS3_RXD_TSIND_S			12
> +#define HNS3_RXD_TSIND_M			(0x7 << HNS3_RXD_TSIND_S)
> +#define HNS3_RXD_LKBK_B				15
> +#define HNS3_RXD_GRO_SIZE_S			16
> +#define HNS3_RXD_GRO_SIZE_M			(0x3ff << HNS3_RXD_GRO_SIZE_S)
> +
> +#define HNS3_TXD_L3T_S				0
> +#define HNS3_TXD_L3T_M				(0x3 << HNS3_TXD_L3T_S)
> +#define HNS3_TXD_L4T_S				2
> +#define HNS3_TXD_L4T_M				(0x3 << HNS3_TXD_L4T_S)
> +#define HNS3_TXD_L3CS_B				4
> +#define HNS3_TXD_L4CS_B				5
> +#define HNS3_TXD_VLAN_B				6
> +#define HNS3_TXD_TSO_B				7
> +
> +#define HNS3_TXD_L2LEN_S			8
> +#define HNS3_TXD_L2LEN_M			(0xff << HNS3_TXD_L2LEN_S)
> +#define HNS3_TXD_L3LEN_S			16
> +#define HNS3_TXD_L3LEN_M			(0xff << HNS3_TXD_L3LEN_S)
> +#define HNS3_TXD_L4LEN_S			24
> +#define HNS3_TXD_L4LEN_M			(0xff << HNS3_TXD_L4LEN_S)
> +
> +#define HNS3_TXD_OL3T_S				0
> +#define HNS3_TXD_OL3T_M				(0x3 << HNS3_TXD_OL3T_S)
> +#define HNS3_TXD_OVLAN_B			2
> +#define HNS3_TXD_MACSEC_B			3
> +#define HNS3_TXD_TUNTYPE_S			4
> +#define HNS3_TXD_TUNTYPE_M			(0xf << HNS3_TXD_TUNTYPE_S)
> +
> +#define HNS3_TXD_BDTYPE_S			0
> +#define HNS3_TXD_BDTYPE_M			(0xf << HNS3_TXD_BDTYPE_S)
> +#define HNS3_TXD_FE_B				4
> +#define HNS3_TXD_SC_S				5
> +#define HNS3_TXD_SC_M				(0x3 << HNS3_TXD_SC_S)
> +#define HNS3_TXD_EXTEND_B			7
> +#define HNS3_TXD_VLD_B				8
> +#define HNS3_TXD_RI_B				9
> +#define HNS3_TXD_RA_B				10
> +#define HNS3_TXD_TSYN_B				11
> +#define HNS3_TXD_DECTTL_S			12
> +#define HNS3_TXD_DECTTL_M			(0xf << HNS3_TXD_DECTTL_S)
> +
> +#define HNS3_TXD_MSS_S				0
> +#define HNS3_TXD_MSS_M				(0x3fff << HNS3_TXD_MSS_S)
> +
> +enum hns3_pkt_l2t_type {
> +	HNS3_L2_TYPE_UNICAST,
> +	HNS3_L2_TYPE_MULTICAST,
> +	HNS3_L2_TYPE_BROADCAST,
> +	HNS3_L2_TYPE_INVALID,
> +};
> +
> +enum hns3_pkt_l3t_type {
> +	HNS3_L3T_NONE,
> +	HNS3_L3T_IPV6,
> +	HNS3_L3T_IPV4,
> +	HNS3_L3T_RESERVED
> +};
> +
> +enum hns3_pkt_l4t_type {
> +	HNS3_L4T_UNKNOWN,
> +	HNS3_L4T_TCP,
> +	HNS3_L4T_UDP,
> +	HNS3_L4T_SCTP
> +};
> +
> +enum hns3_pkt_ol3t_type {
> +	HNS3_OL3T_NONE,
> +	HNS3_OL3T_IPV6,
> +	HNS3_OL3T_IPV4_NO_CSUM,
> +	HNS3_OL3T_IPV4_CSUM
> +};
> +
> +enum hns3_pkt_tun_type {
> +	HNS3_TUN_NONE,
> +	HNS3_TUN_MAC_IN_UDP,
> +	HNS3_TUN_NVGRE,
> +	HNS3_TUN_OTHER
> +};
> +
> +#define __packed __attribute__((packed))
> +/* hardware spec ring buffer format */
> +__packed struct hns3_desc {

This will cause the following error under clang:

   ../drivers/net/hns3/hns3_rxtx.h:154:1: error: attribute 'packed' is ignored, place it after "struct" to apply attribute to type declaration [-Werror,-Wignored-attributes]
   __packed struct hns3_desc {
   ^
   ../drivers/net/hns3/hns3_rxtx.h:152:33: note: expanded from macro '__packed'
   #define __packed __attribute__((packed))

> +	union {
> +		uint64_t addr;
> +		struct {
> +			uint32_t addr0;
> +			uint32_t addr1;
> +		};
> +	};
> +	union {
> +		struct {
> +			uint16_t vlan_tag;
> +			uint16_t send_size;
> +			union {
> +				uint32_t type_cs_vlan_tso_len;
> +				struct {
> +					uint8_t type_cs_vlan_tso;
> +					uint8_t l2_len;
> +					uint8_t l3_len;
> +					uint8_t l4_len;
> +				};
> +			};
> +			uint16_t outer_vlan_tag;
> +			uint16_t tv;
> +			union {
> +				uint32_t ol_type_vlan_len_msec;
> +				struct {
> +					uint8_t ol_type_vlan_msec;
> +					uint8_t ol2_len;
> +					uint8_t ol3_len;
> +					uint8_t ol4_len;
> +				};
> +			};
> +
> +			uint32_t paylen;
> +			uint16_t tp_fe_sc_vld_ra_ri;
> +			uint16_t mss;
> +		} tx;
> +
> +		struct {
> +			uint32_t l234_info;
> +			uint16_t pkt_len;
> +			uint16_t size;
> +			uint32_t rss_hash;
> +			uint16_t fd_id;
> +			uint16_t vlan_tag;
> +			union {
> +				uint32_t ol_info;
> +				struct {
> +					uint16_t o_dm_vlan_id_fb;
> +					uint16_t ot_vlan_tag;
> +				};
> +			};
> +			uint32_t bd_base_info;
> +		} rx;
> +	};
> +};
> +
> +struct hns3_entry {
> +	struct rte_mbuf *mbuf;
> +};
> +
> +struct hns3_rx_queue {
> +	void *io_base;
> +	struct hns3_adapter *hns;
> +	struct rte_mempool *mb_pool;
> +	struct hns3_desc *rx_ring;
> +	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
> +	const struct rte_memzone *mz;
> +	struct hns3_entry *sw_ring;
> +
> +	struct rte_mbuf *pkt_first_seg;
> +	struct rte_mbuf *pkt_last_seg;
> +
> +	uint16_t queue_id;
> +	uint16_t port_id;
> +	uint16_t nb_rx_desc;
> +	uint16_t nb_rx_hold;
> +	uint16_t rx_tail;
> +	uint16_t next_to_clean;
> +	uint16_t next_to_use;
> +	uint16_t rx_buf_len;
> +	uint16_t rx_free_thresh;
> +
> +	bool rx_deferred_start; /* don't start this queue in dev start */
> +	bool configured;        /* indicate if rx queue has been configured */
> +
> +	uint64_t non_vld_descs; /* num of non valid rx descriptors */
> +	uint64_t l2_errors;
> +	uint64_t csum_erros;
> +	uint64_t pkt_len_errors;
> +	uint64_t errors;        /* num of error rx packets recorded by driver */
> +};
> +
> +struct hns3_tx_queue {
> +	void *io_base;
> +	struct hns3_adapter *hns;
> +	struct hns3_desc *tx_ring;
> +	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
> +	const struct rte_memzone *mz;
> +	struct hns3_entry *sw_ring;
> +
> +	uint16_t queue_id;
> +	uint16_t port_id;
> +	uint16_t nb_tx_desc;
> +	uint16_t next_to_clean;
> +	uint16_t next_to_use;
> +	uint16_t tx_bd_ready;
> +
> +	bool tx_deferred_start; /* don't start this queue in dev start */
> +	bool configured;        /* indicate if tx queue has been configured */
> +
> +	uint64_t pkt_len_errors;
> +};
> +
> +void hns3_dev_rx_queue_release(void *queue);
> +void hns3_dev_tx_queue_release(void *queue);
> +void hns3_free_all_queues(struct rte_eth_dev *dev);
> +int hns3_reset_all_queues(struct hns3_adapter *hns);
> +int hns3_start_queues(struct hns3_adapter *hns, bool reset_queue);
> +int hns3_stop_queues(struct hns3_adapter *hns, bool reset_queue);
> +void hns3_dev_release_mbufs(struct hns3_adapter *hns);
> +int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
> +			unsigned int socket, const struct rte_eth_rxconf *conf,
> +			struct rte_mempool *mp);
> +int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
> +			unsigned int socket, const struct rte_eth_txconf *conf);
> +uint16_t hns3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +			uint16_t nb_pkts);
> +uint16_t hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> +			uint16_t nb_pkts);
> +
> +const uint32_t *hns3_dev_supported_ptypes_get(struct rte_eth_dev *dev);
> +void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
> +#endif /* _HNS3_RXTX_H_ */

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 14:08   ` Jerin Jacob Kollanukkaran
@ 2019-08-30  3:22     ` Wei Hu (Xavier)
  2019-08-31  2:10       ` Wei Hu (Xavier)
  2019-08-30 14:57     ` Ferruh Yigit
  1 sibling, 1 reply; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-30  3:22 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Jerin


On 2019/8/23 22:08, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Wei Hu (Xavier)
>> Sent: Friday, August 23, 2019 7:17 PM
>> To: dev@dpdk.org
>> Cc: linuxarm@huawei.com; xavier_huwei@163.com;
>> liudongdong3@huawei.com; forest.zhouchang@huawei.com
>> Subject: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
>>
>> This patch add build related files for hns3 PMD driver.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>> +# Hisilicon HNS3 PMD driver
>> +#
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=y
> # Please add meson support
This patch already contains meson support, thanks.
> # Move build infra to the first patch
> # See git log drivers/net/octeontx2 as example
OK, I will adjust the order of the patches in this series and send V2.
>
>
>> diff --git a/config/common_base b/config/common_base
>> index 8ef75c2..71a2c33 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -282,6 +282,11 @@
>> CONFIG_RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC=n
>>  CONFIG_RTE_LIBRTE_HINIC_PMD=n
>>
>>  #
>> +# Compile burst-oriented HNS3 PMD driver
>> +#
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
>> +
>> +#
>>  # Compile burst-oriented IXGBE PMD driver
>>  #
>>  CONFIG_RTE_LIBRTE_IXGBE_PMD=y
>> diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang
>> b/config/defconfig_arm64-armv8a-linuxapp-clang
>> index d3b4dad..c73f5fb 100644
>> --- a/config/defconfig_arm64-armv8a-linuxapp-clang
>> +++ b/config/defconfig_arm64-armv8a-linuxapp-clang
>> @@ -6,3 +6,5 @@
>>
>>  CONFIG_RTE_TOOLCHAIN="clang"
>>  CONFIG_RTE_TOOLCHAIN_CLANG=y
>> +
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
>> diff --git a/doc/guides/nics/features/hns3.ini
>> b/doc/guides/nics/features/hns3.ini
>> new file mode 100644
>> index 0000000..d38d35e
>> --- /dev/null
>> +++ b/doc/guides/nics/features/hns3.ini
>> @@ -0,0 +1,38 @@
>> +;
>> +; Supported features of the 'hns3' network poll mode driver.
> Add doc changes when driver feature gets added.
> # See git log drivers/net/octeontx2 as example
OK, I will modify the patches and send V2.
Thanks
>
>> +;
>> +; Refer to default.ini for the full list of available PMD features.
>> +;
>> +[Features]
>> +Link status          = Y
>> +MTU update           = Y
>> +Jumbo frame          = Y
>> +Promiscuous mode     = Y
>> +Allmulticast mode    = Y
>> diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
>> new file mode 100644
>> index 0000000..c9d0253
>> --- /dev/null
>> +++ b/doc/guides/nics/hns3.rst
>> @@ -0,0 +1,55 @@
>> +..  SPDX-License-Identifier: BSD-3-Clause
>> +    Copyright(c) 2018-2019 Hisilicon Limited.
>> +
>> +HNS3 Poll Mode Driver
>> +===============================
>> +
>> +The Hisilicon Network Subsystem is a long term evolution IP which is
>> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
>> +
>> +The HNS3 PMD (librte_pmd_hns3) provides poll mode driver support
>> +for hns3(Hisilicon Network Subsystem 3) network engine.
>> +
>> +Features
>> +--------
>> +
>> +Features of the HNS3 PMD are:
>> +
>> +- Arch support: ARMv8.
> Is it an integrated NIC controller? Why is it supported only on ARMv8?
> The reason I am asking is because enabling CONFIG_RTE_LIBRTE_HNS3_PMD=y
> only on arm64 will create a case where the build fails for arm64 and passes
> for x86. I would like to avoid such disparity. If the build is passing on
> x86, enable it in the common code, not in the arm64 config.
Currently this network engine is integrated in the SoCs. The SoCs can be
used as PCIe EP integrated NIC controllers, or as general-purpose CPUs in
devices such as servers. The network engine is accessed by the ARM cores
in the SoCs.
We will enable CONFIG_RTE_LIBRTE_HNS3_PMD=y in the common_linux config in V2.
Thanks.
>
>> +- Multiple queues for TX and RX
>> +- Receive Side Scaling (RSS)
>> +- Packet type information
>> +- Checksum offload
>> +- Promiscuous mode
>> +- Multicast mode
>> +- Port hardware statistics
>> +- Jumbo frames
>> +- Link state information
>> +- VLAN stripping
>
>> +cflags += '-DALLOW_EXPERIMENTAL_API'
>> diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map
>> b/drivers/net/hns3/rte_pmd_hns3_version.map
>> new file mode 100644
>> index 0000000..3aef967
>> --- /dev/null
>> +++ b/drivers/net/hns3/rte_pmd_hns3_version.map
>> @@ -0,0 +1,3 @@
>> +DPDK_19.08 {
> Change to 19.11
OK, I will modify the patches and send V2. Thanks.

Regards
Xavier
>
>
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
  2019-08-23 14:08   ` Jerin Jacob Kollanukkaran
@ 2019-08-30  6:16   ` Stephen Hemminger
  2019-08-31  8:46     ` Wei Hu (Xavier)
  2019-08-30  6:17   ` Stephen Hemminger
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 75+ messages in thread
From: Stephen Hemminger @ 2019-08-30  6:16 UTC (permalink / raw)
  To: Wei Hu (Xavier)
  Cc: dev, linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On Fri, 23 Aug 2019 21:47:11 +0800
"Wei Hu (Xavier)" <xavier.huwei@huawei.com> wrote:

Thanks for doing documentation on this driver.

> +The Hisilicon Network Subsystem is a long term evolution IP which is
> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.

This sentence needs some editing to remove marketing speak.
It is not clear what "long term evolution IP" means when read
by a DPDK user.  Why not just say that "Hisilicon Network Subsystem
(HNS) is used in System On Chip (SOC) devices such as the Kunpeng 920".

It is standard practice in technical documents to spell out
the meaning of acronyms on first use; then use the acronym
thereafter.

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
  2019-08-23 14:08   ` Jerin Jacob Kollanukkaran
  2019-08-30  6:16   ` Stephen Hemminger
@ 2019-08-30  6:17   ` Stephen Hemminger
  2019-08-31  8:44     ` Wei Hu (Xavier)
  2019-09-03 15:27     ` Ye Xiaolong
  2019-08-30 14:58   ` Ferruh Yigit
                     ` (2 subsequent siblings)
  5 siblings, 2 replies; 75+ messages in thread
From: Stephen Hemminger @ 2019-08-30  6:17 UTC (permalink / raw)
  To: Wei Hu (Xavier)
  Cc: dev, linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On Fri, 23 Aug 2019 21:47:11 +0800
"Wei Hu (Xavier)" <xavier.huwei@huawei.com> wrote:

> +Limitations or Known issues
> +---------------------------
> +Build with clang is not supported yet.
> +Currently, only ARMv8 architecture is supported.
> \ No newline at end of file

Please fix this. You need to add a newline at the end of the
file. Vi does this by default, and Emacs has an option
to do this.

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data structure and macro
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data structure and macro Wei Hu (Xavier)
@ 2019-08-30  8:25   ` Gavin Hu (Arm Technology China)
  2019-09-05  6:01     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Gavin Hu (Arm Technology China) @ 2019-08-30  8:25 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi Xavier,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Wei Hu (Xavier)
> Sent: Friday, August 23, 2019 9:47 PM
> To: dev@dpdk.org
> Cc: linuxarm@huawei.com; xavier_huwei@163.com;
> liudongdong3@huawei.com; forest.zhouchang@huawei.com
> Subject: [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data
> structure and macro
>
> This patch adds some data structure definitions, macro definitions and
> inline functions for hns3 PMD drivers.
>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
>  drivers/net/hns3/hns3_ethdev.h | 609 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 609 insertions(+)
>  create mode 100644 drivers/net/hns3/hns3_ethdev.h
>
> diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
> new file mode 100644
> index 0000000..bfb54f2
> --- /dev/null
> +++ b/drivers/net/hns3/hns3_ethdev.h
> @@ -0,0 +1,609 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2019 Hisilicon Limited.
> + */
> +
> +#ifndef _HNS3_ETHDEV_H_
> +#define _HNS3_ETHDEV_H_
> +
> +#include <sys/time.h>
> +#include <rte_alarm.h>
> +
> +/* Vendor ID */
> +#define PCI_VENDOR_ID_HUAWEI                 0x19e5
> +
> +/* Device IDs */
> +#define HNS3_DEV_ID_GE                               0xA220
> +#define HNS3_DEV_ID_25GE                     0xA221
> +#define HNS3_DEV_ID_25GE_RDMA                        0xA222
> +#define HNS3_DEV_ID_50GE_RDMA                        0xA224
> +#define HNS3_DEV_ID_100G_RDMA_MACSEC         0xA226
> +#define HNS3_DEV_ID_100G_VF                  0xA22E
> +#define HNS3_DEV_ID_100G_RDMA_PFC_VF         0xA22F
> +
> +#define HNS3_UC_MACADDR_NUM          96
> +#define HNS3_MC_MACADDR_NUM          128
> +
> +#define HNS3_MAX_BD_SIZE             65535
> +#define HNS3_MAX_TX_BD_PER_PKT               8
> +#define HNS3_MAX_FRAME_LEN           9728
> +#define HNS3_MIN_FRAME_LEN           64
> +#define HNS3_VLAN_TAG_SIZE           4
> +#define HNS3_DEFAULT_RX_BUF_LEN              2048
> +
> +#define HNS3_ETH_OVERHEAD \
> +     (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HNS3_VLAN_TAG_SIZE * 2)
> +#define HNS3_PKTLEN_TO_MTU(pktlen)   ((pktlen) - HNS3_ETH_OVERHEAD)
> +#define HNS3_MAX_MTU (HNS3_MAX_FRAME_LEN - HNS3_ETH_OVERHEAD)
> +#define HNS3_DEFAULT_MTU             1500UL
> +#define HNS3_DEFAULT_FRAME_LEN               (HNS3_DEFAULT_MTU + HNS3_ETH_OVERHEAD)
> +
> +#define HNS3_4_TCS                   4
> +#define HNS3_8_TCS                   8
> +#define HNS3_MAX_TC_NUM                      8
> +
> +#define HNS3_MAX_PF_NUM                      8
> +#define HNS3_UMV_TBL_SIZE            3072
> +#define HNS3_DEFAULT_UMV_SPACE_PER_PF \
> +     (HNS3_UMV_TBL_SIZE / HNS3_MAX_PF_NUM)
> +
> +#define HNS3_PF_CFG_BLOCK_SIZE               32
> +#define HNS3_PF_CFG_DESC_NUM \
> +     (HNS3_PF_CFG_BLOCK_SIZE / HNS3_CFG_RD_LEN_BYTES)
> +
> +#define HNS3_DEFAULT_ENABLE_PFC_NUM  0
> +
> +#define HNS3_INTR_UNREG_FAIL_RETRY_CNT       5
> +#define HNS3_INTR_UNREG_FAIL_DELAY_MS        500
> +
> +#define HNS3_QUIT_RESET_CNT          10
> +#define HNS3_QUIT_RESET_DELAY_MS     100
> +
> +#define HNS3_POLL_RESPONE_MS         1
> +
> +#define HNS3_MAX_USER_PRIO           8
> +#define HNS3_PG_NUM                  4
> +enum hns3_fc_mode {
> +     HNS3_FC_NONE,
> +     HNS3_FC_RX_PAUSE,
> +     HNS3_FC_TX_PAUSE,
> +     HNS3_FC_FULL,
> +     HNS3_FC_DEFAULT
> +};
> +
> +#define HNS3_SCH_MODE_SP     0
> +#define HNS3_SCH_MODE_DWRR   1
> +struct hns3_pg_info {
> +     uint8_t pg_id;
> +     uint8_t pg_sch_mode;  /* 0: sp; 1: dwrr */
> +     uint8_t tc_bit_map;
> +     uint32_t bw_limit;
> +     uint8_t tc_dwrr[HNS3_MAX_TC_NUM];
> +};
> +
> +struct hns3_tc_info {
> +     uint8_t tc_id;
> +     uint8_t tc_sch_mode;  /* 0: sp; 1: dwrr */
> +     uint8_t pgid;
> +     uint32_t bw_limit;
> +     uint8_t up_to_tc_map; /* user priority mapping on the TC */
> +};
> +
> +struct hns3_dcb_info {
> +     uint8_t num_tc;
> +     uint8_t num_pg;     /* It must be 1 if vNET-Base schd */
> +     uint8_t pg_dwrr[HNS3_PG_NUM];
> +     uint8_t prio_tc[HNS3_MAX_USER_PRIO];
> +     struct hns3_pg_info pg_info[HNS3_PG_NUM];
> +     struct hns3_tc_info tc_info[HNS3_MAX_TC_NUM];
> +     uint8_t hw_pfc_map; /* Allow for packet drop or not on this TC */
> +     uint8_t pfc_en; /* Pfc enabled or not for user priority */
> +};
> +
> +enum hns3_fc_status {
> +     HNS3_FC_STATUS_NONE,
> +     HNS3_FC_STATUS_MAC_PAUSE,
> +     HNS3_FC_STATUS_PFC,
> +};
> +
> +struct hns3_tc_queue_info {
> +     uint8_t tqp_offset;     /* TQP offset from base TQP */
> +     uint8_t tqp_count;      /* Total TQPs */
> +     uint8_t tc;             /* TC index */
> +     bool enable;            /* If this TC is enable or not */
> +};
> +
> +struct hns3_cfg {
> +     uint8_t vmdq_vport_num;
> +     uint8_t tc_num;
> +     uint16_t tqp_desc_num;
> +     uint16_t rx_buf_len;
> +     uint16_t rss_size_max;
> +     uint8_t phy_addr;
> +     uint8_t media_type;
> +     uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
> +     uint8_t default_speed;
> +     uint32_t numa_node_map;
> +     uint8_t speed_ability;
> +     uint16_t umv_space;
> +};
> +
> +/* mac media type */
> +enum hns3_media_type {
> +     HNS3_MEDIA_TYPE_UNKNOWN,
> +     HNS3_MEDIA_TYPE_FIBER,
> +     HNS3_MEDIA_TYPE_COPPER,
> +     HNS3_MEDIA_TYPE_BACKPLANE,
> +     HNS3_MEDIA_TYPE_NONE,
> +};
> +
> +struct hns3_mac {
> +     uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
> +     bool default_addr_setted; /* whether default addr (mac_addr) is set */
> +     uint8_t media_type;
> +     uint8_t phy_addr;
> +     uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
> +     uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
> +     uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
> +     uint32_t link_speed;      /* ETH_SPEED_NUM_ */
> +};
> +
> +
> +/* Primary process maintains driver state in main thread.
> + *
> + * +---------------+
> + * | UNINITIALIZED |<-----------+
> + * +---------------+         |
> + *   |.eth_dev_init          |.eth_dev_uninit
> + *   V                       |
> + * +---------------+------------+
> + * |  INITIALIZED  |
> + * +---------------+<-----------<---------------+
> + *   |.dev_configure         |               |
> + *   V                       |failed         |
> + * +---------------+------------+            |
> + * |  CONFIGURING  |                         |
> + * +---------------+----+                    |
> + *   |success        |                       |
> + *   |               |               +---------------+
> + *   |               |               |    CLOSING    |
> + *   |               |               +---------------+
> + *   |               |                       ^
> + *   V               |.dev_configure         |
> + * +---------------+----+                    |.dev_close
> + * |  CONFIGURED   |----------------------------+
> + * +---------------+<-----------+
> + *   |.dev_start             |
> + *   V                       |
> + * +---------------+         |
> + * |   STARTING    |------------^
> + * +---------------+ failed  |
> + *   |success                |
> + *   |               +---------------+
> + *   |               |   STOPPING    |
> + *   |               +---------------+
> + *   |                       ^
> + *   V                       |.dev_stop
> + * +---------------+------------+
> + * |    STARTED    |
> + * +---------------+
> + */
> +enum hns3_adapter_state {
> +     HNS3_NIC_UNINITIALIZED = 0,
> +     HNS3_NIC_INITIALIZED,
> +     HNS3_NIC_CONFIGURING,
> +     HNS3_NIC_CONFIGURED,
> +     HNS3_NIC_STARTING,
> +     HNS3_NIC_STARTED,
> +     HNS3_NIC_STOPPING,
> +     HNS3_NIC_CLOSING,
> +     HNS3_NIC_CLOSED,
> +     HNS3_NIC_REMOVED,
> +     HNS3_NIC_NSTATES
> +};
> +
> +/* Reset various stages, execute in order */
> +enum hns3_reset_stage {
> +     /* Stop query services, stop transceiver, disable MAC */
> +     RESET_STAGE_DOWN,
> +     /* Clear reset completion flags, disable send command */
> +     RESET_STAGE_PREWAIT,
> +     /* Inform IMP to start resetting */
> +     RESET_STAGE_REQ_HW_RESET,
> +     /* Waiting for hardware reset to complete */
> +     RESET_STAGE_WAIT,
> +     /* Reinitialize hardware */
> +     RESET_STAGE_DEV_INIT,
> +     /* Restore user settings and enable MAC */
> +     RESET_STAGE_RESTORE,
> +     /* Restart query services, start transceiver */
> +     RESET_STAGE_DONE,
> +     /* Not in reset state */
> +     RESET_STAGE_NONE,
> +};
> +
> +enum hns3_reset_level {
> +     HNS3_NONE_RESET,
> +     HNS3_VF_FUNC_RESET, /* A VF function reset */
> +     /*
> +      * All VFs under a PF perform function reset.
> +      * The kernel PF driver uses mailbox to inform the DPDK VF to reset;
> +      * the value of the reset level and the one defined in the kernel
> +      * driver should be the same.
> +      */
> +     HNS3_VF_PF_FUNC_RESET = 2,
> +     /*
> +      * All VFs under a PF perform FLR reset.
> +      * The kernel PF driver uses mailbox to inform the DPDK VF to reset;
> +      * the value of the reset level and the one defined in the kernel
> +      * driver should be the same.
> +      */
> +     HNS3_VF_FULL_RESET = 3,
> +     HNS3_FLR_RESET,     /* A VF perform FLR reset */
> +     /* All VFs under the rootport perform a global or IMP reset */
> +     HNS3_VF_RESET,
> +     HNS3_FUNC_RESET,    /* A PF function reset */
> +     /* All PFs under the rootport perform a global reset */
> +     HNS3_GLOBAL_RESET,
> +     HNS3_IMP_RESET,     /* All PFs under the rootport perform an IMP reset */
> +     HNS3_MAX_RESET
> +};
> +
> +enum hns3_wait_result {
> +     HNS3_WAIT_UNKNOWN,
> +     HNS3_WAIT_REQUEST,
> +     HNS3_WAIT_SUCCESS,
> +     HNS3_WAIT_TIMEOUT
> +};
> +
> +#define HNS3_RESET_SYNC_US 100000
> +
> +struct hns3_reset_stats {
> +     uint64_t request_cnt; /* Total request reset times */
> +     uint64_t global_cnt;  /* Total GLOBAL reset times */
> +     uint64_t imp_cnt;     /* Total IMP reset times */
> +     uint64_t exec_cnt;    /* Total reset executive times */
> +     uint64_t success_cnt; /* Total reset successful times */
> +     uint64_t fail_cnt;    /* Total reset failed times */
> +     uint64_t merge_cnt;   /* Total merged in high reset times */
> +};
> +
> +typedef bool (*check_completion_func)(struct hns3_hw *hw);
> +
> +struct hns3_wait_data {
> +     void *hns;
> +     uint64_t end_ms;
> +     uint64_t interval;
> +     int16_t count;
> +     enum hns3_wait_result result;
> +     check_completion_func check_completion;
> +};
> +
> +struct hns3_reset_ops {
> +     void (*reset_service)(void *arg);
> +     int (*stop_service)(struct hns3_adapter *hns);
> +     int (*prepare_reset)(struct hns3_adapter *hns);
> +     int (*wait_hardware_ready)(struct hns3_adapter *hns);
> +     int (*reinit_dev)(struct hns3_adapter *hns);
> +     int (*restore_conf)(struct hns3_adapter *hns);
> +     int (*start_service)(struct hns3_adapter *hns);
> +};
> +
> +enum hns3_schedule {
> +     SCHEDULE_NONE,
> +     SCHEDULE_PENDING,
> +     SCHEDULE_REQUESTED,
> +     SCHEDULE_DEFERRED,
> +};
> +
> +struct hns3_reset_data {
> +     enum hns3_reset_stage stage;
> +     rte_atomic16_t schedule;
> +     /* Reset flag, covering the entire reset process */
> +     rte_atomic16_t resetting;
> +     /* Used to disable sending cmds during reset */
> +     rte_atomic16_t disable_cmd;
> +     /* The reset level being processed */
> +     enum hns3_reset_level level;
> +     /* Reset level set, each bit represents a reset level */
> +     uint64_t pending;
> +     /* Request reset level set, from interrupt or mailbox */
> +     uint64_t request;
> +     int attempts; /* Reset failure retry */
> +     int retries;  /* Timeout failure retry in reset_post */
> +     /*
> +      * At the time of global or IMP reset, the command cannot be sent to
> +      * stop the tx/rx queues. Tx/Rx queues may access mbufs during the
> +      * reset process, so the mbufs are required to be released after the
> +      * reset is completed. The mbuf_deferred_free is used to mark whether
> +      * the mbufs need to be released.
> +      */
> +     bool mbuf_deferred_free;
> +     struct timeval start_time;
> +     struct hns3_reset_stats stats;
> +     const struct hns3_reset_ops *ops;
> +     struct hns3_wait_data *wait_data;
> +};
> +
> +struct hns3_hw {
> +     struct rte_eth_dev_data *data;
> +     void *io_base;
> +     struct hns3_mac mac;
> +     unsigned int secondary_cnt; /* Number of secondary processes init'd. */
> +     uint32_t fw_version;
> +
> +     uint16_t num_msi;
> +     uint16_t total_tqps_num;    /* total task queue pairs of this PF */
> +     uint16_t tqps_num;          /* num task queue pairs of this function */
> +     uint16_t rss_size_max;      /* HW defined max RSS task queue */
> +     uint16_t rx_buf_len;
> +     uint16_t num_tx_desc;       /* desc num of per tx queue */
> +     uint16_t num_rx_desc;       /* desc num of per rx queue */
> +
> +     struct rte_ether_addr mc_addrs[HNS3_MC_MACADDR_NUM];
> +     int mc_addrs_num; /* Multicast mac addresses number */
> +
> +     uint8_t num_tc;             /* Total number of enabled TCs */
> +     uint8_t hw_tc_map;
> +     enum hns3_fc_mode current_mode;
> +     enum hns3_fc_mode requested_mode;
> +     struct hns3_dcb_info dcb_info;
> +     enum hns3_fc_status current_fc_status; /* current flow control status */
> +     struct hns3_tc_queue_info tc_queue[HNS3_MAX_TC_NUM];
> +     uint16_t alloc_tqps;
> +     uint16_t alloc_rss_size;    /* Queue number per TC */
> +
> +     uint32_t flag;
> +     /*
> +      * PMD setup and configuration is not thread safe. Since it is not
> +      * performance sensitive, it is better to guarantee thread-safety
> +      * and add device level lock. Adapter control operations which
> +      * change its state should acquire the lock.
> +      */
> +     rte_spinlock_t lock;
> +     enum hns3_adapter_state adapter_state;
> +     struct hns3_reset_data reset;
> +};
> +
> +#define HNS3_FLAG_TC_BASE_SCH_MODE           1
> +#define HNS3_FLAG_VNET_BASE_SCH_MODE         2
> +
> +struct hns3_err_msix_intr_stats {
> +     uint64_t mac_afifo_tnl_intr_cnt;
> +     uint64_t ppu_mpf_abnormal_intr_st2_cnt;
> +     uint64_t ssu_port_based_pf_intr_cnt;
> +     uint64_t ppp_pf_abnormal_intr_cnt;
> +     uint64_t ppu_pf_abnormal_intr_cnt;
> +};
> +
> +/* vlan entry information. */
> +struct hns3_user_vlan_table {
> +     LIST_ENTRY(hns3_user_vlan_table) next;
> +     bool hd_tbl_status;
> +     uint16_t vlan_id;
> +};
> +
> +struct hns3_port_base_vlan_config {
> +     uint16_t state;
> +     uint16_t pvid;
> +};
> +
> +/* Vlan tag configuration for RX direction */
> +struct hns3_rx_vtag_cfg {
> +     uint8_t rx_vlan_offload_en; /* Whether to enable rx vlan offload */
> +     uint8_t strip_tag1_en;      /* Whether to strip inner vlan tag */
> +     uint8_t strip_tag2_en;      /* Whether to strip outer vlan tag */
> +     uint8_t vlan1_vlan_prionly; /* Inner VLAN tag up to descriptor enable */
> +     uint8_t vlan2_vlan_prionly; /* Outer VLAN tag up to descriptor enable */
> +};
> +
> +/* Vlan tag configuration for TX direction */
> +struct hns3_tx_vtag_cfg {
> +     bool accept_tag1;           /* Whether to accept tag1 packets from host */
> +     bool accept_untag1;         /* Whether to accept untag1 packets from host */
> +     bool accept_tag2;
> +     bool accept_untag2;
> +     bool insert_tag1_en;        /* Whether insert inner vlan tag */
> +     bool insert_tag2_en;        /* Whether insert outer vlan tag */
> +     uint16_t default_tag1;      /* The default inner vlan tag to insert */
> +     uint16_t default_tag2;      /* The default outer vlan tag to insert */
> +};
> +
> +struct hns3_vtag_cfg {
> +     struct hns3_rx_vtag_cfg rx_vcfg;
> +     struct hns3_tx_vtag_cfg tx_vcfg;
> +};
> +
> +/* Request types for IPC. */
> +enum hns3_mp_req_type {
> +     HNS3_MP_REQ_START_RXTX = 1,
> +     HNS3_MP_REQ_STOP_RXTX,
> +     HNS3_MP_REQ_MAX
> +};
> +
> +/* Parameters for IPC. */
> +struct hns3_mp_param {
> +     enum hns3_mp_req_type type;
> +     int port_id;
> +     int result;
> +};
> +
> +/* Request timeout for IPC. */
> +#define HNS3_MP_REQ_TIMEOUT_SEC 5
> +
> +/* Key string for IPC. */
> +#define HNS3_MP_NAME "net_hns3_mp"
> +
> +struct hns3_pf {
> +     struct hns3_adapter *adapter;
> +     bool is_main_pf;
> +
> +     uint32_t pkt_buf_size; /* Total pf buf size for tx/rx */
> +     uint32_t tx_buf_size; /* Tx buffer size for each TC */
> +     uint32_t dv_buf_size; /* Dv buffer size for each TC */
> +
> +     uint16_t mps; /* Max packet size */
> +
> +     uint8_t tx_sch_mode;
> +     uint8_t tc_max; /* max number of tc driver supported */
> +     uint8_t local_max_tc; /* max number of local tc */
> +     uint8_t pfc_max;
> +     uint8_t prio_tc[HNS3_MAX_USER_PRIO]; /* TC indexed by prio */
> +     uint16_t pause_time;
> +     bool support_fc_autoneg;       /* support FC autonegotiate */
> +
> +     uint16_t wanted_umv_size;
> +     uint16_t max_umv_size;
> +     uint16_t used_umv_size;
> +
> +     /* Statistics information for abnormal interrupt */
> +     struct hns3_err_msix_intr_stats abn_int_stats;
> +
> +     bool support_sfp_query;
> +
> +     struct hns3_vtag_cfg vtag_config;
> +     struct hns3_port_base_vlan_config port_base_vlan_cfg;
> +     LIST_HEAD(vlan_tbl, hns3_user_vlan_table) vlan_list;
> +};
> +
> +struct hns3_vf {
> +     struct hns3_adapter *adapter;
> +};
> +
> +struct hns3_adapter {
> +     struct hns3_hw hw;
> +
> +     /* Specific for PF or VF */
> +     bool is_vf; /* false - PF, true - VF */
> +     union {
> +             struct hns3_pf pf;
> +             struct hns3_vf vf;
> +     };
> +};
> +
> +#define HNS3_DEV_SUPPORT_DCB_B                       0x0
> +
> +#define hns3_dev_dcb_supported(hw) \
> +     hns3_get_bit((hw)->flag, HNS3_DEV_SUPPORT_DCB_B)
> +
> +#define HNS3_DEV_PRIVATE_TO_HW(adapter) \
> +     (&((struct hns3_adapter *)adapter)->hw)
> +#define HNS3_DEV_PRIVATE_TO_ADAPTER(adapter) \
> +     ((struct hns3_adapter *)adapter)
> +#define HNS3_DEV_PRIVATE_TO_PF(adapter) \
> +     (&((struct hns3_adapter *)adapter)->pf)
> +#define HNS3VF_DEV_PRIVATE_TO_VF(adapter) \
> +     (&((struct hns3_adapter *)adapter)->vf)
> +#define HNS3_DEV_HW_TO_ADAPTER(hw) \
> +     container_of(hw, struct hns3_adapter, hw)
> +
> +#define hns3_set_field(origin, mask, shift, val) \
> +     do { \
> +             (origin) &= (~(mask)); \
> +             (origin) |= ((val) << (shift)) & (mask); \
> +     } while (0)
> +#define hns3_get_field(origin, mask, shift) \
> +     (((origin) & (mask)) >> (shift))
> +#define hns3_set_bit(origin, shift, val) \
> +     hns3_set_field((origin), (0x1UL << (shift)), (shift), (val))
> +#define hns3_get_bit(origin, shift) \
> +     hns3_get_field((origin), (0x1UL << (shift)), (shift))
> +
> +/*
> + * upper_32_bits - return bits 32-63 of a number
> + * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress
> + * the "right shift count >= width of type" warning when that quantity is
> + * 32-bits.
> + */
> +#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
> +
> +/* lower_32_bits - return bits 0-31 of a number */
> +#define lower_32_bits(n) ((uint32_t)(n))
> +
> +#define BIT(nr) (1UL << (nr))
> +
> +#define BITS_PER_LONG        (__SIZEOF_LONG__ * 8)
> +#define GENMASK(h, l) \
> +     (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
> +
> +#define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))
> +#define rounddown(x, y) ((x) - ((x) % (y)))
> +
> +#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
> +
> +#define max_t(type, x, y) ({                    \
> +     type __max1 = (x);                      \
> +     type __max2 = (y);                      \
> +     __max1 > __max2 ? __max1 : __max2; })
> +
> +static inline void hns3_write_reg(void *base, uint32_t reg, uint32_t value)
> +{
> +     rte_write32(value, (volatile void *)((char *)base + reg));
> +}
> +
> +static inline uint32_t hns3_read_reg(void *base, uint32_t reg)
> +{
> +     return rte_read32((volatile void *)((char *)base + reg));
> +}
> +
> +#define hns3_write_dev(a, reg, value) \
> +     hns3_write_reg((a)->io_base, (reg), (value))
> +
> +#define hns3_read_dev(a, reg) \
> +     hns3_read_reg((a)->io_base, (reg))
> +
> +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
> +
> +#define NEXT_ITEM_OF_ACTION(act, actions, index)                        \
> +     do {                                                            \
> +             act = (actions) + (index);                              \
> +             while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {        \
> +                     (index)++;                                      \
> +                     act = actions + index;                          \
> +             }                                                       \
> +     } while (0)
> +
> +#define MSEC_PER_SEC              1000L
> +#define USEC_PER_MSEC             1000L
> +
> +static inline uint64_t
> +get_timeofday_ms(void)
> +{
> +     struct timeval tv;
> +
> +     (void)gettimeofday(&tv, NULL);
> +
> +     return (uint64_t)tv.tv_sec * MSEC_PER_SEC + tv.tv_usec / USEC_PER_MSEC;
> +}
> +
> +static inline uint64_t
> +hns3_atomic_test_bit(unsigned int nr, volatile uint64_t *addr)
> +{
> +     uint64_t res;
> +
> +     rte_mb();
Is *addr CIO memory or an MMIO register?
Looking at patch 20/22, it should be an MMIO register. Whether a barrier is required depends on the preceding accesses, so I advise moving the barrier out.
> +     res = ((*addr) & (1UL << nr)) != 0;
> +     rte_mb();

If *addr is CIO memory, rte_cio_rmb is enough; rte_mb is overkill.
If *addr is an MMIO register, the rte_rmb one-way barrier is also adequate.
Whether this barrier is required depends on the following accesses, so I also advise moving it out.
As a common API, this function should not include barriers internally.
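
For illustration, a minimal sketch of the barrier-free variant with the
ordering left at the call site (the bit name and handler in the caller
are illustrative, not from the patch):

static inline uint64_t
hns3_atomic_test_bit(unsigned int nr, volatile uint64_t *addr)
{
	/* No implicit barriers; callers order accesses as needed. */
	return ((*addr) & (1UL << nr)) != 0;
}

/* Call site that really needs read ordering against earlier accesses: */
rte_rmb();
if (hns3_atomic_test_bit(HNS3_EXAMPLE_BIT, &hw->reset.pending))
	handle_pending_reset(hw);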

> +     return res;
> +}
> +
> +static inline void
> +hns3_atomic_set_bit(unsigned int nr, volatile uint64_t *addr)
> +{
> +     __sync_fetch_and_or(addr, (1UL << nr));

GCC/Clang provide '__atomic' builtins to replace the legacy '__sync' builtins; new code should always use the '__atomic' builtins rather than the '__sync' ones.
This function can be replaced with __atomic_fetch_or(addr, (1UL << nr), __ATOMIC_RELAXED);
https://gcc.gnu.org/onlinedocs/gcc-6.1.0/gcc/_005f_005fatomic-Builtins.html
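
For the other helpers below, the equivalent '__atomic' conversions would look
roughly like this (a sketch keeping the original return semantics):

static inline void
hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr)
{
	__atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELAXED);
}

static inline int64_t
hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
{
	uint64_t mask = 1UL << nr;

	/* Returns the old value of the bit, as the __sync version did. */
	return __atomic_fetch_and(addr, ~mask, __ATOMIC_RELAXED) & mask;
}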

> +}
> +
> +static inline void
> +hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr)
> +{
> +     __sync_fetch_and_and(addr, ~(1UL << nr));
> +}
> +
> +static inline int64_t
> +hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
> +{
> +     uint64_t mask = (1UL << nr);
> +
> +     return __sync_fetch_and_and(addr, ~mask) & mask;
> +}
> +
> +#endif /* _HNS3_ETHDEV_H_ */
> --
> 2.7.4


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 14:08   ` Jerin Jacob Kollanukkaran
  2019-08-30  3:22     ` Wei Hu (Xavier)
@ 2019-08-30 14:57     ` Ferruh Yigit
  1 sibling, 0 replies; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 14:57 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 3:08 PM, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Wei Hu (Xavier)
>> Sent: Friday, August 23, 2019 7:17 PM
>> To: dev@dpdk.org
>> Cc: linuxarm@huawei.com; xavier_huwei@163.com;
>> liudongdong3@huawei.com; forest.zhouchang@huawei.com
>> Subject: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
>>
>> This patch add build related files for hns3 PMD driver.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>> +# Hisilicon HNS3 PMD driver
>> +#
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=y
> 
> # Please add meson support
> # Move build infra to the first patch

+1 to move this to be beginning of the patchset

> # See git log drivers/net/octeontx2 as example
> 
> 
>> diff --git a/config/common_base b/config/common_base
>> index 8ef75c2..71a2c33 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -282,6 +282,11 @@
>> CONFIG_RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC=n
>>  CONFIG_RTE_LIBRTE_HINIC_PMD=n
>>
>>  #
>> +# Compile burst-oriented HNS3 PMD driver
>> +#
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
>> +
>> +#
>>  # Compile burst-oriented IXGBE PMD driver
>>  #
>>  CONFIG_RTE_LIBRTE_IXGBE_PMD=y
>> diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang
>> b/config/defconfig_arm64-armv8a-linuxapp-clang
>> index d3b4dad..c73f5fb 100644
>> --- a/config/defconfig_arm64-armv8a-linuxapp-clang
>> +++ b/config/defconfig_arm64-armv8a-linuxapp-clang
>> @@ -6,3 +6,5 @@
>>
>>  CONFIG_RTE_TOOLCHAIN="clang"
>>  CONFIG_RTE_TOOLCHAIN_CLANG=y
>> +
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
>> diff --git a/doc/guides/nics/features/hns3.ini
>> b/doc/guides/nics/features/hns3.ini
>> new file mode 100644
>> index 0000000..d38d35e
>> --- /dev/null
>> +++ b/doc/guides/nics/features/hns3.ini
>> @@ -0,0 +1,38 @@
>> +;
>> +; Supported features of the 'hns3' network poll mode driver.
> 
> Add doc changes when driver feature gets added.
> # See git log drivers/net/octeontx2 as example

+1, I put comments on the patches for same thing

> 
>> +;
>> +; Refer to default.ini for the full list of available PMD features.
>> +;
>> +[Features]
>> +Link status          = Y
>> +MTU update           = Y
>> +Jumbo frame          = Y
>> +Promiscuous mode     = Y
>> +Allmulticast mode    = Y
>> diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
>> new file mode 100644
>> index 0000000..c9d0253
>> --- /dev/null
>> +++ b/doc/guides/nics/hns3.rst
>> @@ -0,0 +1,55 @@
>> +..  SPDX-License-Identifier: BSD-3-Clause
>> +    Copyright(c) 2018-2019 Hisilicon Limited.
>> +
>> +HNS3 Poll Mode Driver
>> +===============================
>> +
>> +The Hisilicon Network Subsystem is a long term evolution IP which is
>> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
>> +
>> +The HNS3 PMD (librte_pmd_hns3) provides poll mode driver support
>> +for hns3(Hisilicon Network Subsystem 3) network engine.
>> +
>> +Features
>> +--------
>> +
>> +Features of the HNS3 PMD are:
>> +
>> +- Arch support: ARMv8.
> 
> Is it an integrated NIC controller? Why is it supported only on ARMv8?
> The reason I am asking is that enabling CONFIG_RTE_LIBRTE_HNS3_PMD=y
> only on arm64 will create a case where the build fails for arm64 and passes for
> x86. I would like to avoid such disparity. If the build passes on x86, enable it
> in the common config, not in the arm64 config.
> 
> 
>> +- Multiple queues for TX and RX
>> +- Receive Side Scaling (RSS)
>> +- Packet type information
>> +- Checksum offload
>> +- Promiscuous mode
>> +- Multicast mode
>> +- Port hardware statistics
>> +- Jumbo frames
>> +- Link state information
>> +- VLAN stripping
> 
> 
>> +cflags += '-DALLOW_EXPERIMENTAL_API'
>> diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map
>> b/drivers/net/hns3/rte_pmd_hns3_version.map
>> new file mode 100644
>> index 0000000..3aef967
>> --- /dev/null
>> +++ b/drivers/net/hns3/rte_pmd_hns3_version.map
>> @@ -0,0 +1,3 @@
>> +DPDK_19.08 {
> 
> Change to 19.11
> 


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
                     ` (2 preceding siblings ...)
  2019-08-30  6:17   ` Stephen Hemminger
@ 2019-08-30 14:58   ` Ferruh Yigit
  2019-09-10 11:43     ` Wei Hu (Xavier)
  2019-08-30 15:00   ` Ferruh Yigit
  2019-08-30 15:12   ` Ferruh Yigit
  5 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 14:58 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch add build related files for hns3 PMD driver.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
>  MAINTAINERS                                  |  7 ++++
>  config/common_armv8a_linux                   |  5 +++
>  config/common_base                           |  5 +++
>  config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
>  doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++
>  doc/guides/nics/hns3.rst                     | 55 ++++++++++++++++++++++++++++

This file needs to be added to the index file: 'doc/guides/nics/index.rst'

<...>

> diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang b/config/defconfig_arm64-armv8a-linuxapp-clang
> index d3b4dad..c73f5fb 100644
> --- a/config/defconfig_arm64-armv8a-linuxapp-clang
> +++ b/config/defconfig_arm64-armv8a-linuxapp-clang
> @@ -6,3 +6,5 @@
>  
>  CONFIG_RTE_TOOLCHAIN="clang"
>  CONFIG_RTE_TOOLCHAIN_CLANG=y
> +
> +CONFIG_RTE_LIBRTE_HNS3_PMD=n

I can understand the architecture ones, but why is clang not supported? Can you
please add this support?
<...>

> diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
> new file mode 100644
> index 0000000..c9d0253
> --- /dev/null
> +++ b/doc/guides/nics/hns3.rst
> @@ -0,0 +1,55 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018-2019 Hisilicon Limited.
> +
> +HNS3 Poll Mode Driver
> +===============================
> +
> +The Hisilicon Network Subsystem is a long term evolution IP which is
> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.

Can you please add an official link/reference to the product?


<...>

> @@ -0,0 +1,43 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018-2019 Hisilicon Limited.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_pmd_hns3.a
> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +CFLAGS += -DALLOW_EXPERIMENTAL_API -fsigned-char

Why '-DALLOW_EXPERIMENTAL_API' is required? Can we remove it?

> +
> +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
> +LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
> +LDLIBS += -lrte_bus_pci

Are all these libraries really required, like kvargs? Can you please clean the
unused ones?

> +
> +EXPORT_MAP := rte_pmd_hns3_version.map
> +
> +LIBABIVER := 2

It should be 1.

<...>

> +# install this header file
> +SYMLINK-$(CONFIG_RTE_LIBRTE_HNS3_PMD)-include := hns3_ethdev.h

No need to expose the header file, it is not public header.

<...>

> @@ -0,0 +1,19 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018-2019 Hisilicon Limited
> +
> +sources = files('hns3_cmd.c',
> +	'hns3_dcb.c',
> +	'hns3_intr.c',
> +	'hns3_ethdev.c',
> +	'hns3_ethdev_vf.c',
> +	'hns3_fdir.c',
> +	'hns3_flow.c',
> +	'hns3_mbx.c',
> +	'hns3_regs.c',
> +	'hns3_rss.c',
> +	'hns3_rxtx.c',
> +	'hns3_stats.c',
> +	'hns3_mp.c')
> +deps += ['hash']
> +
> +cflags += '-DALLOW_EXPERIMENTAL_API'

There is a better way to do this in meson, please check other samples. But as
with the Makefile comment, is it really needed? If so, can you please add the
experimental APIs used as a comment to both meson and Makefile?
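
For reference, a minimal sketch of the meson idiom other drivers use instead of
patching cflags directly (assuming the experimental APIs are really needed):

# drivers/net/hns3/meson.build
allow_experimental_apis = true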

> diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
> new file mode 100644
> index 0000000..3aef967
> --- /dev/null
> +++ b/drivers/net/hns3/rte_pmd_hns3_version.map
> @@ -0,0 +1,3 @@
> +DPDK_19.08 {

DPDK_19.11

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
                     ` (3 preceding siblings ...)
  2019-08-30 14:58   ` Ferruh Yigit
@ 2019-08-30 15:00   ` Ferruh Yigit
  2019-08-31  8:07     ` Wei Hu (Xavier)
  2019-08-30 15:12   ` Ferruh Yigit
  5 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:00 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch add build related files for hns3 PMD driver.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
>  MAINTAINERS                                  |  7 ++++
>  config/common_armv8a_linux                   |  5 +++
>  config/common_base                           |  5 +++
>  config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
>  doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++

There are separate PF and VF drivers in the patchset; this is mostly represented
by two different .ini files, hns3.ini and hns3_vf.ini. Can you please reflect
the feature differences in these files?

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 03/22] net/hns3: register hns3 PMD driver
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 03/22] net/hns3: register hns3 PMD driver Wei Hu (Xavier)
@ 2019-08-30 15:01   ` Ferruh Yigit
  2019-09-06  6:20     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:01 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch registers hns3 PMD driver and adds the definition for log
> interfaces.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
<...>

> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> new file mode 100644
> index 0000000..0587a9c
> --- /dev/null
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -0,0 +1,141 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2019 Hisilicon Limited.
> + */
> +
> +#include <errno.h>
> +#include <stdarg.h>
> +#include <stdbool.h>
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <sys/queue.h>
> +#include <inttypes.h>
> +#include <unistd.h>
> +#include <arpa/inet.h>
> +#include <rte_alarm.h>
> +#include <rte_atomic.h>
> +#include <rte_bus_pci.h>
> +#include <rte_byteorder.h>
> +#include <rte_common.h>
> +#include <rte_cycles.h>
> +#include <rte_debug.h>
> +#include <rte_dev.h>
> +#include <rte_eal.h>
> +#include <rte_ether.h>
> +#include <rte_ethdev_driver.h>
> +#include <rte_ethdev_pci.h>
> +#include <rte_interrupts.h>
> +#include <rte_io.h>
> +#include <rte_log.h>
> +#include <rte_pci.h>

Are all these headers really used at this stage? Can you please clean them up
and add them in later patches when they are required?

<...>

> +static int
> +hns3_dev_init(struct rte_eth_dev *eth_dev)
> +{
> +	struct rte_device *dev = eth_dev->device;
> +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev);
> +	struct hns3_adapter *hns = eth_dev->data->dev_private;
> +	struct hns3_hw *hw = &hns->hw;
> +	uint16_t device_id = pci_dev->id.device_id;
> +	int ret;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> +		return 0;
> +
> +	eth_dev->dev_ops = &hns3_eth_dev_ops;
> +	rte_eth_copy_pci_info(eth_dev, pci_dev);

I think there is no need to call 'rte_eth_copy_pci_info()'; it is already called
by 'rte_eth_dev_pci_generic_probe()' before 'hns3_dev_init()' is called.

> +
> +	hns->is_vf = false;

There is a separate VF driver, is this field still needed?

> +	hw->data = eth_dev->data;
> +	hw->adapter_state = HNS3_NIC_INITIALIZED;
> +
> +	return 0;

Init should set the 'RTE_ETH_DEV_CLOSE_REMOVE' flag, and '.dev_close' should
free the driver-allocated resources, which do not exist up until this patch:

 +eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 04/22] net/hns3: add support for cmd of hns3 PMD driver
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 04/22] net/hns3: add support for cmd of " Wei Hu (Xavier)
@ 2019-08-30 15:02   ` Ferruh Yigit
  2019-09-06  6:49     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:02 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch adds support for cmd of hns3 PMD driver, driver can interact
> with firmware through command to complete hardware configuration.
> 
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
> index bfb54f2..84fcf34 100644
> --- a/drivers/net/hns3/hns3_ethdev.h
> +++ b/drivers/net/hns3/hns3_ethdev.h
> @@ -39,7 +39,6 @@
>  
>  #define HNS3_4_TCS			4
>  #define HNS3_8_TCS			8
> -#define HNS3_MAX_TC_NUM			8

This definition is used by 'hns3_ethdev.h' but moved to 'hns3_cmd.h', and
'hns3_ethdev.h' doesn't include 'hns3_cmd.h', which will force whatever .c file
includes 'hns3_ethdev.h' to include 'hns3_cmd.h' before it, and these kinds of
.h ordering dependencies are easy to break.
Would it work if 'hns3_ethdev.h' included 'hns3_cmd.h'?
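
i.e. a one-line sketch near the top of 'hns3_ethdev.h':

#include "hns3_cmd.h" /* users of this header then get HNS3_MAX_TC_NUM */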


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 06/22] net/hns3: add support for MAC address related operations
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 06/22] net/hns3: add support for MAC address related operations Wei Hu (Xavier)
@ 2019-08-30 15:03   ` Ferruh Yigit
  2019-09-05  5:40     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:03 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch adds the following mac address related operations defined in
> struct eth_dev_ops: mac_addr_add, mac_addr_remove, mac_addr_set
> and set_mc_addr_list.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> +static int
> +hns3_set_mc_mac_addr_list(struct rte_eth_dev *dev,
> +			  struct rte_ether_addr *mc_addr_set,
> +			  uint32_t nb_mc_addr)
> +{
> +	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct rte_ether_addr reserved_addr_list[HNS3_MC_MACADDR_NUM];
> +	struct rte_ether_addr add_addr_list[HNS3_MC_MACADDR_NUM];
> +	struct rte_ether_addr rm_addr_list[HNS3_MC_MACADDR_NUM];
> +	struct rte_ether_addr *addr;
> +	int reserved_addr_num;
> +	int add_addr_num;
> +	int rm_addr_num;
> +	int mc_addr_num;
> +	int num;
> +	int ret;
> +	int i;
> +
> +	/* Check if input parameters are valid */
> +	ret = hns3_set_mc_addr_chk_param(hw, mc_addr_set, nb_mc_addr);
> +	if (ret)
> +		return ret;
> +
> +	rte_spinlock_lock(&hw->lock);

Is locking required here?

<...>

> @@ -1582,6 +2394,10 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
>  
>  static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.dev_close          = hns3_dev_close,
> +	.mac_addr_add           = hns3_add_mac_addr,
> +	.mac_addr_remove        = hns3_remove_mac_addr,
> +	.mac_addr_set           = hns3_set_default_mac_addr,
> +	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>  };

Can you please update .ini file in this patch and mark following features as
supported:
Unicast MAC filter
Multicast MAC filter

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 07/22] net/hns3: add support for some misc operations
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 07/22] net/hns3: add support for some misc operations Wei Hu (Xavier)
@ 2019-08-30 15:04   ` Ferruh Yigit
  0 siblings, 0 replies; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:04 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch adds the following operations defined in struct eth_dev_ops:
> mtu_set, infos_get and fw_version_get for hns3 PMD driver.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
>  drivers/net/hns3/hns3_ethdev.c | 137 ++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 136 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index 44e21ac..ced9348 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -40,6 +40,8 @@
>  int hns3_logtype_init;
>  int hns3_logtype_driver;
>  
> +static int hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
> +

This forward declaration is not needed.

>  static int
>  hns3_config_tso(struct hns3_hw *hw, unsigned int tso_mss_min,
>  		unsigned int tso_mss_max)
> @@ -1000,6 +1002,131 @@ hns3_config_mtu(struct hns3_hw *hw, uint16_t mps)
>  }
>  
>  static int
> +hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +{
> +	struct hns3_adapter *hns = dev->data->dev_private;
> +	uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
> +	struct hns3_hw *hw = &hns->hw;
> +	bool is_jumbo_frame;
> +	int ret;
> +
> +	if (mtu < RTE_ETHER_MIN_MTU || frame_size > HNS3_MAX_FRAME_LEN) {
> +		hns3_err(hw, "Failed to set mtu, mtu(%u) invalid. valid "
> +			 "range: %d~%d", mtu, RTE_ETHER_MIN_MTU, HNS3_MAX_MTU);
> +		return -EINVAL;
> +	}

If 'hns3_dev_infos_get()' sets 'min_mtu' & 'max_mtu' properly, the above check
will already be done by 'rte_eth_dev_set_mtu()'.

<...>

> +static void
> +hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
> +{
> +	struct hns3_adapter *hns = eth_dev->data->dev_private;
> +	struct hns3_hw *hw = &hns->hw;
> +
> +	info->max_rx_queues = hw->tqps_num;
> +	info->max_tx_queues = hw->tqps_num;
> +	info->max_rx_pktlen = HNS3_MAX_FRAME_LEN; /* CRC included */
> +	info->min_rx_bufsize = hw->rx_buf_len;
> +	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
> +	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
> +	info->min_mtu = RTE_ETHER_MIN_MTU;

'RTE_ETHER_MIN_MTU' is the default value and can be skipped.

<...>

> @@ -2394,6 +2521,9 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
>  
>  static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.dev_close          = hns3_dev_close,
> +	.mtu_set            = hns3_dev_mtu_set,
> +	.dev_infos_get          = hns3_dev_infos_get,
> +	.fw_version_get         = hns3_fw_version_get,

Can you please update .ini file in this patch and mark following features as
supported:
MTU update
FW version



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 08/22] net/hns3: add support for link update operation
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 08/22] net/hns3: add support for link update operation Wei Hu (Xavier)
@ 2019-08-30 15:04   ` Ferruh Yigit
  2019-09-06  6:56     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:04 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch adds link update operation to hns3 PMD driver.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> @@ -2528,6 +2725,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.mac_addr_remove        = hns3_remove_mac_addr,
>  	.mac_addr_set           = hns3_set_default_mac_addr,
>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
> +	.link_update            = hns3_dev_link_update,

Can you please update .ini file in this patch and mark following features as
supported:
Link status

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver Wei Hu (Xavier)
@ 2019-08-30 15:06   ` Ferruh Yigit
  2019-09-06  8:23     ` Wei Hu (Xavier)
  2019-09-06 11:08     ` Wei Hu (Xavier)
  0 siblings, 2 replies; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:06 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch adds support for flow directory of hns3 PMD driver.
> Flow directory feature is only supported in hns3 PF driver.
> It supports the network L2\L3\L4 and tunnel packet creation,
> deletion, flushing, and querying hit statistics.

This patch also adds rte_flow support; can you please add this to the commit log?

> 
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> @@ -2726,6 +2744,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.mac_addr_set           = hns3_set_default_mac_addr,
>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>  	.link_update            = hns3_dev_link_update,
> +	.filter_ctrl            = hns3_dev_filter_ctrl,

'hns3_dev_filter_ctrl()' does not exist up until this patch.

This is the problem of not enabling the driver yet; it is very hard to see these
kinds of issues. When the Makefile/meson patch is moved to the beginning of the
patchset and the driver starts to build, these issues will become visible.

>  };
>  
>  static int
> @@ -2739,6 +2758,16 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
>  	int ret;
>  
>  	PMD_INIT_FUNC_TRACE();
> +	eth_dev->process_private = (struct hns3_process_private *)
> +	    rte_zmalloc_socket("hns3_filter_list",
> +			       sizeof(struct hns3_process_private),
> +			       RTE_CACHE_LINE_SIZE, eth_dev->device->numa_node);
> +	if (eth_dev->process_private == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to alloc memory for process private");
> +		return -ENOMEM;
> +	}
> +	/* initialize flow filter lists */
> +	hns3_filterlist_init(eth_dev);

Can you please free 'process_private' in the close dev_ops?
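
e.g. a sketch, assuming the teardown lives in 'hns3_dev_close':

static void
hns3_dev_close(struct rte_eth_dev *eth_dev)
{
	/* ... existing teardown ... */
	rte_free(eth_dev->process_private);
	eth_dev->process_private = NULL;
}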

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 10/22] net/hns3: add support for RSS of hns3 PMD driver
  2019-08-23 13:46 ` [dpdk-dev] [PATCH 10/22] net/hns3: add support for RSS " Wei Hu (Xavier)
@ 2019-08-30 15:07   ` Ferruh Yigit
  2019-08-31  9:16     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:07 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang, Thomas Monjalon

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> This patch adds support for RSS of hns3 PMD driver.
> It included the following functions in file hns3_rss.c:
> 1) Set/query hash key, rss_hf by .rss_hash_update/.rss_hash_conf_get ops
>    callback functions.
> 2) Set/query redirection table by .reta_update/.reta_query. ops callback
>    functions.
> 3) Set/query hash algorithm by .filter_ctrl ops callback function when
>    the 'filter_type' is RTE_ETH_FILTER_HASH.

The legacy filter API is deprecated; there is a recent patch from Thomas that
stops documenting it as a feature:
Commit 030febb6642c ("doc: remove deprecated ethdev features")

> 
> And it included the following functions in file hns3_flow.c:
> 1) Set hash key, rss_hf, redirection table and algorithm by .create ops
>    callback function.
> 2) Disable RSS by .destroy or .flush ops callback function.
> 3) Check the effectiveness of the RSS's configuration by .validate ops
>    callback function.
> 
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
<...>

> @@ -2744,6 +2748,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.mac_addr_set           = hns3_set_default_mac_addr,
>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>  	.link_update            = hns3_dev_link_update,
> +	.rss_hash_update        = hns3_dev_rss_hash_update,
> +	.rss_hash_conf_get      = hns3_dev_rss_hash_conf_get,
> +	.reta_update            = hns3_dev_rss_reta_update,
> +	.reta_query             = hns3_dev_rss_reta_query,

Can you please update .ini file in this patch and mark following features as
supported:
RSS key update
RSS reta update

For 'RSS hash' a datapath update is also required; I am not sure in which patch
that support is added.

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 11/22] net/hns3: add support for flow control of hns3 PMD driver
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 11/22] net/hns3: add support for flow control " Wei Hu (Xavier)
@ 2019-08-30 15:07   ` Ferruh Yigit
  2019-08-31  8:04     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:07 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds support for MAC PAUSE flow control and priority flow
> control of hns3 PMD driver. All user priorities(up) must be mapped to
> tc0 when MAC PAUSE flow control is enabled. Ups can be mapped to other
> tcs driver permit when PFC is enabled. Flow control function by default
> is turned off to ensure that app startup state is the same each time.

As far as I can see the patch enables both DCB and flow control; can you please
either split the patch or update the commit log to cover both features?

> 
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>

<...>

>  static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.dev_close          = hns3_dev_close,
>  	.mtu_set            = hns3_dev_mtu_set,
>  	.dev_infos_get          = hns3_dev_infos_get,
>  	.fw_version_get         = hns3_fw_version_get,
> +	.flow_ctrl_get          = hns3_flow_ctrl_get,
> +	.flow_ctrl_set          = hns3_flow_ctrl_set,
> +	.priority_flow_ctrl_set = hns3_priority_flow_ctrl_set,

Can you please update .ini file in this patch and mark following features as
supported:
Flow control

>  	.mac_addr_add           = hns3_add_mac_addr,
>  	.mac_addr_remove        = hns3_remove_mac_addr,
>  	.mac_addr_set           = hns3_set_default_mac_addr,
> @@ -2753,6 +2949,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.reta_update            = hns3_dev_rss_reta_update,
>  	.reta_query             = hns3_dev_rss_reta_query,
>  	.filter_ctrl            = hns3_dev_filter_ctrl,
> +	.get_dcb_info           = hns3_get_dcb_info,

Can you please update .ini file in this patch and mark following features as
supported:
DCB

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 12/22] net/hns3: add support for VLAN of hns3 PMD driver
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 12/22] net/hns3: add support for VLAN " Wei Hu (Xavier)
@ 2019-08-30 15:08   ` Ferruh Yigit
  2019-08-31  9:04     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:08 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds support for VLAN related operation of hns3 PMD driver.
> 
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> @@ -2949,6 +3615,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.reta_update            = hns3_dev_rss_reta_update,
>  	.reta_query             = hns3_dev_rss_reta_query,
>  	.filter_ctrl            = hns3_dev_filter_ctrl,
> +	.vlan_filter_set        = hns3_vlan_filter_set,
> +	.vlan_tpid_set          = hns3_vlan_tpid_set,
> +	.vlan_offload_set       = hns3_vlan_offload_set,
> +	.vlan_pvid_set          = hns3_vlan_pvid_set,

Can you please update .ini file in this patch and mark following features as
supported:
VLAN filter
VLAN offload

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 13/22] net/hns3: add support for mailbox of hns3 PMD driver
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 13/22] net/hns3: add support for mailbox " Wei Hu (Xavier)
@ 2019-08-30 15:08   ` Ferruh Yigit
  2019-09-06 11:25     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:08 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds support for mailbox of hns3 PMD driver, mailbox is
> used for communication between PF and VF driver.
> 
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> @@ -27,6 +27,7 @@
>  #include <rte_io.h>
>  
>  #include "hns3_cmd.h"
> +#include "hns3_mbx.h"

Why include the new header if the .c file is not using anything from it? Same
for other .c files below.

<...>

> @@ -0,0 +1,337 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2019 Hisilicon Limited.
> + */
> +
> +#include <errno.h>
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/queue.h>
> +#include <inttypes.h>
> +#include <unistd.h>
> +#include <rte_byteorder.h>
> +#include <rte_common.h>
> +#include <rte_cycles.h>
> +#include <rte_debug.h>
> +#include <rte_dev.h>
> +#include <rte_eal.h>
> +#include <rte_ether.h>
> +#include <rte_ethdev_driver.h>
> +#include <rte_io.h>
> +#include <rte_spinlock.h>
> +#include <rte_pci.h>
> +#include <rte_bus_pci.h>

Same comment for all .c files in the driver: the above inclusion list feels like
a copy/paste; can you please include only the necessary headers?

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF PMD driver
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF " Wei Hu (Xavier)
@ 2019-08-30 15:11   ` Ferruh Yigit
  2019-08-31  9:03     ` Wei Hu (Xavier)
  2019-09-06 11:27     ` Wei Hu (Xavier)
  0 siblings, 2 replies; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:11 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds support for hns3 VF PMD driver.
> 
> In current version, we only support VF device is bound to vfio_pci or
> igb_uio and then taken over by DPDK when PF device is taken over by kernel
> mode hns3 ethdev driver, VF is not supported when PF device is taken over
> by DPDK.

I think it is better to say 'when PF is driven by DPDK driver' than 'when PF
device is taken over by DPDK'.

Can you please state this (VF is only supported when PF is driven by the kernel)
in your documentation?
And perhaps add VF driver support to the feature list to highlight it...

> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
                     ` (4 preceding siblings ...)
  2019-08-30 15:00   ` Ferruh Yigit
@ 2019-08-30 15:12   ` Ferruh Yigit
  2019-08-31  8:07     ` Wei Hu (Xavier)
  5 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:12 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch add build related files for hns3 PMD driver.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
>  MAINTAINERS                                  |  7 ++++
>  config/common_armv8a_linux                   |  5 +++
>  config/common_base                           |  5 +++
>  config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
>  doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++
>  doc/guides/nics/hns3.rst                     | 55 ++++++++++++++++++++++++++++
>  drivers/net/Makefile                         |  1 +
>  drivers/net/hns3/Makefile                    | 43 ++++++++++++++++++++++
>  drivers/net/hns3/meson.build                 | 19 ++++++++++
>  drivers/net/hns3/rte_pmd_hns3_version.map    |  3 ++
>  drivers/net/meson.build                      |  1 +
>  mk/rte.app.mk                                |  1 +

Can you also update the release notes to announce the new PMD:
'doc/guides/rel_notes/release_19_11.rst'


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation Wei Hu (Xavier)
  2019-08-23 15:42   ` Aaron Conole
@ 2019-08-30 15:13   ` Ferruh Yigit
  2019-09-11 11:40     ` Wei Hu (Xavier)
  1 sibling, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:13 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds queue related operation, package sending and
> receiving function codes.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> +
> +#define __packed __attribute__((packed))
> +/* hardware spec ring buffer format */
> +__packed struct hns3_desc {

Can you use existing '__rte_packed' instead?
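
e.g. a sketch using the attribute DPDK already provides:

/* hardware spec ring buffer format */
struct hns3_desc {
	/* fields unchanged */
} __rte_packed;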

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 16/22] net/hns3: add start stop configure promiscuous ops
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 16/22] net/hns3: add start stop configure promiscuous ops Wei Hu (Xavier)
@ 2019-08-30 15:14   ` Ferruh Yigit
  2019-09-06 11:51     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:14 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds dev_start, dev_stop, dev_configure, promiscuous_enable,
> promiscuous_disable, allmulticast_enable, allmulticast_disable,
> dev_infos_get related function codes.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> @@ -3626,6 +4031,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>  	.vlan_offload_set       = hns3_vlan_offload_set,
>  	.vlan_pvid_set          = hns3_vlan_pvid_set,
>  	.get_dcb_info           = hns3_get_dcb_info,
> +	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,

'hns3_dev_supported_ptypes_get' has been defined in a previous patch; what do
you think about defining and using it in the same patch?

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 21/22] net/hns3: add multiple process support for hns3 PMD driver
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 21/22] net/hns3: add multiple process support " Wei Hu (Xavier)
@ 2019-08-30 15:14   ` Ferruh Yigit
  2019-09-02 13:41     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:14 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds multiple process support for hns3 PMD driver.
> Multi-process support selection queue by configuring RSS or
> flow director. The primary process supports various management
> ops, and the secondary process only supports queries ops.
> The primary process notifies the secondary processes to start
> or stop transceiver.
> 
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> @@ -1556,6 +1559,25 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
>  	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
>  };
>  
> +static const struct eth_dev_ops hns3vf_eth_dev_secondary_ops = {
> +	.stats_get          = hns3_stats_get,
> +	.stats_reset        = hns3_stats_reset,
> +	.xstats_get         = hns3_dev_xstats_get,
> +	.xstats_get_names   = hns3_dev_xstats_get_names,
> +	.xstats_reset	    = hns3_dev_xstats_reset,
> +	.xstats_get_by_id   = hns3_dev_xstats_get_by_id,
> +	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
> +	.dev_infos_get      = hns3vf_dev_infos_get,
> +	.link_update        = hns3vf_dev_link_update,
> +	.rss_hash_update    = hns3_dev_rss_hash_update,
> +	.rss_hash_conf_get  = hns3_dev_rss_hash_conf_get,
> +	.reta_update        = hns3_dev_rss_reta_update,
> +	.reta_query         = hns3_dev_rss_reta_query,
> +	.filter_ctrl        = hns3_dev_filter_ctrl,
> +	.get_reg            = hns3_get_regs,
> +	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
> +};
> +

There shouldn't be a need to define separate dev_ops for the secondary
processes; what is the difference from the one used for the primary process,
and why not use that one?

<...>

> +/*
> + * Initialize by secondary process.
> + */
> +void hns3_mp_init_secondary(void)
> +{
> +	rte_mp_action_register(HNS3_MP_NAME, mp_secondary_handle);

What is this handler for? In most cases the MP communication is done at the EAL
level and nothing needs to be done at the driver level.

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 19/22] net/hns3: add stats related ops for hns3 PMD driver
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 19/22] net/hns3: add stats related ops " Wei Hu (Xavier)
@ 2019-08-30 15:20   ` Ferruh Yigit
  2019-08-31  8:49     ` Wei Hu (Xavier)
  0 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:20 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
> This patch adds stats_get, stats_reset, xstats_get, xstats_get_names
> xstats_reset, xstats_get_by_id and xstats_get_names_by_id related
> function codes.
> 
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>

> +	for (i = 0; i < size; i++) {
> +		if (ids[i] >= cnt_stats) {
> +			PMD_INIT_LOG(ERR, "id value is invalid");
> +			return -EINVAL;
> +		}
> +		strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
> +			strlen(xstats_names_copy[ids[i]].name));

Getting following warning from this line:

.../drivers/net/hns3/hns3_stats.c: In function
‘hns3_dev_xstats_get_names_by_id’:

.../drivers/net/hns3/hns3_stats.c:825:3: error: ‘strncpy’ output truncated
before terminating nul copying as many bytes from a string as its length
[-Werror=stringop-truncation]
  825 |   strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,

      |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  826 |    strlen(xstats_names_copy[ids[i]].name));

      |    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
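
One common fix is to copy with explicit NUL termination, e.g. a sketch using
strlcpy from rte_string_fns.h:

strlcpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
	sizeof(xstats_names[i].name));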


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver
  2019-08-23 13:46 [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Wei Hu (Xavier)
                   ` (21 preceding siblings ...)
  2019-08-23 13:47 ` [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files Wei Hu (Xavier)
@ 2019-08-30 15:23 ` Ferruh Yigit
  2019-08-31  8:06   ` Wei Hu (Xavier)
  22 siblings, 1 reply; 75+ messages in thread
From: Ferruh Yigit @ 2019-08-30 15:23 UTC (permalink / raw)
  To: Wei Hu (Xavier), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
> The Hisilicon Network Subsystem is a long term evolution IP which is
> supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
> 
> This series add DPDK rte_ethdev poll mode driver for hns3(Hisilicon
> Network Subsystem 3) network engine.
> 
> Wei Hu (Xavier) (22):
>   net/hns3: add hardware registers definition
>   net/hns3: add some definitions for data structure and macro
>   net/hns3: register hns3 PMD driver
>   net/hns3: add support for cmd of hns3 PMD driver
>   net/hns3: add the initialization of hns3 PMD driver
>   net/hns3: add support for MAC address related operations
>   net/hns3: add support for some misc operations
>   net/hns3: add support for link update operation
>   net/hns3: add support for flow directory of hns3 PMD driver
>   net/hns3: add support for RSS of hns3 PMD driver
>   net/hns3: add support for flow control of hns3 PMD driver
>   net/hns3: add support for VLAN of hns3 PMD driver
>   net/hns3: add support for mailbox of hns3 PMD driver
>   net/hns3: add support for hns3 VF PMD driver
>   net/hns3: add package and queue related operation
>   net/hns3: add start stop configure promiscuous ops
>   net/hns3: add dump register ops for hns3 PMD driver
>   net/hns3: add abnormal interrupt process for hns3 PMD driver
>   net/hns3: add stats related ops for hns3 PMD driver
>   net/hns3: add reset related process for hns3 PMD driver
>   net/hns3: add multiple process support for hns3 PMD driver
>   net/hns3: add hns3 build files
> 

There are some build errors for 32-bit [1]. I am aware that 32-bit is not in the
supported arch list, but the build errors are just related to the log format
identifiers; it is good practice to use 'PRIx64' and friends, which will also
fix the build issue.

[1]
In file included from .../drivers/net/hns3/hns3_regs.c:35:
.../drivers/net/hns3/hns3_regs.c: In function ‘hns3_get_32_bit_regs’:
.../drivers/net/hns3/hns3_logs.h:16:38: error: format ‘%ld’ expects argument of
type ‘long int’, but argument 6 has type ‘unsigned int’ [-Werror=format=]
   16 |  rte_log(level, hns3_logtype_driver, "%s %s(): " fmt, \
      |                                      ^~~~~~~~~~~
.../drivers/net/hns3/hns3_logs.h:20:2: note: in expansion of macro
‘PMD_DRV_LOG_RAW’
   20 |  PMD_DRV_LOG_RAW(hw, RTE_LOG_ERR, fmt "\n", ## args)
      |  ^~~~~~~~~~~~~~~
.../drivers/net/hns3/hns3_regs.c:177:3: note: in expansion of macro ‘hns3_err’
  177 |   hns3_err(hw, "Failed to allocate %ld bytes needed to "
      |   ^~~~~~~~
.../drivers/net/hns3/hns3_regs.c:177:38: note: format string is defined here
  177 |   hns3_err(hw, "Failed to allocate %ld bytes needed to "
      |                                    ~~^
      |                                      |
      |                                      long int
      |                                    %d
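
A minimal sketch of the suggested fix (assuming the length variable is a
uint64_t; 'hns3_err' and 'len' are from the patch under review, and the
<inttypes.h> macros expand to the correct conversion on both 32-bit and
64-bit targets):

#include <inttypes.h>

	hns3_err(hw, "Failed to allocate %" PRIu64 " bytes needed to "
		 "store statistics", len);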


^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30  3:22     ` Wei Hu (Xavier)
@ 2019-08-31  2:10       ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  2:10 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, dev; +Cc: xavier_huwei, linuxarm, forest.zhouchang



On 2019/8/30 11:22, Wei Hu (Xavier) wrote:
> Hi,  Jerin
>
>
> On 2019/8/23 22:08, Jerin Jacob Kollanukkaran wrote:
>>> -----Original Message-----
>>> From: dev <dev-bounces@dpdk.org> On Behalf Of Wei Hu (Xavier)
>>> Sent: Friday, August 23, 2019 7:17 PM
>>> To: dev@dpdk.org
>>> Cc: linuxarm@huawei.com; xavier_huwei@163.com;
>>> liudongdong3@huawei.com; forest.zhouchang@huawei.com
>>> Subject: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
>>>
>>> This patch adds build related files for the hns3 PMD driver.
>>>
>>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>> ---
>>> +# Hisilicon HNS3 PMD driver
>>> +#
>>> +CONFIG_RTE_LIBRTE_HNS3_PMD=y
>> # Please add meson support
> This patch already contains meson support, thanks.
>> # Move build infra to the first patch
>> # See git log drivers/net/octeontx2 as example
> OK, I will adjust the order of the patches in this series and send V2.
>>
>>> diff --git a/config/common_base b/config/common_base
>>> index 8ef75c2..71a2c33 100644
>>> --- a/config/common_base
>>> +++ b/config/common_base
>>> @@ -282,6 +282,11 @@
>>> CONFIG_RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC=n
>>>  CONFIG_RTE_LIBRTE_HINIC_PMD=n
>>>
>>>  #
>>> +# Compile burst-oriented HNS3 PMD driver
>>> +#
>>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
>>> +
>>> +#
>>>  # Compile burst-oriented IXGBE PMD driver
>>>  #
>>>  CONFIG_RTE_LIBRTE_IXGBE_PMD=y
>>> diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang
>>> b/config/defconfig_arm64-armv8a-linuxapp-clang
>>> index d3b4dad..c73f5fb 100644
>>> --- a/config/defconfig_arm64-armv8a-linuxapp-clang
>>> +++ b/config/defconfig_arm64-armv8a-linuxapp-clang
>>> @@ -6,3 +6,5 @@
>>>
>>>  CONFIG_RTE_TOOLCHAIN="clang"
>>>  CONFIG_RTE_TOOLCHAIN_CLANG=y
>>> +
>>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
>>> diff --git a/doc/guides/nics/features/hns3.ini
>>> b/doc/guides/nics/features/hns3.ini
>>> new file mode 100644
>>> index 0000000..d38d35e
>>> --- /dev/null
>>> +++ b/doc/guides/nics/features/hns3.ini
>>> @@ -0,0 +1,38 @@
>>> +;
>>> +; Supported features of the 'hns3' network poll mode driver.
>> Add doc changes when driver feature gets added.
>> # See git log drivers/net/octeontx2 as example
> OK, I will modify the patches and send V2.
> Thanks
>>> +;
>>> +; Refer to default.ini for the full list of available PMD features.
>>> +;
>>> +[Features]
>>> +Link status          = Y
>>> +MTU update           = Y
>>> +Jumbo frame          = Y
>>> +Promiscuous mode     = Y
>>> +Allmulticast mode    = Y
>>> diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
>>> new file mode 100644
>>> index 0000000..c9d0253
>>> --- /dev/null
>>> +++ b/doc/guides/nics/hns3.rst
>>> @@ -0,0 +1,55 @@
>>> +..  SPDX-License-Identifier: BSD-3-Clause
>>> +    Copyright(c) 2018-2019 Hisilicon Limited.
>>> +
>>> +HNS3 Poll Mode Driver
>>> +===============================
>>> +
>>> +The Hisilicon Network Subsystem is a long term evolution IP which is
>>> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
>>> +
>>> +The HNS3 PMD (librte_pmd_hns3) provides poll mode driver support
>>> +for hns3(Hisilicon Network Subsystem 3) network engine.
>>> +
>>> +Features
>>> +--------
>>> +
>>> +Features of the HNS3 PMD are:
>>> +
>>> +- Arch support: ARMv8.
>> Is it an integrated NIC controller? Why is it supported only on ARMv8?
>> The reason I am asking is that enabling CONFIG_RTE_LIBRTE_HNS3_PMD=y
>> only on arm64 will create a case where the build fails for arm64 and passes for
>> x86. I would like to avoid such disparity. If the build is passing on x86, enable it
>> in the common config, not in the arm64 config.
> Currently this network engine is integrated in the SoCs. The SoCs can
> be used as PCIe EP integrated NIC controllers or as general-purpose
> CPUs in devices such as servers. The network engine is accessed by the
> ARM cores in the SoCs.
> We will enable CONFIG_RTE_LIBRTE_HNS3_PMD=y in the common_linux config in V2.
> Thanks.
Hi, Jerin

    as a PCIe EP integrated NIC controllers -> as a PCIe EP Intelligent
NIC controllers

    Since it is currently only accessed by ARM cores on SoCs,
    maybe it is also reasonable to compile only on ARMv8, right?

    Regards

Xavier
>>> +- Multiple queues for TX and RX
>>> +- Receive Side Scaling (RSS)
>>> +- Packet type information
>>> +- Checksum offload
>>> +- Promiscuous mode
>>> +- Multicast mode
>>> +- Port hardware statistics
>>> +- Jumbo frames
>>> +- Link state information
>>> +- VLAN stripping
>>> +cflags += '-DALLOW_EXPERIMENTAL_API'
>>> diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map
>>> b/drivers/net/hns3/rte_pmd_hns3_version.map
>>> new file mode 100644
>>> index 0000000..3aef967
>>> --- /dev/null
>>> +++ b/drivers/net/hns3/rte_pmd_hns3_version.map
>>> @@ -0,0 +1,3 @@
>>> +DPDK_19.08 {
>> Change to 19.11
> OK, I will modify the patches and send V2. Thanks.
>
> Regards
> Xavier
>>
>>
>
> _______________________________________________
> Linuxarm mailing list
> Linuxarm@huawei.com
> http://hulk.huawei.com/mailman/listinfo/linuxarm
>
> .
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 11/22] net/hns3: add support for flow control of hns3 PMD driver
  2019-08-30 15:07   ` Ferruh Yigit
@ 2019-08-31  8:04     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:04 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:07, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for MAC PAUSE flow control and priority flow
>> control of hns3 PMD driver. All user priorities(up) must be mapped to
>> tc0 when MAC PAUSE flow control is enabled. Ups can be mapped to other
>> tcs driver permit when PFC is enabled. Flow control function by default
>> is turned off to ensure that app startup state is the same each time.
> As far as I can see the patch both enable DCB and flow control, can you please
> either split the patch or update the commit log to cover both features?
Hi, Ferruh Yigit

    Thanks for your comments.
    We will modify the commit log in patch V2 as follows:

This patch adds support for MAC PAUSE flow control and priority flow
control (PFC).

MAC PAUSE flow control features:
All user priorities (UPs) are mapped to TC0. Setting the flow mode and
pause time is supported.

DCB features:
UPs can be mapped to the other TCs the driver permits, according to
business requirements. DCB information can be configured and PFC
enabled through the rte_eth_dev_configure interface. Besides, enabling
flow control on a priority is supported by the
rte_eth_dev_priority_flow_ctrl_set interface, which can also set the
flow mode and pause time. Manual setting of ETS is not supported; the
driver distributes bandwidth equally to each TC according to the number
of TCs in use.

In addition, the flow control function is turned off by default to
ensure that the app startup state is the same each time.
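
A minimal sketch of enabling PFC on one user priority through the API
named above (assuming 'port_id' is an already configured port; the field
values are illustrative, not hns3 defaults):

	struct rte_eth_pfc_conf pfc_conf = {
		.fc = {
			.mode = RTE_FC_FULL,  /* generate and honor PAUSE */
			.pause_time = 0xffff, /* pause quanta to advertise */
		},
		.priority = 0, /* user priority to enable flow control on */
	};
	int ret = rte_eth_dev_priority_flow_ctrl_set(port_id, &pfc_conf);
	if (ret != 0)
		printf("PFC setup failed: %d\n", ret);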



>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
> <...>
>
>>  static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.dev_close          = hns3_dev_close,
>>  	.mtu_set            = hns3_dev_mtu_set,
>>  	.dev_infos_get          = hns3_dev_infos_get,
>>  	.fw_version_get         = hns3_fw_version_get,
>> +	.flow_ctrl_get          = hns3_flow_ctrl_get,
>> +	.flow_ctrl_set          = hns3_flow_ctrl_set,
>> +	.priority_flow_ctrl_set = hns3_priority_flow_ctrl_set,
> Can you please update .ini file in this patch and mark following features as
> supported:
> Flow control
OK, We will fix it in patch V2.
>>  	.mac_addr_add           = hns3_add_mac_addr,
>>  	.mac_addr_remove        = hns3_remove_mac_addr,
>>  	.mac_addr_set           = hns3_set_default_mac_addr,
>> @@ -2753,6 +2949,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.reta_update            = hns3_dev_rss_reta_update,
>>  	.reta_query             = hns3_dev_rss_reta_query,
>>  	.filter_ctrl            = hns3_dev_filter_ctrl,
>> +	.get_dcb_info           = hns3_get_dcb_info,
> Can you please update .ini file in this patch and mark following features as
> supported:
> DCB
>
OK, We will fix it in patch V2.

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver
  2019-08-30 15:23 ` [dpdk-dev] [PATCH 00/22] add hns3 ethernet PMD driver Ferruh Yigit
@ 2019-08-31  8:06   ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:06 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:23, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> The Hisilicon Network Subsystem is a long term evolution IP which is
>> supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
>>
>> This series add DPDK rte_ethdev poll mode driver for hns3(Hisilicon
>> Network Subsystem 3) network engine.
>>
>> Wei Hu (Xavier) (22):
>>   net/hns3: add hardware registers definition
>>   net/hns3: add some definitions for data structure and macro
>>   net/hns3: register hns3 PMD driver
>>   net/hns3: add support for cmd of hns3 PMD driver
>>   net/hns3: add the initialization of hns3 PMD driver
>>   net/hns3: add support for MAC address related operations
>>   net/hns3: add support for some misc operations
>>   net/hns3: add support for link update operation
>>   net/hns3: add support for flow directory of hns3 PMD driver
>>   net/hns3: add support for RSS of hns3 PMD driver
>>   net/hns3: add support for flow control of hns3 PMD driver
>>   net/hns3: add support for VLAN of hns3 PMD driver
>>   net/hns3: add support for mailbox of hns3 PMD driver
>>   net/hns3: add support for hns3 VF PMD driver
>>   net/hns3: add package and queue related operation
>>   net/hns3: add start stop configure promiscuous ops
>>   net/hns3: add dump register ops for hns3 PMD driver
>>   net/hns3: add abnormal interrupt process for hns3 PMD driver
>>   net/hns3: add stats related ops for hns3 PMD driver
>>   net/hns3: add reset related process for hns3 PMD driver
>>   net/hns3: add multiple process support for hns3 PMD driver
>>   net/hns3: add hns3 build files
>>
> There are some build errors for 32-bit [1]. I am aware that 32-bit is not in the
> supported arch list, but the build errors are just related to the log format
> identifiers; it is good practice to use 'PRIx64' and friends, which will also fix
> the build issue.
>
> [1]
> In file included from .../drivers/net/hns3/hns3_regs.c:35:
> .../drivers/net/hns3/hns3_regs.c: In function ‘hns3_get_32_bit_regs’:
> .../drivers/net/hns3/hns3_logs.h:16:38: error: format ‘%ld’ expects argument of
> type ‘long int’, but argument 6 has type ‘unsigned int’ [-Werror=format=]
>    16 |  rte_log(level, hns3_logtype_driver, "%s %s(): " fmt, \
>       |                                      ^~~~~~~~~~~
> .../drivers/net/hns3/hns3_logs.h:20:2: note: in expansion of macro
> ‘PMD_DRV_LOG_RAW’
>    20 |  PMD_DRV_LOG_RAW(hw, RTE_LOG_ERR, fmt "\n", ## args)
>       |  ^~~~~~~~~~~~~~~
> .../drivers/net/hns3/hns3_regs.c:177:3: note: in expansion of macro ‘hns3_err’
>   177 |   hns3_err(hw, "Failed to allocate %ld bytes needed to "
>       |   ^~~~~~~~
> .../drivers/net/hns3/hns3_regs.c:177:38: note: format string is defined here
>   177 |   hns3_err(hw, "Failed to allocate %ld bytes needed to "
>       |                                    ~~^
>       |                                      |
>       |                                      long int
>       |                                    %d
>
Hi, Ferruh Yigit

    Thanks for your suggestion.
    We will fix it in patch V2.

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30 15:12   ` Ferruh Yigit
@ 2019-08-31  8:07     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:07 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:12, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds build related files for the hns3 PMD driver.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>>  MAINTAINERS                                  |  7 ++++
>>  config/common_armv8a_linux                   |  5 +++
>>  config/common_base                           |  5 +++
>>  config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
>>  doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++
>>  doc/guides/nics/hns3.rst                     | 55 ++++++++++++++++++++++++++++
>>  drivers/net/Makefile                         |  1 +
>>  drivers/net/hns3/Makefile                    | 43 ++++++++++++++++++++++
>>  drivers/net/hns3/meson.build                 | 19 ++++++++++
>>  drivers/net/hns3/rte_pmd_hns3_version.map    |  3 ++
>>  drivers/net/meson.build                      |  1 +
>>  mk/rte.app.mk                                |  1 +
> Can you also update the release notes to announce the new PMD:
> 'doc/guides/rel_notes/release_19_11.rst'
Hi, Ferruh Yigit

    Thanks for your suggestion.
    We will update this file in patch V2.

    Regards
Xavier
>
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30 15:00   ` Ferruh Yigit
@ 2019-08-31  8:07     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:07 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:00, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds build related files for the hns3 PMD driver.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>>  MAINTAINERS                                  |  7 ++++
>>  config/common_armv8a_linux                   |  5 +++
>>  config/common_base                           |  5 +++
>>  config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
>>  doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++
> There are separate PF and VF drivers in the patchset; this is mostly represented
> by two different .ini files, hns3.ini and hns3_vf.ini. Can you please reflect
> the feature differences in these files?
>
Hi, Ferruh Yigit

    Thanks for your suggestion.
    We will fix it in patch V2.

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30  6:17   ` Stephen Hemminger
@ 2019-08-31  8:44     ` Wei Hu (Xavier)
  2019-09-03 15:27     ` Ye Xiaolong
  1 sibling, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:44 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 14:17, Stephen Hemminger wrote:
> On Fri, 23 Aug 2019 21:47:11 +0800
> "Wei Hu (Xavier)" <xavier.huwei@huawei.com> wrote:
>
>> +Limitations or Known issues
>> +---------------------------
>> +Build with clang is not supported yet.
>> +Currently, only ARMv8 architecture is supported.
>> \ No newline at end of file
> Please fix this. You need to add a new line at the end of the
> file. Vi does this by default, and Emacs has an option
> to do this.
>
Hi, Stephen Hemminger
    Thanks for your comment.
    We will fix it in patch V2.

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30  6:16   ` Stephen Hemminger
@ 2019-08-31  8:46     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:46 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 14:16, Stephen Hemminger wrote:
> On Fri, 23 Aug 2019 21:47:11 +0800
> "Wei Hu (Xavier)" <xavier.huwei@huawei.com> wrote:
>
> Thanks for doing documentation on this driver.
>
>> +The Hisilicon Network Subsystem is a long term evolution IP which is
>> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
> This sentence needs some editing to remove marketing speak.
> It is not clear what "long term evolution IP" means when read
> by a DPDK user. Why not just say that "Hisilicon Network Subsystem
> (HNS) is used in System On Chip (SOC) devices such as the Kunpeng 920".
>
> It is standard practice in technical documents to spell out
> the meaning of acronyms on first use; then use the acronym
> there after.
>
Hi, Stephen Hemminger
    Thanks for your comments,
    We will fix it in patch V2.

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 19/22] net/hns3: add stats related ops for hns3 PMD driver
  2019-08-30 15:20   ` Ferruh Yigit
@ 2019-08-31  8:49     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  8:49 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:20, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds stats_get, stats_reset, xstats_get, xstats_get_names
>> xstats_reset, xstats_get_by_id and xstats_get_names_by_id related
>> function codes.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> +	for (i = 0; i < size; i++) {
>> +		if (ids[i] >= cnt_stats) {
>> +			PMD_INIT_LOG(ERR, "id value is invalid");
>> +			return -EINVAL;
>> +		}
>> +		strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
>> +			strlen(xstats_names_copy[ids[i]].name));
> Getting the following warning from this line:
>
> .../drivers/net/hns3/hns3_stats.c: In function
> ‘hns3_dev_xstats_get_names_by_id’:
>
> .../drivers/net/hns3/hns3_stats.c:825:3: error: ‘strncpy’ output truncated
> before terminating nul copying as many bytes from a string as its length
> [-Werror=stringop-truncation]
>   825 |   strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
>
>       |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   826 |    strlen(xstats_names_copy[ids[i]].name));
>
>       |    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hi, Ferruh Yigit
    Thanks for your comments,
    We will fix it in patch V2.
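
One possible fix, as a sketch: rte_strlcpy() from rte_string_fns.h bounds
the copy by the destination size and always NUL-terminates, which avoids
the truncation warning (loop variables are from the quoted code above):

#include <rte_string_fns.h>

	for (i = 0; i < size; i++) {
		if (ids[i] >= cnt_stats) {
			PMD_INIT_LOG(ERR, "id value is invalid");
			return -EINVAL;
		}
		rte_strlcpy(xstats_names[i].name,
			    xstats_names_copy[ids[i]].name,
			    sizeof(xstats_names[i].name));
	}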

    Regards
Xavier
>
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF PMD driver
  2019-08-30 15:11   ` Ferruh Yigit
@ 2019-08-31  9:03     ` Wei Hu (Xavier)
  2019-09-06 11:27     ` Wei Hu (Xavier)
  1 sibling, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  9:03 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:11, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for hns3 VF PMD driver.
>>
>> In current version, we only support VF device is bound to vfio_pci or
>> igb_uio and then taken over by DPDK when PF device is taken over by kernel
>> mode hns3 ethdev driver, VF is not supported when PF device is taken over
>> by DPDK.
> I think it is better to say 'when the PF is driven by the DPDK driver' than 'when
> the PF device is taken over by DPDK'.
>
> Can you please state this (VF only supported when the PF is driven by the kernel)
> in your documentation?
> And perhaps add VF driver support to the feature list to highlight it...
Hi, Ferruh Yigit
    We will fix it in patch V2.
    Thanks!

    Regards
Xavier
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>
>
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 12/22] net/hns3: add support for VLAN of hns3 PMD driver
  2019-08-30 15:08   ` Ferruh Yigit
@ 2019-08-31  9:04     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  9:04 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:08, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for VLAN related operations of the hns3 PMD driver.
>>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -2949,6 +3615,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.reta_update            = hns3_dev_rss_reta_update,
>>  	.reta_query             = hns3_dev_rss_reta_query,
>>  	.filter_ctrl            = hns3_dev_filter_ctrl,
>> +	.vlan_filter_set        = hns3_vlan_filter_set,
>> +	.vlan_tpid_set          = hns3_vlan_tpid_set,
>> +	.vlan_offload_set       = hns3_vlan_offload_set,
>> +	.vlan_pvid_set          = hns3_vlan_pvid_set,
> Can you please update .ini file in this patch and mark following features as
> supported:
> VLAN filter
> VLAN offload
OK, We will fix it in patch V2.

    Regards
Xavier
>
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 10/22] net/hns3: add support for RSS of hns3 PMD driver
  2019-08-30 15:07   ` Ferruh Yigit
@ 2019-08-31  9:16     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-08-31  9:16 UTC (permalink / raw)
  To: Ferruh Yigit, dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang, Thomas Monjalon

Hi, Ferruh Yigit


On 2019/8/30 23:07, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for RSS of hns3 PMD driver.
>> It included the following functions in file hns3_rss.c:
>> 1) Set/query hash key, rss_hf by .rss_hash_update/.rss_hash_conf_get ops
>>    callback functions.
>> 2) Set/query redirection table by .reta_update/.reta_query. ops callback
>>    functions.
>> 3) Set/query hash algorithm by .filter_ctrl ops callback function when
>>    the 'filter_type' is RTE_ETH_FILTER_HASH.
> The legacy filter API is deprecated; there is a recent patch from Thomas to
> deprecate documenting this as a feature:
> Commit 030febb6642c ("doc: remove deprecated ethdev features")
We will remove the related feature and send patch V2.
Thanks
>> And it included the following functions in file hns3_flow.c:
>> 1) Set hash key, rss_hf, redirection table and algorithm by .create ops
>>    callback function.
>> 2) Disable RSS by .destroy or .flush ops callback function.
>> 3) Check the effectiveness of the RSS's configuration by .validate ops
>>    callback function.
>>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -2744,6 +2748,10 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.mac_addr_set           = hns3_set_default_mac_addr,
>>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>>  	.link_update            = hns3_dev_link_update,
>> +	.rss_hash_update        = hns3_dev_rss_hash_update,
>> +	.rss_hash_conf_get      = hns3_dev_rss_hash_conf_get,
>> +	.reta_update            = hns3_dev_rss_reta_update,
>> +	.reta_query             = hns3_dev_rss_reta_query,
> Can you please update .ini file in this patch and mark following features as
> supported:
> RSS key update
> RSS reta update
>
> For 'RSS hash', a datapath update is also required; I am not sure in which patch
> that support is added.
>
OK, will fix it in patch V2.
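
Presumably the hns3.ini update would add entries in the format already
shown earlier in this thread, e.g. (exact set of entries to be confirmed
in V2):

[Features]
RSS key update       = Y
RSS reta update      = Y
RSS hash             = Y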

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 21/22] net/hns3: add multiple process support for hns3 PMD driver
  2019-08-30 15:14   ` Ferruh Yigit
@ 2019-09-02 13:41     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-02 13:41 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang


Hi, Ferruh Yigit

On 2019/8/30 23:14, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds multiple process support for hns3 PMD driver.
>> Multi-process supports queue selection by configuring RSS or
>> flow director. The primary process supports various management
>> ops, and the secondary process only supports query ops.
>> The primary process notifies the secondary processes to start
>> or stop the transceiver.
>>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -1556,6 +1559,25 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
>>  	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
>>  };
>>  
>> +static const struct eth_dev_ops hns3vf_eth_dev_secondary_ops = {
>> +	.stats_get          = hns3_stats_get,
>> +	.stats_reset        = hns3_stats_reset,
>> +	.xstats_get         = hns3_dev_xstats_get,
>> +	.xstats_get_names   = hns3_dev_xstats_get_names,
>> +	.xstats_reset	    = hns3_dev_xstats_reset,
>> +	.xstats_get_by_id   = hns3_dev_xstats_get_by_id,
>> +	.xstats_get_names_by_id = hns3_dev_xstats_get_names_by_id,
>> +	.dev_infos_get      = hns3vf_dev_infos_get,
>> +	.link_update        = hns3vf_dev_link_update,
>> +	.rss_hash_update    = hns3_dev_rss_hash_update,
>> +	.rss_hash_conf_get  = hns3_dev_rss_hash_conf_get,
>> +	.reta_update        = hns3_dev_rss_reta_update,
>> +	.reta_query         = hns3_dev_rss_reta_query,
>> +	.filter_ctrl        = hns3_dev_filter_ctrl,
>> +	.get_reg            = hns3_get_regs,
>> +	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
>> +};
>> +
> There shouldn't be a need to define separate dev_ops for the secondary processes;
> what is the difference from the one used for the primary process, and why not use that one?
We limit the secondary process to query ops only; it cannot perform
configuration ops. This prevents the primary and secondary processes
from performing configuration ops at the same time, which may cause
conflicts.
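
A sketch of how such a restriction is typically wired up at init time
(the ops table names are from the patch under review; the exact
placement in eth_dev_init is an assumption):

	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
		/* Secondary processes get the query-only ops table. */
		eth_dev->dev_ops = &hns3vf_eth_dev_secondary_ops;
		return 0;
	}
	eth_dev->dev_ops = &hns3vf_eth_dev_ops;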
>
> <...>
>
>> +/*
>> + * Initialize by secondary process.
>> + */
>> +void hns3_mp_init_secondary(void)
>> +{
>> +	rte_mp_action_register(HNS3_MP_NAME, mp_secondary_handle);
> What is this handler for? In most cases the MP communication is done at the EAL
> level and nothing needs to be done at the driver level.
>

When the primary process executes dev_stop, it will release the mbufs
of all Rx queues. The secondary process may still access the mbufs of
an Rx queue, and a segmentation fault would occur. So MP communication
is used to solve this problem: when the primary process executes
dev_stop, it first informs the secondary processes to stop receiving
and only then releases the mbufs, so that the secondary processes will
not hit a segmentation fault.
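
A sketch of the primary-side request under that design (hns3_mp_param,
HNS3_MP_NAME, HNS3_MP_REQ_TIMEOUT_SEC and HNS3_MP_REQ_STOP_RXTX are from
the patch; the helper name here is hypothetical):

#include <stdlib.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_string_fns.h>

/* Ask all secondary processes to stop Rx/Tx before mbufs are freed. */
static void
hns3_mp_req_stop_rxtx_sketch(struct rte_eth_dev *dev)
{
	struct rte_mp_msg mp_req;
	struct rte_mp_reply mp_rep;
	struct timespec ts = {.tv_sec = HNS3_MP_REQ_TIMEOUT_SEC, .tv_nsec = 0};
	struct hns3_mp_param *param = (struct hns3_mp_param *)mp_req.param;

	memset(&mp_req, 0, sizeof(mp_req));
	rte_strlcpy(mp_req.name, HNS3_MP_NAME, sizeof(mp_req.name));
	mp_req.len_param = sizeof(*param);
	param->type = HNS3_MP_REQ_STOP_RXTX;
	param->port_id = dev->data->port_id;
	/* Blocks until every registered secondary replies or times out. */
	rte_mp_request_sync(&mp_req, &mp_rep, &ts);
	free(mp_rep.msgs);
}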

    Regards
Xavier



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30  6:17   ` Stephen Hemminger
  2019-08-31  8:44     ` Wei Hu (Xavier)
@ 2019-09-03 15:27     ` Ye Xiaolong
  2019-09-11 11:36       ` Wei Hu (Xavier)
  1 sibling, 1 reply; 75+ messages in thread
From: Ye Xiaolong @ 2019-09-03 15:27 UTC (permalink / raw)
  To: Wei Hu (Xavier)
  Cc: dev, Stephen Hemminger, linuxarm, xavier_huwei, liudongdong3,
	forest.zhouchang

On 08/29, Stephen Hemminger wrote:
>On Fri, 23 Aug 2019 21:47:11 +0800
>"Wei Hu (Xavier)" <xavier.huwei@huawei.com> wrote:
>
>> +Limitations or Known issues
>> +---------------------------
>> +Build with clang is not supported yet.
>> +Currently, only ARMv8 architecture is supported.
>> \ No newline at end of file
>
>Please fix this. You need to add a new line at the end of the
>file. Vi does this by default, and Emacs has an option
>to do this.

Just setting the below line in .emacs would work, as Stephen told me last time :).
(setq require-final-newline t)

Thanks,
Xiaolong

^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 06/22] net/hns3: add support for MAC address related operations
  2019-08-30 15:03   ` Ferruh Yigit
@ 2019-09-05  5:40     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-05  5:40 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:03, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch adds the following mac address related operations defined in
>> struct eth_dev_ops: mac_addr_add, mac_addr_remove, mac_addr_set
>> and set_mc_addr_list.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> +static int
>> +hns3_set_mc_mac_addr_list(struct rte_eth_dev *dev,
>> +			  struct rte_ether_addr *mc_addr_set,
>> +			  uint32_t nb_mc_addr)
>> +{
>> +	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>> +	struct rte_ether_addr reserved_addr_list[HNS3_MC_MACADDR_NUM];
>> +	struct rte_ether_addr add_addr_list[HNS3_MC_MACADDR_NUM];
>> +	struct rte_ether_addr rm_addr_list[HNS3_MC_MACADDR_NUM];
>> +	struct rte_ether_addr *addr;
>> +	int reserved_addr_num;
>> +	int add_addr_num;
>> +	int rm_addr_num;
>> +	int mc_addr_num;
>> +	int num;
>> +	int ret;
>> +	int i;
>> +
>> +	/* Check if input parameters are valid */
>> +	ret = hns3_set_mc_addr_chk_param(hw, mc_addr_set, nb_mc_addr);
>> +	if (ret)
>> +		return ret;
>> +
>> +	rte_spinlock_lock(&hw->lock);
> Is locking required here?
We support reset after an exception and restore the settings to their
pre-reset values via an alarm in the interrupt thread. So there are two
threads that can set the MC MAC addresses, and we need the lock.
>
> <...>
>
>> @@ -1582,6 +2394,10 @@ hns3_dev_close(struct rte_eth_dev *eth_dev)
>>  
>>  static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.dev_close          = hns3_dev_close,
>> +	.mac_addr_add           = hns3_add_mac_addr,
>> +	.mac_addr_remove        = hns3_remove_mac_addr,
>> +	.mac_addr_set           = hns3_set_default_mac_addr,
>> +	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>>  };
> Can you please update .ini file in this patch and mark following features as
> supported:
> Unicast MAC filter
> Multicast MAC filter
OK, we will fix it in v2.

    Best Regards
Xavier
>



^ permalink raw reply	[flat|nested] 75+ messages in thread

* Re: [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data structure and macro
  2019-08-30  8:25   ` Gavin Hu (Arm Technology China)
@ 2019-09-05  6:01     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-05  6:01 UTC (permalink / raw)
  To: Gavin Hu (Arm Technology China), dev
  Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Gavin Hu


On 2019/8/30 16:25, Gavin Hu (Arm Technology China) wrote:
> Hi Xavier,
>
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Wei Hu (Xavier)
>> Sent: Friday, August 23, 2019 9:47 PM
>> To: dev@dpdk.org
>> Cc: linuxarm@huawei.com; xavier_huwei@163.com;
>> liudongdong3@huawei.com; forest.zhouchang@huawei.com
>> Subject: [dpdk-dev] [PATCH 02/22] net/hns3: add some definitions for data
>> structure and macro
>>
>> This patch adds some data structure definitions, macro definitions and
>> inline functions for hns3 PMD drivers.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>>  drivers/net/hns3/hns3_ethdev.h | 609
>> +++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 609 insertions(+)
>>  create mode 100644 drivers/net/hns3/hns3_ethdev.h
>>
>> diff --git a/drivers/net/hns3/hns3_ethdev.h
>> b/drivers/net/hns3/hns3_ethdev.h
>> new file mode 100644
>> index 0000000..bfb54f2
>> --- /dev/null
>> +++ b/drivers/net/hns3/hns3_ethdev.h
>> @@ -0,0 +1,609 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2018-2019 Hisilicon Limited.
>> + */
>> +
>> +#ifndef _HNS3_ETHDEV_H_
>> +#define _HNS3_ETHDEV_H_
>> +
>> +#include <sys/time.h>
>> +#include <rte_alarm.h>
>> +
>> +/* Vendor ID */
>> +#define PCI_VENDOR_ID_HUAWEI                 0x19e5
>> +
>> +/* Device IDs */
>> +#define HNS3_DEV_ID_GE                               0xA220
>> +#define HNS3_DEV_ID_25GE                     0xA221
>> +#define HNS3_DEV_ID_25GE_RDMA                        0xA222
>> +#define HNS3_DEV_ID_50GE_RDMA                        0xA224
>> +#define HNS3_DEV_ID_100G_RDMA_MACSEC         0xA226
>> +#define HNS3_DEV_ID_100G_VF                  0xA22E
>> +#define HNS3_DEV_ID_100G_RDMA_PFC_VF         0xA22F
>> +
>> +#define HNS3_UC_MACADDR_NUM          96
>> +#define HNS3_MC_MACADDR_NUM          128
>> +
>> +#define HNS3_MAX_BD_SIZE             65535
>> +#define HNS3_MAX_TX_BD_PER_PKT               8
>> +#define HNS3_MAX_FRAME_LEN           9728
>> +#define HNS3_MIN_FRAME_LEN           64
>> +#define HNS3_VLAN_TAG_SIZE           4
>> +#define HNS3_DEFAULT_RX_BUF_LEN              2048
>> +
>> +#define HNS3_ETH_OVERHEAD \
>> +     (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
>> HNS3_VLAN_TAG_SIZE * 2)
>> +#define HNS3_PKTLEN_TO_MTU(pktlen)   ((pktlen) -
>> HNS3_ETH_OVERHEAD)
>> +#define HNS3_MAX_MTU (HNS3_MAX_FRAME_LEN -
>> HNS3_ETH_OVERHEAD)
>> +#define HNS3_DEFAULT_MTU             1500UL
>> +#define HNS3_DEFAULT_FRAME_LEN               (HNS3_DEFAULT_MTU +
>> HNS3_ETH_OVERHEAD)
>> +
>> +#define HNS3_4_TCS                   4
>> +#define HNS3_8_TCS                   8
>> +#define HNS3_MAX_TC_NUM                      8
>> +
>> +#define HNS3_MAX_PF_NUM                      8
>> +#define HNS3_UMV_TBL_SIZE            3072
>> +#define HNS3_DEFAULT_UMV_SPACE_PER_PF \
>> +     (HNS3_UMV_TBL_SIZE / HNS3_MAX_PF_NUM)
>> +
>> +#define HNS3_PF_CFG_BLOCK_SIZE               32
>> +#define HNS3_PF_CFG_DESC_NUM \
>> +     (HNS3_PF_CFG_BLOCK_SIZE / HNS3_CFG_RD_LEN_BYTES)
>> +
>> +#define HNS3_DEFAULT_ENABLE_PFC_NUM  0
>> +
>> +#define HNS3_INTR_UNREG_FAIL_RETRY_CNT       5
>> +#define HNS3_INTR_UNREG_FAIL_DELAY_MS        500
>> +
>> +#define HNS3_QUIT_RESET_CNT          10
>> +#define HNS3_QUIT_RESET_DELAY_MS     100
>> +
>> +#define HNS3_POLL_RESPONE_MS         1
>> +
>> +#define HNS3_MAX_USER_PRIO           8
>> +#define HNS3_PG_NUM                  4
>> +enum hns3_fc_mode {
>> +     HNS3_FC_NONE,
>> +     HNS3_FC_RX_PAUSE,
>> +     HNS3_FC_TX_PAUSE,
>> +     HNS3_FC_FULL,
>> +     HNS3_FC_DEFAULT
>> +};
>> +
>> +#define HNS3_SCH_MODE_SP     0
>> +#define HNS3_SCH_MODE_DWRR   1
>> +struct hns3_pg_info {
>> +     uint8_t pg_id;
>> +     uint8_t pg_sch_mode;  /* 0: sp; 1: dwrr */
>> +     uint8_t tc_bit_map;
>> +     uint32_t bw_limit;
>> +     uint8_t tc_dwrr[HNS3_MAX_TC_NUM];
>> +};
>> +
>> +struct hns3_tc_info {
>> +     uint8_t tc_id;
>> +     uint8_t tc_sch_mode;  /* 0: sp; 1: dwrr */
>> +     uint8_t pgid;
>> +     uint32_t bw_limit;
>> +     uint8_t up_to_tc_map; /* user priority maping on the TC */
>> +};
>> +
>> +struct hns3_dcb_info {
>> +     uint8_t num_tc;
>> +     uint8_t num_pg;     /* It must be 1 if vNET-Base schd */
>> +     uint8_t pg_dwrr[HNS3_PG_NUM];
>> +     uint8_t prio_tc[HNS3_MAX_USER_PRIO];
>> +     struct hns3_pg_info pg_info[HNS3_PG_NUM];
>> +     struct hns3_tc_info tc_info[HNS3_MAX_TC_NUM];
>> +     uint8_t hw_pfc_map; /* Allow for packet drop or not on this TC */
>> +     uint8_t pfc_en; /* Pfc enabled or not for user priority */
>> +};
>> +
>> +enum hns3_fc_status {
>> +     HNS3_FC_STATUS_NONE,
>> +     HNS3_FC_STATUS_MAC_PAUSE,
>> +     HNS3_FC_STATUS_PFC,
>> +};
>> +
>> +struct hns3_tc_queue_info {
>> +     uint8_t tqp_offset;     /* TQP offset from base TQP */
>> +     uint8_t tqp_count;      /* Total TQPs */
>> +     uint8_t tc;             /* TC index */
>> +     bool enable;            /* If this TC is enable or not */
>> +};
>> +
>> +struct hns3_cfg {
>> +     uint8_t vmdq_vport_num;
>> +     uint8_t tc_num;
>> +     uint16_t tqp_desc_num;
>> +     uint16_t rx_buf_len;
>> +     uint16_t rss_size_max;
>> +     uint8_t phy_addr;
>> +     uint8_t media_type;
>> +     uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
>> +     uint8_t default_speed;
>> +     uint32_t numa_node_map;
>> +     uint8_t speed_ability;
>> +     uint16_t umv_space;
>> +};
>> +
>> +/* mac media type */
>> +enum hns3_media_type {
>> +     HNS3_MEDIA_TYPE_UNKNOWN,
>> +     HNS3_MEDIA_TYPE_FIBER,
>> +     HNS3_MEDIA_TYPE_COPPER,
>> +     HNS3_MEDIA_TYPE_BACKPLANE,
>> +     HNS3_MEDIA_TYPE_NONE,
>> +};
>> +
>> +struct hns3_mac {
>> +     uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
>> +     bool default_addr_setted; /* whether default addr(mac_addr) is
>> setted */
>> +     uint8_t media_type;
>> +     uint8_t phy_addr;
>> +     uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
>> +     uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
>> +     uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
>> +     uint32_t link_speed;      /* ETH_SPEED_NUM_ */
>> +};
>> +
>> +
>> +/* Primary process maintains driver state in main thread.
>> + *
>> + * +---------------+
>> + * | UNINITIALIZED |<-----------+
>> + * +---------------+         |
>> + *   |.eth_dev_init          |.eth_dev_uninit
>> + *   V                       |
>> + * +---------------+------------+
>> + * |  INITIALIZED  |
>> + * +---------------+<-----------<---------------+
>> + *   |.dev_configure         |               |
>> + *   V                       |failed         |
>> + * +---------------+------------+            |
>> + * |  CONFIGURING  |                         |
>> + * +---------------+----+                    |
>> + *   |success        |                       |
>> + *   |               |               +---------------+
>> + *   |               |               |    CLOSING    |
>> + *   |               |               +---------------+
>> + *   |               |                       ^
>> + *   V               |.dev_configure         |
>> + * +---------------+----+                    |.dev_close
>> + * |  CONFIGURED   |----------------------------+
>> + * +---------------+<-----------+
>> + *   |.dev_start             |
>> + *   V                       |
>> + * +---------------+         |
>> + * |   STARTING    |------------^
>> + * +---------------+ failed  |
>> + *   |success                |
>> + *   |               +---------------+
>> + *   |               |   STOPPING    |
>> + *   |               +---------------+
>> + *   |                       ^
>> + *   V                       |.dev_stop
>> + * +---------------+------------+
>> + * |    STARTED    |
>> + * +---------------+
>> + */
>> +enum hns3_adapter_state {
>> +     HNS3_NIC_UNINITIALIZED = 0,
>> +     HNS3_NIC_INITIALIZED,
>> +     HNS3_NIC_CONFIGURING,
>> +     HNS3_NIC_CONFIGURED,
>> +     HNS3_NIC_STARTING,
>> +     HNS3_NIC_STARTED,
>> +     HNS3_NIC_STOPPING,
>> +     HNS3_NIC_CLOSING,
>> +     HNS3_NIC_CLOSED,
>> +     HNS3_NIC_REMOVED,
>> +     HNS3_NIC_NSTATES
>> +};
>> +
>> +/* Reset various stages, execute in order */
>> +enum hns3_reset_stage {
>> +     /* Stop query services, stop transceiver, disable MAC */
>> +     RESET_STAGE_DOWN,
>> +     /* Clear reset completion flags, disable send command */
>> +     RESET_STAGE_PREWAIT,
>> +     /* Inform IMP to start resetting */
>> +     RESET_STAGE_REQ_HW_RESET,
>> +     /* Waiting for hardware reset to complete */
>> +     RESET_STAGE_WAIT,
>> +     /* Reinitialize hardware */
>> +     RESET_STAGE_DEV_INIT,
>> +     /* Restore user settings and enable MAC */
>> +     RESET_STAGE_RESTORE,
>> +     /* Restart query services, start transceiver */
>> +     RESET_STAGE_DONE,
>> +     /* Not in reset state */
>> +     RESET_STAGE_NONE,
>> +};
>> +
>> +enum hns3_reset_level {
>> +     HNS3_NONE_RESET,
>> +     HNS3_VF_FUNC_RESET, /* A VF function reset */
>> +     /*
>> +      * All VFs under a PF perform function reset.
>> +      * Kernel PF driver use mailbox to inform DPDK VF to do reset, the
>> value
>> +      * of the reset level and the one defined in kernel driver should be
>> +      * same.
>> +      */
>> +     HNS3_VF_PF_FUNC_RESET = 2,
>> +     /*
>> +      * All VFs under a PF perform FLR reset.
>> +      * Kernel PF driver use mailbox to inform DPDK VF to do reset, the
>> value
>> +      * of the reset level and the one defined in kernel driver should be
>> +      * same.
>> +      */
>> +     HNS3_VF_FULL_RESET = 3,
>> +     HNS3_FLR_RESET,     /* A VF perform FLR reset */
>> +     /* All VFs under the rootport perform a global or IMP reset */
>> +     HNS3_VF_RESET,
>> +     HNS3_FUNC_RESET,    /* A PF function reset */
>> +     /* All PFs under the rootport perform a global reset */
>> +     HNS3_GLOBAL_RESET,
>> +     HNS3_IMP_RESET,     /* All PFs under the rootport perform a IMP
>> reset */
>> +     HNS3_MAX_RESET
>> +};
>> +
>> +enum hns3_wait_result {
>> +     HNS3_WAIT_UNKNOWN,
>> +     HNS3_WAIT_REQUEST,
>> +     HNS3_WAIT_SUCCESS,
>> +     HNS3_WAIT_TIMEOUT
>> +};
>> +
>> +#define HNS3_RESET_SYNC_US 100000
>> +
>> +struct hns3_reset_stats {
>> +     uint64_t request_cnt; /* Total request reset times */
>> +     uint64_t global_cnt;  /* Total GLOBAL reset times */
>> +     uint64_t imp_cnt;     /* Total IMP reset times */
>> +     uint64_t exec_cnt;    /* Total reset executive times */
>> +     uint64_t success_cnt; /* Total reset successful times */
>> +     uint64_t fail_cnt;    /* Total reset failed times */
>> +     uint64_t merge_cnt;   /* Total merged in high reset times */
>> +};
>> +
>> +typedef bool (*check_completion_func)(struct hns3_hw *hw);
>> +
>> +struct hns3_wait_data {
>> +     void *hns;
>> +     uint64_t end_ms;
>> +     uint64_t interval;
>> +     int16_t count;
>> +     enum hns3_wait_result result;
>> +     check_completion_func check_completion;
>> +};
>> +
>> +struct hns3_reset_ops {
>> +     void (*reset_service)(void *arg);
>> +     int (*stop_service)(struct hns3_adapter *hns);
>> +     int (*prepare_reset)(struct hns3_adapter *hns);
>> +     int (*wait_hardware_ready)(struct hns3_adapter *hns);
>> +     int (*reinit_dev)(struct hns3_adapter *hns);
>> +     int (*restore_conf)(struct hns3_adapter *hns);
>> +     int (*start_service)(struct hns3_adapter *hns);
>> +};
>> +
>> +enum hns3_schedule {
>> +     SCHEDULE_NONE,
>> +     SCHEDULE_PENDING,
>> +     SCHEDULE_REQUESTED,
>> +     SCHEDULE_DEFERRED,
>> +};
>> +
>> +struct hns3_reset_data {
>> +     enum hns3_reset_stage stage;
>> +     rte_atomic16_t schedule;
>> +     /* Reset flag, covering the entire reset process */
>> +     rte_atomic16_t resetting;
>> +     /* Used to disable sending cmds during reset */
>> +     rte_atomic16_t disable_cmd;
>> +     /* The reset level being processed */
>> +     enum hns3_reset_level level;
>> +     /* Reset level set, each bit represents a reset level */
>> +     uint64_t pending;
>> +     /* Request reset level set, from interrupt or mailbox */
>> +     uint64_t request;
>> +     int attempts; /* Reset failure retry */
>> +     int retries;  /* Timeout failure retry in reset_post */
>> +     /*
>> +      * At the time of global or IMP reset, the command cannot be sent
>> to
>> +      * stop the tx/rx queues. Tx/Rx queues may be access mbuf during
>> the
>> +      * reset process, so the mbuf is required to be released after the
>> reset
>> +      * is completed.The mbuf_deferred_free is used to mark whether
>> mbuf
>> +      * needs to be released.
>> +      */
>> +     bool mbuf_deferred_free;
>> +     struct timeval start_time;
>> +     struct hns3_reset_stats stats;
>> +     const struct hns3_reset_ops *ops;
>> +     struct hns3_wait_data *wait_data;
>> +};
>> +
>> +struct hns3_hw {
>> +     struct rte_eth_dev_data *data;
>> +     void *io_base;
>> +     struct hns3_mac mac;
>> +     unsigned int secondary_cnt; /* Number of secondary processes
>> init'd. */
>> +     uint32_t fw_version;
>> +
>> +     uint16_t num_msi;
>> +     uint16_t total_tqps_num;    /* total task queue pairs of this PF */
>> +     uint16_t tqps_num;          /* num task queue pairs of this function */
>> +     uint16_t rss_size_max;      /* HW defined max RSS task queue */
>> +     uint16_t rx_buf_len;
>> +     uint16_t num_tx_desc;       /* desc num of per tx queue */
>> +     uint16_t num_rx_desc;       /* desc num of per rx queue */
>> +
>> +     struct rte_ether_addr mc_addrs[HNS3_MC_MACADDR_NUM];
>> +     int mc_addrs_num; /* Multicast mac addresses number */
>> +
>> +     uint8_t num_tc;             /* Total number of enabled TCs */
>> +     uint8_t hw_tc_map;
>> +     enum hns3_fc_mode current_mode;
>> +     enum hns3_fc_mode requested_mode;
>> +     struct hns3_dcb_info dcb_info;
>> +     enum hns3_fc_status current_fc_status; /* current flow control
>> status */
>> +     struct hns3_tc_queue_info tc_queue[HNS3_MAX_TC_NUM];
>> +     uint16_t alloc_tqps;
>> +     uint16_t alloc_rss_size;    /* Queue number per TC */
>> +
>> +     uint32_t flag;
>> +     /*
>> +      * PMD setup and configuration is not thread safe. Since it is not
>> +      * performance sensitive, it is better to guarantee thread-safety
>> +      * and add device level lock. Adapter control operations which
>> +      * change its state should acquire the lock.
>> +      */
>> +     rte_spinlock_t lock;
>> +     enum hns3_adapter_state adapter_state;
>> +     struct hns3_reset_data reset;
>> +};
>> +
>> +#define HNS3_FLAG_TC_BASE_SCH_MODE           1
>> +#define HNS3_FLAG_VNET_BASE_SCH_MODE         2
>> +
>> +struct hns3_err_msix_intr_stats {
>> +     uint64_t mac_afifo_tnl_intr_cnt;
>> +     uint64_t ppu_mpf_abnormal_intr_st2_cnt;
>> +     uint64_t ssu_port_based_pf_intr_cnt;
>> +     uint64_t ppp_pf_abnormal_intr_cnt;
>> +     uint64_t ppu_pf_abnormal_intr_cnt;
>> +};
>> +
>> +/* vlan entry information. */
>> +struct hns3_user_vlan_table {
>> +     LIST_ENTRY(hns3_user_vlan_table) next;
>> +     bool hd_tbl_status;
>> +     uint16_t vlan_id;
>> +};
>> +
>> +struct hns3_port_base_vlan_config {
>> +     uint16_t state;
>> +     uint16_t pvid;
>> +};
>> +
>> +/* Vlan tag configuration for RX direction */
>> +struct hns3_rx_vtag_cfg {
>> +     uint8_t rx_vlan_offload_en; /* Whether enable rx vlan offload */
>> +     uint8_t strip_tag1_en;      /* Whether strip inner vlan tag */
>> +     uint8_t strip_tag2_en;      /* Whether strip outer vlan tag */
>> +     uint8_t vlan1_vlan_prionly; /* Inner VLAN Tag up to descriptor
>> Enable */
>> +     uint8_t vlan2_vlan_prionly; /* Outer VLAN Tag up to descriptor
>> Enable */
>> +};
>> +
>> +/* Vlan tag configuration for TX direction */
>> +struct hns3_tx_vtag_cfg {
>> +     bool accept_tag1;           /* Whether accept tag1 packet from host
>> */
>> +     bool accept_untag1;         /* Whether accept untag1 packet from
>> host */
>> +     bool accept_tag2;
>> +     bool accept_untag2;
>> +     bool insert_tag1_en;        /* Whether insert inner vlan tag */
>> +     bool insert_tag2_en;        /* Whether insert outer vlan tag */
>> +     uint16_t default_tag1;      /* The default inner vlan tag to insert */
>> +     uint16_t default_tag2;      /* The default outer vlan tag to insert */
>> +};
>> +
>> +struct hns3_vtag_cfg {
>> +     struct hns3_rx_vtag_cfg rx_vcfg;
>> +     struct hns3_tx_vtag_cfg tx_vcfg;
>> +};
>> +
>> +/* Request types for IPC. */
>> +enum hns3_mp_req_type {
>> +     HNS3_MP_REQ_START_RXTX = 1,
>> +     HNS3_MP_REQ_STOP_RXTX,
>> +     HNS3_MP_REQ_MAX
>> +};
>> +
>> +/* Pameters for IPC. */
>> +struct hns3_mp_param {
>> +     enum hns3_mp_req_type type;
>> +     int port_id;
>> +     int result;
>> +};
>> +
>> +/* Request timeout for IPC. */
>> +#define HNS3_MP_REQ_TIMEOUT_SEC 5
>> +
>> +/* Key string for IPC. */
>> +#define HNS3_MP_NAME "net_hns3_mp"
>> +
>> +struct hns3_pf {
>> +     struct hns3_adapter *adapter;
>> +     bool is_main_pf;
>> +
>> +     uint32_t pkt_buf_size; /* Total pf buf size for tx/rx */
>> +     uint32_t tx_buf_size; /* Tx buffer size for each TC */
>> +     uint32_t dv_buf_size; /* Dv buffer size for each TC */
>> +
>> +     uint16_t mps; /* Max packet size */
>> +
>> +     uint8_t tx_sch_mode;
>> +     uint8_t tc_max; /* max number of tc driver supported */
>> +     uint8_t local_max_tc; /* max number of local tc */
>> +     uint8_t pfc_max;
>> +     uint8_t prio_tc[HNS3_MAX_USER_PRIO]; /* TC indexed by prio */
>> +     uint16_t pause_time;
>> +     bool support_fc_autoneg;       /* support FC autonegotiate */
>> +
>> +     uint16_t wanted_umv_size;
>> +     uint16_t max_umv_size;
>> +     uint16_t used_umv_size;
>> +
>> +     /* Statistics information for abnormal interrupt */
>> +     struct hns3_err_msix_intr_stats abn_int_stats;
>> +
>> +     bool support_sfp_query;
>> +
>> +     struct hns3_vtag_cfg vtag_config;
>> +     struct hns3_port_base_vlan_config port_base_vlan_cfg;
>> +     LIST_HEAD(vlan_tbl, hns3_user_vlan_table) vlan_list;
>> +};
>> +
>> +struct hns3_vf {
>> +     struct hns3_adapter *adapter;
>> +};
>> +
>> +struct hns3_adapter {
>> +     struct hns3_hw hw;
>> +
>> +     /* Specific for PF or VF */
>> +     bool is_vf; /* false - PF, true - VF */
>> +     union {
>> +             struct hns3_pf pf;
>> +             struct hns3_vf vf;
>> +     };
>> +};
>> +
>> +#define HNS3_DEV_SUPPORT_DCB_B                       0x0
>> +
>> +#define hns3_dev_dcb_supported(hw) \
>> +     hns3_get_bit((hw)->flag, HNS3_DEV_SUPPORT_DCB_B)
>> +
>> +#define HNS3_DEV_PRIVATE_TO_HW(adapter) \
>> +     (&((struct hns3_adapter *)adapter)->hw)
>> +#define HNS3_DEV_PRIVATE_TO_ADAPTER(adapter) \
>> +     ((struct hns3_adapter *)adapter)
>> +#define HNS3_DEV_PRIVATE_TO_PF(adapter) \
>> +     (&((struct hns3_adapter *)adapter)->pf)
>> +#define HNS3VF_DEV_PRIVATE_TO_VF(adapter) \
>> +     (&((struct hns3_adapter *)adapter)->vf)
>> +#define HNS3_DEV_HW_TO_ADAPTER(hw) \
>> +     container_of(hw, struct hns3_adapter, hw)
>> +
>> +#define hns3_set_field(origin, mask, shift, val) \
>> +     do { \
>> +             (origin) &= (~(mask)); \
>> +             (origin) |= ((val) << (shift)) & (mask); \
>> +     } while (0)
>> +#define hns3_get_field(origin, mask, shift) \
>> +     (((origin) & (mask)) >> (shift))
>> +#define hns3_set_bit(origin, shift, val) \
>> +     hns3_set_field((origin), (0x1UL << (shift)), (shift), (val))
>> +#define hns3_get_bit(origin, shift) \
>> +     hns3_get_field((origin), (0x1UL << (shift)), (shift))
>> +
>> +/*
>> + * upper_32_bits - return bits 32-63 of a number
>> + * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress
>> + * the "right shift count >= width of type" warning when that quantity is
>> + * 32-bits.
>> + */
>> +#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
>> +
>> +/* lower_32_bits - return bits 0-31 of a number */
>> +#define lower_32_bits(n) ((uint32_t)(n))
>> +
>> +#define BIT(nr) (1UL << (nr))
>> +
>> +#define BITS_PER_LONG        (__SIZEOF_LONG__ * 8)
>> +#define GENMASK(h, l) \
>> +     (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
>> +
>> +#define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))
>> +#define rounddown(x, y) ((x) - ((x) % (y)))
>> +
>> +#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
>> +
>> +#define max_t(type, x, y) ({                    \
>> +     type __max1 = (x);                      \
>> +     type __max2 = (y);                      \
>> +     __max1 > __max2 ? __max1 : __max2; })
>> +
>> +static inline void hns3_write_reg(void *base, uint32_t reg, uint32_t value)
>> +{
>> +     rte_write32(value, (volatile void *)((char *)base + reg));
>> +}
>> +
>> +static inline uint32_t hns3_read_reg(void *base, uint32_t reg)
>> +{
>> +     return rte_read32((volatile void *)((char *)base + reg));
>> +}
>> +
>> +#define hns3_write_dev(a, reg, value) \
>> +     hns3_write_reg((a)->io_base, (reg), (value))
>> +
>> +#define hns3_read_dev(a, reg) \
>> +     hns3_read_reg((a)->io_base, (reg))
>> +
>> +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
>> +
>> +#define NEXT_ITEM_OF_ACTION(act, actions, index)                        \
>> +     do {                                                            \
>> +             act = (actions) + (index);                              \
>> +             while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {        \
>> +                     (index)++;                                      \
>> +                     act = actions + index;                          \
>> +             }                                                       \
>> +     } while (0)
>> +
>> +#define MSEC_PER_SEC              1000L
>> +#define USEC_PER_MSEC             1000L
>> +
>> +static inline uint64_t
>> +get_timeofday_ms(void)
>> +{
>> +     struct timeval tv;
>> +
>> +     (void)gettimeofday(&tv, NULL);
>> +
>> +     return (uint64_t)tv.tv_sec * MSEC_PER_SEC + tv.tv_usec /
>> USEC_PER_MSEC;
>> +}
>> +
>> +static inline uint64_t
>> +hns3_atomic_test_bit(unsigned int nr, volatile uint64_t *addr)
>> +{
>> +     uint64_t res;
>> +
>> +     rte_mb();
> Is *addr CIO memory or an MMIO register?
> Looking at patch 20/22, it should be an MMIO register. Whether a barrier is required depends on the preceding accesses, so I advise moving the barrier out.
>> +     res = ((*addr) & (1UL << nr)) != 0;
>> +     rte_mb();
> If *addr is CIO memory, rte_cio_rmb is enough; rte_mb is overkill.
> If *addr is an MMIO register, the rte_rmb one-way barrier is also adequate.
> Whether this barrier is required depends on the following accesses, so I also advise moving it out.
> As a common API, this function should not include barriers inside.
addr is the address of a variable shared between threads; we will remove
the memory barriers and use an atomic read with __atomic_load_n as below:

static inline uint64_t
hns3_atomic_test_bit(unsigned int nr, volatile uint64_t *addr)
{
    uint64_t res;

    res = (__atomic_load_n(addr, __ATOMIC_RELAXED) & (1UL << nr)) != 0;   

    return res;
}

Thanks for your suggestion.
>> +     return res;
>> +}
>> +
>> +static inline void
>> +hns3_atomic_set_bit(unsigned int nr, volatile uint64_t *addr)
>> +{
>> +     __sync_fetch_and_or(addr, (1UL << nr));
> GCC/clang provide '__atomic' builtins to replace the legacy '__sync' builtins; new code should always use the '__atomic' builtins rather than the '__sync' ones.
> This function can be replaced with __atomic_fetch_or(addr, (1UL << nr), __ATOMIC_RELAXED);
> https://gcc.gnu.org/onlinedocs/gcc-6.1.0/gcc/_005f_005fatomic-Builtins.html
OK, we will update and replace it with __atomic_xxx in patch V2.
Thanks for your suggestion.
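For reference, a rough sketch of how the remaining '__sync' helpers quoted
below might look after the same conversion (the use of __atomic_fetch_and
and the relaxed memory order are our assumptions, mirroring the suggestion
above, not final V2 code):

static inline void
hns3_atomic_set_bit(unsigned int nr, volatile uint64_t *addr)
{
    /* atomic read-modify-write OR, replacing __sync_fetch_and_or */
    __atomic_fetch_or(addr, (1UL << nr), __ATOMIC_RELAXED);
}

static inline void
hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr)
{
    /* atomic read-modify-write AND, replacing __sync_fetch_and_and */
    __atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELAXED);
}

static inline int64_t
hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
{
    uint64_t mask = (1UL << nr);

    /* return the previous state of the bit while clearing it */
    return __atomic_fetch_and(addr, ~mask, __ATOMIC_RELAXED) & mask;
}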

    Regards
Xavier
>> +}
>> +
>> +static inline void
>> +hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr)
>> +{
>> +     __sync_fetch_and_and(addr, ~(1UL << nr));
>> +}
>> +
>> +static inline int64_t
>> +hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr)
>> +{
>> +     uint64_t mask = (1UL << nr);
>> +
>> +     return __sync_fetch_and_and(addr, ~mask) & mask;
>> +}
>> +
>> +#endif /* _HNS3_ETHDEV_H_ */
>> --
>> 2.7.4




* Re: [dpdk-dev] [PATCH 03/22] net/hns3: register hns3 PMD driver
  2019-08-30 15:01   ` Ferruh Yigit
@ 2019-09-06  6:20     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06  6:20 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:01, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch registers hns3 PMD driver and adds the definition for log
>> interfaces.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
>> new file mode 100644
>> index 0000000..0587a9c
>> --- /dev/null
>> +++ b/drivers/net/hns3/hns3_ethdev.c
>> @@ -0,0 +1,141 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2018-2019 Hisilicon Limited.
>> + */
>> +
>> +#include <errno.h>
>> +#include <stdarg.h>
>> +#include <stdbool.h>
>> +#include <stdio.h>
>> +#include <stdint.h>
>> +#include <string.h>
>> +#include <sys/queue.h>
>> +#include <inttypes.h>
>> +#include <unistd.h>
>> +#include <arpa/inet.h>
>> +#include <rte_alarm.h>
>> +#include <rte_atomic.h>
>> +#include <rte_bus_pci.h>
>> +#include <rte_byteorder.h>
>> +#include <rte_common.h>
>> +#include <rte_cycles.h>
>> +#include <rte_debug.h>
>> +#include <rte_dev.h>
>> +#include <rte_eal.h>
>> +#include <rte_ether.h>
>> +#include <rte_ethdev_driver.h>
>> +#include <rte_ethdev_pci.h>
>> +#include <rte_interrupts.h>
>> +#include <rte_io.h>
>> +#include <rte_log.h>
>> +#include <rte_pci.h>
> Are all these headers really used at this stage? Can you please clean them up and
> add them in later patches when they are required?
>
> <...>
>
>> +static int
>> +hns3_dev_init(struct rte_eth_dev *eth_dev)
>> +{
>> +	struct rte_device *dev = eth_dev->device;
>> +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev);
>> +	struct hns3_adapter *hns = eth_dev->data->dev_private;
>> +	struct hns3_hw *hw = &hns->hw;
>> +	uint16_t device_id = pci_dev->id.device_id;
>> +	int ret;
>> +
>> +	PMD_INIT_FUNC_TRACE();
>> +
>> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>> +		return 0;
>> +
>> +	eth_dev->dev_ops = &hns3_eth_dev_ops;
>> +	rte_eth_copy_pci_info(eth_dev, pci_dev);
> I think there is no need to call 'rte_eth_copy_pci_info()'; it is called by
> 'rte_eth_dev_pci_generic_probe()' before 'hns3_dev_init()' is called.
I will fix it in patch V2.
>
>> +
>> +	hns->is_vf = false;
> There is a separate VF driver; is this field still needed?
The hns3 PMD driver includes both PF and VF drivers.
In struct hns3_adapter, the member named is_vf is used to
distinguish PF from VF devices.
>> +	hw->data = eth_dev->data;
>> +	hw->adapter_state = HNS3_NIC_INITIALIZED;
>> +
>> +	return 0;
> Init should set the 'RTE_ETH_DEV_CLOSE_REMOVE' flag, and '.dev_close' should free
> the driver-allocated resources, which does not exist up until this patch:
>
>  +eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
>
I will fix it in patch V2.
Thanks for your suggestion.
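For reference, a minimal sketch of the direction we have in mind for V2
(the hns3_dev_close body is a placeholder; nothing is allocated yet at this
point in the series, and later patches will release their resources here):

/* in hns3_dev_init(), after the dev_ops assignment */
eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;

/* matching hook, registered as .dev_close in hns3_eth_dev_ops */
static void
hns3_dev_close(struct rte_eth_dev *eth_dev __rte_unused)
{
    /* free driver-allocated resources added by later patches */
}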

    Regards
Xavier




* Re: [dpdk-dev] [PATCH 04/22] net/hns3: add support for cmd of hns3 PMD driver
  2019-08-30 15:02   ` Ferruh Yigit
@ 2019-09-06  6:49     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06  6:49 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:02, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for cmd of the hns3 PMD driver; the driver can interact
>> with the firmware through commands to complete hardware configuration.
>>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
>> index bfb54f2..84fcf34 100644
>> --- a/drivers/net/hns3/hns3_ethdev.h
>> +++ b/drivers/net/hns3/hns3_ethdev.h
>> @@ -39,7 +39,6 @@
>>  
>>  #define HNS3_4_TCS			4
>>  #define HNS3_8_TCS			8
>> -#define HNS3_MAX_TC_NUM			8
> This definition is used by 'hns3_ethdev.h' but moved to 'hns3_cmd.h', and
> 'hns3_ethdev.h' doesn't include 'hns3_cmd.h', which will force whatever .c file
> includes 'hns3_ethdev.h' to include 'hns3_cmd.h' before it, and these kinds of .h
> order dependencies are easy to break.
> Would it work if 'hns3_ethdev.h' includes 'hns3_cmd.h'?
>
We will update it and send patch V2.
Thanks for your suggestion.
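For clarity, the V2 change we have in mind is simply the following
(assuming 'hns3_cmd.h' has no reverse dependency on 'hns3_ethdev.h'):

/* at the top of hns3_ethdev.h */
#include "hns3_cmd.h"  /* provides HNS3_MAX_TC_NUM to every user of this header */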

    Regards
Xavier
>




* Re: [dpdk-dev] [PATCH 08/22] net/hns3: add support for link update operation
  2019-08-30 15:04   ` Ferruh Yigit
@ 2019-09-06  6:56     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06  6:56 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang



On 2019/8/30 23:04, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch adds link update operation to hns3 PMD driver.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -2528,6 +2725,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.mac_addr_remove        = hns3_remove_mac_addr,
>>  	.mac_addr_set           = hns3_set_default_mac_addr,
>>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>> +	.link_update            = hns3_dev_link_update,
> Can you please update the .ini file in this patch and mark the following feature as
> supported:
> Link status
>
Hi, Ferruh Yigit

    OK, we will fix it in patch V2.
    Thanks for your suggestion.
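For reference, the planned V2 addition to doc/guides/nics/features/hns3.ini
would look roughly like this (column alignment follows the DPDK features
template; only the new entry is shown):

[Features]
Link status          = Y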

    Regards
Xavier.




* Re: [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver
  2019-08-30 15:06   ` Ferruh Yigit
@ 2019-09-06  8:23     ` Wei Hu (Xavier)
  2019-09-06 11:08     ` Wei Hu (Xavier)
  1 sibling, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06  8:23 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:06, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for flow directory of hns3 PMD driver.
>> The flow directory feature is only supported in the hns3 PF driver.
>> It supports the creation, deletion, and flushing of network L2/L3/L4 and
>> tunnel packet rules, and querying hit statistics.
> This patch also adds rte_flow support; can you please add this to the commit log?
OK,  I will update it and send patch V2.
Thanks.
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -2726,6 +2744,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.mac_addr_set           = hns3_set_default_mac_addr,
>>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>>  	.link_update            = hns3_dev_link_update,
>> +	.filter_ctrl            = hns3_dev_filter_ctrl,
> 'hns3_dev_filter_ctrl()' does not exist up until this patch.
>
> This is the problem of not enabling the driver yet; it is very hard to see these
> kinds of issues. When the Makefile/meson patch is moved to the beginning of the series
> and the driver starts to build, these issues will become visible.
I will fix it in patch V2.
>>  };
>>  
>>  static int
>> @@ -2739,6 +2758,16 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
>>  	int ret;
>>  
>>  	PMD_INIT_FUNC_TRACE();
>> +	eth_dev->process_private = (struct hns3_process_private *)
>> +	    rte_zmalloc_socket("hns3_filter_list",
>> +			       sizeof(struct hns3_process_private),
>> +			       RTE_CACHE_LINE_SIZE, eth_dev->device->numa_node);
>> +	if (eth_dev->process_private == NULL) {
>> +		PMD_INIT_LOG(ERR, "Failed to alloc memory for process private");
>> +		return -ENOMEM;
>> +	}
>> +	/* initialize flow filter lists */
>> +	hns3_filterlist_init(eth_dev);
> Can you please free 'process_private' in the close dev_ops?
    We will update it and send patch V2.
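A minimal sketch of the cleanup we plan for the close path (hns3_dev_close
is the .dev_close hook; rte_free() on a NULL pointer is a safe no-op):

static void
hns3_dev_close(struct rte_eth_dev *eth_dev)
{
    /* free the per-process filter list storage allocated in dev_init */
    rte_free(eth_dev->process_private);
    eth_dev->process_private = NULL;
}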

    Regards
Xavier





* Re: [dpdk-dev] [PATCH 09/22] net/hns3: add support for flow directory of hns3 PMD driver
  2019-08-30 15:06   ` Ferruh Yigit
  2019-09-06  8:23     ` Wei Hu (Xavier)
@ 2019-09-06 11:08     ` Wei Hu (Xavier)
  1 sibling, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06 11:08 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:06, Ferruh Yigit wrote:
> On 8/23/2019 2:46 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for flow directory of hns3 PMD driver.
>> The flow directory feature is only supported in the hns3 PF driver.
>> It supports the creation, deletion, and flushing of network L2/L3/L4 and
>> tunnel packet rules, and querying hit statistics.
> This patch also adds rte_flow support; can you please add this to the commit log?
We will update it in patch V2.
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -2726,6 +2744,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.mac_addr_set           = hns3_set_default_mac_addr,
>>  	.set_mc_addr_list       = hns3_set_mc_mac_addr_list,
>>  	.link_update            = hns3_dev_link_update,
>> +	.filter_ctrl            = hns3_dev_filter_ctrl,
> 'hns3_dev_filter_ctrl()' does not exist up until this patch.
>
> This is the problem of not enabling the driver yet; it is very hard to see these
> kinds of issues. When the Makefile/meson patch is moved to the beginning of the series
> and the driver starts to build, these issues will become visible.
I will fix it in patch V2.
>>  };
>>  
>>  static int
>> @@ -2739,6 +2758,16 @@ hns3_dev_init(struct rte_eth_dev *eth_dev)
>>  	int ret;
>>  
>>  	PMD_INIT_FUNC_TRACE();
>> +	eth_dev->process_private = (struct hns3_process_private *)
>> +	    rte_zmalloc_socket("hns3_filter_list",
>> +			       sizeof(struct hns3_process_private),
>> +			       RTE_CACHE_LINE_SIZE, eth_dev->device->numa_node);
>> +	if (eth_dev->process_private == NULL) {
>> +		PMD_INIT_LOG(ERR, "Failed to alloc memory for process private");
>> +		return -ENOMEM;
>> +	}
>> +	/* initialize flow filter lists */
>> +	hns3_filterlist_init(eth_dev);
> Can you please free 'process_private' in the close dev_ops?
    We will fix it in patch V2.
    Thanks for your suggestion.

    Regards
Xavier
>




* Re: [dpdk-dev] [PATCH 13/22] net/hns3: add support for mailbox of hns3 PMD driver
  2019-08-30 15:08   ` Ferruh Yigit
@ 2019-09-06 11:25     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06 11:25 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:08, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for the mailbox of the hns3 PMD driver; the mailbox is
>> used for communication between the PF and VF drivers.
>>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -27,6 +27,7 @@
>>  #include <rte_io.h>
>>  
>>  #include "hns3_cmd.h"
>> +#include "hns3_mbx.h"
> Why include the new header if the .c file does not use anything from it? The same
> applies to the other .c files below.
We will fix it in patch V2.
> <...>
>
>> @@ -0,0 +1,337 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2018-2019 Hisilicon Limited.
>> + */
>> +
>> +#include <errno.h>
>> +#include <stdbool.h>
>> +#include <stdint.h>
>> +#include <stdio.h>
>> +#include <stdlib.h>
>> +#include <string.h>
>> +#include <sys/queue.h>
>> +#include <inttypes.h>
>> +#include <unistd.h>
>> +#include <rte_byteorder.h>
>> +#include <rte_common.h>
>> +#include <rte_cycles.h>
>> +#include <rte_debug.h>
>> +#include <rte_dev.h>
>> +#include <rte_eal.h>
>> +#include <rte_ether.h>
>> +#include <rte_ethdev_driver.h>
>> +#include <rte_io.h>
>> +#include <rte_spinlock.h>
>> +#include <rte_pci.h>
>> +#include <rte_bus_pci.h>
> Same comment for all .c files in the driver: the above inclusion list feels like a
> copy/paste; can you please include only the necessary headers?
>
    We will fix it in patch V2.
    Thanks for your suggestion.

    Regards
Xavier




* Re: [dpdk-dev] [PATCH 14/22] net/hns3: add support for hns3 VF PMD driver
  2019-08-30 15:11   ` Ferruh Yigit
  2019-08-31  9:03     ` Wei Hu (Xavier)
@ 2019-09-06 11:27     ` Wei Hu (Xavier)
  1 sibling, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06 11:27 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:11, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds support for hns3 VF PMD driver.
>>
>> In current version, we only support VF device is bound to vfio_pci or
>> igb_uio and then taken over by DPDK when PF device is taken over by kernel
>> mode hns3 ethdev driver, VF is not supported when PF device is taken over
>> by DPDK.
> I think it is better to say 'when PF is driven by DPDK driver' than 'when PF device is
> taken over by DPDK'.
>
> Can you please state this (VF only supported when PF is driven by the kernel) in your
> documentation?
> And perhaps add VF driver support to the feature list to highlight it...
    I will update it in patch V2.
    Thanks for your suggestion.

    Regards
Xavier
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>
>
>




* Re: [dpdk-dev] [PATCH 16/22] net/hns3: add start stop configure promiscuous ops
  2019-08-30 15:14   ` Ferruh Yigit
@ 2019-09-06 11:51     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-06 11:51 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:14, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds dev_start, dev_stop, dev_configure, promiscuous_enable,
>> promiscuous_disable, allmulticast_enable, allmulticast_disable,
>> dev_infos_get related function codes.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> @@ -3626,6 +4031,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
>>  	.vlan_offload_set       = hns3_vlan_offload_set,
>>  	.vlan_pvid_set          = hns3_vlan_pvid_set,
>>  	.get_dcb_info           = hns3_get_dcb_info,
>> +	.dev_supported_ptypes_get = hns3_dev_supported_ptypes_get,
> 'hns3_dev_supported_ptypes_get' has been defined in a previous patch; what do you
> think about defining and using it in the same patch?
    We will fix it in patch V2.
    Thanks for your suggestion.

    Regards
Xavier
>




* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-08-30 14:58   ` Ferruh Yigit
@ 2019-09-10 11:43     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-10 11:43 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 22:58, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds build related files for the hns3 PMD driver.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>>  MAINTAINERS                                  |  7 ++++
>>  config/common_armv8a_linux                   |  5 +++
>>  config/common_base                           |  5 +++
>>  config/defconfig_arm64-armv8a-linuxapp-clang |  2 +
>>  doc/guides/nics/features/hns3.ini            | 38 +++++++++++++++++++
>>  doc/guides/nics/hns3.rst                     | 55 ++++++++++++++++++++++++++++
> This file needs to be added to the index file: 'doc/guides/nics/index.rst'
Thanks for your suggestion.
We will fix it in patch V2.
>
> <...>
>
>> diff --git a/config/defconfig_arm64-armv8a-linuxapp-clang b/config/defconfig_arm64-armv8a-linuxapp-clang
>> index d3b4dad..c73f5fb 100644
>> --- a/config/defconfig_arm64-armv8a-linuxapp-clang
>> +++ b/config/defconfig_arm64-armv8a-linuxapp-clang
>> @@ -6,3 +6,5 @@
>>  
>>  CONFIG_RTE_TOOLCHAIN="clang"
>>  CONFIG_RTE_TOOLCHAIN_CLANG=y
>> +
>> +CONFIG_RTE_LIBRTE_HNS3_PMD=n
> I can understand the architecture ones, but why is clang not supported? Can you
> please add this support?
We will fix it in patch V2.
> <...>
>
>> diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
>> new file mode 100644
>> index 0000000..c9d0253
>> --- /dev/null
>> +++ b/doc/guides/nics/hns3.rst
>> @@ -0,0 +1,55 @@
>> +..  SPDX-License-Identifier: BSD-3-Clause
>> +    Copyright(c) 2018-2019 Hisilicon Limited.
>> +
>> +HNS3 Poll Mode Driver
>> +===============================
>> +
>> +The Hisilicon Network Subsystem is a long term evolution IP which is
>> +supposed to be used in Hisilicon ICT SoCs such as Kunpeng 920.
> Can you please add a official link/reference to the product?
>

The official website link for the Kunpeng 920 chip is not available yet,
so in patch V2 we will add a link to the server products that use
Kunpeng 920.
Thanks for your suggestion.

> <...>
>
>> @@ -0,0 +1,43 @@
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2018-2019 Hisilicon Limited.
>> +
>> +include $(RTE_SDK)/mk/rte.vars.mk
>> +
>> +#
>> +# library name
>> +#
>> +LIB = librte_pmd_hns3.a
>> +
>> +CFLAGS += -O3
>> +CFLAGS += $(WERROR_FLAGS)
>> +CFLAGS += -DALLOW_EXPERIMENTAL_API -fsigned-char
> Why is '-DALLOW_EXPERIMENTAL_API' required? Can we remove it?
The hns3 PMD driver uses the following experimental APIs;
'-DALLOW_EXPERIMENTAL_API' suppresses the related warnings during compilation.
# Experimental APIs:
# - rte_mp_action_register
# - rte_mp_action_unregister
# - rte_mp_reply
# - rte_mp_request_sync
>> +
>> +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
>> +LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
>> +LDLIBS += -lrte_bus_pci
> Are all these libraries really required, like kvargs? Can you please clean the
> unused ones?
We will fix it in patch V2.
>> +
>> +EXPORT_MAP := rte_pmd_hns3_version.map
>> +
>> +LIBABIVER := 2
> It should be 1.
We will fix it in patch V2.
>
> <...>
>
>> +# install this header file
>> +SYMLINK-$(CONFIG_RTE_LIBRTE_HNS3_PMD)-include := hns3_ethdev.h
> No need to expose the header file, it is not public header.
We will fix it in patch V2.
>
> <...>
>
>> @@ -0,0 +1,19 @@
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2018-2019 Hisilicon Limited
>> +
>> +sources = files('hns3_cmd.c',
>> +	'hns3_dcb.c',
>> +	'hns3_intr.c',
>> +	'hns3_ethdev.c',
>> +	'hns3_ethdev_vf.c',
>> +	'hns3_fdir.c',
>> +	'hns3_flow.c',
>> +	'hns3_mbx.c',
>> +	'hns3_regs.c',
>> +	'hns3_rss.c',
>> +	'hns3_rxtx.c',
>> +	'hns3_stats.c',
>> +	'hns3_mp.c')
>> +deps += ['hash']
>> +
>> +cflags += '-DALLOW_EXPERIMENTAL_API'
> There is a better way to do this in meson; please check other samples. But as with the
> Makefile comment, is it really needed? If so, can you please add the
> experimental APIs used as a comment, to both meson and Makefile?
I will update it as follows:

allow_experimental_apis = true
# Experimental APIs:
# - rte_mp_action_register
# - rte_mp_action_unregister
# - rte_mp_reply
# - rte_mp_request_sync

Thanks for your suggestion.

>> diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
>> new file mode 100644
>> index 0000000..3aef967
>> --- /dev/null
>> +++ b/drivers/net/hns3/rte_pmd_hns3_version.map
>> @@ -0,0 +1,3 @@
>> +DPDK_19.08 {
> DPDK_19.11
We will fix it in patch V2.
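For reference, the corrected map file would be roughly as follows
(assuming the PMD exports no public symbols of its own):

DPDK_19.11 {
	local: *;
};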



   Thanks for your suggestion.

   Regards
Xavier




* Re: [dpdk-dev] [PATCH 22/22] net/hns3: add hns3 build files
  2019-09-03 15:27     ` Ye Xiaolong
@ 2019-09-11 11:36       ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-11 11:36 UTC (permalink / raw)
  To: Ye Xiaolong
  Cc: dev, Stephen Hemminger, linuxarm, xavier_huwei, liudongdong3,
	forest.zhouchang



On 2019/9/3 23:27, Ye Xiaolong wrote:
> On 08/29, Stephen Hemminger wrote:
>> On Fri, 23 Aug 2019 21:47:11 +0800
>> "Wei Hu (Xavier)" <xavier.huwei@huawei.com> wrote:
>>
>>> +Limitations or Known issues
>>> +---------------------------
>>> +Build with clang is not supported yet.
>>> +Currently, only ARMv8 architecture is supported.
>>> \ No newline at end of file
>> Please fix this. You need to add a newline at the end of the
>> file. Vi does this by default, and Emacs has an option
>> to do this.
> Just setting the line below in .emacs would work; Stephen told me last time :).
> (setq require-final-newline t)
>
> Thanks,
> Xiaolong
>
Thanks for your suggestion.
Xavier




* Re: [dpdk-dev] [PATCH 15/22] net/hns3: add package and queue related operation
  2019-08-30 15:13   ` Ferruh Yigit
@ 2019-09-11 11:40     ` Wei Hu (Xavier)
  0 siblings, 0 replies; 75+ messages in thread
From: Wei Hu (Xavier) @ 2019-09-11 11:40 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: linuxarm, xavier_huwei, liudongdong3, forest.zhouchang

Hi, Ferruh Yigit


On 2019/8/30 23:13, Ferruh Yigit wrote:
> On 8/23/2019 2:47 PM, Wei Hu (Xavier) wrote:
>> This patch adds queue related operations and the packet sending and
>> receiving function code.
>>
>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
>> Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> Signed-off-by: Hao Chen <chenhao164@huawei.com>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> <...>
>
>> +
>> +#define __packed __attribute__((packed))
>> +/* hardware spec ring buffer format */
>> +__packed struct hns3_desc {
> Can you use the existing '__rte_packed' instead?
>
OK, I will fix it in patch V2.
Thanks for your suggestion.
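For reference, a minimal sketch of the planned change (the 'addr' field is
illustrative only; the real descriptor layout from the patch is unchanged):

#include <rte_common.h>  /* provides __rte_packed */

/* hardware spec ring buffer format */
struct hns3_desc {
    uint64_t addr;  /* illustrative field; real layout kept as in the patch */
} __rte_packed;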

Regards
Xavier


