* [PATCH 00/20] DPDK PMD for ThunderX NIC device
@ 2016-05-07 15:16 Jerin Jacob
  2016-05-07 15:16 ` [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
                   ` (23 more replies)
  0 siblings, 24 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob

This patch set provides the initial version of DPDK PMD for the
built-in NIC device in Cavium ThunderX SoC family.

The implemented features and the ThunderX nicvf PMD documentation are
added in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
respectively in this patch set.

These patches are checked using checkpatch.sh with the following
additional ignore options:
    options="$options --ignore=BIT_MACRO,CAMELCASE,BRACKET_SPACE"
CAMELCASE - to accommodate PRIx64
BRACKET_SPACE - to accommodate AT&T inline assembly in two places

This patch set is based on DPDK 16.07-RC1
and tested with today's git HEAD change-set
84c9b5a9fe926f1aa033dc5352be8d4a5e0b789d along with
the following dependent patch:

http://dpdk.org/dev/patchwork/patch/11826/
ethdev: add tunnel and port RSS offload types

Jerin Jacob (20):
  thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  thunderx/nicvf: add pmd skeleton
  thunderx/nicvf: add link status and link update support
  thunderx/nicvf: add get_reg and get_reg_length support
  thunderx/nicvf: add dev_configure support
  thunderx/nicvf: add dev_infos_get support
  thunderx/nicvf: add rx_queue_setup/release support
  thunderx/nicvf: add tx_queue_setup/release support
  thunderx/nicvf: add rss and reta query and update support
  thunderx/nicvf: add mtu_set and promiscuous_enable support
  thunderx/nicvf: add stats support
  thunderx/nicvf: add single and multi segment tx functions
  thunderx/nicvf: add single and multi segment rx functions
  thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
    support
  thunderx/nicvf: add rx queue start and stop support
  thunderx/nicvf: add tx queue start and stop support
  thunderx/nicvf: add device start,stop and close support
  thunderx/config: set max numa node to two
  thunderx/nicvf: updated driver documentation and release notes
  maintainers: claim responsibility for the ThunderX nicvf PMD

 MAINTAINERS                                        |    6 +
 config/common_base                                 |   10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |   11 +
 doc/guides/nics/index.rst                          |    1 +
 doc/guides/nics/overview.rst                       |   94 +-
 doc/guides/nics/thunderx.rst                       |  349 ++++
 doc/guides/rel_notes/release_16_07.rst             |    1 +
 drivers/net/Makefile                               |    1 +
 drivers/net/thunderx/Makefile                      |   66 +
 drivers/net/thunderx/base/nicvf_hw.c               |  911 ++++++++++
 drivers/net/thunderx/base/nicvf_hw.h               |  240 +++
 drivers/net/thunderx/base/nicvf_hw_defs.h          | 1217 +++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c             |  416 +++++
 drivers/net/thunderx/base/nicvf_mbox.h             |  232 +++
 drivers/net/thunderx/base/nicvf_plat.h             |  132 ++
 drivers/net/thunderx/nicvf_ethdev.c                | 1857 ++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  103 ++
 drivers/net/thunderx/nicvf_logs.h                  |   83 +
 drivers/net/thunderx/nicvf_rxtx.c                  |  624 +++++++
 drivers/net/thunderx/nicvf_rxtx.h                  |   66 +
 drivers/net/thunderx/nicvf_struct.h                |  124 ++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |    4 +
 mk/rte.app.mk                                      |    2 +
 23 files changed, 6503 insertions(+), 47 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

-- 
2.1.0


* [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-09 17:38   ` Stephen Hemminger
  2016-05-12 15:40   ` Pattan, Reshma
  2016-05-07 15:16 ` [PATCH 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
                   ` (22 subsequent siblings)
  23 siblings, 2 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Add hardware specific API for the ThunderX nicvf built-in NIC device under
the drivers/net/thunderx/base directory.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c      |  911 +++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h      |  240 ++++++
 drivers/net/thunderx/base/nicvf_hw_defs.h | 1217 +++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c    |  416 ++++++++++
 drivers/net/thunderx/base/nicvf_mbox.h    |  232 ++++++
 drivers/net/thunderx/base/nicvf_plat.h    |  132 ++++
 6 files changed, 3148 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
new file mode 100644
index 0000000..da52a92
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -0,0 +1,911 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <math.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#include "nicvf_plat.h"
+
+struct nicvf_reg_info {
+	uint32_t offset;
+	const char *name;
+};
+
+#define NICVF_REG_INFO(reg) {reg, #reg}
+
+static struct nicvf_reg_info nicvf_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_VF_CFG),
+	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
+	NICVF_REG_INFO(NIC_VF_INT),
+	NICVF_REG_INFO(NIC_VF_INT_W1S),
+	NICVF_REG_INFO(NIC_VF_ENA_W1C),
+	NICVF_REG_INFO(NIC_VF_ENA_W1S),
+	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
+	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
+};
+
+static struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
+	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
+	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
+	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
+	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
+	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
+	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
+	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 16, "NIC_VNIC_STAT_TX_BCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 24, "NIC_VNIC_STAT_TX_MCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 32, "NIC_VNIC_STAT_TX_DROP"},
+	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
+	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
+	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
+	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
+	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
+};
+
+static struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
+};
+
+static struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
+};
+
+static struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
+};
+
+static struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
+};
+
+int
+nicvf_base_init(struct nicvf *nic)
+{
+	nic->hwcap = 0;
+	if (nic->subsystem_device_id == 0)
+		return NICVF_ERR_BASE_INIT;
+
+	if (nicvf_hw_version(nic) == NICVF_PASS2)
+		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
+
+	return NICVF_OK;
+}
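+
+/*
+ * Usage sketch (illustrative, not called by this patch itself): once
+ * nicvf_base_init() succeeds, PASS2-only features can be gated with
+ *
+ *	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+ *		... enable tunnel parsing ...
+ */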
+
+/* Dump registers to stdout if data is NULL */
+int
+nicvf_reg_dump(struct nicvf *nic, uint64_t *data)
+{
+	uint32_t i, q;
+	bool dump_stdout;
+
+	dump_stdout = (data == NULL);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_reg_tbl[i].name,
+				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_multi_reg_tbl[i].name,
+				nicvf_reg_read(nic,
+					 nicvf_multi_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset);
+
+	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_cq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_sq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rbdr_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q);
+	return 0;
+}
+
+int
+nicvf_reg_get_count(void)
+{
+	int nr_regs;
+
+	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
+			MAX_CMP_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
+			MAX_RCV_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
+			MAX_SND_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
+			MAX_RCV_BUF_DESC_RINGS_PER_QS;
+
+	return nr_regs;
+}
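+
+/*
+ * Usage sketch (illustrative, not part of this patch): callers size the
+ * snapshot buffer from nicvf_reg_get_count() before snapshotting;
+ * passing NULL to nicvf_reg_dump() prints to stdout instead.
+ *
+ *	uint64_t *regs = calloc(nicvf_reg_get_count(), sizeof(uint64_t));
+ *
+ *	if (regs != NULL)
+ *		nicvf_reg_dump(nic, regs);
+ */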
+
+static int
+nicvf_qset_config_internal(struct nicvf *nic, bool enable)
+{
+	int ret;
+	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
+
+	pf_qs_cfg.ena = enable ? 1 : 0;
+	pf_qs_cfg.vnic = nic->vf_id;
+	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
+	return ret ? NICVF_ERR_SET_QS : 0;
+}
+
+/* Requests PF to assign and enable Qset */
+int
+nicvf_qset_config(struct nicvf *nic)
+{
+	/* Enable Qset */
+	return nicvf_qset_config_internal(nic, true);
+}
+
+int
+nicvf_qset_reclaim(struct nicvf *nic)
+{
+	/* Disable Qset */
+	return nicvf_qset_config_internal(nic, false);
+}
+
+static int
+cmpfunc(const void *a, const void *b)
+{
+	const uint32_t ai = *(const uint32_t *)a;
+	const uint32_t bi = *(const uint32_t *)b;
+
+	/* Avoid unsigned subtraction wrap-around; return comparison sign */
+	return (ai > bi) - (ai < bi);
+}
+
+static uint32_t
+nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
+{
+	uint32_t i;
+
+	qsort(list, entries, sizeof(uint32_t), cmpfunc);
+	for (i = 0; i < entries; i++)
+		if (val <= list[i])
+			break;
+	/* Not in the list */
+	if (i >= entries)
+		return 0;
+	else
+		return list[i];
+}
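+
+/*
+ * Example (illustrative): against the send-queue size list used below, a
+ * request for 3000 entries rounds up to SND_QUEUE_SZ_4K (4096); a request
+ * larger than the biggest entry falls off the list and returns 0, which
+ * callers must treat as unsupported.
+ */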
+
+static void
+nicvf_handle_qset_err_intr(struct nicvf *nic)
+{
+	uint16_t qidx;
+	uint64_t status;
+
+	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
+	nicvf_reg_dump(nic, NULL);
+
+	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_CQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_CQ_WR_FULL)
+			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
+		if (status & NICVF_CQ_WR_DISABLE)
+			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
+		if (status & NICVF_CQ_WR_FAULT)
+			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_SQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_SQ_ERR_STOPPED)
+			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
+		if (status & NICVF_SQ_ERR_SEND)
+			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
+		if (status & NICVF_SQ_ERR_DPE)
+			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(nic,
+					      NIC_QSET_RBDR_0_1_STATUS0, qidx);
+		status &= NICVF_RBDR_FIFO_STATE_MASK;
+		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
+
+		if (status == RBDR_FIFO_STATE_FAIL)
+			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
+	}
+
+	nicvf_disable_all_interrupts(nic);
+	abort();
+}
+
+/*
+ * Handle the "mbox" and "queue-set error" interrupts that the poll mode
+ * driver is interested in.
+ * This function is not re-entrant.
+ * The caller should provide proper serialization.
+ */
+int
+nicvf_reg_poll_interrupts(struct nicvf *nic)
+{
+	int msg = 0;
+	uint64_t intr;
+
+	intr = nicvf_reg_read(nic, NIC_VF_INT);
+	if (intr & NICVF_INTR_MBOX_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
+		msg = nicvf_handle_mbx_intr(nic);
+	}
+	if (intr & NICVF_INTR_QS_ERR_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
+		nicvf_handle_qset_err_intr(nic);
+	}
+	return msg;
+}
+
+static int
+nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
+		    uint32_t bit_pos, uint32_t bits, uint64_t val)
+{
+	uint64_t bit_mask;
+	uint64_t reg_val;
+	int timeout = 10;
+
+	bit_mask = (1ULL << bits) - 1;
+	bit_mask = (bit_mask << bit_pos);
+
+	while (timeout) {
+		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
+		if (((reg_val & bit_mask) >> bit_pos) == val)
+			return NICVF_OK;
+		nicvf_delay_us(2000);
+		timeout--;
+	}
+	return NICVF_ERR_REG_POLL;
+}
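+
+/*
+ * Example (illustrative): the RBDR FIFO state field occupies bits <63:62>
+ * of NIC_QSET_RBDR_0_1_STATUS0, so waiting for RBDR_FIFO_STATE_INACTIVE
+ * is expressed as
+ *
+ *	nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+ *			    62, 2, RBDR_FIFO_STATE_INACTIVE);
+ *
+ * i.e. bit_pos 62, field width 2, expected value 0x00.
+ */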
+
+int
+nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t status;
+	int timeout = 10;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	/* Save head and tail pointers for freeing up buffers */
+	if (rbdr) {
+		rbdr->head = nicvf_queue_reg_read(nic,
+					  NIC_QSET_RBDR_0_1_HEAD,
+					  qidx) >> 3;
+		rbdr->tail = nicvf_queue_reg_read(nic,
+					  NIC_QSET_RBDR_0_1_TAIL,
+					  qidx) >> 3;
+		rbdr->next_tail = rbdr->tail;
+	}
+
+	/* Reset RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			      NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+				62, 2, 0x00))
+		return NICVF_ERR_RBDR_DISABLE;
+
+	while (1) {
+		status = nicvf_queue_reg_read(nic,
+					      NIC_QSET_RBDR_0_1_PRFCH_STATUS,
+					      qidx);
+		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
+			break;
+		nicvf_delay_us(2000);
+		timeout--;
+		if (!timeout)
+			return NICVF_ERR_RBDR_PREFETCH;
+	}
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			      NICVF_RBDR_RESET);
+	if (nicvf_qset_poll_reg(nic, qidx,
+				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return NICVF_ERR_RBDR_RESET1;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_qset_poll_reg(nic, qidx,
+				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return NICVF_ERR_RBDR_RESET2;
+
+	return NICVF_OK;
+}
+
+static int
+nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
+{
+	int val;
+
+	val = ((uint32_t)log2(len) - len_shift);
+	assert(val >= 0);
+	assert(val <= 6);
+	return val;
+}
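+
+/*
+ * Example (illustrative): an 8K-entry RBDR encodes as
+ * log2(8192) - RBDR_SIZE_SHIFT(13) = 0, and a 512K-entry ring as 6; the
+ * asserts bound the field to the 0..6 range the hardware accepts.
+ */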
+
+int
+nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
+{
+	int ret;
+	uint64_t head, tail;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	struct rbdr_cfg rbdr_cfg = {.value = 0};
+
+	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set descriptor base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
+
+	/* Enable RBDR  & set queue size */
+	rbdr_cfg.reserved_45_63 = 0;
+	rbdr_cfg.ena = 1;
+	rbdr_cfg.reset = 0;
+	rbdr_cfg.ldwb = 0;
+	rbdr_cfg.reserved_36_41 = 0;
+	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
+					    RBDR_SIZE_SHIFT);
+	rbdr_cfg.reserved_25_31 = 0;
+	rbdr_cfg.avg_con = 0;
+	rbdr_cfg.reserved_12_15 = 0;
+	rbdr_cfg.lines = rbdr->buffsz / 128;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
+
+	/* Verify proper RBDR reset */
+	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
+
+	if (head | tail)
+		return NICVF_ERR_RBDR_RESET;
+
+	return NICVF_OK;
+}
+
+uint32_t
+nicvf_qsize_rbdr_roundup(uint32_t val)
+{
+	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
+			   RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
+			   RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
+			   RBDR_QUEUE_SZ_512K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			  rbdr_pool_get_handler handler,
+			  void *opaque, uint32_t max_buffs)
+{
+	struct rbdr_entry_t *desc, *desc0;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	uint32_t count;
+	nicvf_phys_addr_t phy;
+
+	assert(rbdr != NULL);
+	desc = rbdr->desc;
+	count = 0;
+	/* Don't fill beyond the maximum number of descriptors */
+	while (count < (rbdr->qlen_mask)) {
+		if (count >= max_buffs)
+			break;
+		desc0 = desc + count;
+		phy = handler(opaque);
+		if (phy) {
+			desc0->full_addr = phy;
+			count++;
+		} else {
+			break;
+		}
+	}
+	nicvf_smp_wmb();
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
+	rbdr->tail = nicvf_queue_reg_read(nic,
+					  NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
+	rbdr->next_tail = rbdr->tail;
+	nicvf_smp_rmb();
+	return 0;
+}
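+
+/*
+ * Usage sketch (illustrative; pool_get_phys() is a hypothetical helper):
+ * the handler returns the physical address of one fresh receive buffer
+ * per call, or 0 once the pool is exhausted.
+ *
+ *	static nicvf_phys_addr_t
+ *	pool_get(void *opaque)
+ *	{
+ *		return pool_get_phys(opaque);	// 0 == pool empty
+ *	}
+ *
+ *	nicvf_qset_rbdr_precharge(nic, 0, pool_get, pool, max_buffs);
+ */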
+
+int
+nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
+{
+	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+}
+
+int
+nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t head, tail;
+	struct sq_cfg sq_cfg;
+
+	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
+
+	/* Disable send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
+
+	/* Check if SQ is stopped */
+	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
+				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
+		return NICVF_ERR_SQ_DISABLE;
+
+	/* Reset send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
+	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
+	if (head | tail)
+		return NICVF_ERR_SQ_RESET;
+
+	return 0;
+}
+
+int
+nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
+{
+	int ret;
+	struct sq_cfg sq_cfg = {.value = 0};
+
+	ret = nicvf_qset_sq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Send a mailbox msg to PF to config SQ */
+	if (nicvf_mbox_sq_config(nic, qidx))
+		return NICVF_ERR_SQ_PF_CFG;
+
+	/* Set queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
+
+	/* Enable send queue  & set queue size */
+	sq_cfg.ena = 1;
+	sq_cfg.reset = 0;
+	sq_cfg.ldwb = 0;
+	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1,
+					  SND_QSIZE_SHIFT);
+	sq_cfg.tstmp_bgx_intf = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
+
+	/* Ring doorbell so that H/W restarts processing SQEs */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
+
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_sq_roundup(uint32_t val)
+{
+	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
+			   SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
+			   SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
+			   SND_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
+	return nicvf_mbox_rq_sync(nic);
+}
+
+int
+nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
+	struct rq_cfg rq_cfg = {.value = 0};
+
+	if (nicvf_qset_rq_reclaim(nic, qidx))
+		return NICVF_ERR_RQ_CLAIM;
+
+	pf_rq_cfg.strip_pre_l2 = 0;
+	/* First cache line of RBDR data will be allocated into L2C */
+	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
+	pf_rq_cfg.cq_qs = nic->vf_id;
+	pf_rq_cfg.cq_idx = qidx;
+	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_cont_idx = 0;
+	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_strt_idx = 0;
+
+	/* Send a mailbox msg to PF to config RQ */
+	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
+		return NICVF_ERR_RQ_PF_CFG;
+
+	/* Select Rx backpressure */
+	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_BP_CFG;
+
+	/* Send a mailbox msg to PF to config RQ drop */
+	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_DROP_CFG;
+
+	/* Enable Receive queue */
+	rq_cfg.ena = 1;
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
+
+	return 0;
+}
+
+int
+nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t tail, head;
+
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
+		return NICVF_ERR_CQ_DISABLE;
+
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
+	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
+	if (head | tail)
+		return NICVF_ERR_CQ_RESET;
+
+	/* Disable timer threshold (doesn't get reset upon CQ reset) */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+int
+nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	int ret;
+	struct cq_cfg cq_cfg = {.value = 0};
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set completion queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
+
+	cq_cfg.ena = 1;
+	cq_cfg.reset = 0;
+	/* Writes of CQE will be allocated into L2C */
+	cq_cfg.caching = 1;
+	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
+	cq_cfg.avg_con = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
+
+	/* Set threshold value for interrupt generation */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_cq_roundup(uint32_t val)
+{
+	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
+			   CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
+			   CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
+			   CMP_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+void
+nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
+{
+	uint64_t val;
+
+	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
+	if (enable)
+		val |= (STRIP_FIRST_VLAN << 25);
+	else
+		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
+
+	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
+}
+
+void
+nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_cpu_to_be_64(*keyptr);
+		nicvf_reg_write(nic, addr, val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
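+
+/*
+ * Note: the 320-bit key is programmed as RSS_HASH_KEY_SIZE (5)
+ * big-endian 64-bit words, so key must point to at least
+ * RSS_HASH_KEY_BYTE_SIZE (40) bytes, as nicvf_rss_config() supplies.
+ */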
+
+void
+nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_reg_read(nic, addr);
+		*keyptr = nicvf_be_to_cpu_64(val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
+{
+	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
+}
+
+uint64_t
+nicvf_rss_get_cfg(struct nicvf *nic)
+{
+	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
+}
+
+int
+nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		rss->ind_tbl[idx] = tbl[idx];
+
+	if (nicvf_mbox_config_rss(nic))
+		return NICVF_ERR_RSS_TBL_UPDATE;
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		tbl[idx] = rss->ind_tbl[idx];
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg)
+{
+	uint32_t idx;
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+	};
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	if (cfg == 0)
+		return -EINVAL;
+
+	/* Update default RSS key and cfg */
+	nicvf_rss_set_key(nic, default_key);
+	nicvf_rss_set_cfg(nic, cfg);
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = idx % qcnt;
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
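+
+/*
+ * Example (illustrative, with nb_rx_queues standing for the caller's Rx
+ * queue count): a typical cfg combines the hash-enable bits from
+ * nicvf_hw_defs.h, e.g.
+ *
+ *	nicvf_rss_config(nic, nb_rx_queues,
+ *			 RSS_IP_ENA | RSS_TCP_ENA | RSS_UDP_ENA);
+ *
+ * which spreads flows across nb_rx_queues via the default RETA built
+ * above.
+ */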
+
+int
+nicvf_rss_term(struct nicvf *nic)
+{
+	uint32_t idx;
+	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	nicvf_rss_set_cfg(nic, 0);
+	/* Redirect the output to the 0th queue */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		disable_rss[idx] = 0;
+
+	return nicvf_rss_reta_update(nic, disable_rss,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_loopback_config(struct nicvf *nic, bool enable)
+{
+	if (enable && nic->loopback_supported == 0)
+		return NICVF_ERR_LOOPBACK_CFG;
+
+	return nicvf_mbox_loopback_config(nic, enable);
+}
+
+void
+nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
+{
+	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
+	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
+	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
+	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
+	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
+	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
+	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
+	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
+	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
+	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
+	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
+	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
+	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
+	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
+
+	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
+	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
+	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
+	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
+	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
+}
+
+void
+nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_rx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
+	qstats->q_rx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
+}
+
+void
+nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_tx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
+	qstats->q_tx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
new file mode 100644
index 0000000..32357cc
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -0,0 +1,240 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_H
+#define _THUNDERX_NICVF_HW_H
+
+#include <stdint.h>
+
+#include "nicvf_hw_defs.h"
+
+#define	PCI_VENDOR_ID_CAVIUM			0x177D
+#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
+#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
+
+#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define NICVF_GET_RX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | (reg << 3))
+#define NICVF_GET_TX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | (reg << 3))
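+/* The statistics registers are 64-bit and contiguous, so the enum index
+ * (nic_stat_vnic_rx_e / nic_stat_vnic_tx_e) scaled by 8 (reg << 3)
+ * selects a register within the block; nicvf_multi_reg_tbl in
+ * nicvf_hw.c lists the same 8-byte stride. The OR works because the
+ * block bases are aligned beyond the largest offset.
+ */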
+
+#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
+#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
+
+#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
+
+enum nicvf_tns_mode {
+	NIC_TNS_BYPASS_MODE = 0,
+	NIC_TNS_MODE,
+};
+
+enum nicvf_err_e {
+	NICVF_OK = 0,
+	NICVF_ERR_SET_QS = -8191, /* -8191 */
+	NICVF_ERR_RESET_QS,      /* -8190 */
+	NICVF_ERR_REG_POLL,      /* -8189 */
+	NICVF_ERR_RBDR_RESET,    /* -8188 */
+	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
+	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
+	NICVF_ERR_RBDR_RESET1,   /* -8185 */
+	NICVF_ERR_RBDR_RESET2,   /* -8184 */
+	NICVF_ERR_RQ_CLAIM,      /* -8183 */
+	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
+	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
+	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
+	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
+	NICVF_ERR_CQ_RESET,	 /* -8178 */
+	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
+	NICVF_ERR_SQ_RESET,	 /* -8176 */
+	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
+	NICVF_ERR_RSS_TBL_UPDATE,/* -8174 */
+	NICVF_ERR_RSS_GET_SZ,    /* -8173 */
+	NICVF_ERR_BASE_INIT,     /* -8172 */
+	NICVF_ERR_LOOPBACK_CFG,  /* -8171 */
+};
+
+typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
+
+struct nicvf_hw_rx_qstats {
+	uint64_t q_rx_bytes;
+	uint64_t q_rx_packets;
+};
+
+struct nicvf_hw_tx_qstats {
+	uint64_t q_tx_bytes;
+	uint64_t q_tx_packets;
+};
+
+struct nicvf_hw_stats {
+	uint64_t rx_bytes;
+	uint64_t rx_ucast_frames;
+	uint64_t rx_bcast_frames;
+	uint64_t rx_mcast_frames;
+	uint64_t rx_fcs_errors;
+	uint64_t rx_l2_errors;
+	uint64_t rx_drop_red;
+	uint64_t rx_drop_red_bytes;
+	uint64_t rx_drop_overrun;
+	uint64_t rx_drop_overrun_bytes;
+	uint64_t rx_drop_bcast;
+	uint64_t rx_drop_mcast;
+	uint64_t rx_drop_l3_bcast;
+	uint64_t rx_drop_l3_mcast;
+
+	uint64_t tx_bytes_ok;
+	uint64_t tx_ucast_frames_ok;
+	uint64_t tx_bcast_frames_ok;
+	uint64_t tx_mcast_frames_ok;
+	uint64_t tx_drops;
+};
+
+struct nicvf_rss_reta_info {
+	uint8_t hash_bits;
+	uint16_t rss_size;
+	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+};
+
+/* Common structs used in DPDK and base layer are defined in DPDK layer */
+#include "../nicvf_struct.h"
+
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
+
+static inline void
+nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+{
+	nicvf_addr_write(nic->reg_base + offset, val);
+}
+
+static inline uint64_t
+nicvf_reg_read(struct nicvf *nic, uint32_t offset)
+{
+	return nicvf_addr_read(nic->reg_base + offset);
+}
+
+static inline uintptr_t
+nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
+{
+	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
+}
+
+static inline void
+nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
+		      uint64_t val)
+{
+	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
+}
+
+static inline uint64_t
+nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
+{
+	return nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
+}
+
+static inline void
+nicvf_disable_all_interrupts(struct nicvf *nic)
+{
+	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
+	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
+}
+
+static inline uint32_t
+nicvf_hw_version(struct nicvf *nic)
+{
+	return nic->subsystem_device_id;
+}
+
+static inline uint64_t
+nicvf_hw_cap(struct nicvf *nic)
+{
+	return nic->hwcap;
+}
+
+int nicvf_base_init(struct nicvf *nic);
+
+int nicvf_reg_get_count(void);
+int nicvf_reg_poll_interrupts(struct nicvf *nic);
+int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
+
+int nicvf_qset_config(struct nicvf *nic);
+int nicvf_qset_reclaim(struct nicvf *nic);
+
+int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			      rbdr_pool_get_handler handler, void *opaque,
+			      uint32_t max_buffs);
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_txq *txq);
+int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
+uint32_t nicvf_qsize_cq_roundup(uint32_t val);
+uint32_t nicvf_qsize_sq_roundup(uint32_t val);
+
+void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
+
+int nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg);
+int nicvf_rss_term(struct nicvf *nic);
+
+int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+
+void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
+void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
+
+void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
+uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
+
+int nicvf_loopback_config(struct nicvf *nic, bool enable);
+
+void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
+void nicvf_hw_get_rx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
+void nicvf_hw_get_tx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
+
+#endif /* _THUNDERX_NICVF_HW_H */
diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
new file mode 100644
index 0000000..45d8ff4
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -0,0 +1,1217 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_DEFS_H
+#define _THUNDERX_NICVF_HW_DEFS_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+/* Virtual function register offsets */
+
+#define NIC_VF_CFG                      (0x000020)
+#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
+#define NIC_VF_INT                      (0x000200)
+#define NIC_VF_INT_W1S                  (0x000220)
+#define NIC_VF_ENA_W1C                  (0x000240)
+#define NIC_VF_ENA_W1S                  (0x000260)
+
+#define NIC_VNIC_RSS_CFG                (0x0020E0)
+#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
+#define NIC_VNIC_TX_STAT_0_4            (0x004000)
+#define NIC_VNIC_RX_STAT_0_13           (0x004100)
+#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
+
+#define NIC_QSET_CQ_0_7_CFG             (0x010400)
+#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
+#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
+#define NIC_QSET_CQ_0_7_BASE            (0x010420)
+#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
+#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
+#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
+#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
+#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
+#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
+
+#define NIC_QSET_RQ_0_7_CFG             (0x010600)
+#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
+#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
+
+#define NIC_QSET_SQ_0_7_CFG             (0x010800)
+#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
+#define NIC_QSET_SQ_0_7_BASE            (0x010820)
+#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
+#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
+#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
+#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
+#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
+#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
+#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
+
+#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
+#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
+#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
+#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
+#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
+#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
+#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
+#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
+#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
+
+/* vNIC HW Constants */
+
+#define NIC_Q_NUM_SHIFT                 18
+
+#define MAX_QUEUE_SET                   128
+#define MAX_RCV_QUEUES_PER_QS           8
+#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
+#define MAX_SND_QUEUES_PER_QS           8
+#define MAX_CMP_QUEUES_PER_QS           8
+
+#define NICVF_INTR_CQ_SHIFT             0
+#define NICVF_INTR_SQ_SHIFT             8
+#define NICVF_INTR_RBDR_SHIFT           16
+#define NICVF_INTR_PKT_DROP_SHIFT       20
+#define NICVF_INTR_TCP_TIMER_SHIFT      21
+#define NICVF_INTR_MBOX_SHIFT           22
+#define NICVF_INTR_QS_ERR_SHIFT         23
+
+#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
+#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
+
+#define NICVF_CQ_WR_FULL                (1ULL << 26)
+#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
+#define NICVF_CQ_WR_FAULT               (1ULL << 24)
+#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
+					 NICVF_CQ_WR_DISABLE |\
+					 NICVF_CQ_WR_FAULT)
+#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
+
+#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
+#define NICVF_SQ_ERR_SEND               (1ULL << 20)
+#define NICVF_SQ_ERR_DPE                (1ULL << 19)
+#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
+					 NICVF_SQ_ERR_SEND |\
+					 NICVF_SQ_ERR_DPE)
+#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
+
+#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
+#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
+#define NICVF_RBDR_RBDR_COUNT_MASK      (0x7FFFF)
+
+/* Queue reset */
+#define NICVF_CQ_RESET                  (1ULL << 41)
+#define NICVF_SQ_RESET                  (1ULL << 17)
+#define NICVF_RBDR_RESET                (1ULL << 43)
+
+/* RSS constants */
+#define NIC_MAX_RSS_HASH_BITS           (8)
+#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key, in 64-bit words */
+#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key, in bytes */
+
+#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
+#define RSS_IP_ENA                      (1 << 1)
+#define RSS_TCP_ENA                     (1 << 2)
+#define RSS_TCP_SYN_ENA                 (1 << 3)
+#define RSS_UDP_ENA                     (1 << 4)
+#define RSS_L4_EXTENDED_ENA             (1 << 5)
+#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
+#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
+#define RSS_TUN_VXLAN_ENA               (1 << 9)
+#define RSS_TUN_GENEVE_ENA              (1 << 10)
+#define RSS_TUN_NVGRE_ENA               (1 << 11)
+
+#define RBDR_QUEUE_SZ_8K                (8 * 1024)
+#define RBDR_QUEUE_SZ_16K               (16 * 1024)
+#define RBDR_QUEUE_SZ_32K               (32 * 1024)
+#define RBDR_QUEUE_SZ_64K               (64 * 1024)
+#define RBDR_QUEUE_SZ_128K              (128 * 1024)
+#define RBDR_QUEUE_SZ_256K              (256 * 1024)
+#define RBDR_QUEUE_SZ_512K              (512 * 1024)
+
+#define RBDR_SIZE_SHIFT                 (13) /* 8k */
+
+#define SND_QUEUE_SZ_1K                 (1 * 1024)
+#define SND_QUEUE_SZ_2K                 (2 * 1024)
+#define SND_QUEUE_SZ_4K                 (4 * 1024)
+#define SND_QUEUE_SZ_8K                 (8 * 1024)
+#define SND_QUEUE_SZ_16K                (16 * 1024)
+#define SND_QUEUE_SZ_32K                (32 * 1024)
+#define SND_QUEUE_SZ_64K                (64 * 1024)
+
+#define SND_QSIZE_SHIFT                 (10) /* 1k */
+
+#define CMP_QUEUE_SZ_1K                 (1 * 1024)
+#define CMP_QUEUE_SZ_2K                 (2 * 1024)
+#define CMP_QUEUE_SZ_4K                 (4 * 1024)
+#define CMP_QUEUE_SZ_8K                 (8 * 1024)
+#define CMP_QUEUE_SZ_16K                (16 * 1024)
+#define CMP_QUEUE_SZ_32K                (32 * 1024)
+#define CMP_QUEUE_SZ_64K                (64 * 1024)
+
+#define CMP_QSIZE_SHIFT                 (10) /* 1k */
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS			64
+#define NIC_HW_MAX_FRS			9200 /* 9216 max packet including FCS */
+#define NIC_HW_MAX_SEGS			12
+
+/* Descriptor alignments */
+#define NICVF_RBDR_BASE_ALIGN_BYTES	128 /* 7 bits */
+#define NICVF_CQ_BASE_ALIGN_BYTES	512 /* 9 bits */
+#define NICVF_SQ_BASE_ALIGN_BYTES	128 /* 7 bits */
+
+/* vNIC HW Enumerations */
+
+enum nic_send_ld_type_e {
+	NIC_SEND_LD_TYPE_E_LDD = 0x0,
+	NIC_SEND_LD_TYPE_E_LDT = 0x1,
+	NIC_SEND_LD_TYPE_E_LDWB = 0x2,
+	NIC_SEND_LD_TYPE_E_ENUM_LAST = 0x3,
+};
+
+enum ether_type_algorithm {
+	ETYPE_ALG_NONE = 0x0,
+	ETYPE_ALG_SKIP = 0x1,
+	ETYPE_ALG_ENDPARSE = 0x2,
+	ETYPE_ALG_VLAN = 0x3,
+	ETYPE_ALG_VLAN_STRIP = 0x4,
+};
+
+enum layer3_type {
+	L3TYPE_NONE = 0x0,
+	L3TYPE_GRH = 0x1,
+	L3TYPE_IPV4 = 0x4,
+	L3TYPE_IPV4_OPTIONS = 0x5,
+	L3TYPE_IPV6 = 0x6,
+	L3TYPE_IPV6_OPTIONS = 0x7,
+	L3TYPE_ET_STOP = 0xD,
+	L3TYPE_OTHER = 0xE,
+};
+
+#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
+#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
+
+enum layer4_type {
+	L4TYPE_NONE = 0x0,
+	L4TYPE_IPSEC_ESP = 0x1,
+	L4TYPE_IPFRAG = 0x2,
+	L4TYPE_IPCOMP = 0x3,
+	L4TYPE_TCP = 0x4,
+	L4TYPE_UDP = 0x5,
+	L4TYPE_SCTP = 0x6,
+	L4TYPE_GRE = 0x7,
+	L4TYPE_ROCE_BTH = 0x8,
+	L4TYPE_OTHER = 0xE,
+};
+
+/* CPI and RSSI configuration */
+enum cpi_algorithm_type {
+	CPI_ALG_NONE = 0x0,
+	CPI_ALG_VLAN = 0x1,
+	CPI_ALG_VLAN16 = 0x2,
+	CPI_ALG_DIFF = 0x3,
+};
+
+enum rss_algorithm_type {
+	RSS_ALG_NONE = 0x00,
+	RSS_ALG_PORT = 0x01,
+	RSS_ALG_IP = 0x02,
+	RSS_ALG_TCP_IP = 0x03,
+	RSS_ALG_UDP_IP = 0x04,
+	RSS_ALG_SCTP_IP = 0x05,
+	RSS_ALG_GRE_IP = 0x06,
+	RSS_ALG_ROCE = 0x07,
+};
+
+enum rss_hash_cfg {
+	RSS_HASH_L2ETC = 0x00,
+	RSS_HASH_IP = 0x01,
+	RSS_HASH_TCP = 0x02,
+	RSS_HASH_TCP_SYN_DIS = 0x03,
+	RSS_HASH_UDP = 0x04,
+	RSS_HASH_L4ETC = 0x05,
+	RSS_HASH_ROCE = 0x06,
+	RSS_L3_BIDI = 0x07,
+	RSS_L4_BIDI = 0x08,
+};
+
+/* Completion queue entry types */
+enum cqe_type {
+	CQE_TYPE_INVALID = 0x0,
+	CQE_TYPE_RX = 0x2,
+	CQE_TYPE_RX_SPLIT = 0x3,
+	CQE_TYPE_RX_TCP = 0x4,
+	CQE_TYPE_SEND = 0x8,
+	CQE_TYPE_SEND_PTP = 0x9,
+};
+
+enum cqe_rx_tcp_status {
+	CQE_RX_STATUS_VALID_TCP_CNXT = 0x00,
+	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
+};
+
+enum cqe_send_status {
+	CQE_SEND_STATUS_GOOD = 0x00,
+	CQE_SEND_STATUS_DESC_FAULT = 0x01,
+	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
+	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
+	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
+	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
+	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
+	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
+	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
+	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
+	CQE_SEND_STATUS_DATA_FAULT = 0x86,
+	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
+	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
+	CQE_SEND_STATUS_MEM_FAULT = 0x89,
+	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
+	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
+};
+
+enum cqe_rx_tcp_end_reason {
+	CQE_RX_TCP_END_FIN_FLAG_DET = 0,
+	CQE_RX_TCP_END_INVALID_FLAG = 1,
+	CQE_RX_TCP_END_TIMEOUT = 2,
+	CQE_RX_TCP_END_OUT_OF_SEQ = 3,
+	CQE_RX_TCP_END_PKT_ERR = 4,
+	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
+};
+
+/* Packet protocol level error enumeration */
+enum cqe_rx_err_level {
+	CQE_RX_ERRLVL_RE = 0x0,
+	CQE_RX_ERRLVL_L2 = 0x1,
+	CQE_RX_ERRLVL_L3 = 0x2,
+	CQE_RX_ERRLVL_L4 = 0x3,
+};
+
+/* Packet protocol level error type enumeration */
+enum cqe_rx_err_opcode {
+	CQE_RX_ERR_RE_NONE = 0x0,
+	CQE_RX_ERR_RE_PARTIAL = 0x1,
+	CQE_RX_ERR_RE_JABBER = 0x2,
+	CQE_RX_ERR_RE_FCS = 0x7,
+	CQE_RX_ERR_RE_TERMINATE = 0x9,
+	CQE_RX_ERR_RE_RX_CTL = 0xb,
+	CQE_RX_ERR_PREL2_ERR = 0x1f,
+	CQE_RX_ERR_L2_FRAGMENT = 0x20,
+	CQE_RX_ERR_L2_OVERRUN = 0x21,
+	CQE_RX_ERR_L2_PFCS = 0x22,
+	CQE_RX_ERR_L2_PUNY = 0x23,
+	CQE_RX_ERR_L2_MAL = 0x24,
+	CQE_RX_ERR_L2_OVERSIZE = 0x25,
+	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
+	CQE_RX_ERR_L2_LENMISM = 0x27,
+	CQE_RX_ERR_L2_PCLP = 0x28,
+	CQE_RX_ERR_IP_NOT = 0x41,
+	CQE_RX_ERR_IP_CHK = 0x42,
+	CQE_RX_ERR_IP_MAL = 0x43,
+	CQE_RX_ERR_IP_MALD = 0x44,
+	CQE_RX_ERR_IP_HOP = 0x45,
+	CQE_RX_ERR_L3_ICRC = 0x46,
+	CQE_RX_ERR_L3_PCLP = 0x47,
+	CQE_RX_ERR_L4_MAL = 0x61,
+	CQE_RX_ERR_L4_CHK = 0x62,
+	CQE_RX_ERR_UDP_LEN = 0x63,
+	CQE_RX_ERR_L4_PORT = 0x64,
+	CQE_RX_ERR_TCP_FLAG = 0x65,
+	CQE_RX_ERR_TCP_OFFSET = 0x66,
+	CQE_RX_ERR_L4_PCLP = 0x67,
+	CQE_RX_ERR_RBDR_TRUNC = 0x70,
+};
+
+enum send_l4_csum_type {
+	SEND_L4_CSUM_DISABLE = 0x00,
+	SEND_L4_CSUM_UDP = 0x01,
+	SEND_L4_CSUM_TCP = 0x02,
+	SEND_L4_CSUM_SCTP = 0x03,
+};
+
+enum send_crc_alg {
+	SEND_CRCALG_CRC32 = 0x00,
+	SEND_CRCALG_CRC32C = 0x01,
+	SEND_CRCALG_ICRC = 0x02,
+};
+
+enum send_load_type {
+	SEND_LD_TYPE_LDD = 0x00,
+	SEND_LD_TYPE_LDT = 0x01,
+	SEND_LD_TYPE_LDWB = 0x02,
+};
+
+enum send_mem_alg_type {
+	SEND_MEMALG_SET = 0x00,
+	SEND_MEMALG_ADD = 0x08,
+	SEND_MEMALG_SUB = 0x09,
+	SEND_MEMALG_ADDLEN = 0x0A,
+	SEND_MEMALG_SUBLEN = 0x0B,
+};
+
+enum send_mem_dsz_type {
+	SEND_MEMDSZ_B64 = 0x00,
+	SEND_MEMDSZ_B32 = 0x01,
+	SEND_MEMDSZ_B8 = 0x03,
+};
+
+enum sq_subdesc_type {
+	SQ_DESC_TYPE_INVALID = 0x00,
+	SQ_DESC_TYPE_HEADER = 0x01,
+	SQ_DESC_TYPE_CRC = 0x02,
+	SQ_DESC_TYPE_IMMEDIATE = 0x03,
+	SQ_DESC_TYPE_GATHER = 0x04,
+	SQ_DESC_TYPE_MEMORY = 0x05,
+};
+
+enum l3_type_t {
+	L3_NONE		= 0x00,
+	L3_IPV4		= 0x04,
+	L3_IPV4_OPT	= 0x05,
+	L3_IPV6		= 0x06,
+	L3_IPV6_OPT	= 0x07,
+	L3_ET_STOP	= 0x0D,
+	L3_OTHER	= 0x0E
+};
+
+enum l4_type_t {
+	L4_NONE		= 0x00,
+	L4_IPSEC_ESP	= 0x01,
+	L4_IPFRAG	= 0x02,
+	L4_IPCOMP	= 0x03,
+	L4_TCP		= 0x04,
+	L4_UDP_PASS1	= 0x05,
+	L4_GRE		= 0x07,
+	L4_UDP_PASS2	= 0x08,
+	L4_UDP_GENEVE	= 0x09,
+	L4_UDP_VXLAN	= 0x0A,
+	L4_NVGRE	= 0x0C,
+	L4_OTHER	= 0x0E
+};
+
+enum vlan_strip {
+	NO_STRIP = 0x0,
+	STRIP_FIRST_VLAN = 0x1,
+	STRIP_SECOND_VLAN = 0x2,
+	STRIP_RESERV = 0x3
+};
+
+enum rbdr_state {
+	RBDR_FIFO_STATE_INACTIVE = 0,
+	RBDR_FIFO_STATE_ACTIVE   = 1,
+	RBDR_FIFO_STATE_RESET    = 2,
+	RBDR_FIFO_STATE_FAIL     = 3
+};
+
+enum rq_cache_allocation {
+	RQ_CACHE_ALLOC_OFF      = 0,
+	RQ_CACHE_ALLOC_ALL      = 1,
+	RQ_CACHE_ALLOC_FIRST    = 2,
+	RQ_CACHE_ALLOC_TWO      = 3,
+};
+
+enum cq_rx_errlvl_e {
+	CQ_ERRLVL_MAC,
+	CQ_ERRLVL_L2,
+	CQ_ERRLVL_L3,
+	CQ_ERRLVL_L4,
+};
+
+enum cq_rx_errop_e {
+	CQ_RX_ERROP_RE_NONE = 0x0,
+	CQ_RX_ERROP_RE_PARTIAL = 0x1,
+	CQ_RX_ERROP_RE_JABBER = 0x2,
+	CQ_RX_ERROP_RE_FCS = 0x7,
+	CQ_RX_ERROP_RE_TERMINATE = 0x9,
+	CQ_RX_ERROP_RE_RX_CTL = 0xb,
+	CQ_RX_ERROP_PREL2_ERR = 0x1f,
+	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
+	CQ_RX_ERROP_L2_OVERRUN = 0x21,
+	CQ_RX_ERROP_L2_PFCS = 0x22,
+	CQ_RX_ERROP_L2_PUNY = 0x23,
+	CQ_RX_ERROP_L2_MAL = 0x24,
+	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
+	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
+	CQ_RX_ERROP_L2_LENMISM = 0x27,
+	CQ_RX_ERROP_L2_PCLP = 0x28,
+	CQ_RX_ERROP_IP_NOT = 0x41,
+	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
+	CQ_RX_ERROP_IP_MAL = 0x43,
+	CQ_RX_ERROP_IP_MALD = 0x44,
+	CQ_RX_ERROP_IP_HOP = 0x45,
+	CQ_RX_ERROP_L3_ICRC = 0x46,
+	CQ_RX_ERROP_L3_PCLP = 0x47,
+	CQ_RX_ERROP_L4_MAL = 0x61,
+	CQ_RX_ERROP_L4_CHK = 0x62,
+	CQ_RX_ERROP_UDP_LEN = 0x63,
+	CQ_RX_ERROP_L4_PORT = 0x64,
+	CQ_RX_ERROP_TCP_FLAG = 0x65,
+	CQ_RX_ERROP_TCP_OFFSET = 0x66,
+	CQ_RX_ERROP_L4_PCLP = 0x67,
+	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
+};
+
+enum cq_tx_errop_e {
+	CQ_TX_ERROP_GOOD = 0x0,
+	CQ_TX_ERROP_DESC_FAULT = 0x10,
+	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
+	CQ_TX_ERROP_SUBDC_ERR = 0x12,
+	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
+	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
+	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
+	CQ_TX_ERROP_LOCK_VIOL = 0x83,
+	CQ_TX_ERROP_DATA_FAULT = 0x84,
+	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
+	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
+	CQ_TX_ERROP_MEM_FAULT = 0x87,
+	CQ_TX_ERROP_CK_OVERLAP = 0x88,
+	CQ_TX_ERROP_CK_OFLOW = 0x89,
+	CQ_TX_ERROP_ENUM_LAST = 0x8a,
+};
+
+enum rq_sq_stats_reg_offset {
+	RQ_SQ_STATS_OCTS = 0x0,
+	RQ_SQ_STATS_PKTS = 0x1,
+};
+
+enum nic_stat_vnic_rx_e {
+	RX_OCTS = 0,
+	RX_UCAST,
+	RX_BCAST,
+	RX_MCAST,
+	RX_RED,
+	RX_RED_OCTS,
+	RX_ORUN,
+	RX_ORUN_OCTS,
+	RX_FCS,
+	RX_L2ERR,
+	RX_DRP_BCAST,
+	RX_DRP_MCAST,
+	RX_DRP_L3BCAST,
+	RX_DRP_L3MCAST,
+};
+
+enum nic_stat_vnic_tx_e {
+	TX_OCTS = 0,
+	TX_UCAST,
+	TX_BCAST,
+	TX_MCAST,
+	TX_DROP,
+};
+
+#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
+
+typedef uint64_t nicvf_phys_addr_t;
+
+#ifndef __BYTE_ORDER__
+#error __BYTE_ORDER__ not defined
+#endif
+
+/* vNIC HW Structures */
+
+#define NICVF_CQE_RBPTR_WORD         6
+#define NICVF_CQE_RX2_RBPTR_WORD     7
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t cqe_type:4;
+		uint64_t stdn_fault:1;
+		uint64_t rsvd0:1;
+		uint64_t rq_qs:7;
+		uint64_t rq_idx:3;
+		uint64_t rsvd1:12;
+		uint64_t rss_alg:4;
+		uint64_t rsvd2:4;
+		uint64_t rb_cnt:4;
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t l4_type:4;
+		uint64_t l3_type:4;
+		uint64_t l2_present:1;
+		uint64_t err_level:3;
+		uint64_t err_opcode:8;
+#else
+		uint64_t err_opcode:8;
+		uint64_t err_level:3;
+		uint64_t l2_present:1;
+		uint64_t l3_type:4;
+		uint64_t l4_type:4;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+		uint64_t rb_cnt:4;
+		uint64_t rsvd2:4;
+		uint64_t rss_alg:4;
+		uint64_t rsvd1:12;
+		uint64_t rq_idx:3;
+		uint64_t rq_qs:7;
+		uint64_t rsvd0:1;
+		uint64_t stdn_fault:1;
+		uint64_t cqe_type:4;
+#endif
+	};
+} cqe_rx_word0_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t pkt_len:16;
+		uint64_t l2_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l4_ptr:8;
+		uint64_t cq_pkt_len:8;
+		uint64_t align_pad:3;
+		uint64_t rsvd3:1;
+		uint64_t chan:12;
+#else
+		uint64_t chan:12;
+		uint64_t rsvd3:1;
+		uint64_t align_pad:3;
+		uint64_t cq_pkt_len:8;
+		uint64_t l4_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l2_ptr:8;
+		uint64_t pkt_len:16;
+#endif
+	};
+} cqe_rx_word1_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t rss_tag:32;
+		uint64_t vlan_tci:16;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+#else
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan_tci:16;
+		uint64_t rss_tag:32;
+#endif
+	};
+} cqe_rx_word2_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb3_sz;
+		uint16_t rb2_sz;
+		uint16_t rb1_sz;
+		uint16_t rb0_sz;
+#else
+		uint16_t rb0_sz;
+		uint16_t rb1_sz;
+		uint16_t rb2_sz;
+		uint16_t rb3_sz;
+#endif
+	};
+} cqe_rx_word3_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb7_sz;
+		uint16_t rb6_sz;
+		uint16_t rb5_sz;
+		uint16_t rb4_sz;
+#else
+		uint16_t rb4_sz;
+		uint16_t rb5_sz;
+		uint16_t rb6_sz;
+		uint16_t rb7_sz;
+#endif
+	};
+} cqe_rx_word4_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb11_sz;
+		uint16_t rb10_sz;
+		uint16_t rb9_sz;
+		uint16_t rb8_sz;
+#else
+		uint16_t rb8_sz;
+		uint16_t rb9_sz;
+		uint16_t rb10_sz;
+		uint16_t rb11_sz;
+#endif
+	};
+} cqe_rx_word5_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t rsvd2:3;
+		uint64_t inner_l2:1;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l3type:4;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t rsvd1:8;
+		uint64_t rsvd0:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t inner_l4ptr:8;
+#else
+		uint64_t inner_l4ptr:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t rsvd0:8;
+		uint64_t rsvd1:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t inner_l3type:4;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l2:1;
+		uint64_t rsvd2:3;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+#endif
+	};
+} cqe_rx2_word6_t;
+
+struct cqe_rx_t {
+	cqe_rx_word0_t word0;
+	cqe_rx_word1_t word1;
+	cqe_rx_word2_t word2;
+	cqe_rx_word3_t word3;
+	cqe_rx_word4_t word4;
+	cqe_rx_word5_t word5;
+	cqe_rx2_word6_t word6; /* if NIC_PF_RX_CFG[CQE_RX2_ENA] set */
+};
+
+struct cqe_rx_tcp_err_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:60;
+
+	uint64_t   rsvd1:4; /* W1 */
+	uint64_t   partial_first:1;
+	uint64_t   rsvd2:27;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd3:24;
+#else
+	uint64_t   rsvd0:60;
+	uint64_t   cqe_type:4;
+
+	uint64_t   rsvd3:24;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd2:27;
+	uint64_t   partial_first:1;
+	uint64_t   rsvd1:4;
+#endif
+};
+
+struct cqe_rx_tcp_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:52;
+	uint64_t   cq_tcp_status:8;
+
+	uint64_t   rsvd1:32; /* W1 */
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_err_bytes:16;
+#else
+	uint64_t   cq_tcp_status:8;
+	uint64_t   rsvd0:52;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   tcp_err_bytes:16;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd1:32; /* W1 */
+#endif
+};
+
+struct cqe_send_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd1:4;
+	uint64_t   rsvd2:10;
+	uint64_t   sq_qs:7;
+	uint64_t   sq_idx:3;
+	uint64_t   rsvd3:8;
+	uint64_t   send_status:8;
+
+	uint64_t   ptp_timestamp:64; /* W1 */
+#else
+	uint64_t   send_status:8;
+	uint64_t   rsvd3:8;
+	uint64_t   sq_idx:3;
+	uint64_t   sq_qs:7;
+	uint64_t   rsvd2:10;
+	uint64_t   rsvd1:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd0:4;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   ptp_timestamp:64;
+#endif
+};
+
+struct cq_entry_type_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t cqe_type:4;
+	uint64_t __pad:60;
+#else
+	uint64_t __pad:60;
+	uint64_t cqe_type:4;
+#endif
+};
+
+union cq_entry_t {
+	uint64_t u[64];
+	struct cq_entry_type_t type;
+	struct cqe_rx_t rx_hdr;
+	struct cqe_rx_tcp_t rx_tcp_hdr;
+	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
+	struct cqe_send_t cqe_send;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
+
+struct rbdr_entry_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	union {
+		struct {
+			uint64_t   rsvd0:15;
+			uint64_t   buf_addr:42;
+			uint64_t   cache_align:7;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#else
+	union {
+		struct {
+			uint64_t   cache_align:7;
+			uint64_t   buf_addr:42;
+			uint64_t   rsvd0:15;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#endif
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
+
+/* TCP reassembly context */
+struct rbe_tcp_cnxt_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   tcp_pkt_cnt:12;
+	uint64_t   rsvd1:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   ptr_bytes:16;
+	uint64_t   rsvd2:24;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd0:54;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   tcp_status:4;
+#else
+	uint64_t   tcp_status:4;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   rsvd0:54;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd2:24;
+	uint64_t   ptr_bytes:16;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   rsvd1:4;
+	uint64_t   tcp_pkt_cnt:12;
+#endif
+};
+
+/* Always Big endian */
+struct rx_hdr_t {
+	uint64_t   opaque:32;
+	uint64_t   rss_flow:8;
+	uint64_t   skip_length:6;
+	uint64_t   disable_rss:1;
+	uint64_t   disable_tcp_reassembly:1;
+	uint64_t   nodrop:1;
+	uint64_t   dest_alg:2;
+	uint64_t   rsvd0:2;
+	uint64_t   dest_rq:11;
+};
+
+struct sq_crc_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    rsvd1:32;
+	uint64_t    crc_ival:32;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_alg:2;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_len:16;
+#else
+	uint64_t    crc_len:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_alg:2;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_ival:32;
+	uint64_t    rsvd1:32;
+#endif
+};
+
+struct sq_gather_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    ld_type:2;
+	uint64_t    rsvd0:42;
+	uint64_t    size:16;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    size:16;
+	uint64_t    rsvd0:42;
+	uint64_t    ld_type:2;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+/* SQ immediate subdescriptor */
+struct sq_imm_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    rsvd0:46;
+	uint64_t    len:14;
+
+	uint64_t    data:64; /* W1 */
+#else
+	uint64_t    len:14;
+	uint64_t    rsvd0:46;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    data:64; /* W1 */
+#endif
+};
+
+struct sq_mem_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    mem_alg:4;
+	uint64_t    mem_dsz:2;
+	uint64_t    wmem:1;
+	uint64_t    rsvd0:21;
+	uint64_t    offset:32;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    offset:32;
+	uint64_t    rsvd0:21;
+	uint64_t    wmem:1;
+	uint64_t    mem_dsz:2;
+	uint64_t    mem_alg:4;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+struct sq_hdr_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4;
+	uint64_t    tso:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    dont_send:1;
+	uint64_t    tstmp:1;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    csum_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    rsvd0:2;
+	uint64_t    l4_offset:8;
+	uint64_t    l3_offset:8;
+	uint64_t    rsvd1:4;
+	uint64_t    tot_len:20; /* W0 */
+
+	uint64_t    rsvd2:24;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    tso_start:8;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_max_paysize:14; /* W1 */
+#else
+	uint64_t    tot_len:20;
+	uint64_t    rsvd1:4;
+	uint64_t    l3_offset:8;
+	uint64_t    l4_offset:8;
+	uint64_t    rsvd0:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_l4:2;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    tstmp:1;
+	uint64_t    dont_send:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    tso:1;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    tso_max_paysize:14;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_start:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    rsvd2:24; /* W1 */
+#endif
+};
+
+/* Each sq entry is 128 bits wide */
+union sq_entry_t {
+	uint64_t buff[2];
+	struct sq_hdr_subdesc hdr;
+	struct sq_imm_subdesc imm;
+	struct sq_gather_subdesc gather;
+	struct sq_crc_subdesc crc;
+	struct sq_mem_subdesc mem;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
+
+/* Queue config register formats */
+struct rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_2_63:62;
+	uint64_t ena:1;
+	uint64_t reserved_0:1;
+#else
+	uint64_t reserved_0:1;
+	uint64_t ena:1;
+	uint64_t reserved_2_63:62;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct cq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_43_63:21;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t caching:1;
+	uint64_t reserved_35_39:5;
+	uint64_t qsize:3;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_0_15:16;
+#else
+	uint64_t reserved_0_15:16;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:3;
+	uint64_t reserved_35_39:5;
+	uint64_t caching:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_43_63:21;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct sq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_20_63:44;
+	uint64_t ena:1;
+	uint64_t reserved_18_18:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_11_15:5;
+	uint64_t qsize:3;
+	uint64_t reserved_3_7:5;
+	uint64_t tstmp_bgx_intf:3;
+#else
+	uint64_t tstmp_bgx_intf:3;
+	uint64_t reserved_3_7:5;
+	uint64_t qsize:3;
+	uint64_t reserved_11_15:5;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t reserved_18_18:1;
+	uint64_t ena:1;
+	uint64_t reserved_20_63:44;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct rbdr_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_45_63:19;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_36_41:6;
+	uint64_t qsize:4;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_12_15:4;
+	uint64_t lines:12;
+#else
+	uint64_t lines:12;
+	uint64_t reserved_12_15:4;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:4;
+	uint64_t reserved_36_41:6;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_45_63:19;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_qs_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_32_63:32;
+	uint64_t ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t sq_ins_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t lock_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t be:1;
+	uint64_t reserved_7_15:9;
+	uint64_t vnic:7;
+#else
+	uint64_t vnic:7;
+	uint64_t reserved_7_15:9;
+	uint64_t be:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t lock_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t sq_ins_ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t ena:1;
+	uint64_t reserved_32_63:32;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved1:1;
+	uint64_t reserved0:34;
+	uint64_t strip_pre_l2:1;
+	uint64_t caching:2;
+	uint64_t cq_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_strt_idx:1;
+#else
+	uint64_t rbdr_strt_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t cq_qs:7;
+	uint64_t caching:2;
+	uint64_t strip_pre_l2:1;
+	uint64_t reserved0:34;
+	uint64_t reserved1:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_drop_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t rbdr_red:1;
+	uint64_t cq_red:1;
+	uint64_t reserved3:14;
+	uint64_t rbdr_pass:8;
+	uint64_t rbdr_drop:8;
+	uint64_t reserved2:8;
+	uint64_t cq_pass:8;
+	uint64_t cq_drop:8;
+	uint64_t reserved1:8;
+#else
+	uint64_t reserved1:8;
+	uint64_t cq_drop:8;
+	uint64_t cq_pass:8;
+	uint64_t reserved2:8;
+	uint64_t rbdr_drop:8;
+	uint64_t rbdr_pass:8;
+	uint64_t reserved3:14;
+	uint64_t cq_red:1;
+	uint64_t rbdr_red:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+#endif /* _THUNDERX_NICVF_HW_DEFS_H */
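
As a quick illustration of how these register overlays are meant to be
consumed (a minimal sketch only; `nic` and `qidx` come from the
surrounding driver context and are assumed here, and
nicvf_mbox_rq_config() is the mailbox helper declared in nicvf_mbox.h
further below in this patch):

    struct pf_rq_cfg rq_cfg;

    rq_cfg.value = 0;                       /* clears all fields, incl. reserved */
    rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;  /* cache first buffer only */
    rq_cfg.cq_qs = nic->vf_id;              /* completion queue Qset */
    rq_cfg.cq_idx = qidx;                   /* completion queue index */
    nicvf_mbox_rq_config(nic, qidx, &rq_cfg); /* PF consumes rq_cfg.value */

The endian-specific bit-field layouts stay hidden behind the union; the
caller only ever hands the flat 64-bit `value` to the PF.
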
diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
new file mode 100644
index 0000000..705523a
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.c
@@ -0,0 +1,416 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "nicvf_plat.h"
+
+static const char *mbox_message[NIC_MBOX_MSG_MAX] = {
+	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
+	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
+	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
+	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
+	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
+	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
+	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
+	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
+	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
+	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
+	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
+	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
+	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
+	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
+	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
+	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
+	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
+	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
+	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
+	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
+	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
+	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
+};
+
+static inline const char *
+nicvf_mbox_msg_str(int msg)
+{
+	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
+	/* undefined messages */
+	if (mbox_message[msg] == NULL)
+		msg = 0;
+	return mbox_message[msg];
+}
+
+static inline void
+nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	uint64_t *mbx_data;
+	uint64_t mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (uint64_t *)mbx;
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		nicvf_reg_write(nic, mbx_addr, *mbx_data);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+	nicvf_mbox_log("msg sent %s (VF%d)",
+		    nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
+}
+
+static inline void
+nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+	/* Messages without an ack are racy! */
+	nicvf_delay_us(1000);
+}
+
+static inline int
+nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	long timeout;
+	long sleep = 10;
+	int i, retry = 5;
+
+	for (i = 0; i < retry; i++) {
+		nic->pf_acked = false;
+		nic->pf_nacked = false;
+		nicvf_smp_wmb();
+
+		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+		/* Give some time to get PF response */
+		nicvf_delay_us(1000);
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+		while (timeout > 0) {
+			/* Periodic poll happens from nicvf_interrupt() */
+			nicvf_smp_rmb();
+
+			if (nic->pf_nacked)
+				return -EINVAL;
+			if (nic->pf_acked)
+				return 0;
+
+			nicvf_delay_us(1000);
+			timeout -= sleep;
+		}
+		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
+			    mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
+			    nic->vf_id, i, retry);
+	}
+	return -EBUSY;
+}
+
+int
+nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	struct nic_mbx mbx;
+	uint64_t *mbx_data = (uint64_t *)&mbx;
+	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	size_t i;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+
+	/* Overwrite the message so we won't receive it again */
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
+
+	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
+		    nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
+
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
+				(struct ether_addr *)nic->mac_addr);
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_NACK:
+		nic->pf_nacked = true;
+		break;
+	case NIC_MBOX_MSG_RSS_SIZE:
+		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
+		nic->pf_acked = true;
+		break;
+	default:
+		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
+			    mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
+		break;
+	}
+	nicvf_smp_wmb();
+
+	return mbx.msg.msg;
+}
+
+/*
+ * Checks whether the VF is able to communicate with the PF
+ * and also gets the VNIC number this VF is associated with.
+ */
+int
+nicvf_mbox_check_pf_ready(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
+{
+	struct nic_mbx mbx = { .msg = {0} };
+	int i;
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
+	mbx.mac.vf_id = nic->vf_id;
+	for (i = 0; i < NICVF_MAC_ADDR_SIZE; i++)
+		mbx.mac.mac_addr[i] = mac[i];
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
+	mbx.cpi_cfg.vf_id = nic->vf_id;
+	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
+	mbx.cpi_cfg.rq_cnt = qcnt;
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_get_rss_size(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
+	mbx.rss_size.vf_id = nic->vf_id;
+
+	/* Result will be stored in nic->rss_info.rss_size */
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_rss(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+	size_t tot_len = rss->rss_size;
+	size_t cur_len;
+	size_t cur_idx = 0;
+	size_t i;
+
+	mbx.rss_cfg.vf_id = nic->vf_id;
+	mbx.rss_cfg.hash_bits = rss->hash_bits;
+	mbx.rss_cfg.tbl_len = 0;
+	mbx.rss_cfg.tbl_offset = 0;
+
+	while (cur_idx < tot_len) {
+		cur_len = nicvf_min(tot_len - cur_idx,
+				  (size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
+		mbx.msg.msg = (cur_idx > 0) ?
+			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
+		mbx.rss_cfg.tbl_offset = cur_idx;
+		mbx.rss_cfg.tbl_len = cur_len;
+		for (i = 0; i < cur_len; i++)
+			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
+
+		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
+			return NICVF_ERR_RSS_TBL_UPDATE;
+	}
+
+	return 0;
+}
+
+int
+nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+		     struct pf_rq_cfg *pf_rq_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = pf_rq_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
+	mbx.sq.qs_num = nic->vf_id;
+	mbx.sq.sq_num = qidx;
+	mbx.sq.sqs_mode = nic->sqs_mode;
+	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	qs_cfg->be = 1;
+#endif
+	/* Send a mailbox msg to PF to config Qset */
+	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
+	mbx.qs.num = nic->vf_id;
+	mbx.qs.cfg = qs_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct pf_rq_drop_cfg *drop_cfg;
+
+	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
+	drop_cfg->value = 0;
+	if (enable) {
+		drop_cfg->cq_red = 1;
+		drop_cfg->cq_drop = 2;
+	}
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
+	mbx.frs.max_frs = mtu;
+	mbx.frs.vf_id = nic->vf_id;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_sync(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	/* Make sure all packets in the pipeline are written back into mem */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
+	mbx.rq.cfg = 0;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = 0;
+	if (enable)
+		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
+	mbx.lbk.vf_id = nic->vf_id;
+	mbx.lbk.enable = enable;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
+			       uint16_t sq_stat_mask)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
+	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
+	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
+	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
+	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_shutdown(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+	nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_cfg_done(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
+}
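
Putting the helpers together, a minimal sketch of the init-time flow
they support (error handling trimmed; `nic` and `mtu` are assumed to
come from the ethdev probe path):

    if (nicvf_mbox_check_pf_ready(nic))    /* READY handshake with the PF */
        return -EBUSY;
    if (nicvf_mbox_get_rss_size(nic))      /* fills nic->rss_info.rss_size */
        return -EIO;
    if (nicvf_mbox_update_hw_max_frs(nic, mtu))
        return -EIO;
    nicvf_mbox_cfg_done(nic);              /* async: CFG_DONE is never acked */
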
diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
new file mode 100644
index 0000000..7c0c6a9
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.h
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_MBOX__
+#define __THUNDERX_NICVF_MBOX__
+
+#include <stdint.h>
+
+#include "nicvf_plat.h"
+
+/* PF <--> VF mailbox communication
+ * Two 64-bit registers are shared between the PF and each VF;
+ * writing into the second register signals the end of a message.
+ */
+
+/* PF <--> VF mailbox communication */
+#define	NIC_PF_VF_MAILBOX_SIZE		2
+#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
+
+/* Mailbox message types */
+#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
+#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
+#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
+#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
+#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
+#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
+#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
+#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
+#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
+#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
+#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
+#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
+#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
+#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
+#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
+#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
+#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
+
+/* Get vNIC VF configuration */
+struct nic_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    node_id;
+	bool	   tns_mode:1;
+	bool	   sqs_mode:1;
+	bool	   loopback_supported:1;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	uint8_t    msg;
+	uint8_t    num;
+	uint8_t    sqs_count;
+	uint64_t   cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    rq_num;
+	uint64_t   cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    sq_num;
+	bool       sqs_mode;
+	uint64_t   cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    rq_cnt;
+	uint8_t    cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    hash_bits;
+	uint8_t    tbl_len;
+	uint8_t    tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	uint8_t    msg;
+	uint8_t    link_up;
+	uint8_t    duplex;
+	uint32_t   speed;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	bool	   enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	uint8_t    msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	uint16_t   rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	uint8_t    tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	uint16_t   rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	uint16_t   sq_stat_mask;
+};
+
+/* 128 bit shared memory between PF and each VF */
+struct nic_mbx {
+union {
+	struct { uint8_t msg; }	msg;
+	struct nic_cfg_msg	nic_cfg;
+	struct qs_cfg_msg	qs;
+	struct rq_cfg_msg	rq;
+	struct sq_cfg_msg	sq;
+	struct set_mac_msg	mac;
+	struct set_frs_msg	frs;
+	struct cpi_cfg_msg	cpi_cfg;
+	struct rss_sz_msg	rss_size;
+	struct rss_cfg_msg	rss_cfg;
+	struct bgx_link_status  link_status;
+	struct set_loopback	lbk;
+	struct reset_stat_cfg	reset_stat;
+};
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
+
+int nicvf_handle_mbx_intr(struct nicvf *nic);
+int nicvf_mbox_check_pf_ready(struct nicvf *nic);
+int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
+int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct pf_rq_cfg *pf_rq_cfg);
+int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
+int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
+int nicvf_mbox_get_rss_size(struct nicvf *nic);
+int nicvf_mbox_config_rss(struct nicvf *nic);
+int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
+int nicvf_mbox_rq_sync(struct nicvf *nic);
+int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
+int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
+void nicvf_mbox_shutdown(struct nicvf *nic);
+void nicvf_mbox_cfg_done(struct nicvf *nic);
+
+#endif /* __THUNDERX_NICVF_MBOX__ */
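
For reference, a hedged sketch of how a caller might build the stat
reset masks documented in struct reset_stat_cfg above (the mask values
are illustrative, derived from the comments, not taken from the
driver):

    uint16_t rx_mask = (1 << 14) - 1; /* all 14 RX_* VNIC counters */
    uint8_t  tx_mask = (1 << 5) - 1;  /* all 5 TX_* VNIC counters */
    uint16_t rq_mask = 0x3;           /* RQ0 octet+packet counters (bit0, bit1) */
    uint16_t sq_mask = 0x3;           /* SQ0 octet+packet counters (bit0, bit1) */

    nicvf_mbox_reset_stat_counters(nic, rx_mask, tx_mask, rq_mask, sq_mask);
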
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
new file mode 100644
index 0000000..83c1844
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -0,0 +1,132 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_H
+#define _THUNDERX_NICVF_H
+
+/* Platform/OS/arch specific abstractions */
+
+/* log */
+#include <rte_log.h>
+#include "../nicvf_logs.h"
+
+#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
+
+#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
+
+/* delay */
+#include <rte_cycles.h>
+#define nicvf_delay_us(x) rte_delay_us(x)
+
+/* barrier */
+#include <rte_atomic.h>
+#define nicvf_smp_wmb() rte_smp_wmb()
+#define nicvf_smp_rmb() rte_smp_rmb()
+
+/* utils */
+#include <rte_common.h>
+#define nicvf_min(x, y) RTE_MIN(x, y)
+
+/* byte order */
+#include <rte_byteorder.h>
+#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
+#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
+
+/* Constants */
+#include <rte_ether.h>
+#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
+#define nicvf_prefetch_store_keep(_ptr) ({\
+	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	asm volatile(
+		    "str %x[val], [%x[addr]]"
+		    :
+		    : [val] "r" (val), [addr] "r" (addr));
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	uint64_t val;
+
+	asm volatile(
+		    "ldr %x[val], [%x[addr]]"
+		    : [val] "=r" (val)
+		    : [addr] "r" (addr));
+	return val;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
+			asm volatile(			\
+			"ldp %x[x1], %x[x0], [%x[p1]]"	\
+			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
+			: [p1]"r"(addr)			\
+			); })
+
+#else /* fallback (non-optimized) functions for non-arm64 builds */
+
+#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	*(volatile uint64_t *)addr = val;
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	return	*(volatile uint64_t *)addr;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
+do {							\
+	reg1 = nicvf_addr_read((uintptr_t)addr);	\
+	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
+} while (0)
+
+#endif
+
+#include "nicvf_hw.h"
+#include "nicvf_mbox.h"
+
+#endif /* _THUNDERX_NICVF_H */
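
Regardless of the architecture branch taken above, call sites look the
same; a minimal sketch (REG_OFFSET is a placeholder, not a real
ThunderX register, and reg_base is assumed to hold the BAR0 mapping
set up by the ethdev init code):

    uint64_t val;

    val = nicvf_addr_read(nic->reg_base + REG_OFFSET);
    nicvf_addr_write(nic->reg_base + REG_OFFSET, val | 0x1);

On arm64 this compiles to the ldr/str inline assembly; elsewhere it
falls back to the plain volatile pointer accesses.
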
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-05-07 15:16 ` [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-09 17:40   ` Stephen Hemminger
                     ` (3 more replies)
  2016-05-07 15:16 ` [PATCH 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
                   ` (21 subsequent siblings)
  23 siblings, 4 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Introduce driver initialization and enable build infrastructure for
the nicvf PMD driver.

By default, it is enabled only for the defconfig_arm64-thunderx-*
configs as it is an inbuilt NIC device.
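
On other targets it can be enabled explicitly by turning on the PMD
flag introduced below in the target config:

    CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y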

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 config/common_base                                 |  10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
 drivers/net/Makefile                               |   1 +
 drivers/net/thunderx/Makefile                      |  64 +++++
 drivers/net/thunderx/nicvf_ethdev.c                | 274 +++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  49 ++++
 drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
 drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
 mk/rte.app.mk                                      |   2 +
 10 files changed, 621 insertions(+)
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

diff --git a/config/common_base b/config/common_base
index 35d38d9..a0e2b50 100644
--- a/config/common_base
+++ b/config/common_base
@@ -259,6 +259,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
 CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
 
 #
+# Compile burst-oriented Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index fe5e987..7940bbd 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,3 +34,13 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+
+#
+# Compile Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 3386a67..58dafb5 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -49,6 +49,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
+DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
new file mode 100644
index 0000000..69bb750
--- /dev/null
+++ b/drivers/net/thunderx/Makefile
@@ -0,0 +1,64 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_thunderx_nicvf.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
+
+
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
new file mode 100644
index 0000000..3c545b4
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -0,0 +1,274 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <sys/queue.h>
+#include <sys/timerfd.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+
+#include "nicvf_logs.h"
+
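+/*
+ * No dedicated VF interrupt line is used here: a timerfd-backed alarm
+ * fires every NICVF_INTR_POLL_INTERVAL_MS and nicvf_interrupt() polls
+ * the VF interrupt/mailbox status registers instead.
+ */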
+static struct itimerspec alarm_time = {
+	.it_interval = {
+		.tv_sec = 0,
+		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
+	},
+	.it_value = {
+		.tv_sec = 0,
+		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
+	},
+};
+
+static void
+nicvf_interrupt(struct rte_intr_handle *hdl __rte_unused, void *arg)
+{
+	struct nicvf *nic = (struct nicvf *)arg;
+
+	nicvf_reg_poll_interrupts(nic);
+}
+
+static int
+nicvf_periodic_alarm_start(struct nicvf *nic)
+{
+	int ret = -EBUSY;
+
+	nic->intr_handle.type = RTE_INTR_HANDLE_ALARM;
+	nic->intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
+	if (nic->intr_handle.fd == -1)
+		goto error;
+	ret = rte_intr_callback_register(&nic->intr_handle,
+					 nicvf_interrupt, nic);
+	ret |= timerfd_settime(nic->intr_handle.fd, 0, &alarm_time, NULL);
+error:
+	return ret;
+}
+
+static int
+nicvf_periodic_alarm_stop(struct nicvf *nic)
+{
+	int ret;
+
+	ret = rte_intr_callback_unregister(&nic->intr_handle,
+					   nicvf_interrupt, nic);
+	ret |= close(nic->intr_handle.fd);
+	return ret;
+}
+
+/* Initialise and register the driver with the DPDK application */
+static const struct eth_dev_ops nicvf_eth_dev_ops = {
+};
+
+static int
+nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int ret;
+	struct rte_pci_device *pci_dev;
+	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &nicvf_eth_dev_ops;
+
+	pci_dev = eth_dev->pci_dev;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	nic->device_id = pci_dev->id.device_id;
+	nic->vendor_id = pci_dev->id.vendor_id;
+	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	nic->eth_dev = eth_dev;
+
+	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
+		     pci_dev->id.vendor_id, pci_dev->id.device_id,
+		     pci_dev->addr.domain, pci_dev->addr.bus,
+		     pci_dev->addr.devid, pci_dev->addr.function);
+
+	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!nic->reg_base) {
+		PMD_INIT_LOG(ERR, "Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	nicvf_disable_all_interrupts(nic);
+
+	ret = nicvf_periodic_alarm_start(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start period alarm");
+		goto fail;
+	}
+
+	ret = nicvf_mbox_check_pf_ready(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
+		goto alarm_fail;
+	} else {
+		PMD_INIT_LOG(INFO,
+			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
+			nic->node, nic->vf_id,
+			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
+			nic->sqs_mode ? "true" : "false",
+			nic->loopback_supported ? "true" : "false"
+			);
+	}
+
+	if (nic->sqs_mode) {
+		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, Detaching...");
+		/* Detach port by returning postive error number */
+		ret = ENOTSUP;
+		goto alarm_fail;
+	}
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
+		ret = -ENOMEM;
+		goto alarm_fail;
+	}
+	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
+		eth_random_addr(&nic->mac_addr[0]);
+
+	ether_addr_copy((struct ether_addr *)nic->mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set mac addr");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_base_init(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_mbox_get_rss_size(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get rss table size");
+		goto malloc_fail;
+	}
+
+	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
+		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
+		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
+		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
+
+	return 0;
+
+malloc_fail:
+	rte_free(eth_dev->data->mac_addrs);
+alarm_fail:
+	nicvf_periodic_alarm_stop(nic);
+fail:
+	return ret;
+}
+
+static struct rte_pci_id pci_id_nicvf_map[] = {
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
+	},
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct eth_driver rte_nicvf_pmd = {
+	.pci_drv = {
+		.name = "rte_nicvf_pmd",
+		.id_table = pci_id_nicvf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = nicvf_eth_dev_init,
+	.dev_private_size = sizeof(struct nicvf),
+};
+
+static int
+rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
+		     THUNDERX_NICVF_PMD_VERSION);
+
+	rte_eth_driver_register(&rte_nicvf_pmd);
+	return 0;
+}
+
+static struct rte_driver rte_nicvf_driver = {
+	.name = "nicvf_driver",
+	.type = PMD_PDEV,
+	.init = rte_nicvf_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_nicvf_driver);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
new file mode 100644
index 0000000..6431329
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -0,0 +1,49 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_ETHDEV_H__
+#define __THUNDERX_NICVF_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+#define THUNDERX_NICVF_PMD_VERSION      "1.0"
+
+#define NICVF_INTR_POLL_INTERVAL_MS	50
+
+static inline struct nicvf*
+nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return (struct nicvf *)eth_dev->data->dev_private;
+}
+
+
+#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
new file mode 100644
index 0000000..0667d46
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_logs.h
@@ -0,0 +1,83 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_LOGS__
+#define __THUNDERX_NICVF_LOGS__
+
+#include <assert.h>
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_RX_ASSERT(x) assert(x)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_RX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_TX_ASSERT(x) assert(x)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_TX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
+#define PMD_MBOX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_MBOX_FUNC_TRACE() PMD_MBOX_LOG(DEBUG, ">>")
+#else
+#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
+#define PMD_MBOX_FUNC_TRACE() do { } while (0)
+#endif
+
+#endif /* __THUNDERX_NICVF_LOGS__ */
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
new file mode 100644
index 0000000..aae898e
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -0,0 +1,124 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_STRUCT_H
+#define _THUNDERX_NICVF_STRUCT_H
+
+#include <stdint.h>
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_ethdev.h>
+#include <rte_memory.h>
+
+struct nicvf_rbdr {
+	uint64_t rbdr_status;
+	uint64_t rbdr_door;
+	struct rbdr_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	uint32_t buffsz;
+	uint32_t tail;
+	uint32_t next_tail;
+	uint32_t head;
+	uint32_t qlen_mask;
+} __rte_cache_aligned;
+
+struct nicvf_txq {
+	union sq_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	struct rte_mbuf **txbuffs;
+	uint64_t sq_head;
+	uint64_t sq_door;
+	struct rte_mempool *pool;
+	struct nicvf *nic;
+	uint32_t head;
+	uint32_t tail;
+	int32_t xmit_bufs;
+	uint32_t qlen_mask;
+	uint32_t txq_flags;
+	uint16_t queue_id;
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	uint8_t	port_id;
+} __rte_cache_aligned;
+
+struct nicvf_rxq {
+	uint64_t mbuf_phys_off;
+	uint64_t cq_status;
+	uint64_t cq_door;
+	nicvf_phys_addr_t phys;
+	union cq_entry_t *desc;
+	struct nicvf_rbdr *shared_rbdr;
+	struct nicvf *nic;
+	struct rte_mempool *pool;
+	uint32_t head;
+	uint32_t qlen_mask;
+	int32_t available_space;
+	int32_t recv_buffers;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t precharge_cnt;
+	uint8_t rx_drop_en;
+	uint8_t  port_id;
+	uint8_t  rbptr_offset;
+} __rte_cache_aligned;
+
+struct nicvf {
+	uint8_t vf_id;
+	uint8_t node;
+	uintptr_t reg_base;
+	bool tns_mode;
+	bool sqs_mode;
+	bool loopback_supported;
+	bool pf_acked:1;
+	bool pf_nacked:1;
+	uint64_t hwcap;
+	uint8_t link_up;
+	uint8_t	duplex;
+	uint32_t speed;
+	uint32_t msg_enable;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+	struct nicvf_rbdr *rbdr;
+	struct nicvf_rss_reta_info rss_info;
+	struct rte_eth_dev *eth_dev;
+	struct rte_intr_handle intr_handle;
+	uint8_t cpi_alg;
+	uint16_t mtu;
+	bool vlan_filter_en;
+	uint8_t mac_addr[ETHER_ADDR_LEN];
+} __rte_cache_aligned;
+
+#endif /* _THUNDERX_NICVF_STRUCT_H */
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
new file mode 100644
index 0000000..349c6e1
--- /dev/null
+++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
@@ -0,0 +1,4 @@
+DPDK_16.07 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c66e491..f5f2f5c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -101,6 +101,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2)   += -lsze2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD)        += -lm
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lm
 # QAT / AESNI GCM PMDs are dependent on libcrypto (from openssl)
 # for calculating HMAC precomputes
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
@@ -148,6 +149,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING)       += -lrte_pmd_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
 
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
-- 
2.1.0


* [PATCH 03/20] thunderx/nicvf: add link status and link update support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-05-07 15:16 ` [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-05-07 15:16 ` [PATCH 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-06-08 16:10   ` Ferruh Yigit
  2016-05-07 15:16 ` [PATCH 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
                   ` (20 subsequent siblings)
  23 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Extended the nicvf_interrupt function to respond
NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from PF and update
struct rte_eth_link accordingly.
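
For illustration only (not part of this patch), an application could
consume the resulting LSC events roughly as below; the callback and
helper names and the port id are placeholder assumptions:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_log.h>

/* Hypothetical application-side handler; assumes the port was
 * configured with dev_conf.intr_conf.lsc = 1 so the PMD raises
 * RTE_ETH_EVENT_INTR_LSC from nicvf_interrupt(). */
static void
lsc_event_cb(uint8_t port_id, enum rte_eth_event_type event, void *cb_arg)
{
	struct rte_eth_link link;

	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	/* Non-blocking read of the link state the PMD just wrote */
	rte_eth_link_get_nowait(port_id, &link);
	RTE_LOG(INFO, USER1, "port %u link %s, %u Mbps\n", port_id,
		link.link_status ? "up" : "down", link.link_speed);
}

static void
register_lsc(uint8_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}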

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 55 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_ethdev.h |  4 +++
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3c545b4..e6f0b3e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,35 @@
 
 #include "nicvf_logs.h"
 
+static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+
+static inline int
+nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
+			       struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+	    *(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline void
+nicvf_set_eth_link_status(struct nicvf *nic, struct rte_eth_link *link)
+{
+	link->link_status = nic->link_up;
+	link->link_duplex = ETH_LINK_AUTONEG;
+	if (nic->duplex == NICVF_HALF_DUPLEX)
+		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+	else if (nic->duplex == NICVF_FULL_DUPLEX)
+		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = nic->speed;
+	link->link_autoneg = ETH_LINK_SPEED_AUTONEG;
+}
+
 static struct itimerspec alarm_time = {
 	.it_interval = {
 		.tv_sec = 0,
@@ -85,7 +114,13 @@ nicvf_interrupt(struct rte_intr_handle *hdl __rte_unused, void *arg)
 {
 	struct nicvf *nic = (struct nicvf *)arg;
 
-	nicvf_reg_poll_interrupts(nic);
+	if (nicvf_reg_poll_interrupts(nic) == NIC_MBOX_MSG_BGX_LINK_CHANGE) {
+		if (nic->eth_dev->data->dev_conf.intr_conf.lsc)
+			nicvf_set_eth_link_status(nic,
+					&nic->eth_dev->data->dev_link);
+		_rte_eth_dev_callback_process(nic->eth_dev,
+					      RTE_ETH_EVENT_INTR_LSC);
+	}
 }
 
 static int
@@ -115,9 +150,27 @@ nicvf_periodic_alarm_stop(struct nicvf *nic)
 	return ret;
 }
 
+/*
+ * Return 0 if the link status changed, -1 if it did not
+ */
+static int
+nicvf_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	struct rte_eth_link link;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&link, 0, sizeof(link));
+	nicvf_set_eth_link_status(nic, &link);
+	return nicvf_atomic_write_link_status(dev, &link);
+}
+
 
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.link_update              = nicvf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 6431329..cc19da5 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -38,6 +38,10 @@
 #define THUNDERX_NICVF_PMD_VERSION      "1.0"
 
 #define NICVF_INTR_POLL_INTERVAL_MS	50
+#define NICVF_HALF_DUPLEX		0x00
+#define NICVF_FULL_DUPLEX		0x01
+#define NICVF_UNKNOWN_DUPLEX		0xff
+
 
 static inline struct nicvf*
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.1.0


* [PATCH 04/20] thunderx/nicvf: add get_reg and get_reg_length support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (2 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-12 15:39   ` Pattan, Reshma
  2016-05-07 15:16 ` [PATCH 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
                   ` (19 subsequent siblings)
  23 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
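
For illustration (not part of the patch), a hedged sketch of how an
application might pull the dump through the ethdev API; the helper name
is hypothetical, and the 64-bit entry width mirrors the regs->data
usage in nicvf_dev_get_regs() below:

#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_nicvf_regs(uint8_t port_id)
{
	struct rte_dev_reg_info info;
	int count = rte_eth_dev_get_reg_length(port_id);

	if (count <= 0)
		return -1;
	memset(&info, 0, sizeof(info));
	/* nicvf fills regs->data with 64-bit register values */
	info.data = calloc(count, sizeof(uint64_t));
	if (info.data == NULL)
		return -1;
	info.length = 0; /* 0 requests the full dump path */
	if (rte_eth_dev_get_reg_info(port_id, &info) != 0) {
		free(info.data);
		return -1;
	}
	/* info.version now encodes (vendor_id << 16 | device_id) */
	free(info.data);
	return count;
}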

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index e6f0b3e..92b08a5 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -70,6 +70,9 @@
 #include "nicvf_logs.h"
 
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
+static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
+			      struct rte_dev_reg_info *regs);
 
 static inline int
 nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
@@ -167,10 +170,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 	return nicvf_atomic_write_link_status(dev, &link);
 }
 
+static int
+nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
+{
+	return nicvf_reg_get_count();
+}
+
+static int
+nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
+{
+	uint64_t *data = regs->data;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (data == NULL)
+		return -EINVAL;
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+		(regs->length == (uint32_t)nicvf_reg_get_count())) {
+		regs->version = nic->vendor_id << 16 | nic->device_id;
+		nicvf_reg_dump(nic, data);
+		return 0;
+	}
+	return -ENOTSUP;
+}
 
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.link_update              = nicvf_dev_link_update,
+	.get_reg_length           = nicvf_dev_get_reg_length,
+	.get_reg                  = nicvf_dev_get_regs,
 };
 
 static int
-- 
2.1.0


* [PATCH 05/20] thunderx/nicvf: add dev_configure support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (3 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
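
For reference, a minimal port configuration that passes these checks
could look as below; the single Rx/Tx queue counts and the helper name
are illustrative assumptions, not mandated by the driver:

#include <rte_ethdev.h>

/* Minimal rte_eth_conf accepted by nicvf_dev_configure(): RSS Rx
 * mode, CRC strip on, and none of the rejected features. */
static const struct rte_eth_conf nicvf_port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,	/* NONE or RSS only */
		.hw_strip_crc = 1,		/* forced on anyway */
		.hw_ip_checksum = 0,		/* Rx csum not supported */
	},
	.txmode = {
		.mq_mode = ETH_MQ_TX_NONE,	/* no DCB/VMDq on Tx */
	},
};

static int
configure_port(uint8_t port_id)
{
	return rte_eth_dev_configure(port_id, 1, 1, &nicvf_port_conf);
}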

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 79 +++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 92b08a5..6a153e7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,7 @@
 
 #include "nicvf_logs.h"
 
+static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
@@ -195,8 +196,86 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &conf->rxmode;
+	struct rte_eth_txmode *txmode = &conf->txmode;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!rte_eal_has_hugepages()) {
+		PMD_INIT_LOG(INFO, "Huge pages are not configured");
+		return -EINVAL;
+	}
+
+	if (txmode->mq_mode) {
+		PMD_INIT_LOG(INFO, "Tx mq_mode DCB or VMDq not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc) {
+		PMD_INIT_LOG(NOTICE, "Can't disable hw crc strip");
+		rxmode->hw_strip_crc = 1;
+	}
+
+	if (rxmode->hw_ip_checksum) {
+		PMD_INIT_LOG(NOTICE, "Rxcksum not supported");
+		rxmode->hw_ip_checksum = 0;
+	}
+
+	if (rxmode->split_hdr_size) {
+		PMD_INIT_LOG(INFO, "Rxmode does not support split header");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_filter) {
+		PMD_INIT_LOG(INFO, "VLAN filter not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_extend) {
+		PMD_INIT_LOG(INFO, "VLAN extended not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro) {
+		PMD_INIT_LOG(INFO, "LRO not supported");
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	if (conf->dcb_capability_en) {
+		PMD_INIT_LOG(INFO, "DCB enable not supported");
+		return -EINVAL;
+	}
+
+	if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+		PMD_INIT_LOG(INFO, "Flow director not supported");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
+		dev->data->port_id, nicvf_hw_cap(nic));
+
+	return 0;
+}
+
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
-- 
2.1.0


* [PATCH 06/20] thunderx/nicvf: add dev_infos_get support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (4 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-13 13:52   ` Pattan, Reshma
  2016-05-07 15:16 ` [PATCH 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
                   ` (17 subsequent siblings)
  23 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
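
A small usage sketch (assumptions: one probed port, stdout reporting)
showing how an application reads these advertised limits back before
sizing its queues:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void
print_port_limits(uint8_t port_id)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info);
	/* Limits reported by nicvf_dev_info_get() */
	printf("rxq=%u txq=%u reta=%u max_pktlen=%u rss_hf=0x%" PRIx64 "\n",
	       info.max_rx_queues, info.max_tx_queues, info.reta_size,
	       info.max_rx_pktlen, info.flow_type_rss_offloads);
}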

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 47 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h | 17 ++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 6a153e7..1269672 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,6 +71,8 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_info_get(struct rte_eth_dev *dev,
+			       struct rte_eth_dev_info *dev_info);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -196,6 +198,50 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->min_rx_bufsize = ETHER_MIN_MTU;
+	dev_info->max_rx_pktlen = NIC_HW_MAX_FRS;
+	dev_info->max_rx_queues = (uint16_t)MAX_RCV_QUEUES_PER_QS;
+	dev_info->max_tx_queues = (uint16_t)MAX_SND_QUEUES_PER_QS;
+	dev_info->max_mac_addrs = 1;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	dev_info->reta_size = nic->rss_info.rss_size;
+	dev_info->hash_key_size = RSS_HASH_KEY_BYTE_SIZE;
+	dev_info->flow_type_rss_offloads = NICVF_RSS_OFFLOAD_PASS1;
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = DEFAULT_TX_FREE_THRESH,
+		.txq_flags =
+			ETH_TXQ_FLAGS_NOMULTSEGS  |
+			ETH_TXQ_FLAGS_NOREFCOUNT  |
+			ETH_TXQ_FLAGS_NOMULTMEMP  |
+			ETH_TXQ_FLAGS_NOVLANOFFL  |
+			ETH_TXQ_FLAGS_NOXSUMSCTP,
+	};
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -277,6 +323,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.dev_infos_get            = nicvf_dev_info_get,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index cc19da5..da6fdcf 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -42,6 +42,23 @@
 #define NICVF_FULL_DUPLEX		0x01
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
+#define NICVF_RSS_OFFLOAD_PASS1 ( \
+	ETH_RSS_PORT | \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NICVF_RSS_OFFLOAD_TUNNEL ( \
+	ETH_RSS_VXLAN | \
+	ETH_RSS_GENEVE | \
+	ETH_RSS_NVGRE)
+
+#define DEFAULT_RX_FREE_THRESH          224
+#define DEFAULT_TX_FREE_THRESH          224
+#define DEFAULT_TX_FREE_MPOOL_THRESH    16
 
 static inline struct nicvf*
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.1.0


* [PATCH 07/20] thunderx/nicvf: add rx_queue_setup/release support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (5 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-19  9:30   ` Pattan, Reshma
  2016-05-07 15:16 ` [PATCH 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
                   ` (16 subsequent siblings)
  23 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
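
An illustrative setup sketch follows; the pool and descriptor sizes
and the helper name are placeholder choices. The contiguity check
below is normally satisfied by a hugepage-backed mbuf pool:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* One Rx queue on the nic's NUMA node. rte_pktmbuf_pool_create()
 * on hugepages usually yields the physically contiguous mempool
 * (pg_num == 1) that this PMD requires. */
static int
setup_rxq(uint8_t port_id, uint16_t qidx, int node)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("rx_pool", 4096, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, node);
	if (mp == NULL)
		return -1;
	/* NULL rx_conf selects the defaults from dev_infos_get() */
	return rte_eth_rx_queue_setup(port_id, qidx, 1024, node, NULL, mp);
}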

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 141 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |   3 +
 2 files changed, 144 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1269672..3b94168 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -73,6 +73,11 @@ static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_rxconf *rx_conf,
+				    struct rte_mempool *mp);
+static void nicvf_dev_rx_queue_release(void *rx_queue);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -198,6 +203,140 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed allocate mem for cq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rxq->phys = rz->phys_addr;
+	rxq->desc = rz->addr;
+	rxq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static void
+nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
+{
+	rxq->head = 0;
+	rxq->available_space = 0;
+	rxq->recv_buffers = 0;
+}
+
+static void
+nicvf_dev_rx_queue_release(void *rx_queue)
+{
+	struct nicvf_rxq *rxq = rx_queue;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq)
+		rte_free(rxq);
+}
+
+static int
+nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	uint16_t rx_free_thresh;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+			     socket_id, nic->node);
+
+	/* Mempool memory should be contiguous */
+	if (mp->pg_num != 1) {
+		PMD_INIT_LOG(ERR, "Non-contiguous mempool, check huge page sz");
+		return -EINVAL;
+	}
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Rx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_cq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available hw cq qsize");
+		return -EINVAL;
+	}
+
+	/* Check rx_free_thresh upper bound */
+	rx_free_thresh = (uint16_t)((rx_conf->rx_free_thresh) ?
+				    rx_conf->rx_free_thresh :
+				    DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh > MAX_RX_FREE_THRESH ||
+		rx_free_thresh >= nb_desc * .75) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh greater than expected %d",
+			     rx_free_thresh);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->rx_queues[qidx] != NULL) {
+		PMD_RX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+			   qidx);
+		nicvf_dev_rx_queue_release(dev->data->rx_queues[qidx]);
+		dev->data->rx_queues[qidx] = NULL;
+	}
+
+	/* Allocate rxq memory */
+	rxq = rte_zmalloc_socket("ethdev rx queue", sizeof(struct nicvf_rxq),
+				 RTE_CACHE_LINE_SIZE, nic->node);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate rxq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	rxq->nic = nic;
+	rxq->pool = mp;
+	rxq->queue_id = qidx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rx_drop_en = rx_conf->rx_drop_en;
+	rxq->cq_status = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_STATUS;
+	rxq->cq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_DOOR;
+	rxq->precharge_cnt = 0;
+	rxq->rbptr_offset = NICVF_CQE_RBPTR_WORD;
+
+	/* Alloc completion queue */
+	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
+		nicvf_dev_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	nicvf_rx_queue_reset(rxq);
+
+	PMD_RX_LOG(DEBUG, "[%d] rxq=%p pool=%s nb_desc=(%d/%d) phy=%" PRIx64,
+		   qidx, rxq, mp->name, nb_desc,
+		   rte_mempool_count(mp), rxq->phys);
+
+	dev->data->rx_queues[qidx] = rxq;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -324,6 +463,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
+	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index da6fdcf..8ffea8d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -60,6 +60,9 @@
 #define DEFAULT_TX_FREE_THRESH          224
 #define DEFAULT_TX_FREE_MPOOL_THRESH    16
 
+#define MAX_RX_FREE_THRESH              1024
+#define MAX_TX_FREE_THRESH              1024
+
 static inline struct nicvf*
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.1.0


* [PATCH 08/20] thunderx/nicvf: add tx_queue_setup/release support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (6 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-19 12:19   ` Pattan, Reshma
  2016-05-07 15:16 ` [PATCH 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
                   ` (15 subsequent siblings)
  23 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
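
A hedged sketch of requesting the single-mempool Tx fast path via
txq_flags; the descriptor count and helper name are assumptions:

#include <rte_ethdev.h>

/* NOREFCOUNT + NOMULTMEMP together select the is_single_pool path
 * set up below; tx_free_thresh of 0 leaves the PMD default. */
static int
setup_txq(uint8_t port_id, uint16_t qidx, int node)
{
	struct rte_eth_txconf txconf = {
		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
			     ETH_TXQ_FLAGS_NOREFCOUNT |
			     ETH_TXQ_FLAGS_NOMULTMEMP,
		.tx_free_thresh = 0,
	};

	return rte_eth_tx_queue_setup(port_id, qidx, 1024, node, &txconf);
}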

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 179 ++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3b94168..b99b4bb 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -78,6 +78,10 @@ static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    const struct rte_eth_rxconf *rx_conf,
 				    struct rte_mempool *mp);
 static void nicvf_dev_rx_queue_release(void *rx_queue);
+static int nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_txconf *tx_conf);
+static void nicvf_dev_tx_queue_release(void *sq);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -226,6 +230,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				   NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+			     socket_id, nic->node);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				    tx_conf->tx_free_thresh :
+				    DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) || tx_free_thresh > MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			"tx_free_thresh must be less than the number of TX "
+			"descriptors. (tx_free_thresh=%u port=%d "
+			"queue=%d)", (unsigned int)tx_free_thresh,
+			(int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+			   qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+				 RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->port_id = dev->data->port_id;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	txq->is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+				txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!txq->is_single_pool)
+		txq->tx_free_thresh =
+		(uint16_t)(tx_conf->tx_free_thresh == DEFAULT_TX_FREE_THRESH ?
+				DEFAULT_TX_FREE_MPOOL_THRESH :
+				tx_conf->tx_free_thresh);
+	txq->tail = 0;
+	txq->head = 0;
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+				nb_desc * sizeof(struct rte_mbuf *),
+				RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+		   qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -465,6 +642,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get            = nicvf_dev_info_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
+	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
-- 
2.1.0


* [PATCH 09/20] thunderx/nicvf: add rss and reta query and update support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (7 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
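
For illustration, a sketch that spreads the indirection table
round-robin over the Rx queues; the helper is hypothetical, the
conf[] bound assumes a table of at most 512 entries, and the size is
taken from dev_infos_get() since the driver accepts full-table
updates only:

#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint8_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_reta_entry64 conf[8];
	uint16_t i;

	rte_eth_dev_info_get(port_id, &info);
	memset(conf, 0, sizeof(conf));
	for (i = 0; i < info.reta_size; i++) {
		/* Select every entry and map it to queue i % nb_rxq */
		conf[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, conf, info.reta_size);
}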

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 182 ++++++++++++++++++++++++++++++++++++
 1 file changed, 182 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b99b4bb..2f4b08e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -73,6 +73,16 @@ static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_reta_update(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_reta_entry64 *reta_conf,
+				 uint16_t reta_size);
+static int nicvf_dev_reta_query(struct rte_eth_dev *dev,
+				struct rte_eth_rss_reta_entry64 *reta_conf,
+				uint16_t reta_size);
+static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
+static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				       struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -207,6 +217,174 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static inline uint64_t
+nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
+{
+	uint64_t nic_rss = 0;
+
+	if (ethdev_rss & ETH_RSS_IPV4)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_IPV6)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_PORT)
+		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (ethdev_rss & ETH_RSS_VXLAN)
+			nic_rss |= RSS_TUN_VXLAN_ENA;
+
+		if (ethdev_rss & ETH_RSS_GENEVE)
+			nic_rss |= RSS_TUN_GENEVE_ENA;
+
+		if (ethdev_rss & ETH_RSS_NVGRE)
+			nic_rss |= RSS_TUN_NVGRE_ENA;
+	}
+
+	return nic_rss;
+}
+
+static inline uint64_t
+nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
+{
+	uint64_t ethdev_rss = 0;
+
+	if (nic_rss & RSS_IP_ENA)
+		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
+				 ETH_RSS_NONFRAG_IPV6_TCP);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
+				 ETH_RSS_NONFRAG_IPV6_UDP);
+
+	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
+		ethdev_rss |= ETH_RSS_PORT;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (nic_rss & RSS_TUN_VXLAN_ENA)
+			ethdev_rss |= ETH_RSS_VXLAN;
+
+		if (nic_rss & RSS_TUN_GENEVE_ENA)
+			ethdev_rss |= ETH_RSS_GENEVE;
+
+		if (nic_rss & RSS_TUN_NVGRE_ENA)
+			ethdev_rss |= ETH_RSS_NVGRE;
+	}
+	return ethdev_rss;
+}
+
+static int
+nicvf_dev_reta_query(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of the configured hash lookup "
+			"table (%d) doesn't match the size the hardware "
+			"supports (%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] = tbl[j];
+	}
+
+	return 0;
+}
+
+static int
+nicvf_dev_reta_update(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of the configured hash lookup "
+			"table (%d) doesn't match the size the hardware "
+			"supports (%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				tbl[j] = reta_conf[i].reta[j];
+	}
+
+	return nicvf_rss_reta_update(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+static int
+nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (rss_conf->rss_key)
+		nicvf_rss_get_key(nic, rss_conf->rss_key);
+
+	rss_conf->rss_key_len =  RSS_HASH_KEY_BYTE_SIZE;
+	rss_conf->rss_hf = nicvf_rss_nic_to_ethdev(nic, nicvf_rss_get_cfg(nic));
+	return 0;
+}
+
+static int
+nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t nic_rss;
+
+	if (rss_conf->rss_key &&
+	    rss_conf->rss_key_len != RSS_HASH_KEY_BYTE_SIZE) {
+		RTE_LOG(ERR, PMD, "Hash key size mismatch %d",
+			    rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key)
+		nicvf_rss_set_key(nic, rss_conf->rss_key);
+
+	nic_rss = nicvf_rss_ethdev_to_nic(nic, rss_conf->rss_hf);
+	nicvf_rss_set_cfg(nic, nic_rss);
+	return 0;
+}
+
 static int
 nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 		    uint32_t desc_cnt)
@@ -640,6 +818,10 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.reta_update              = nicvf_dev_reta_update,
+	.reta_query               = nicvf_dev_reta_query,
+	.rss_hash_update          = nicvf_dev_rss_hash_update,
+	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
-- 
2.1.0


* [PATCH 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (8 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 11/20] thunderx/nicvf: add stats support Jerin Jacob
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
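
An application-side sketch (the 9000-byte value, port id and helper
name are arbitrary examples):

#include <rte_ethdev.h>

/* The PMD converts mtu to a frame size (+ L2 header and CRC) and
 * rejects values that would need scattered Rx while it is off. */
static int
set_jumbo_mtu(uint8_t port_id)
{
	return rte_eth_dev_set_mtu(port_id, 9000);
}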

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 53 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |  2 ++
 2 files changed, 55 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 2f4b08e..b1a0077 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,8 +71,10 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
 static int nicvf_dev_reta_update(struct rte_eth_dev *dev,
 				 struct rte_eth_rss_reta_entry64 *reta_conf,
 				 uint16_t reta_size);
@@ -193,6 +195,49 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint32_t buffsz, frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (frame_size > NIC_HW_MAX_FRS)
+		return -EINVAL;
+
+	if (frame_size < NIC_HW_MIN_FRS)
+		return -EINVAL;
+
+	buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+	/*
+	 * Refuse an mtu that requires scattered packet support
+	 * when that feature has not been enabled.
+	 */
+	if (!dev->data->scattered_rx &&
+	    (frame_size + 2 * VLAN_TAG_SIZE > buffsz))
+		return -EINVAL;
+
+	/* check <seg size> * <max_seg>  >= max_frame */
+	if (dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
+		return -EINVAL;
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	if (nicvf_mbox_update_hw_max_frs(nic, frame_size))
+		return -EINVAL;
+
+	/* Update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)frame_size;
+	nic->mtu = mtu;
+	return 0;
+}
+
+static int
 nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
 {
 	return nicvf_reg_get_count();
@@ -217,6 +262,12 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+/* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
+static void
+nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
 static inline uint64_t
 nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
@@ -817,7 +868,9 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 8ffea8d..5937b45 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -63,6 +63,8 @@
 #define MAX_RX_FREE_THRESH              1024
 #define MAX_TX_FREE_THRESH              1024
 
+#define VLAN_TAG_SIZE                   4	/* 802.3ac tag */
+
 static inline struct nicvf*
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.1.0


* [PATCH 11/20] thunderx/nicvf: add stats support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (9 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki
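
Usage sketch, with the helper name assumed:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Read and clear the basic counters; only the first
 * RTE_ETHDEV_QUEUE_STAT_CNTRS queues carry per-queue numbers. */
static void
show_stats(uint8_t port_id)
{
	struct rte_eth_stats st;

	rte_eth_stats_get(port_id, &st);
	printf("ipackets=%" PRIu64 " imissed=%" PRIu64
	       " opackets=%" PRIu64 "\n",
	       st.ipackets, st.imissed, st.opackets);
	rte_eth_stats_reset(port_id);
}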

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 69 +++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b1a0077..23325d6 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,6 +71,9 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_stats_get(struct rte_eth_dev *dev,
+				struct rte_eth_stats *stat);
+static void nicvf_dev_stats_reset(struct rte_eth_dev *dev);
 static void nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
@@ -262,6 +265,70 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t qidx;
+	struct nicvf_hw_rx_qstats rx_qstats;
+	struct nicvf_hw_tx_qstats tx_qstats;
+	struct nicvf_hw_stats port_stats;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	/* Reading per RX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_rx_qstats(nic, &rx_qstats, qidx);
+		stats->q_ibytes[qidx] = rx_qstats.q_rx_bytes;
+		stats->q_ipackets[qidx] = rx_qstats.q_rx_packets;
+	}
+
+	/* Reading per TX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_tx_qstats(nic, &tx_qstats, qidx);
+		stats->q_obytes[qidx] = tx_qstats.q_tx_bytes;
+		stats->q_opackets[qidx] = tx_qstats.q_tx_packets;
+	}
+
+	nicvf_hw_get_stats(nic, &port_stats);
+	stats->ibytes = port_stats.rx_bytes;
+	stats->ipackets = port_stats.rx_ucast_frames;
+	stats->ipackets += port_stats.rx_bcast_frames;
+	stats->ipackets += port_stats.rx_mcast_frames;
+	stats->ierrors = port_stats.rx_l2_errors;
+	stats->imissed = port_stats.rx_drop_red;
+	stats->imissed += port_stats.rx_drop_overrun;
+	stats->imissed += port_stats.rx_drop_bcast;
+	stats->imissed += port_stats.rx_drop_mcast;
+	stats->imissed += port_stats.rx_drop_l3_bcast;
+	stats->imissed += port_stats.rx_drop_l3_mcast;
+
+	stats->obytes = port_stats.tx_bytes_ok;
+	stats->opackets = port_stats.tx_ucast_frames_ok;
+	stats->opackets += port_stats.tx_bcast_frames_ok;
+	stats->opackets += port_stats.tx_mcast_frames_ok;
+	stats->oerrors = port_stats.tx_drops;
+}
+
+static void
+nicvf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	int i;
+	uint16_t rxqs = 0, txqs = 0;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rxqs |= (0x3 << (i * 2));
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		txqs |= (0x3 << (i * 2));
+
+	nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, rxqs, txqs);
+}
+
 /* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
 static void
 nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
@@ -868,6 +935,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.stats_get                = nicvf_dev_stats_get,
+	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread
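
For context, a hedged sketch of how an application reaches the two ops
added in this patch through the generic ethdev API; port_id is assumed
to be an initialized nicvf port:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
show_and_clear_stats(uint8_t port_id)
{
	struct rte_eth_stats stats;

	/* Dispatches to the PMD's .stats_get (nicvf_dev_stats_get) */
	rte_eth_stats_get(port_id, &stats);
	printf("rx %" PRIu64 " pkts / %" PRIu64 " bytes, missed %" PRIu64 "\n",
	       stats.ipackets, stats.ibytes, stats.imissed);
	printf("tx %" PRIu64 " pkts / %" PRIu64 " bytes, errors %" PRIu64 "\n",
	       stats.opackets, stats.obytes, stats.oerrors);

	/* Dispatches to .stats_reset (nicvf_dev_stats_reset) */
	rte_eth_stats_reset(port_id);
}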

* [PATCH 12/20] thunderx/nicvf: add single and multi segment tx functions
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (10 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 11/20] thunderx/nicvf: add stats support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/Makefile     |   2 +
 drivers/net/thunderx/nicvf_rxtx.c | 279 ++++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h |  48 +++++++
 3 files changed, 329 insertions(+)
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h

diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
index 69bb750..7a7a072 100644
--- a/drivers/net/thunderx/Makefile
+++ b/drivers/net/thunderx/Makefile
@@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
 
+CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
new file mode 100644
index 0000000..504c651
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -0,0 +1,279 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_log.h>
+#include <rte_mbuf.h>
+#include <rte_prefetch.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+#include "nicvf_rxtx.h"
+#include "nicvf_logs.h"
+
+static inline void __hot
+fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Local variable sqe to avoid read from sq desc memory */
+	union sq_entry_t sqe;
+	uint64_t ol_flags;
+
+	/* Fill SQ header descriptor */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+	/* Number of sub-descriptors following this one */
+	sqe.hdr.subdesc_cnt = pkt->nb_segs;
+	sqe.hdr.tot_len = pkt->pkt_len;
+
+	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
+	if (unlikely(ol_flags)) {
+		/* L4 cksum: PKT_TX_L4_MASK is a 2-bit field value, not
+		 * individual flag bits, so compare for equality.
+		 */
+		if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
+		else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_SCTP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_SCTP;
+		else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
+		else
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
+		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
+
+		/* L3 cksum */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			sqe.hdr.csum_l3 = 1;
+			sqe.hdr.l3_offset = pkt->l2_len;
+		}
+	}
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Local variable sqe to avoid read from sq desc memory */
+	union sq_entry_t sqe;
+
+	/* Fill the SQ gather entry */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+	sqe.gather.ld_type = NIC_SEND_LD_TYPE_E_LDT;
+	sqe.gather.size = pkt->data_len;
+	sqe.gather.addr = rte_mbuf_data_dma_addr(pkt);
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+static inline void __hot
+nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	int j = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	void *obj_p[MAX_TX_FREE_THRESH] __rte_cache_aligned;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head])
+			obj_p[j++] = txbuffs[head];
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	rte_mempool_put_bulk(sq->pool, obj_p, j);
+	sq->head = curr_head;
+	sq->xmit_bufs -= j;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline void __hot
+nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	uint32_t n = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head]) {
+			rte_pktmbuf_free_seg(txbuffs[head]);
+			n++;
+		}
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	sq->head = curr_head;
+	sq->xmit_bufs -= n;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline uint32_t __hot
+nicvf_free_tx_desc(struct nicvf_txq *sq)
+{
+	return ((sq->head - sq->tail - 1) & sq->qlen_mask);
+}
+
+/* Send Header + Packet */
+#define TX_DESC_PER_PKT 2
+
+static inline uint32_t __hot
+nicvf_free_xmited_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint32_t free_desc = nicvf_free_tx_desc(sq);
+
+	if (free_desc < nb_pkts * TX_DESC_PER_PKT ||
+			sq->xmit_bufs > sq->tx_free_thresh) {
+		if (sq->is_single_pool) {
+			if (unlikely(sq->pool == NULL))
+				sq->pool = tx_pkts[0]->pool;
+
+			nicvf_single_pool_free_xmited_buffers(sq);
+		} else {
+			nicvf_multi_pool_free_xmited_buffers(sq);
+		}
+		/* Freed now, let's see the number of free descs again */
+		free_desc = nicvf_free_tx_desc(sq);
+	}
+	return free_desc;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+	uint32_t free_desc;
+	uint32_t tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt;
+	uint32_t qlen_mask = sq->qlen_mask;
+
+	tail = sq->tail;
+	free_desc = nicvf_free_xmited_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts && (int)free_desc >= TX_DESC_PER_PKT; i++) {
+		pkt = tx_pkts[i];
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+		free_desc -= TX_DESC_PER_PKT;
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += i;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, i * TX_DESC_PER_PKT);
+	return i;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	int i, k;
+	uint32_t used_desc, next_used_desc, used_bufs, free_desc, tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt, *seg;
+	uint32_t qlen_mask = sq->qlen_mask;
+	uint16_t nb_segs;
+
+	tail = sq->tail;
+	used_desc = 0;
+	used_bufs = 0;
+
+	free_desc = nicvf_free_xmited_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		pkt = tx_pkts[i];
+
+		nb_segs = pkt->nb_segs;
+
+		next_used_desc = used_desc + nb_segs + 1;
+		if (next_used_desc > free_desc)
+			break;
+		used_desc = next_used_desc;
+		used_bufs += nb_segs;
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		seg = pkt->next;
+		for (k = 1; k < nb_segs; k++) {
+			txbuffs[tail] = seg;
+			fill_sq_desc_gather(desc_ptr + tail, seg);
+			tail = (tail + 1) & qlen_mask;
+			seg = seg->next;
+		}
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += used_bufs;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, used_desc);
+	return nb_pkts;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
new file mode 100644
index 0000000..9c9bd07
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -0,0 +1,48 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_RXTX_H__
+#define __THUNDERX_NICVF_RXTX_H__
+
+#include <rte_ethdev.h>
+
+#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+
+#ifndef __hot
+#define __hot	__attribute__((hot))
+#endif
+
+uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
+uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
+				  uint16_t pkts);
+
+#endif /* __THUNDERX_NICVF_RXTX_H__  */
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread
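
The tx paths above rely on a fixed layout for single-segment packets:
one send-header subdescriptor plus one gather subdescriptor per packet,
with the ring kept one slot short of full. A standalone sketch of the
same ring-space accounting as nicvf_free_tx_desc, assuming a
power-of-two ring size:

#include <stdint.h>

#define TX_DESC_PER_PKT 2	/* send header + gather entry */

/* Free slots in a power-of-two ring; one slot is kept as a sentinel */
static inline uint32_t
ring_free_desc(uint32_t head, uint32_t tail, uint32_t qlen_mask)
{
	return (head - tail - 1) & qlen_mask;
}

/* How many whole packets of a burst fit into the ring right now */
static inline uint16_t
tx_burst_fit(uint32_t head, uint32_t tail, uint32_t qlen_mask,
	     uint16_t nb_pkts)
{
	uint32_t fit = ring_free_desc(head, tail, qlen_mask) / TX_DESC_PER_PKT;

	return fit < nb_pkts ? (uint16_t)fit : nb_pkts;
}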

* [PATCH 13/20] thunderx/nicvf: add single and multi segment rx functions
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (11 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.h |  28 ++++
 drivers/net/thunderx/nicvf_rxtx.c   | 318 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  15 ++
 3 files changed, 361 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5937b45..d74b601 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -71,5 +71,33 @@ nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 	return (struct nicvf *)eth_dev->data->dev_private;
 }
 
+static inline uint64_t
+nicvf_mempool_phy_offset(struct rte_mempool *mp)
+{
+	return (uint64_t)(mp->elt_va_start - mp->elt_pa[0]);
+}
+
+static inline uint16_t
+nicvf_mbuff_meta_length(struct rte_mbuf *mbuf)
+{
+	return (uint16_t)((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+}
+
+/*
+ * Simple phy2virt functions assuming mbufs are in a single huge page
+ * V = P + offset
+ * P = V - offset
+ */
+static inline uintptr_t
+nicvf_mbuff_phy2virt(phys_addr_t phy, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(phy + mbuf_phys_off);
+}
+
+static inline uintptr_t
+nicvf_mbuff_virt2phy(uintptr_t virt, uint64_t mbuf_phys_off)
+{
+	return (phys_addr_t)(virt - mbuf_phys_off);
+}
 
 #endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 504c651..8d8442f 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -277,3 +277,321 @@ nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nicvf_addr_write(sq->sq_door, used_desc);
 	return nb_pkts;
 }
+
+static const uint32_t ptype_table[16][16] __rte_cache_aligned = {
+	[L3_NONE][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_NONE][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_NONE][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_NONE][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_NONE][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_NONE][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4][L4_NONE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV4][L4_IPFRAG] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4][L4_IPCOMP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_TCP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[L3_IPV4][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_GRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4][L4_NVGRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4_OPT][L4_NONE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_L3_IPV4,
+	[L3_IPV4_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_TCP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV4_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_GRE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6][L4_NONE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV6][L4_IPFRAG] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6][L4_IPCOMP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_TCP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[L3_IPV6][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_GRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6][L4_NVGRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6_OPT][L4_NONE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_L3_IPV4,
+	[L3_IPV6_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_TCP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV6_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_GRE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_ET_STOP][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_ET_STOP][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_ET_STOP][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_ET_STOP][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_ET_STOP][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_ET_STOP][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_OTHER][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_OTHER][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_OTHER][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_OTHER][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_OTHER][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_OTHER][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+};
+
+static inline uint32_t __hot
+nicvf_rx_classify_pkt(cqe_rx_word0_t cqe_rx_w0)
+{
+	return ptype_table[cqe_rx_w0.l3_type][cqe_rx_w0.l4_type];
+}
+
+static inline int __hot
+nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
+{
+	int i;
+	uint32_t ltail, next_tail;
+	struct nicvf_rbdr *rbdr = rxq->shared_rbdr;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	struct rbdr_entry_t *desc = rbdr->desc;
+	uint32_t qlen_mask = rbdr->qlen_mask;
+	uintptr_t door = rbdr->rbdr_door;
+	void *obj_p[MAX_RX_FREE_THRESH] __rte_cache_aligned;
+
+	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
+		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		return 0;
+	}
+
+	NICVF_RX_ASSERT((unsigned int)to_fill <= (qlen_mask -
+	    (nicvf_addr_read(rbdr->rbdr_status) & NICVF_RBDR_RBDR_COUNT_MASK)));
+
+	next_tail = __atomic_fetch_add(&rbdr->next_tail, to_fill,
+				       __ATOMIC_ACQUIRE);
+	ltail = next_tail;
+	for (i = 0; i < to_fill; i++) {
+		struct rbdr_entry_t *entry = desc + (ltail & qlen_mask);
+
+		entry->full_addr = nicvf_mbuff_virt2phy((uintptr_t)obj_p[i],
+							mbuf_phys_off);
+		ltail++;
+	}
+
+	while (__atomic_load_n(&rbdr->tail, __ATOMIC_RELAXED) != next_tail)
+		rte_pause();
+
+	__atomic_store_n(&rbdr->tail, ltail, __ATOMIC_RELEASE);
+	nicvf_addr_write(door, to_fill);
+	return to_fill;
+}
+
+static inline int32_t __hot
+nicvf_rx_pkts_to_process(struct nicvf_rxq *rxq, uint16_t nb_pkts,
+			 int32_t available_space)
+{
+	if (unlikely(available_space < nb_pkts))
+		rxq->available_space = nicvf_addr_read(rxq->cq_status)
+						 & NICVF_CQ_CQE_COUNT_MASK;
+
+	return RTE_MIN(nb_pkts, available_space);
+}
+
+static inline void __hot
+nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
+		 struct rte_mbuf *pkt)
+{
+	if (likely(cqe_rx_w0.rss_alg)) {
+		pkt->hash.rss = cqe_rx_w2.rss_tag;
+		pkt->ol_flags |= PKT_RX_RSS_HASH;
+	}
+}
+
+uint16_t __hot
+nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint32_t i, to_process;
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct rte_mbuf *pkt;
+	cqe_rx_word0_t  cqe_rx_w0;
+	cqe_rx_word1_t  cqe_rx_w1;
+	cqe_rx_word2_t	cqe_rx_w2;
+	cqe_rx_word3_t	cqe_rx_w3;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t rb0_ptr, mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t cqe_head = rxq->head & cqe_mask;
+	int32_t available_space = rxq->available_space;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		NICVF_RX_ASSERT(cq_entry->type.cqe_type == CQE_TYPE_RX);
+
+		NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+		rb0_ptr = *((uint64_t *)cqe_rx + rbptr_offset);
+		pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+				(rb0_ptr - cqe_rx_w1.align_pad, mbuf_phys_off);
+		NICVF_LOAD_PAIR(cqe_rx_w2.u64, cqe_rx_w3.u64, &cqe_rx->word2);
+
+		pkt->ol_flags = 0;
+		pkt->port = port_id;
+		pkt->data_len = cqe_rx_w3.rb0_sz;
+		pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+		pkt->nb_segs = 1;
+		pkt->pkt_len = cqe_rx_w3.rb0_sz;
+		pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+
+		nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+		rte_mbuf_refcnt_set(pkt, 1);
+		rx_pkts[i] = pkt;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(pkt);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += to_process;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
+						rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
+
+static inline uint16_t __hot
+nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
+			uint64_t mbuf_phys_off, uint8_t port_id,
+			struct rte_mbuf **rx_pkt, uint8_t rbptr_offset)
+{
+	struct rte_mbuf *pkt, *seg, *prev;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t	cqe_rx_w2;
+	uint16_t *rb_sz, nb_segs, seg_idx;
+	uint64_t *rb_ptr;
+
+	NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+	NICVF_RX_ASSERT(cqe_rx_w0.cqe_type == CQE_TYPE_RX);
+	cqe_rx_w2 = cqe_rx->word2;
+	rb_sz = &cqe_rx->word3.rb0_sz;
+	rb_ptr = (uint64_t *)cqe_rx + rbptr_offset;
+	nb_segs = cqe_rx_w0.rb_cnt;
+	pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[0] - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+	pkt->ol_flags = 0;
+	pkt->port = port_id;
+	pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = cqe_rx_w1.pkt_len;
+	pkt->data_len = rb_sz[nicvf_frag_num(0)];
+	rte_mbuf_refcnt_set(pkt, 1);
+	pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+	nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+
+	*rx_pkt = pkt;
+	prev = pkt;
+	for (seg_idx = 1; seg_idx < nb_segs; seg_idx++) {
+		seg = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[seg_idx], mbuf_phys_off);
+
+		prev->next = seg;
+		seg->data_len = rb_sz[nicvf_frag_num(seg_idx)];
+		seg->port = port_id;
+		seg->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(seg, 1);
+
+		prev = seg;
+	}
+	prev->next = NULL;
+	return nb_segs;
+}
+
+uint16_t __hot
+nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t i, to_process, cqe_head, buffers_consumed = 0;
+	int32_t available_space = rxq->available_space;
+	uint16_t nb_segs;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	cqe_head = rxq->head & cqe_mask;
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		nb_segs = nicvf_process_cq_mseg_entry(cqe_rx, mbuf_phys_off,
+				port_id, rx_pkts + i, rbptr_offset);
+		buffers_consumed += nb_segs;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(rx_pkts[i]);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += buffers_consumed;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -=
+				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 9c9bd07..0f2dc70 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -33,6 +33,7 @@
 #ifndef __THUNDERX_NICVF_RXTX_H__
 #define __THUNDERX_NICVF_RXTX_H__
 
+#include <rte_byteorder.h>
 #include <rte_ethdev.h>
 
 #define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
@@ -41,6 +42,20 @@
 #define __hot	__attribute__((hot))
 #endif
 
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (i & ~3) + 3 - (i & 3);
+#else
+	return i;
+#endif
+}
+
+uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
+uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
+
 uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
 uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
 				  uint16_t pkts);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread
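
The rx paths above lean on the single-hugepage assumption documented in
nicvf_ethdev.h: one per-pool offset converts between physical and
virtual mbuf addresses with plain addition, so the hot loop avoids any
per-buffer translation. A toy illustration with hypothetical addresses:

#include <assert.h>
#include <stdint.h>

int
main(void)
{
	/* Hypothetical base addresses, for illustration only */
	uint64_t pool_virt_base = 0x7f0000000000ULL;
	uint64_t pool_phys_base = 0x000180000000ULL;
	uint64_t off = pool_virt_base - pool_phys_base;	/* V = P + off */

	uint64_t buf_phys = pool_phys_base + 0x2400;	/* as seen in a CQE */
	uint64_t buf_virt = buf_phys + off;		/* phy2virt */

	assert(buf_virt - off == buf_phys);		/* virt2phy */
	return 0;
}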

* [PATCH 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (12 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 43 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_rxtx.c   |  9 ++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  2 ++
 3 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 23325d6..0c72201 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -66,7 +66,7 @@
 #include "base/nicvf_plat.h"
 
 #include "nicvf_ethdev.h"
-
+#include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
@@ -314,6 +314,45 @@ nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->oerrors = port_stats.tx_drops;
 }
 
+static const uint32_t *
+nicvf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	size_t copied;
+	static uint32_t ptypes[32];
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	static const uint32_t ptypes_pass1[] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+	};
+	static const uint32_t ptypes_pass2[] = {
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+	};
+	static const uint32_t ptypes_end = RTE_PTYPE_UNKNOWN;
+
+	copied = sizeof(ptypes_pass1);
+	memcpy(ptypes, ptypes_pass1, copied);
+	if (nicvf_hw_version(nic) == NICVF_PASS2) {
+		memcpy((char *)ptypes + copied, ptypes_pass2,
+		       sizeof(ptypes_pass2));
+		copied += sizeof(ptypes_pass2);
+	}
+
+	memcpy((char *)ptypes + copied, &ptypes_end, sizeof(ptypes_end));
+	if (dev->rx_pkt_burst == nicvf_recv_pkts ||
+		dev->rx_pkt_burst == nicvf_recv_pkts_multiseg)
+		return ptypes;
+
+	return NULL;
+}
+
 static void
 nicvf_dev_stats_reset(struct rte_eth_dev *dev)
 {
@@ -939,6 +978,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.dev_supported_ptypes_get = nicvf_dev_supported_ptypes_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
@@ -946,6 +986,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.rx_queue_count           = nicvf_dev_rx_queue_count,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
 	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 8d8442f..27e5e1c 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -595,3 +595,12 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	return to_process;
 }
+
+uint32_t
+nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+
+	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
+	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 0f2dc70..10f4bf9 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -52,6 +52,8 @@ nicvf_frag_num(uint16_t i)
 #endif
 }
 
+uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread
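
A hedged sketch of exercising the new rx_queue_count op from an
application; rte_eth_rx_queue_count() is the generic wrapper that ends
up in nicvf_dev_rx_queue_count() for this PMD:

#include <stdio.h>
#include <rte_ethdev.h>

static void
report_rx_backlog(uint8_t port_id, uint16_t queue_id)
{
	int used = rte_eth_rx_queue_count(port_id, queue_id);

	if (used < 0)	/* -EINVAL or -ENOTSUP from the wrapper */
		printf("rx queue count unavailable: %d\n", used);
	else
		printf("port %d rxq %d: %d CQ entries pending\n",
		       port_id, queue_id, used);
}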

* [PATCH 15/20] thunderx/nicvf: add rx queue start and stop support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (13 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 16/20] thunderx/nicvf: add tx " Jerin Jacob
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 174 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
 drivers/net/thunderx/nicvf_rxtx.h   |   1 +
 3 files changed, 193 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 0c72201..9b917c1 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -88,6 +88,8 @@ static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				       struct rte_eth_rss_conf *rss_conf);
+static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -616,6 +618,54 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+
+static inline int
+nicvf_configure_cpi(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint16_t qidx, qcnt;
+	int ret;
+
+	/* Count started rx queues */
+	for (qidx = qcnt = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		if (dev->data->rx_queue_state[qidx] ==
+		    RTE_ETH_QUEUE_STATE_STARTED)
+			qcnt++;
+
+	nic->cpi_alg = CPI_ALG_NONE;
+	ret = nicvf_mbox_config_cpi(nic, qcnt);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure CPI %d", ret);
+
+	return ret;
+}
+
+static int
+nicvf_configure_rss_reta(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	unsigned int idx, qmap_size;
+	uint8_t qmap[RTE_MAX_QUEUES_PER_PORT];
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	/* Prepare queue map */
+	for (idx = 0, qmap_size = 0; idx < dev->data->nb_rx_queues; idx++) {
+		if (dev->data->rx_queue_state[idx] ==
+				RTE_ETH_QUEUE_STATE_STARTED)
+			qmap[qmap_size++] = idx;
+	}
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = qmap[idx % qmap_size];
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 static void
 nicvf_dev_tx_queue_release(void *sq)
 {
@@ -738,6 +788,32 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	return 0;
 }
 
+static inline void
+nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+{
+	uint32_t rxq_cnt;
+	uint32_t nb_pkts, released_pkts = 0;
+	uint32_t refill_cnt = 0;
+	struct rte_eth_dev *dev = rxq->nic->eth_dev;
+	struct rte_mbuf *rx_pkts[MAX_RX_FREE_THRESH];
+
+	if (dev->rx_pkt_burst == NULL)
+		return;
+
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, rxq->queue_id))) {
+		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts, MAX_RX_FREE_THRESH);
+		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
+		while (nb_pkts) {
+			rte_pktmbuf_free_seg(rx_pkts[--nb_pkts]);
+			released_pkts++;
+		}
+	}
+
+	refill_cnt += nicvf_dev_rbdr_refill(dev, rxq->queue_id);
+	PMD_DRV_LOG(INFO, "free_cnt=%d  refill_cnt=%d",
+		    released_pkts, refill_cnt);
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -746,6 +822,69 @@ nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 	rxq->recv_buffers = 0;
 }
 
+static inline int
+nicvf_start_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	/* Update rbdr pointer to all rxq */
+	rxq = dev->data->rx_queues[qidx];
+	rxq->shared_rbdr = nic->rbdr;
+
+	ret = nicvf_qset_rq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rq %d %d", qidx, ret);
+		goto config_rq_error;
+	}
+	ret = nicvf_qset_cq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure cq %d %d", qidx, ret);
+		goto config_cq_error;
+	}
+
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+
+config_cq_error:
+	nicvf_qset_cq_reclaim(nic, qidx);
+config_rq_error:
+	nicvf_qset_rq_reclaim(nic, qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret, other_error;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_rq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim rq %d %d", qidx, ret);
+
+	other_error = ret;
+	rxq = dev->data->rx_queues[qidx];
+	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_reset(rxq);
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim cq %d %d", qidx, ret);
+
+	other_error |= ret;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return other_error;
+}
+
 static void
 nicvf_dev_rx_queue_release(void *rx_queue)
 {
@@ -758,6 +897,39 @@ nicvf_dev_rx_queue_release(void *rx_queue)
 }
 
 static int
+nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	ret = nicvf_start_rx_queue(dev, qidx);
+	if (ret)
+		return ret;
+
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		return ret;
+
+	return nicvf_configure_rss_reta(dev);
+}
+
+static int
+nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	ret = nicvf_stop_rx_queue(dev, qidx);
+	ret |= nicvf_configure_cpi(dev);
+	ret |= nicvf_configure_rss_reta(dev);
+	return ret;
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -984,6 +1156,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
+	.rx_queue_start           = nicvf_dev_rx_queue_start,
+	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 27e5e1c..7724054 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -604,3 +604,21 @@ nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
 	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
+
+uint32_t
+nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+	uint32_t to_process;
+	uint32_t rx_free;
+
+	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
+	to_process = rxq->recv_buffers;
+	while (rxq->recv_buffers > 0) {
+		rx_free = RTE_MIN(rxq->recv_buffers, MAX_RX_FREE_THRESH);
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rx_free);
+	}
+
+	assert(rxq->recv_buffers == 0);
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 10f4bf9..d2f57bb 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -53,6 +53,7 @@ nicvf_frag_num(uint16_t i)
 }
 
 uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread
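
A sketch of driving the per-queue rx start/stop ops added above from an
application; the generic wrappers route to the nicvf handlers, which
also rebuild the CPI configuration and RSS RETA so traffic only lands
on started queues:

#include <rte_ethdev.h>

static int
pause_and_resume_rxq(uint8_t port_id, uint16_t queue_id)
{
	int ret = rte_eth_dev_rx_queue_stop(port_id, queue_id);

	if (ret != 0)
		return ret;
	/* ... the queue can be drained or reconfigured here ... */
	return rte_eth_dev_rx_queue_start(port_id, queue_id);
}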

* [PATCH 16/20] thunderx/nicvf: add tx queue start and stop support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (14 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 68 +++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 9b917c1..d717edc 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -90,6 +90,8 @@ static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				       struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -618,6 +620,52 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+static inline int
+nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] ==
+	    RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	txq = dev->data->tx_queues[qidx];
+	txq->pool = NULL;
+	ret = nicvf_qset_sq_config(nicvf_pmd_priv(dev), qidx, txq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure sq %d %d", qidx, ret);
+		goto config_sq_error;
+	}
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return ret;
+
+config_sq_error:
+	nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim sq %d %d", qidx, ret);
+
+	txq = dev->data->tx_queues[qidx];
+	nicvf_tx_queue_release_mbufs(txq);
+	nicvf_tx_queue_reset(txq);
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return ret;
+}
 
 static inline int
 nicvf_configure_cpi(struct rte_eth_dev *dev)
@@ -930,6 +978,24 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
 }
 
 static int
+nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	return nicvf_start_tx_queue(dev, qidx);
+}
+
+static int
+nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	return nicvf_stop_tx_queue(dev, qidx);
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -1158,6 +1224,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_start           = nicvf_dev_rx_queue_start,
 	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
+	.tx_queue_start           = nicvf_dev_tx_queue_start,
+	.tx_queue_stop            = nicvf_dev_tx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread
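
The matching sketch for the tx side, again through the generic ethdev
wrappers:

#include <rte_ethdev.h>

static int
restart_txq(uint8_t port_id, uint16_t queue_id)
{
	/* Routes to nicvf_dev_tx_queue_stop(), which reclaims the SQ and
	 * frees any mbufs still pending in the ring.
	 */
	int ret = rte_eth_dev_tx_queue_stop(port_id, queue_id);

	if (ret != 0)
		return ret;
	/* Routes to nicvf_dev_tx_queue_start(), which reconfigures the SQ */
	return rte_eth_dev_tx_queue_start(port_id, queue_id);
}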

* [PATCH 17/20] thunderx/nicvf: add device start, stop and close support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (15 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 16/20] thunderx/nicvf: add tx " Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 18/20] thunderx/config: set max numa node to two Jerin Jacob
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 468 ++++++++++++++++++++++++++++++++++++
 1 file changed, 468 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index d717edc..37b1d32 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -70,7 +70,10 @@
 #include "nicvf_logs.h"
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
+static int nicvf_dev_start(struct rte_eth_dev *dev);
+static void nicvf_dev_stop(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_close(struct rte_eth_dev *dev);
 static void nicvf_dev_stats_get(struct rte_eth_dev *dev,
 				struct rte_eth_stats *stat);
 static void nicvf_dev_stats_reset(struct rte_eth_dev *dev);
@@ -592,6 +595,82 @@ nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+{
+	struct nicvf_rbdr *rbdr;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+
+	assert(nic->rbdr == NULL);
+	rbdr = rte_zmalloc_socket("rbdr", sizeof(struct nicvf_rbdr),
+				  RTE_CACHE_LINE_SIZE, nic->node);
+	if (rbdr == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr");
+		return -ENOMEM;
+	}
+
+	ring_size = sizeof(struct rbdr_entry_t) * desc_cnt;
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+				   NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rbdr->phys = rz->phys_addr;
+	rbdr->tail = 0;
+	rbdr->next_tail = 0;
+	rbdr->desc = rz->addr;
+	rbdr->buffsz = buffsz;
+	rbdr->qlen_mask = desc_cnt - 1;
+	rbdr->rbdr_status =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_STATUS0;
+	rbdr->rbdr_door =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_DOOR;
+
+	nic->rbdr = rbdr;
+	return 0;
+}
+
+static void
+nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+{
+	uint16_t qidx;
+	void *obj;
+	struct nicvf_rxq *rxq;
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		if (rxq->precharge_cnt) {
+			obj = (void *)nicvf_mbuff_phy2virt(phy,
+							   rxq->mbuf_phys_off);
+			rte_mempool_put(rxq->pool, obj);
+			rxq->precharge_cnt--;
+			break;
+		}
+	}
+}
+
+static inline void
+nicvf_rbdr_release_mbufs(struct nicvf *nic)
+{
+	uint32_t qlen_mask, head;
+	struct rbdr_entry_t *entry;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	qlen_mask = rbdr->qlen_mask;
+	head = rbdr->head;
+	while (head != rbdr->tail) {
+		entry = rbdr->desc + head;
+		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		head++;
+		head = head & qlen_mask;
+	}
+}
+
 static inline void
 nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
 {
@@ -688,6 +767,31 @@ nicvf_configure_cpi(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static inline int
+nicvf_configure_rss(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t rsshf;
+	int ret = -EINVAL;
+
+	rsshf = nicvf_rss_ethdev_to_nic(nic,
+			dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
+	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
+		    dev->data->dev_conf.rxmode.mq_mode,
+		    nic->eth_dev->data->nb_rx_queues,
+		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+
+	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+		ret = nicvf_rss_term(nic);
+	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		ret = nicvf_rss_config(nic,
+				       nic->eth_dev->data->nb_rx_queues, rsshf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
+
+	return ret;
+}
+
 static int
 nicvf_configure_rss_reta(struct rte_eth_dev *dev)
 {
@@ -732,6 +836,48 @@ nicvf_dev_tx_queue_release(void *sq)
 	}
 }
 
+static void
+nicvf_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct nicvf_txq *txq;
+	size_t i;
+	bool multiseg = false;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if ((txq->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
+			multiseg = true;
+			break;
+		}
+	}
+
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (multiseg) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts;
+	}
+
+	if (txq->is_single_pool)
+		PMD_DRV_LOG(DEBUG, "Using single-mempool tx free method");
+	else
+		PMD_DRV_LOG(DEBUG, "Using multi-mempool tx free method");
+}
+
+static void
+nicvf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts;
+	}
+}
+
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
@@ -1131,6 +1277,317 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static nicvf_phys_addr_t
+rbdr_rte_mempool_get(void *opaque)
+{
+	uint16_t qidx;
+	uintptr_t mbuf;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv((struct rte_eth_dev *)opaque);
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		/* Maintain equal buffer count across all pools */
+		if (rxq->precharge_cnt >= rxq->qlen_mask)
+			continue;
+		rxq->precharge_cnt++;
+		mbuf = (uintptr_t)rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf)
+			return nicvf_mbuff_virt2phy(mbuf, rxq->mbuf_phys_off);
+	}
+	return 0;
+}
+
+static int
+nicvf_dev_start(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	uint32_t buffsz = 0, rbdrsz = 0;
+	uint32_t total_rxq_desc, nb_rbdr_desc, exp_buffs;
+	uint64_t mbuf_phys_off = 0;
+	struct nicvf_rxq *rxq;
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct rte_mbuf *mbuf;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+	uint16_t mtu;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Userspace process exited without proper shutdown in last run */
+	if (nicvf_qset_rbdr_active(nic, 0))
+		nicvf_dev_stop(dev);
+
+	/*
+	 * The ThunderX nicvf PMD can support more than one pool per port only
+	 * when:
+	 * 1) the data payload size is the same across all pools on the port,
+	 * 2) all mbufs in the pools come from the same hugepage, and
+	 * 3) the mbuf metadata size is the same across all pools on the port.
+	 *
+	 * This supports existing applications that use multiple pools per
+	 * port; the purpose of using multiple pools for QoS is not addressed.
+	 */
+
+	/* Validate RBDR buff size */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		mbp_priv = rte_mempool_get_priv(rxq->pool);
+		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+		if (buffsz % 128) {
+			PMD_INIT_LOG(ERR, "rxbuf size must be multiply of 128");
+			return -EINVAL;
+		}
+		if (rbdrsz == 0)
+			rbdrsz = buffsz;
+		if (rbdrsz != buffsz) {
+			PMD_INIT_LOG(ERR, "buffsz not same, qid=%d (%d/%d)",
+				     qidx, rbdrsz, buffsz);
+			return -EINVAL;
+		}
+	}
+
+	/* Validate mempool attributes */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
+		mbuf = rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "Failed allocate mbuf qid=%d pool=%s",
+				     qidx, rxq->pool->name);
+			return -ENOMEM;
+		}
+		rxq->mbuf_phys_off -= nicvf_mbuff_meta_length(mbuf);
+		rxq->mbuf_phys_off -= RTE_PKTMBUF_HEADROOM;
+		rte_pktmbuf_free(mbuf);
+
+		if (mbuf_phys_off == 0)
+			mbuf_phys_off = rxq->mbuf_phys_off;
+		if (mbuf_phys_off != rxq->mbuf_phys_off) {
+			PMD_INIT_LOG(ERR, "pool params not same, %s %" PRIx64,
+				     rxq->pool->name, mbuf_phys_off);
+			return -EINVAL;
+		}
+	}
+
+	/* Check the level of buffers in the pool */
+	total_rxq_desc = 0;
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		/* Count total numbers of rxq descs */
+		total_rxq_desc += rxq->qlen_mask + 1;
+		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
+		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		if (rte_mempool_count(rxq->pool) < exp_buffs) {
+			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
+				     rxq->pool->name,
+				     rte_mempool_count(rxq->pool),
+				     exp_buffs);
+			return -ENOENT;
+		}
+	}
+
+	/* Check RBDR desc overflow */
+	ret = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	if (ret == 0) {
+		PMD_INIT_LOG(ERR, "Reached RBDR desc limit, reduce nr desc");
+		return -ENOMEM;
+	}
+
+	/* Enable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable qset %d", ret);
+		return ret;
+	}
+
+	/* Allocate RBDR and RBDR ring desc */
+	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
+		goto qset_reclaim;
+	}
+
+	/* Enable and configure RBDR registers */
+	ret = nicvf_qset_rbdr_config(nic, 0);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rbdr %d", ret);
+		goto qset_rbdr_free;
+	}
+
+	/* Fill rte_mempool buffers in RBDR pool and precharge it */
+	ret = nicvf_qset_rbdr_precharge(nic, 0, rbdr_rte_mempool_get,
+					dev, total_rxq_desc);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fill rbdr %d", ret);
+		goto qset_rbdr_reclaim;
+	}
+
+	PMD_DRV_LOG(INFO, "Filled %d out of %d entries in RBDR",
+		     nic->rbdr->tail, nb_rbdr_desc);
+
+	/* Configure RX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		ret = nicvf_start_rx_queue(dev, qidx);
+		if (ret)
+			goto start_rxq_error;
+	}
+
+	/* Configure VLAN Strip */
+	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
+
+	/* Configure TX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+		ret = nicvf_start_tx_queue(dev, qidx);
+		if (ret)
+			goto start_txq_error;
+	}
+
+	/* Configure CPI algorithm */
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		goto start_txq_error;
+
+	/* Configure RSS */
+	ret = nicvf_configure_rss(dev);
+	if (ret)
+		goto qset_rss_error;
+
+	/* Configure loopback */
+	ret = nicvf_loopback_config(nic, dev->data->dev_conf.lpbk_mode);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure loopback %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Reset all statistics counters attached to this port */
+	ret = nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, 0xFFFF, 0xFFFF);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to reset stat counters %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Setup scatter mode if needed by jumbo */
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * VLAN_TAG_SIZE > buffsz)
+		dev->data->scattered_rx = 1;
+	if (rx_conf->enable_scatter)
+		dev->data->scattered_rx = 1;
+
+	/* Setup MTU based on max_rx_pkt_len or default */
+	mtu = dev->data->dev_conf.rxmode.jumbo_frame ?
+		dev->data->dev_conf.rxmode.max_rx_pkt_len
+			-  ETHER_HDR_LEN - ETHER_CRC_LEN
+		: ETHER_MTU;
+
+	if (nicvf_dev_set_mtu(dev, mtu)) {
+		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
+		return -EBUSY;
+	}
+
+	/* Configure callbacks based on scatter mode */
+	nicvf_set_tx_function(dev);
+	nicvf_set_rx_function(dev);
+
+	/* Done; Let PF make the BGX's RX and TX switches to ON position */
+	nicvf_mbox_cfg_done(nic);
+	return 0;
+
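+/* Error unwind: release the resources acquired above in reverse order */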
+qset_rss_error:
+	nicvf_rss_term(nic);
+start_txq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+start_rxq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+qset_rbdr_reclaim:
+	nicvf_qset_rbdr_reclaim(nic, 0);
+	nicvf_rbdr_release_mbufs(nic);
+qset_rbdr_free:
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+qset_reclaim:
+	nicvf_qset_reclaim(nic);
+	return ret;
+}
+
+static void
+nicvf_dev_stop(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
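+	/* Tear down the datapath resources set up in nicvf_dev_start() */
+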
+	/* Let PF make the BGX's RX and TX switches to OFF position */
+	nicvf_mbox_shutdown(nic);
+
+	/* Disable loopback */
+	ret = nicvf_loopback_config(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable loopback %d", ret);
+
+	/* Disable VLAN Strip */
+	nicvf_vlan_hw_strip(nic, 0);
+
+	/* Reclaim sq */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+
+	/* Reclaim rq */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+
+	/* Reclaim RBDR */
+	ret = nicvf_qset_rbdr_reclaim(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim RBDR %d", ret);
+
+	/* Move all charged buffers in RBDR back to pool */
+	if (nic->rbdr != NULL)
+		nicvf_rbdr_release_mbufs(nic);
+
+	/* Reclaim CPI configuration */
+	if (!nic->sqs_mode) {
+		ret = nicvf_mbox_config_cpi(nic, 0);
+		if (ret)
+			PMD_INIT_LOG(ERR, "Failed to reclaim CPI config");
+	}
+
+	/* Disable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable qset %d", ret);
+
+	/* Disable all interrupts */
+	nicvf_disable_all_interrupts(nic);
+
+	/* Free RBDR SW structure */
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+}
+
+static void
+nicvf_dev_close(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	nicvf_dev_stop(dev);
+	nicvf_periodic_alarm_stop(nic);
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -1211,7 +1668,10 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
+	.dev_start                = nicvf_dev_start,
+	.dev_stop                 = nicvf_dev_stop,
 	.link_update              = nicvf_dev_link_update,
+	.dev_close                = nicvf_dev_close,
 	.stats_get                = nicvf_dev_stats_get,
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
@@ -1246,6 +1706,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &nicvf_eth_dev_ops;
 
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		nicvf_set_tx_function(eth_dev);
+		nicvf_set_rx_function(eth_dev);
+		return 0;
+	}
+
 	pci_dev = eth_dev->pci_dev;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH 18/20] thunderx/config: set max numa node to two
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (16 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-07 15:16 ` [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 config/defconfig_arm64-thunderx-linuxapp-gcc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index 7940bbd..cc12cee 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,6 +34,7 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+CONFIG_RTE_MAX_NUMA_NODES=2
 
 #
 # Compile Cavium Thunderx NICVF PMD driver
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (17 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 18/20] thunderx/config: set max numa node to two Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-09  8:47   ` Thomas Monjalon
  2016-05-17 16:31   ` Mcnamara, John
  2016-05-07 15:16 ` [PATCH 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
                   ` (4 subsequent siblings)
  23 siblings, 2 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Slawomir Rosek

Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and release notes

Changed "*" to "P" in overview.rst to mark partially supported
features, as "*" was creating alignment issues with the Sphinx table

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
---
 doc/guides/nics/index.rst              |   1 +
 doc/guides/nics/overview.rst           |  94 ++++-----
 doc/guides/nics/thunderx.rst           | 349 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_16_07.rst |   1 +
 4 files changed, 398 insertions(+), 47 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 769f677..58f4873 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -49,6 +49,7 @@ Network Interface Controller Drivers
     mlx5
     nfp
     szedata2
+    thunderx
     virtio
     vhost
     vmxnet3
diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
index f08039e..4eaaa71 100644
--- a/doc/guides/nics/overview.rst
+++ b/doc/guides/nics/overview.rst
@@ -74,38 +74,38 @@ Most of these differences are summarized below.
 
 .. table:: Features availability in networking drivers
 
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
-   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p r s v v v v x
-                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c i z h i i m e
-                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a n e o r r x n
-                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p g d s t t n v
-                        c x x i e 0       . v v   f e e e e k k k k     e         a t i i e i
-                        k   v n           . f f       . v v   . v v               t   o o t r
-                        e   f g           .   .       . f f   . f f               a     . 3 t
-                        t                 v   v       v   v   v   v               2     v
-                                          e   e       e   e   e   e                     e
-                                          c   c       c   c   c   c                     c
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p r s t v v v v x
+                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c i z h h i i m e
+                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a n e u o r r x n
+                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p g d d s t t n v
+                        c x x i e 0       . v v   f e e e e k k k k     e         a e t i i e i
+                        k   v n           . f f       . v v   . v v               t r   o o t r
+                        e   f g           .   .       . f f   . f f               a x     . 3 t
+                        t                 v   v       v   v   v   v               2       v
+                                          e   e       e   e   e   e                       e
+                                          c   c       c   c   c   c                       c
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    Speed capabilities
-   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y           Y Y Y Y
-   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y             Y
-   Queue status event                                                               Y
+   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y           Y Y Y Y Y
+   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y             Y Y
+   Queue status event                                                                 Y
    Rx interrupt                   Y     Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y           Y   Y Y
-   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y
-   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y
-   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y           Y   Y
+   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y           Y Y   Y Y
+   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y             Y
+   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y     Y
+   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y           Y Y   Y
    LRO                                              Y Y Y Y
    TSO                          Y   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y           Y   Y Y
-   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y           Y   Y Y
-   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y               Y Y
-   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y               Y Y
-   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y
-   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
-   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
+   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y           Y Y   Y Y
+   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y           Y Y   Y Y
+   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y                 Y Y
+   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y                 Y Y
+   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y             Y
+   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y             Y
+   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y             Y
    VMDq                                 Y Y     Y   Y Y     Y Y
-   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y
+   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y             Y
    DCB                                  Y Y     Y   Y Y
    VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y               Y Y
    Ethertype filter                     Y Y     Y   Y Y
@@ -118,37 +118,37 @@ Most of these differences are summarized below.
    Flow control                 Y Y     Y Y     Y   Y Y
    Rate limitation                                  Y Y
    Traffic mirroring                    Y Y         Y Y
-   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y
-   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y
+   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y             Y
+   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y             P
    QinQ offload                   Y     Y   Y   Y Y Y   Y
-   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
-   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y             Y
+   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y             Y
    Inner L3 checksum                Y   Y   Y       Y   Y           Y
    Inner L4 checksum                Y   Y   Y       Y   Y           Y
-   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y             Y
    Timesync                             Y Y     Y   Y Y
-   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y   Y Y Y Y
-   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Stats per queue              Y                   Y Y     Y Y Y Y Y Y           Y   Y Y
+   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y   Y Y Y Y Y
+   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y Y
+   Stats per queue              Y                   Y Y     Y Y Y Y Y Y           Y Y   Y Y
    EEPROM dump                                  Y   Y Y
-   Registers dump                               Y Y Y Y Y Y
-   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y
-   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
+   Registers dump                               Y Y Y Y Y Y                         Y
+   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y     Y
+   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y Y
+   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y Y
+   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                 Y   Y Y
    Other kdrv                                                       Y Y           Y
-   ARMv7                                                                      Y       Y Y
-   ARMv8                                                                      Y       Y Y
+   ARMv7                                                                      Y         Y Y
+   ARMv8                                                                      Y     Y   Y Y
    Power8                                                           Y Y       Y
    TILE-Gx                                                                    Y
-   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y     Y Y Y
-   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y   Y Y Y Y
-   Usage doc              Y Y   Y     Y                             Y Y       Y   Y   Y
+   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y       Y Y Y
+   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y   Y   Y Y Y
+   Usage doc              Y Y   Y     Y                             Y Y       Y   Y Y   Y
    Design doc
    Perf doc
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
 
 .. Note::
 
-   Features marked with "*" are partially supported. Refer to the appropriate
+   Features marked with "P" are partially supported. Refer to the appropriate
    NIC guide in the following sections for details.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
new file mode 100644
index 0000000..04257ba
--- /dev/null
+++ b/doc/guides/nics/thunderx.rst
@@ -0,0 +1,349 @@
+..  BSD LICENSE
+    Copyright (C) Cavium networks Ltd. 2016.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Cavium networks nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ThunderX NICVF Poll Mode Driver
+===============================
+
+The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
+support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
+as well as their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Cavium Networks Official Website
+<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the ThunderX PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Scatter and gather for TX and RX
+- VLAN stripping
+- SR-IOV VF
+- NUMA support
+
+Supported ThunderX SoCs
+-----------------------
+- CN88xx
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.
+  By default it is enabled only for the defconfig_arm64-thunderx-* config.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX`` (default ``n``)
+
+  Toggle display of PF mailbox related run-time check messages.
+
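+For example, a build config that enables the PMD together with its mailbox
+debug messages could look like this (illustrative; the PMD option is already
+``y`` in the thunderx defconfig):
+
+.. code-block:: console
+
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=y
+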
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the ThunderX NICVF PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-thunderx-linuxapp-gcc install
+
+Linux
+-----
+
+.. _thunderx_testpmd_example:
+
+Running testpmd
+~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with ThunderX NIC VF device
+managed by ``librte_pmd_thunderx_nicvf`` in the Linux operating system.
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+   .. _thunderx_vfio_noiommu:
+
+#. Enable **VFIO-NOIOMMU** mode (optional):
+
+   .. code-block:: console
+
+      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+
+   .. note::
+
+      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
+      See also :ref:`SR-IOV: Prerequisites and sample Application Notes <thunderx_sriov_example>`.
+
+#. Bind the ThunderX NIC VF device to ``vfio-pci`` loaded in the previous step:
+
+   Setup VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx --port-topology=loop
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+
+      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
+
+      ...
+      EAL:   probe driver: 177d:11 rte_nicvf_pmd
+      EAL:   using IOMMU type 1 (Type 1)
+      EAL:   PCI memory mapped at 0x3ffade50000
+      EAL: Trying to map BAR 4 that contains the MSI-X table. Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
+      EAL:   PCI memory mapped at 0x3ffadc60000
+      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
+      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false loopback_supported=true
+      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
+      Interactive-mode selected
+      Configuring Port 0 (socket 0)
+      ...
+
+      PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
+      Port 0: A6:C6:D9:17:78:01
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+.. _thunderx_sriov_example:
+
+SR-IOV: Prerequisites and sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
+automatically to a virtual function (VF) and present it as a PCIe-like SR-IOV
+device. This section provides instructions to configure SR-IOV on a Linux OS.
+
+#. Verify PF devices capabilities using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
+              ...
+              Kernel driver in use: thunder-nic
+              ...
+
+   .. note::
+
+      If the ``thunder-nic`` driver is not in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
+
+#. Verify VF devices capabilities and drivers using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+   .. note::
+
+      If the ``thunder-nicvf`` driver is not in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
+
+#. Verify PF/VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
+      0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
+      ...
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind VF devices to ``vfio-pci`` using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.1
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Verify VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.1 'Device 0011' drv=vfio-pci unused=
+      0002:01:00.2 'Device 0011' drv=vfio-pci unused=
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      ...
+
+#. Pass VF device to VM context (PCIe Passthrough):
+
+   The VF devices may be passed through to the guest VM using qemu,
+   virt-manager, virsh, etc.
+   ``librte_pmd_thunderx_nicvf`` or ``thunder-nicvf`` should be used to bind
+   the VF devices in the guest VM in :ref:`VFIO-NOIOMMU <thunderx_vfio_noiommu>` mode.
+
+   Example qemu guest launch command:
+
+   .. code-block:: console
+
+      sudo qemu-system-aarch64 -name vm1 -machine virt,gic_version=3,accel=kvm,usb=off \
+      -cpu host -m 4096 \
+      -smp 4,sockets=1,cores=8,threads=1 \
+      -nographic -nodefaults \
+      -kernel <kernel image> \
+      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
+      -device vfio-pci,host=0002:01:00.1 \
+      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw  \
+      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
+      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
+      -device virtio-net-device,netdev=net0 \
+      -serial stdio \
+      -mem-path /dev/huge
+
+#. Refer to section :ref:`Running testpmd <thunderx_testpmd_example>` for instruction
+   how to launch ``testpmd`` application.
+
+Limitations
+-----------
+
+CRC striping
+~~~~~~~~~~~~
+
+The ThunderX SoC family NICs strip the CRC of every packet coming into the
+host interface. So, the CRC will be stripped even when the
+``rxmode.hw_strip_crc`` member is set to 0 in ``struct rte_eth_conf``.
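+
+For example, with the following (illustrative) configuration the port will
+still deliver CRC-stripped frames:
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = {
+       .rxmode = {
+           .hw_strip_crc = 0, /* accepted, but the NIC strips CRC anyway */
+       },
+   };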
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support a maximum jumbo frame size of 9K. This
+value is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
+up to 9200 bytes can still reach the host interface.
+
+Maximum packet segments
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support up to 12 segments per packet when
+working in scatter/gather mode. So, setting an MTU will fail with ``EINVAL``
+when the resulting frame size does not fit in the maximum number of segments.
+
+Limited VFs
+~~~~~~~~~~~
+
+The ThunderX SoC family NICs have 128 VFs, and each VF has 8 RX and 8 TX
+queues. The current driver implementation has a one-to-one mapping between a
+physical port and a VF, hence only a limited number of VFs can be used.
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 83c841b..1a91c15 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -54,6 +54,7 @@ EAL
 Drivers
 ~~~~~~~
 
+* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**
 
 Libraries
 ~~~~~~~~~
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (18 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
@ 2016-05-07 15:16 ` Jerin Jacob
  2016-05-09  8:50   ` Thomas Monjalon
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (3 subsequent siblings)
  23 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-07 15:16 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 1953ea2..3370f18 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -332,6 +332,12 @@ M: Rasesh Mody <rasesh.mody@qlogic.com>
 F: drivers/net/bnx2x/
 F: doc/guides/nics/bnx2x.rst
 
+Thunderx nicvf
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
+F: drivers/net/thunderx/
+F: doc/guides/nics/thunderx.rst
+
 RedHat virtio
 M: Huawei Xie <huawei.xie@intel.com>
 M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* Re: [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-05-07 15:16 ` [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
@ 2016-05-09  8:47   ` Thomas Monjalon
  2016-05-09  9:35     ` Jerin Jacob
  2016-05-17 16:31   ` Mcnamara, John
  1 sibling, 1 reply; 204+ messages in thread
From: Thomas Monjalon @ 2016-05-09  8:47 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, bruce.richardson, Slawomir Rosek

2016-05-07 20:46, Jerin Jacob:
> --- a/doc/guides/nics/overview.rst
> +++ b/doc/guides/nics/overview.rst
> +   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
> +   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p r s t v v v v x
> +                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c i z h h i i m e
> +                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a n e u o r r x n
> +                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p g d d s t t n v
> +                        c x x i e 0       . v v   f e e e e k k k k     e         a e t i i e i
> +                        k   v n           . f f       . v v   . v v               t r   o o t r
> +                        e   f g           .   .       . f f   . f f               a x     . 3 t
> +                        t                 v   v       v   v   v   v               2       v
> +                                          e   e       e   e   e   e                       e
> +                                          c   c       c   c   c   c                       c
> +   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

You have forgotten a letter (thuderx).

> --- a/doc/guides/rel_notes/release_16_07.rst
> +++ b/doc/guides/rel_notes/release_16_07.rst
> @@ -54,6 +54,7 @@ EAL
>  Drivers
>  ~~~~~~~
>  
> +* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**

This is the "Resolved Issues" section. Please see "New Features".

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-05-07 15:16 ` [PATCH 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-05-09  8:50   ` Thomas Monjalon
  0 siblings, 0 replies; 204+ messages in thread
From: Thomas Monjalon @ 2016-05-09  8:50 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, bruce.richardson, Maciej Czekaj

2016-05-07 20:46, Jerin Jacob:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> +Thunderx nicvf

Cavium ThunderX nicvf?

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-05-09  8:47   ` Thomas Monjalon
@ 2016-05-09  9:35     ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-09  9:35 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, bruce.richardson, Slawomir Rosek

On Mon, May 09, 2016 at 10:47:08AM +0200, Thomas Monjalon wrote:
> 2016-05-07 20:46, Jerin Jacob:
> > --- a/doc/guides/nics/overview.rst
> > +++ b/doc/guides/nics/overview.rst
> > +   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
> > +   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p r s t v v v v x
> > +                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c i z h h i i m e
> > +                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a n e u o r r x n
> > +                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p g d d s t t n v
> > +                        c x x i e 0       . v v   f e e e e k k k k     e         a e t i i e i
> > +                        k   v n           . f f       . v v   . v v               t r   o o t r
> > +                        e   f g           .   .       . f f   . f f               a x     . 3 t
> > +                        t                 v   v       v   v   v   v               2       v
> > +                                          e   e       e   e   e   e                       e
> > +                                          c   c       c   c   c   c                       c
> > +   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
> 
> You have forgotten a letter (thuderx).
> 
> > --- a/doc/guides/rel_notes/release_16_07.rst
> > +++ b/doc/guides/rel_notes/release_16_07.rst
> > @@ -54,6 +54,7 @@ EAL
> >  Drivers
> >  ~~~~~~~
> >  
> > +* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**
> 
> This is the "Resolved Issues" section. Please see "New Features".
> 

Will fix both issues in V2.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-05-07 15:16 ` [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-05-09 17:38   ` Stephen Hemminger
  2016-05-12 15:40   ` Pattan, Reshma
  1 sibling, 0 replies; 204+ messages in thread
From: Stephen Hemminger @ 2016-05-09 17:38 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Sat, 7 May 2016 20:46:19 +0530
Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:

> +static struct nicvf_reg_info nicvf_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_VF_CFG),
> +	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
> +	NICVF_REG_INFO(NIC_VF_INT),
> +	NICVF_REG_INFO(NIC_VF_INT_W1S),
> +	NICVF_REG_INFO(NIC_VF_ENA_W1C),
> +	NICVF_REG_INFO(NIC_VF_ENA_W1S),
> +	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
> +	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
> +};
> +
> +static struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
> +	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
> +	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
> +	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
> +	{NIC_VNIC_TX_STAT_0_4 + 16,  "NIC_VNIC_STAT_TX_BCAST"},
> +	{NIC_VNIC_TX_STAT_0_4 + 24,  "NIC_VNIC_STAT_TX_MCAST"},
> +	{NIC_VNIC_TX_STAT_0_4 + 32,  "NIC_VNIC_STAT_TX_DROP"},
> +	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
> +	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
> +	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
> +	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
> +};
> +
> +static struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
> +};
> +
> +static struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
> +	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
> +	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
> +};
> +
> +static struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
> +};
> +
> +static struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
> +};

Tables like this should be marked const
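
For example, something like (untested sketch):

	static const struct nicvf_reg_info nicvf_reg_tbl[] = {
		NICVF_REG_INFO(NIC_VF_CFG),
		NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
		/* ... */
	};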

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-07 15:16 ` [PATCH 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
@ 2016-05-09 17:40   ` Stephen Hemminger
  2016-05-09 17:41   ` Stephen Hemminger
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 204+ messages in thread
From: Stephen Hemminger @ 2016-05-09 17:40 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Sat, 7 May 2016 20:46:20 +0530
Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:

> +
> +static struct rte_pci_id pci_id_nicvf_map[] = {
> +	{

Another table that should be const

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-07 15:16 ` [PATCH 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
  2016-05-09 17:40   ` Stephen Hemminger
@ 2016-05-09 17:41   ` Stephen Hemminger
  2016-05-10  7:25     ` Jerin Jacob
  2016-05-11  5:37   ` Panu Matilainen
  2016-05-11 12:23   ` Pattan, Reshma
  3 siblings, 1 reply; 204+ messages in thread
From: Stephen Hemminger @ 2016-05-09 17:41 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Sat, 7 May 2016 20:46:20 +0530
Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:

> +
> +static inline struct nicvf*
> +nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
> +{
> +	return (struct nicvf *)eth_dev->data->dev_private;
> +}

Cast here is unnecessary because dev_private is void *
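
i.e. (minimal sketch):

	static inline struct nicvf *
	nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
	{
		return eth_dev->data->dev_private;
	}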

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-09 17:41   ` Stephen Hemminger
@ 2016-05-10  7:25     ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-10  7:25 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Mon, May 09, 2016 at 10:41:22AM -0700, Stephen Hemminger wrote:
> On Sat, 7 May 2016 20:46:20 +0530
> Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> 
> > +
> > +static inline struct nicvf*
> > +nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
> > +{
> > +	return (struct nicvf *)eth_dev->data->dev_private;
> > +}
> 
> Cast here is unnecessary because dev_private is void *

Agree with all of your review comments in [PATCH 01/20] and [PATCH
02/20]. Will fix it in V2.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-07 15:16 ` [PATCH 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
  2016-05-09 17:40   ` Stephen Hemminger
  2016-05-09 17:41   ` Stephen Hemminger
@ 2016-05-11  5:37   ` Panu Matilainen
  2016-05-11 12:23   ` Pattan, Reshma
  3 siblings, 0 replies; 204+ messages in thread
From: Panu Matilainen @ 2016-05-11  5:37 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 05/07/2016 06:16 PM, Jerin Jacob wrote:


> diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
> new file mode 100644
> index 0000000..69bb750
> --- /dev/null
> +++ b/drivers/net/thunderx/Makefile
> @@ -0,0 +1,64 @@
> +#   BSD LICENSE
> +#
> +#   Copyright(c) 2016 Cavium Networks. All rights reserved.
> +#   All rights reserved.
> +#
> +#   Redistribution and use in source and binary forms, with or without
> +#   modification, are permitted provided that the following conditions
> +#   are met:
> +#
> +#     * Redistributions of source code must retain the above copyright
> +#       notice, this list of conditions and the following disclaimer.
> +#     * Redistributions in binary form must reproduce the above copyright
> +#       notice, this list of conditions and the following disclaimer in
> +#       the documentation and/or other materials provided with the
> +#       distribution.
> +#     * Neither the name of Cavium Networks nor the names of its
> +#       contributors may be used to endorse or promote products derived
> +#       from this software without specific prior written permission.
> +#
> +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +#
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_pmd_thunderx_nicvf.a
> +
> +CFLAGS += $(WERROR_FLAGS)
> +
> +EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
> +
> +LIBABIVER := 1
> +
> +OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
> +$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
> +
> +VPATH += $(SRCDIR)/base
> +
> +#
> +# all source are stored in SRCS-y
> +#
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
> +
> +
> +# this lib depends upon:
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc

librte_malloc no longer exists for almost a year by now (see commits 
2f9d47013e4d and aace9d0bcf58) but seems to be a popular thing to copy 
to new drivers :)
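
Dropping it should be enough, e.g. (sketch; the allocator now lives in
librte_eal):

	DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net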

	- Panu -

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-07 15:16 ` [PATCH 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
                     ` (2 preceding siblings ...)
  2016-05-11  5:37   ` Panu Matilainen
@ 2016-05-11 12:23   ` Pattan, Reshma
  3 siblings, 0 replies; 204+ messages in thread
From: Pattan, Reshma @ 2016-05-11 12:23 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:16 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> Radoslaw Biernacki <rad@semihalf.com>
> Subject: [dpdk-dev] [PATCH 02/20] thunderx/nicvf: add pmd skeleton
> 
> Introduce driver initialization and enable build infrastructure for nicvf pmd
> driver.
> 
> By default, It is enabled only for defconfig_arm64-thunderx-* config as it is an
> inbuilt NIC device.
> 
> ---
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c
> b/drivers/net/thunderx/nicvf_ethdev.c
> new file mode 100644
> index 0000000..3c545b4
> +static int
> +nicvf_periodic_alarm_stop(struct nicvf *nic) {
> +	int ret;
> +
> +	ret = rte_intr_callback_unregister(&nic->intr_handle,
> +					   nicvf_interrupt, nic);
> +	ret |= close(nic->intr_handle.fd);
> +	return ret;
> +}
> +
> +
You can remove extra blank line

> +/* Initialise and register driver with DPDK Application */ static const

Typo Initialise.

> +
> +	nic->device_id = pci_dev->id.device_id;
> +	nic->vendor_id = pci_dev->id.vendor_id;
> +	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
> +	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +	nic->eth_dev = eth_dev;
> +
> +	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
> +		     pci_dev->id.vendor_id, pci_dev->id.device_id,
> +		     pci_dev->addr.domain, pci_dev->addr.bus,
> +		     pci_dev->addr.devid, pci_dev->addr.function);

I see some indentation issues (a mix of tabs + spaces) here in the LOG.

> +
> +	if (nic->sqs_mode) {
> +		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected,
> Detaching...");
> +		/* Detach port by returning postive error number */

typo, should be  Positive.

> diff --git a/drivers/net/thunderx/nicvf_ethdev.h
> b/drivers/net/thunderx/nicvf_ethdev.h
> new file mode 100644
> index 0000000..6431329
> --- /dev/null
> +++ b/drivers/net/thunderx/nicvf_ethdev.h
> @@ -0,0 +1,49 @@
> +
> +#ifndef __THUNDERX_NICVF_ETHDEV_H__
> +#define __THUNDERX_NICVF_ETHDEV_H__
> +
> +#include <rte_ethdev.h>
> +
> +#define THUNDERX_NICVF_PMD_VERSION      "1.0"
> +
> +#define NICVF_INTR_POLL_INTERVAL_MS	50
> +
> +static inline struct nicvf*

Should follow  (foo *) not foo*

> +nicvf_pmd_priv(struct rte_eth_dev *eth_dev) {
> +	return (struct nicvf *)eth_dev->data->dev_private; }
> +
> +
> +#endif /* __THUNDERX_NICVF_ETHDEV_H__  */

multiple blank lines before #endif

> +++ b/drivers/net/thunderx/nicvf_struct.h
> +
> +#ifndef _THUNDERX_NICVF_STRUCT_H
> +#define _THUNDERX_NICVF_STRUCT_H
> +
> +#include <stdint.h>
> +#include <rte_spinlock.h>
> +#include <rte_mempool.h>
> +#include <rte_mbuf.h>
> +#include <rte_interrupts.h>
> +#include <rte_ethdev.h>
> +#include <rte_memory.h>

Should leave blank line between standard library headers and rte headers.

Thanks,
Reshma

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 04/20] thunderx/nicvf: add get_reg and get_reg_length support
  2016-05-07 15:16 ` [PATCH 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
@ 2016-05-12 15:39   ` Pattan, Reshma
  2016-05-13  8:14     ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Pattan, Reshma @ 2016-05-12 15:39 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:16 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> Radoslaw Biernacki <rad@semihalf.com>
> Subject: [dpdk-dev] [PATCH 04/20] thunderx/nicvf: add get_reg and
> get_reg_length support
> 
> +
> +static int
> +nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info
> +*regs) {
> +	uint64_t *data = regs->data;
> +	struct nicvf *nic = nicvf_pmd_priv(dev);
> +
> +	if (data == NULL)
> +		return -EINVAL;

nicvf_reg_dump prints to stdout if data is NULL, so do we still want to return here?

Thanks,
Reshma

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-05-07 15:16 ` [PATCH 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-05-09 17:38   ` Stephen Hemminger
@ 2016-05-12 15:40   ` Pattan, Reshma
  1 sibling, 0 replies; 204+ messages in thread
From: Pattan, Reshma @ 2016-05-12 15:40 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:16 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> Radoslaw Biernacki <rad@semihalf.com>
> Subject: [dpdk-dev] [PATCH 01/20] thunderx/nicvf/base: add hardware API for
> ThunderX nicvf inbuilt NIC
> 
> +int
> +nicvf_reg_poll_interrupts(struct nicvf *nic)
> +{
> +	int msg = 0;
> +	uint64_t intr;
> +
> +	intr = nicvf_reg_read(nic, NIC_VF_INT);
> +	if (intr & NICVF_INTR_MBOX_MASK) {
> +		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
> +		msg = nicvf_handle_mbx_intr(nic);
> +	}
> +	if (intr & NICVF_INTR_QS_ERR_MASK) {
> +		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
> +		nicvf_handle_qset_err_intr(nic);
> +	}
> +	return msg;
> +}
> +
> +
[Reshma]: Multiple blank lines

> +int
> +nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
> +{
> +	uint64_t status;
> +	int timeout = 10;
> +	struct nicvf_rbdr *rbdr = nic->rbdr;
> +
> +	/* Save head and tail pointers for freeing up buffers */
> +	if (rbdr) {
> +		rbdr->head = nicvf_queue_reg_read(nic,
> +					  NIC_QSET_RBDR_0_1_HEAD,
> +					  qidx) >> 3;
> +		rbdr->tail = nicvf_queue_reg_read(nic,
> +					  NIC_QSET_RBDR_0_1_TAIL,
> +					  qidx) >> 3;

 [Reshma]: Mix of tabs and spaces here; please use tabs consistently. The same fix applies to other parts of this file and to the other files too.

Thanks,
Reshma


* Re: [PATCH 04/20] thunderx/nicvf: add get_reg and get_reg_length support
  2016-05-12 15:39   ` Pattan, Reshma
@ 2016-05-13  8:14     ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-13  8:14 UTC (permalink / raw)
  To: Pattan, Reshma
  Cc: dev, thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Thu, May 12, 2016 at 03:39:56PM +0000, Pattan, Reshma wrote:
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Saturday, May 7, 2016 4:16 PM
> > To: dev@dpdk.org
> > Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> > <bruce.richardson@intel.com>; Jerin Jacob
> > <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> > <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> > <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> > <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> > Radoslaw Biernacki <rad@semihalf.com>
> > Subject: [dpdk-dev] [PATCH 04/20] thunderx/nicvf: add get_reg and
> > get_reg_length support
> > 
> > +
> > +static int
> > +nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info
> > +*regs) {
> > +	uint64_t *data = regs->data;
> > +	struct nicvf *nic = nicvf_pmd_priv(dev);
> > +
> > +	if (data == NULL)
> > +		return -EINVAL;
> 
> nicvf_reg_dump prints to stdout if data in NULL, so do we still want to return here?

Yes. The base code is shared with other data plane libraries, and from the
DPDK get_regs callback perspective the check makes sense: the PMD is
expected to return the register data in the caller-supplied buffer.
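
For reference, the application-side flow that ends up in this callback
looks roughly like the sketch below. This is a hypothetical helper, not
part of the patch; it assumes the rte_eth_dev_get_reg_length() /
rte_eth_dev_get_reg_info() ethdev wrappers of this DPDK version, and
that leaving regs.length at 0 means "fetch all registers":

    #include <stdint.h>
    #include <stdlib.h>
    #include <errno.h>
    #include <rte_ethdev.h>

    /* The application owns the buffer, so a NULL regs->data reaching
     * the PMD is an API violation rather than a request to dump to
     * stdout. */
    static int
    dump_port_regs(uint8_t port_id)
    {
        struct rte_dev_reg_info regs = { .data = NULL };
        int len = rte_eth_dev_get_reg_length(port_id);

        if (len <= 0)
            return len;
        regs.data = calloc(len, sizeof(uint64_t));
        if (regs.data == NULL)
            return -ENOMEM;
        /* lands in the PMD's .get_reg callback quoted above */
        len = rte_eth_dev_get_reg_info(port_id, &regs);
        free(regs.data);
        return len;
    }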

Thanks for the review.
I agree with all your other review comments in the other thread. Will fix them in v2.
> 
> Thanks,
> Reshma

* Re: [PATCH 06/20] thunderx/nicvf: add dev_infos_get support
  2016-05-07 15:16 ` [PATCH 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
@ 2016-05-13 13:52   ` Pattan, Reshma
  0 siblings, 0 replies; 204+ messages in thread
From: Pattan, Reshma @ 2016-05-13 13:52 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:16 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> Radoslaw Biernacki <rad@semihalf.com>
> Subject: [dpdk-dev] [PATCH 06/20] thunderx/nicvf: add dev_infos_get support
> 
> diff --git a/drivers/net/thunderx/nicvf_ethdev.h
> b/drivers/net/thunderx/nicvf_ethdev.h
> index cc19da5..da6fdcf 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.h
> +++ b/drivers/net/thunderx/nicvf_ethdev.h
> @@ -42,6 +42,23 @@
>  #define NICVF_FULL_DUPLEX		0x01
>  #define NICVF_UNKNOWN_DUPLEX		0xff
> 
> +#define NICVF_RSS_OFFLOAD_PASS1 ( \
> +	ETH_RSS_PORT | \
> +	ETH_RSS_IPV4 | \
> +	ETH_RSS_NONFRAG_IPV4_TCP | \
> +	ETH_RSS_NONFRAG_IPV4_UDP | \
> +	ETH_RSS_IPV6 | \
> +	ETH_RSS_NONFRAG_IPV6_TCP | \
> +	ETH_RSS_NONFRAG_IPV6_UDP)
> +
> +#define NICVF_RSS_OFFLOAD_TUNNEL ( \
> +	ETH_RSS_VXLAN | \
> +	ETH_RSS_GENEVE | \
> +	ETH_RSS_NVGRE)
> +
> +#define DEFAULT_RX_FREE_THRESH          224
> +#define DEFAULT_TX_FREE_THRESH          224
> +#define DEFAULT_TX_FREE_MPOOL_THRESH    16
> 
How about prefixing these 3 macro names with NICVF, like the previous ones?

Thanks,
Reshma


* Re: [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-05-07 15:16 ` [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
  2016-05-09  8:47   ` Thomas Monjalon
@ 2016-05-17 16:31   ` Mcnamara, John
  2016-05-19  6:19     ` Jerin Jacob
  1 sibling, 1 reply; 204+ messages in thread
From: Mcnamara, John @ 2016-05-17 16:31 UTC (permalink / raw)
  To: Jerin Jacob, dev; +Cc: thomas.monjalon, Richardson, Bruce, Slawomir Rosek

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:17 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Slawomir Rosek
> <slawomir.rosek@semihalf.com>
> Subject: [dpdk-dev] [PATCH 19/20] thunderx/nicvf: updated driver
> documentation and release notes

Hi,

Very good documentation. The content is quite clear and there are almost no RST issues.
The only comment is on some of the long lines. In general console blocks
have to be wrapped at 80 chars or else they go off the page in the PDF docs.
I see that you did that in some places but not in others.

It is worth building the pdf docs to check for that:

    make doc-guides-pdf
    mupdf build/doc/pdf/guides/nics.pdf &

Some minor comments below:


> +
> +#. Start ``testpmd`` with basic parameters:
> +
> +   .. code-block:: console
> +
> +      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w
> + 0002:01:00.2 -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx
> + --port-topology=loop

Would be better wrapped as something like this:

   .. code-block:: console

      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
          -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx
             --port-topology=loop


> +
> +   Example output:
> +
> +   .. code-block:: console
> +
> +      ...
> +
> +      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
> +
> +      ...
> +      EAL:   probe driver: 177d:11 rte_nicvf_pmd
> +      EAL:   using IOMMU type 1 (Type 1)
> +      EAL:   PCI memory mapped at 0x3ffade50000
> +      EAL: Trying to map BAR 4 that contains the MSI-X table. Trying
> offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
> +      EAL:   PCI memory mapped at 0x3ffadc60000
> +      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
> +      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
> loopback_supported=true
> +      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
> +      Interactive-mode selected
> +      Configuring Port 0 (socket 0)


Also, this should be wrapped (even though it is the actual output):

      ...
      EAL:   probe driver: 177d:11 rte_nicvf_pmd
      EAL:   using IOMMU type 1 (Type 1)
      EAL:   PCI memory mapped at 0x3ffade50000
      EAL: Trying to map BAR 4 that contains the MSI-X table.
           Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
      EAL:   PCI memory mapped at 0x3ffadc60000
      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
           loopback_supported=true
      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      ...


> +SR-IOV: Prerequisites and sample Application Notes
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Current ThunderX NIC PF/VF kernel modules maps each physical Ethernet
> +port automatically to virtual function (VF) and presented as PCIe-like
> SR-IOV device.


Slightly better as:

The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
automatically to a virtual function (VF) and present them as PCIe-like SR-IOV devices.


> +   Example qemu guest launch command:
> +
> +   .. code-block:: console
> +
> +      sudo qemu-system-aarch64 -name vm1 -machine
> virt,gic_version=3,accel=kvm,usb=off \
> +      -cpu host -m 4096 \
> +      -smp 4,sockets=1,cores=8,threads=1 \
> +      -nographic -nodefaults \
> +      -kernel <kernel image> \

Also wrap the first line:

   .. code-block:: console

      sudo qemu-system-aarch64 -name vm1 \
      -machine virt,gic_version=3,accel=kvm,usb=off \
      -cpu host -m 4096 \
      ...


Apart from those small changes:

Acked-by: John McNamara <john.mcnamara@intel.com>

* Re: [PATCH 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-05-17 16:31   ` Mcnamara, John
@ 2016-05-19  6:19     ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-19  6:19 UTC (permalink / raw)
  To: Mcnamara, John; +Cc: dev, thomas.monjalon, Richardson, Bruce, Slawomir Rosek

On Tue, May 17, 2016 at 04:31:58PM +0000, Mcnamara, John wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Saturday, May 7, 2016 4:17 PM
> > To: dev@dpdk.org
> > Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> > <bruce.richardson@intel.com>; Jerin Jacob
> > <jerin.jacob@caviumnetworks.com>; Slawomir Rosek
> > <slawomir.rosek@semihalf.com>
> > Subject: [dpdk-dev] [PATCH 19/20] thunderx/nicvf: updated driver
> > documentation and release notes
> 
> Hi,
> 
> Very good documentation. The content is quite clear and there are almost no RST issues.
> The only comment is on some of the long lines. In general console blocks
> have to be wrapped at 80 chars or else they go off the page in the PDF docs.
> I see that you did that in some places but not in others.
> 
> It is worth building the pdf docs to check for that:
> 
>     make doc-guides-pdf
>     mupdf build/doc/pdf/guides/nics.pdf &
> 
> Some minor comments below:

Thanks John for the review. Will fix it in v2.

> 
> 
> > +
> > +#. Start ``testpmd`` with basic parameters:
> > +
> > +   .. code-block:: console
> > +
> > +      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w
> > + 0002:01:00.2 -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx
> > + --port-topology=loop
> 
> Would be better wrapped as something like this:
> 
>    .. code-block:: console
> 
>       ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
>           -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx
>              --port-topology=loop
> 
> 
> > +
> > +   Example output:
> > +
> > +   .. code-block:: console
> > +
> > +      ...
> > +
> > +      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
> > +
> > +      ...
> > +      EAL:   probe driver: 177d:11 rte_nicvf_pmd
> > +      EAL:   using IOMMU type 1 (Type 1)
> > +      EAL:   PCI memory mapped at 0x3ffade50000
> > +      EAL: Trying to map BAR 4 that contains the MSI-X table. Trying
> > offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
> > +      EAL:   PCI memory mapped at 0x3ffadc60000
> > +      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
> > +      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
> > loopback_supported=true
> > +      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
> > +      Interactive-mode selected
> > +      Configuring Port 0 (socket 0)
> 
> 
> Also, this should be wrapped (even though it is the actual output):
> 
>       ...
>       EAL:   probe driver: 177d:11 rte_nicvf_pmd
>       EAL:   using IOMMU type 1 (Type 1)
>       EAL:   PCI memory mapped at 0x3ffade50000
>       EAL: Trying to map BAR 4 that contains the MSI-X table.
>            Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
>       EAL:   PCI memory mapped at 0x3ffadc60000
>       PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
>       PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
>            loopback_supported=true
>       PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
>       Interactive-mode selected
>       Configuring Port 0 (socket 0)
>       ...
> 
> 
> > +SR-IOV: Prerequisites and sample Application Notes
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Current ThunderX NIC PF/VF kernel modules maps each physical Ethernet
> > +port automatically to virtual function (VF) and presented as PCIe-like
> > SR-IOV device.
> 
> 
> Slightly better as:
> 
> The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
> automatically to a virtual function (VF) and present them as PCIe-like SR-IOV devices.
> 
> 
> > +   Example qemu guest launch command:
> > +
> > +   .. code-block:: console
> > +
> > +      sudo qemu-system-aarch64 -name vm1 -machine
> > virt,gic_version=3,accel=kvm,usb=off \
> > +      -cpu host -m 4096 \
> > +      -smp 4,sockets=1,cores=8,threads=1 \
> > +      -nographic -nodefaults \
> > +      -kernel <kernel image> \
> 
> Also wrap the first line:
> 
>    .. code-block:: console
> 
>       sudo qemu-system-aarch64 -name vm1 \
>       -machine virt,gic_version=3,accel=kvm,usb=off \
>       -cpu host -m 4096 \
>       ...
> 
> 
> Apart from those small changes:
> 
> Acked-by: John McNamara <john.mcnamara@intel.com>

* Re: [PATCH 07/20] thunderx/nicvf: add rx_queue_setup/release support
  2016-05-07 15:16 ` [PATCH 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
@ 2016-05-19  9:30   ` Pattan, Reshma
  0 siblings, 0 replies; 204+ messages in thread
From: Pattan, Reshma @ 2016-05-19  9:30 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:16 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> Radoslaw Biernacki <rad@semihalf.com>
> Subject: [dpdk-dev] [PATCH 07/20] thunderx/nicvf: add rx_queue_setup/release
> support
> 
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c
> b/drivers/net/thunderx/nicvf_ethdev.c
> index 1269672..3b94168 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> 
> +static int
> +nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
> +		    uint32_t desc_cnt)
> +{
> +	const struct rte_memzone *rz;
> +	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
> +
> +	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
> +				   NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
> +	if (rz == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed allocate mem for cq hw ring");

Typo: "Failed to allocate"?

> +static int
> +nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
> +			 uint16_t nb_desc, unsigned int socket_id,
> +			 const struct rte_eth_rxconf *rx_conf,
> +			 struct rte_mempool *mp)
> +{
> +	uint16_t rx_free_thresh;
> +	struct nicvf_rxq *rxq;
> +	struct nicvf *nic = nicvf_pmd_priv(dev);
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	/* Socked id check */

Typo: "Socket"?

Thanks,
Reshma


* Re: [PATCH 08/20] thunderx/nicvf: add tx_queue_setup/release support
  2016-05-07 15:16 ` [PATCH 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
@ 2016-05-19 12:19   ` Pattan, Reshma
  0 siblings, 0 replies; 204+ messages in thread
From: Pattan, Reshma @ 2016-05-19 12:19 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Saturday, May 7, 2016 4:16 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>; Maciej Czekaj
> <maciej.czekaj@caviumnetworks.com>; Kamil Rytarowski
> <Kamil.Rytarowski@caviumnetworks.com>; Zyta Szpak
> <zyta.szpak@semihalf.com>; Slawomir Rosek <slawomir.rosek@semihalf.com>;
> Radoslaw Biernacki <rad@semihalf.com>
> Subject: [dpdk-dev] [PATCH 08/20] thunderx/nicvf: add tx_queue_setup/release
> support
> +				txq->txq_flags &
> ETH_TXQ_FLAGS_NOMULTMEMP);
> +
> +	/* Choose optimum free threshold value for multipool case */
> +	if (!txq->is_single_pool)
> +		txq->tx_free_thresh =
> +		(uint16_t)(tx_conf->tx_free_thresh ==
> DEFAULT_TX_FREE_THRESH ?
> +				DEFAULT_TX_FREE_MPOOL_THRESH :
> +				tx_conf->tx_free_thresh);
> +	txq->tail = 0;
> +	txq->head = 0;
> +

txq->tail and txq->head are already set to 0 in nicvf_tx_queue_reset(), so would it be OK to remove them here?

Thanks,
Reshma


* [PATCH v2 00/20] DPDK PMD for ThunderX NIC device
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (19 preceding siblings ...)
  2016-05-07 15:16 ` [PATCH 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-05-29 16:46 ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
                     ` (9 more replies)
  2016-05-29 16:53 ` [PATCH v2 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
                   ` (2 subsequent siblings)
  23 siblings, 10 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob

This patch set provides the initial version of DPDK PMD for the
built-in NIC device in Cavium ThunderX SoC family.

Implemented features and the ThunderX nicvf PMD documentation are added
in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
respectively in this patch set.

These patches were checked using checkpatch.sh with the following
additional ignore options:
    options="$options --ignore=CAMELCASE,BRACKET_SPACE"
CAMELCASE - To accommodate PRIx64
BRACKET_SPACE - To accommodate AT&T inline assembly in two places

This patch set is based on DPDK 16.07-RC1
and tested with today's git HEAD change-set
c8c33ad7f94c59d1c0676af0cfd61207b3e808db along with the
following dependent patch:

http://dpdk.org/dev/patchwork/patch/11826/
ethdev: add tunnel and port RSS offload types

V1->V2

http://dpdk.org/dev/patchwork/patch/12609/
-- added const qualifiers to the constant struct tables
-- remove multiple blank lines
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12610/
-- removed DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
-- added const for table structs
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12614/
-- s/DEFAULT_*/NICVF_DEFAULT_*/gc
http://dpdk.org/dev/patchwork/patch/12615/
-- Fix typos
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12616/
-- removed redundant txq->tail = 0 and txq->head = 0
http://dpdk.org/dev/patchwork/patch/12627/
-- addressed the documentation review comments

-- fixed TAB+space occurrences in functions
-- rebased to c8c33ad7f94c59d1c0676af0cfd61207b3e808db

Jerin Jacob (20):
  thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  thunderx/nicvf: add pmd skeleton
  thunderx/nicvf: add link status and link update support
  thunderx/nicvf: add get_reg and get_reg_length support
  thunderx/nicvf: add dev_configure support
  thunderx/nicvf: add dev_infos_get support
  thunderx/nicvf: add rx_queue_setup/release support
  thunderx/nicvf: add tx_queue_setup/release support
  thunderx/nicvf: add rss and reta query and update support
  thunderx/nicvf: add mtu_set and promiscuous_enable support
  thunderx/nicvf: add stats support
  thunderx/nicvf: add single and multi segment tx functions
  thunderx/nicvf: add single and multi segment rx functions
  thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
    support
  thunderx/nicvf: add rx queue start and stop support
  thunderx/nicvf: add tx queue start and stop support
  thunderx/nicvf: add device start,stop and close support
  thunderx/config: set max numa node to two
  thunderx/nicvf: updated driver documentation and release notes
  maintainers: claim responsibility for the ThunderX nicvf PMD

 MAINTAINERS                                        |    6 +
 config/common_base                                 |   10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |   11 +
 doc/guides/nics/index.rst                          |    1 +
 doc/guides/nics/overview.rst                       |   96 +-
 doc/guides/nics/thunderx.rst                       |  354 ++++
 doc/guides/rel_notes/release_16_07.rst             |    1 +
 drivers/net/Makefile                               |    1 +
 drivers/net/thunderx/Makefile                      |   65 +
 drivers/net/thunderx/base/nicvf_hw.c               |  908 ++++++++++
 drivers/net/thunderx/base/nicvf_hw.h               |  240 +++
 drivers/net/thunderx/base/nicvf_hw_defs.h          | 1216 +++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c             |  416 +++++
 drivers/net/thunderx/base/nicvf_mbox.h             |  232 +++
 drivers/net/thunderx/base/nicvf_plat.h             |  132 ++
 drivers/net/thunderx/nicvf_ethdev.c                | 1861 ++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  106 ++
 drivers/net/thunderx/nicvf_logs.h                  |   83 +
 drivers/net/thunderx/nicvf_rxtx.c                  |  600 +++++++
 drivers/net/thunderx/nicvf_rxtx.h                  |  101 ++
 drivers/net/thunderx/nicvf_struct.h                |  124 ++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |    4 +
 mk/rte.app.mk                                      |    2 +
 23 files changed, 6522 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

-- 
2.5.5


* [PATCH v2 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
                     ` (8 subsequent siblings)
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Adds the hardware-specific API for the ThunderX nicvf inbuilt NIC device
under the drivers/net/thunderx/base directory.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c      |  908 +++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h      |  240 ++++++
 drivers/net/thunderx/base/nicvf_hw_defs.h | 1216 +++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c    |  416 ++++++++++
 drivers/net/thunderx/base/nicvf_mbox.h    |  232 ++++++
 drivers/net/thunderx/base/nicvf_plat.h    |  132 ++++
 6 files changed, 3144 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
new file mode 100644
index 0000000..24fe77d
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -0,0 +1,908 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <math.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#include "nicvf_plat.h"
+
+struct nicvf_reg_info {
+	uint32_t offset;
+	const char *name;
+};
+
+#define NICVF_REG_INFO(reg) {reg, #reg}
+
+static const struct nicvf_reg_info nicvf_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_VF_CFG),
+	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
+	NICVF_REG_INFO(NIC_VF_INT),
+	NICVF_REG_INFO(NIC_VF_INT_W1S),
+	NICVF_REG_INFO(NIC_VF_ENA_W1C),
+	NICVF_REG_INFO(NIC_VF_ENA_W1S),
+	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
+	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
+};
+
+static const struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
+	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
+	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
+	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
+	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
+	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
+	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
+	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 16,  "NIC_VNIC_STAT_TX_BCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 24,  "NIC_VNIC_STAT_TX_MCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 32,  "NIC_VNIC_STAT_TX_DROP"},
+	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
+	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
+	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
+	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
+	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
+};
+
+static const struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
+};
+
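+/*
+ * Derive device capabilities from the PCI subsystem device id:
+ * pass2 silicon additionally supports tunnel parsing
+ * (NICVF_CAP_TUNNEL_PARSING).
+ */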
+int
+nicvf_base_init(struct nicvf *nic)
+{
+	nic->hwcap = 0;
+	if (nic->subsystem_device_id == 0)
+		return NICVF_ERR_BASE_INIT;
+
+	if (nicvf_hw_version(nic) == NICVF_PASS2)
+		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
+
+	return NICVF_OK;
+}
+
+/* dump on stdout if data is NULL */
+int
+nicvf_reg_dump(struct nicvf *nic,  uint64_t *data)
+{
+	uint32_t i, q;
+	bool dump_stdout;
+
+	dump_stdout = data ? 0 : 1;
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_reg_tbl[i].name,
+				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_multi_reg_tbl[i].name,
+				nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset);
+
+	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_cq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_sq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rbdr_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q);
+	return 0;
+}
+
+int
+nicvf_reg_get_count(void)
+{
+	int nr_regs;
+
+	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
+			MAX_CMP_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
+			MAX_RCV_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
+			MAX_SND_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
+			MAX_RCV_BUF_DESC_RINGS_PER_QS;
+
+	return nr_regs;
+}
+
+static int
+nicvf_qset_config_internal(struct nicvf *nic, bool enable)
+{
+	int ret;
+	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
+
+	pf_qs_cfg.ena = enable ? 1 : 0;
+	pf_qs_cfg.vnic = nic->vf_id;
+	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
+	return ret ? NICVF_ERR_SET_QS : 0;
+}
+
+/* Requests PF to assign and enable Qset */
+int
+nicvf_qset_config(struct nicvf *nic)
+{
+	/* Enable Qset */
+	return nicvf_qset_config_internal(nic, true);
+}
+
+int
+nicvf_qset_reclaim(struct nicvf *nic)
+{
+	/* Disable Qset */
+	return nicvf_qset_config_internal(nic, false);
+}
+
+static int
+cmpfunc(const void *a, const void *b)
+{
+	const uint32_t x = *(const uint32_t *)a;
+	const uint32_t y = *(const uint32_t *)b;
+
+	/* Compare explicitly: plain subtraction of unsigned values can
+	 * wrap and yield the wrong sign once converted to int. */
+	return (x > y) - (x < y);
+}
+
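+/*
+ * Sort the list and round val up to the nearest entry; returns 0 if
+ * val exceeds the largest supported value. E.g. a requested SQ size
+ * of 3000 rounds up to SND_QUEUE_SZ_4K.
+ */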
+static uint32_t
+nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
+{
+	uint32_t i;
+
+	qsort(list, entries, sizeof(uint32_t), cmpfunc);
+	for (i = 0; i < entries; i++)
+		if (val <= list[i])
+			break;
+	/* Not in the list */
+	if (i >= entries)
+		return 0;
+	else
+		return list[i];
+}
+
+static void
+nicvf_handle_qset_err_intr(struct nicvf *nic)
+{
+	uint16_t qidx;
+	uint64_t status;
+
+	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
+	nicvf_reg_dump(nic, NULL);
+
+	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_CQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_CQ_WR_FULL)
+			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
+		if (status & NICVF_CQ_WR_DISABLE)
+			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
+		if (status & NICVF_CQ_WR_FAULT)
+			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_SQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_SQ_ERR_STOPPED)
+			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
+		if (status & NICVF_SQ_ERR_SEND)
+			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
+		if (status & NICVF_SQ_ERR_DPE)
+			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(nic,
+					NIC_QSET_RBDR_0_1_STATUS0, qidx);
+		status &= NICVF_RBDR_FIFO_STATE_MASK;
+		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
+
+		if (status == RBDR_FIFO_STATE_FAIL)
+			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
+	}
+
+	nicvf_disable_all_interrupts(nic);
+	abort();
+}
+
+/*
+ * Handle poll mode driver interested "mbox" and "queue-set error" interrupts.
+ * This function is not re-entrant.
+ * The caller should provide proper serialization.
+ */
+int
+nicvf_reg_poll_interrupts(struct nicvf *nic)
+{
+	int msg = 0;
+	uint64_t intr;
+
+	intr = nicvf_reg_read(nic, NIC_VF_INT);
+	if (intr & NICVF_INTR_MBOX_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
+		msg = nicvf_handle_mbx_intr(nic);
+	}
+	if (intr & NICVF_INTR_QS_ERR_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
+		nicvf_handle_qset_err_intr(nic);
+	}
+	return msg;
+}
+
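+/*
+ * Poll a bit field of a queue register until it reads 'val';
+ * gives up after ~20ms (10 polls, 2ms apart).
+ */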
+static int
+nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
+		    uint32_t bit_pos, uint32_t bits, uint64_t val)
+{
+	uint64_t bit_mask;
+	uint64_t reg_val;
+	int timeout = 10;
+
+	bit_mask = (1ULL << bits) - 1;
+	bit_mask = (bit_mask << bit_pos);
+
+	while (timeout) {
+		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
+		if (((reg_val & bit_mask) >> bit_pos) == val)
+			return NICVF_OK;
+		nicvf_delay_us(2000);
+		timeout--;
+	}
+	return NICVF_ERR_REG_POLL;
+}
+
+int
+nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t status;
+	int timeout = 10;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	/* Save head and tail pointers for freeing up buffers */
+	if (rbdr) {
+		rbdr->head = nicvf_queue_reg_read(nic,
+					NIC_QSET_RBDR_0_1_HEAD,
+					qidx) >> 3;
+		rbdr->tail = nicvf_queue_reg_read(nic,
+					NIC_QSET_RBDR_0_1_TAIL,
+					qidx) >> 3;
+		rbdr->next_tail = rbdr->tail;
+	}
+
+	/* Reset RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+				NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+				62, 2, 0x00))
+		return NICVF_ERR_RBDR_DISABLE;
+
+	while (1) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_PRFCH_STATUS,	qidx);
+		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
+			break;
+		nicvf_delay_us(2000);
+		timeout--;
+		if (!timeout)
+			return NICVF_ERR_RBDR_PREFETCH;
+	}
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			NICVF_RBDR_RESET);
+	if (nicvf_qset_poll_reg(nic, qidx,
+				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return NICVF_ERR_RBDR_RESET1;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_qset_poll_reg(nic, qidx,
+				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return NICVF_ERR_RBDR_RESET2;
+
+	return NICVF_OK;
+}
+
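+/*
+ * Convert a queue length to its hardware QSIZE encoding, e.g. with a
+ * 1K-entry minimum (len_shift == 10), an 8K-entry queue encodes as
+ * log2(8192) - 10 = 3.
+ */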
+static int
+nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
+{
+	int val;
+
+	val = ((uint32_t)log2(len) - len_shift);
+	assert(val >= 0);
+	assert(val <= 6);
+	return val;
+}
+
+int
+nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
+{
+	int ret;
+	uint64_t head, tail;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	struct rbdr_cfg rbdr_cfg = {.value = 0};
+
+	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set descriptor base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
+
+	/* Enable RBDR  & set queue size */
+	rbdr_cfg.reserved_45_63 = 0;
+	rbdr_cfg.ena = 1;
+	rbdr_cfg.reset = 0;
+	rbdr_cfg.ldwb = 0;
+	rbdr_cfg.reserved_36_41 = 0;
+	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
+					RBDR_SIZE_SHIFT);
+	rbdr_cfg.reserved_25_31 = 0;
+	rbdr_cfg.avg_con = 0;
+	rbdr_cfg.reserved_12_15 = 0;
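+	/* Buffer size is programmed in units of 128-byte cache lines */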
+	rbdr_cfg.lines = rbdr->buffsz / 128;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
+
+	/* Verify proper RBDR reset */
+	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
+
+	if (head | tail)
+		return NICVF_ERR_RBDR_RESET;
+
+	return NICVF_OK;
+}
+
+uint32_t
+nicvf_qsize_rbdr_roundup(uint32_t val)
+{
+	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
+				RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
+				RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
+				RBDR_QUEUE_SZ_512K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			  rbdr_pool_get_handler handler,
+			  void *opaque, uint32_t max_buffs)
+{
+	struct rbdr_entry_t *desc, *desc0;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	uint32_t count;
+	nicvf_phys_addr_t phy;
+
+	assert(rbdr != NULL);
+	desc = rbdr->desc;
+	count = 0;
+	/* Don't fill beyond max numbers of desc */
+	while (count < (rbdr->qlen_mask)) {
+		if (count >= max_buffs)
+			break;
+		desc0 = desc + count;
+		phy = handler(opaque);
+		if (phy) {
+			desc0->full_addr = phy;
+			count++;
+		} else {
+			break;
+		}
+	}
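+	/* Publish descriptor writes before ringing the doorbell */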
+	nicvf_smp_wmb();
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
+	rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
+	rbdr->next_tail = rbdr->tail;
+	nicvf_smp_rmb();
+	return 0;
+}
+
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
+{
+	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+}
+
+int
+nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t head, tail;
+	struct sq_cfg sq_cfg;
+
+	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
+
+	/* Disable send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
+
+	/* Check if SQ is stopped */
+	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
+				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
+		return NICVF_ERR_SQ_DISABLE;
+
+	/* Reset send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
+	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
+	if (head | tail)
+		return NICVF_ERR_SQ_RESET;
+
+	return 0;
+}
+
+int
+nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
+{
+	int ret;
+	struct sq_cfg sq_cfg = {.value = 0};
+
+	ret = nicvf_qset_sq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Send a mailbox msg to PF to config SQ */
+	if (nicvf_mbox_sq_config(nic, qidx))
+		return NICVF_ERR_SQ_PF_CFG;
+
+	/* Set queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
+
+	/* Enable send queue  & set queue size */
+	sq_cfg.ena = 1;
+	sq_cfg.reset = 0;
+	sq_cfg.ldwb = 0;
+	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1, SND_QSIZE_SHIFT);
+	sq_cfg.tstmp_bgx_intf = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
+
+	/* Ring doorbell so that H/W restarts processing SQEs */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
+
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_sq_roundup(uint32_t val)
+{
+	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
+				SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
+				SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
+				SND_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
+	return nicvf_mbox_rq_sync(nic);
+}
+
+int
+nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
+	struct rq_cfg rq_cfg = {.value = 0};
+
+	if (nicvf_qset_rq_reclaim(nic, qidx))
+		return NICVF_ERR_RQ_CLAIM;
+
+	pf_rq_cfg.strip_pre_l2 = 0;
+	/* First cache line of RBDR data will be allocated into L2C */
+	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
+	pf_rq_cfg.cq_qs = nic->vf_id;
+	pf_rq_cfg.cq_idx = qidx;
+	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_cont_idx = 0;
+	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_strt_idx = 0;
+
+	/* Send a mailbox msg to PF to config RQ */
+	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
+		return NICVF_ERR_RQ_PF_CFG;
+
+	/* Select Rx backpressure */
+	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_BP_CFG;
+
+	/* Send a mailbox msg to PF to config RQ drop */
+	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_DROP_CFG;
+
+	/* Enable Receive queue */
+	rq_cfg.ena = 1;
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
+
+	return 0;
+}
+
+int
+nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t tail, head;
+
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
+		return NICVF_ERR_CQ_DISABLE;
+
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
+	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
+	if (head | tail)
+		return NICVF_ERR_CQ_RESET;
+
+	/* Disable timer threshold (doesn't get reset upon CQ reset) */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+int
+nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	int ret;
+	struct cq_cfg cq_cfg = {.value = 0};
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set completion queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
+
+	cq_cfg.ena = 1;
+	cq_cfg.reset = 0;
+	/* Writes of CQE will be allocated into L2C */
+	cq_cfg.caching = 1;
+	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
+	cq_cfg.avg_con = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
+
+	/* Set threshold value for interrupt generation */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_cq_roundup(uint32_t val)
+{
+	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
+				CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
+				CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
+				CMP_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+void
+nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
+{
+	uint64_t val;
+
+	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
+	if (enable)
+		val |= (STRIP_FIRST_VLAN << 25);
+	else
+		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
+
+	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
+}
+
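+/*
+ * Program the RSS hash key as RSS_HASH_KEY_SIZE 64-bit words
+ * (NIC_VNIC_RSS_KEY_0..4); each word is byte-swapped to big endian
+ * before the register write.
+ */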
+void
+nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_cpu_to_be_64(*keyptr);
+		nicvf_reg_write(nic, addr, val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_reg_read(nic, addr);
+		*keyptr = nicvf_be_to_cpu_64(val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
+{
+	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
+}
+
+uint64_t
+nicvf_rss_get_cfg(struct nicvf *nic)
+{
+	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
+}
+
+int
+nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		rss->ind_tbl[idx] = tbl[idx];
+
+	if (nicvf_mbox_config_rss(nic))
+		return NICVF_ERR_RSS_TBL_UPDATE;
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		tbl[idx] = rss->ind_tbl[idx];
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg)
+{
+	uint32_t idx;
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+	};
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	if (cfg == 0)
+		return -EINVAL;
+
+	/* Update default RSS key and cfg */
+	nicvf_rss_set_key(nic, default_key);
+	nicvf_rss_set_cfg(nic, cfg);
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = idx % qcnt;
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_rss_term(struct nicvf *nic)
+{
+	uint32_t idx;
+	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	nicvf_rss_set_cfg(nic, 0);
+	/* Redirect the output to 0th queue  */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		disable_rss[idx] = 0;
+
+	return nicvf_rss_reta_update(nic, disable_rss,
+				NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_loopback_config(struct nicvf *nic, bool enable)
+{
+	if (enable && nic->loopback_supported == 0)
+		return NICVF_ERR_LOOPBACK_CFG;
+
+	return nicvf_mbox_loopback_config(nic, enable);
+}
+
+void
+nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
+{
+	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
+	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
+	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
+	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
+	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
+	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
+	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
+	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
+	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
+	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
+	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
+	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
+	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
+	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
+
+	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
+	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
+	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
+	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
+	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
+}
+
+void
+nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_rx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
+	qstats->q_rx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
+}
+
+void
+nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_tx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
+	qstats->q_tx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
new file mode 100644
index 0000000..32357cc
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -0,0 +1,240 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_H
+#define _THUNDERX_NICVF_HW_H
+
+#include <stdint.h>
+
+#include "nicvf_hw_defs.h"
+
+#define	PCI_VENDOR_ID_CAVIUM			0x177D
+#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
+#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
+
+#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
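+/*
+ * Statistics registers are 64-bit values laid out 8 bytes apart,
+ * so a statistic index is turned into a byte offset via (reg << 3).
+ */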
+#define NICVF_GET_RX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | (reg << 3))
+#define NICVF_GET_TX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | (reg << 3))
+
+#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
+#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
+
+#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
+
+enum nicvf_tns_mode {
+	NIC_TNS_BYPASS_MODE = 0,
+	NIC_TNS_MODE,
+};
+
+enum nicvf_err_e {
+	NICVF_OK = 0,
+	NICVF_ERR_SET_QS = -8191,/* -8191 */
+	NICVF_ERR_RESET_QS,      /* -8190 */
+	NICVF_ERR_REG_POLL,      /* -8189 */
+	NICVF_ERR_RBDR_RESET,    /* -8188 */
+	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
+	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
+	NICVF_ERR_RBDR_RESET1,   /* -8185 */
+	NICVF_ERR_RBDR_RESET2,   /* -8184 */
+	NICVF_ERR_RQ_CLAIM,      /* -8183 */
+	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
+	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
+	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
+	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
+	NICVF_ERR_CQ_RESET,	 /* -8178 */
+	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
+	NICVF_ERR_SQ_RESET,	 /* -8176 */
+	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
+	NICVF_ERR_RSS_TBL_UPDATE,/* -8174 */
+	NICVF_ERR_RSS_GET_SZ,    /* -8173 */
+	NICVF_ERR_BASE_INIT,     /* -8172 */
+	NICVF_ERR_LOOPBACK_CFG,  /* -8171 */
+};
+
+typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
+
+struct nicvf_hw_rx_qstats {
+	uint64_t q_rx_bytes;
+	uint64_t q_rx_packets;
+};
+
+struct nicvf_hw_tx_qstats {
+	uint64_t q_tx_bytes;
+	uint64_t q_tx_packets;
+};
+
+struct nicvf_hw_stats {
+	uint64_t rx_bytes;
+	uint64_t rx_ucast_frames;
+	uint64_t rx_bcast_frames;
+	uint64_t rx_mcast_frames;
+	uint64_t rx_fcs_errors;
+	uint64_t rx_l2_errors;
+	uint64_t rx_drop_red;
+	uint64_t rx_drop_red_bytes;
+	uint64_t rx_drop_overrun;
+	uint64_t rx_drop_overrun_bytes;
+	uint64_t rx_drop_bcast;
+	uint64_t rx_drop_mcast;
+	uint64_t rx_drop_l3_bcast;
+	uint64_t rx_drop_l3_mcast;
+
+	uint64_t tx_bytes_ok;
+	uint64_t tx_ucast_frames_ok;
+	uint64_t tx_bcast_frames_ok;
+	uint64_t tx_mcast_frames_ok;
+	uint64_t tx_drops;
+};
+
+struct nicvf_rss_reta_info {
+	uint8_t hash_bits;
+	uint16_t rss_size;
+	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+};
+
+/* Common structs used in DPDK and base layer are defined in DPDK layer */
+#include "../nicvf_struct.h"
+
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
+
+static inline void
+nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+{
+	nicvf_addr_write(nic->reg_base + offset, val);
+}
+
+static inline uint64_t
+nicvf_reg_read(struct nicvf *nic, uint32_t offset)
+{
+	return nicvf_addr_read(nic->reg_base + offset);
+}
+
+static inline uintptr_t
+nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
+{
+	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
+}
+
+static inline void
+nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
+		      uint64_t val)
+{
+	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
+}
+
+static inline uint64_t
+nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
+{
+	return nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
+}
+
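+/* Mask every interrupt source via the write-1-to-clear enable register,
+ * then acknowledge any interrupts that are already pending.
+ */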
+static inline void
+nicvf_disable_all_interrupts(struct nicvf *nic)
+{
+	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
+	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
+}
+
+static inline uint32_t
+nicvf_hw_version(struct nicvf *nic)
+{
+	return nic->subsystem_device_id;
+}
+
+static inline uint64_t
+nicvf_hw_cap(struct nicvf *nic)
+{
+	return nic->hwcap;
+}
+
+int nicvf_base_init(struct nicvf *nic);
+
+int nicvf_reg_get_count(void);
+int nicvf_reg_poll_interrupts(struct nicvf *nic);
+int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
+
+int nicvf_qset_config(struct nicvf *nic);
+int nicvf_qset_reclaim(struct nicvf *nic);
+
+int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			      rbdr_pool_get_handler handler, void *opaque,
+			      uint32_t max_buffs);
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_txq *txq);
+int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
+uint32_t nicvf_qsize_cq_roundup(uint32_t val);
+uint32_t nicvf_qsize_sq_roundup(uint32_t val);
+
+void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
+
+int nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg);
+int nicvf_rss_term(struct nicvf *nic);
+
+int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+
+void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
+void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
+
+void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
+uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
+
+int nicvf_loopback_config(struct nicvf *nic, bool enable);
+
+void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
+void nicvf_hw_get_rx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
+void nicvf_hw_get_tx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
+
+#endif /* _THUNDERX_NICVF_HW_H */
diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
new file mode 100644
index 0000000..ef9354b
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -0,0 +1,1216 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_DEFS_H
+#define _THUNDERX_NICVF_HW_DEFS_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+/* Virtual function register offsets */
+
+#define NIC_VF_CFG                      (0x000020)
+#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
+#define NIC_VF_INT                      (0x000200)
+#define NIC_VF_INT_W1S                  (0x000220)
+#define NIC_VF_ENA_W1C                  (0x000240)
+#define NIC_VF_ENA_W1S                  (0x000260)
+
+#define NIC_VNIC_RSS_CFG                (0x0020E0)
+#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
+#define NIC_VNIC_TX_STAT_0_4            (0x004000)
+#define NIC_VNIC_RX_STAT_0_13           (0x004100)
+#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
+
+#define NIC_QSET_CQ_0_7_CFG             (0x010400)
+#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
+#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
+#define NIC_QSET_CQ_0_7_BASE            (0x010420)
+#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
+#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
+#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
+#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
+#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
+#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
+
+#define NIC_QSET_RQ_0_7_CFG             (0x010600)
+#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
+#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
+
+#define NIC_QSET_SQ_0_7_CFG             (0x010800)
+#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
+#define NIC_QSET_SQ_0_7_BASE            (0x010820)
+#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
+#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
+#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
+#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
+#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
+#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
+#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
+
+#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
+#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
+#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
+#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
+#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
+#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
+#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
+#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
+#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
+
+/* vNIC HW Constants */
+
+#define NIC_Q_NUM_SHIFT                 18
+
+#define MAX_QUEUE_SET                   128
+#define MAX_RCV_QUEUES_PER_QS           8
+#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
+#define MAX_SND_QUEUES_PER_QS           8
+#define MAX_CMP_QUEUES_PER_QS           8
+
+#define NICVF_INTR_CQ_SHIFT             0
+#define NICVF_INTR_SQ_SHIFT             8
+#define NICVF_INTR_RBDR_SHIFT           16
+#define NICVF_INTR_PKT_DROP_SHIFT       20
+#define NICVF_INTR_TCP_TIMER_SHIFT      21
+#define NICVF_INTR_MBOX_SHIFT           22
+#define NICVF_INTR_QS_ERR_SHIFT         23
+
+#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
+#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
+
+#define NICVF_CQ_WR_FULL                (1ULL << 26)
+#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
+#define NICVF_CQ_WR_FAULT               (1ULL << 24)
+#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
+					 NICVF_CQ_WR_DISABLE |\
+					 NICVF_CQ_WR_FAULT)
+#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
+
+#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
+#define NICVF_SQ_ERR_SEND               (1ULL << 20)
+#define NICVF_SQ_ERR_DPE                (1ULL << 19)
+#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
+					 NICVF_SQ_ERR_SEND |\
+					 NICVF_SQ_ERR_DPE)
+#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
+
+#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
+#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
+#define NICVF_RBDR_COUNT_MASK           (0x7FFFF)
+
+/* Queue reset */
+#define NICVF_CQ_RESET                  (1ULL << 41)
+#define NICVF_SQ_RESET                  (1ULL << 17)
+#define NICVF_RBDR_RESET                (1ULL << 43)
+
+/* RSS constants */
+#define NIC_MAX_RSS_HASH_BITS           (8)
+#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key in 5 x 64-bit registers */
+#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key in bytes */
+
+#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
+#define RSS_IP_ENA                      (1 << 1)
+#define RSS_TCP_ENA                     (1 << 2)
+#define RSS_TCP_SYN_ENA                 (1 << 3)
+#define RSS_UDP_ENA                     (1 << 4)
+#define RSS_L4_EXTENDED_ENA             (1 << 5)
+#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
+#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
+#define RSS_TUN_VXLAN_ENA               (1 << 9)
+#define RSS_TUN_GENEVE_ENA              (1 << 10)
+#define RSS_TUN_NVGRE_ENA               (1 << 11)
+
+#define RBDR_QUEUE_SZ_8K                (8 * 1024)
+#define RBDR_QUEUE_SZ_16K               (16 * 1024)
+#define RBDR_QUEUE_SZ_32K               (32 * 1024)
+#define RBDR_QUEUE_SZ_64K               (64 * 1024)
+#define RBDR_QUEUE_SZ_128K              (128 * 1024)
+#define RBDR_QUEUE_SZ_256K              (256 * 1024)
+#define RBDR_QUEUE_SZ_512K              (512 * 1024)
+
+#define RBDR_SIZE_SHIFT                 (13) /* 8k */
+
+#define SND_QUEUE_SZ_1K                 (1 * 1024)
+#define SND_QUEUE_SZ_2K                 (2 * 1024)
+#define SND_QUEUE_SZ_4K                 (4 * 1024)
+#define SND_QUEUE_SZ_8K                 (8 * 1024)
+#define SND_QUEUE_SZ_16K                (16 * 1024)
+#define SND_QUEUE_SZ_32K                (32 * 1024)
+#define SND_QUEUE_SZ_64K                (64 * 1024)
+
+#define SND_QSIZE_SHIFT                 (10) /* 1k */
+
+#define CMP_QUEUE_SZ_1K                 (1 * 1024)
+#define CMP_QUEUE_SZ_2K                 (2 * 1024)
+#define CMP_QUEUE_SZ_4K                 (4 * 1024)
+#define CMP_QUEUE_SZ_8K                 (8 * 1024)
+#define CMP_QUEUE_SZ_16K                (16 * 1024)
+#define CMP_QUEUE_SZ_32K                (32 * 1024)
+#define CMP_QUEUE_SZ_64K                (64 * 1024)
+
+#define CMP_QSIZE_SHIFT                 (10) /* 1k */
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS			64
+#define NIC_HW_MAX_FRS			9200 /* 9216 max packet including FCS */
+#define NIC_HW_MAX_SEGS			12
+
+/* Descriptor alignments */
+#define NICVF_RBDR_BASE_ALIGN_BYTES	128 /* 7 bits */
+#define NICVF_CQ_BASE_ALIGN_BYTES	512 /* 9 bits */
+#define NICVF_SQ_BASE_ALIGN_BYTES	128 /* 7 bits */
+
+/* vNIC HW Enumerations */
+
+enum nic_send_ld_type_e {
+	NIC_SEND_LD_TYPE_E_LDD = 0x0,
+	NIC_SEND_LD_TYPE_E_LDT = 0x1,
+	NIC_SEND_LD_TYPE_E_LDWB = 0x2,
+	NIC_SEND_LD_TYPE_E_ENUM_LAST = 0x3,
+};
+
+enum ether_type_algorithm {
+	ETYPE_ALG_NONE = 0x0,
+	ETYPE_ALG_SKIP = 0x1,
+	ETYPE_ALG_ENDPARSE = 0x2,
+	ETYPE_ALG_VLAN = 0x3,
+	ETYPE_ALG_VLAN_STRIP = 0x4,
+};
+
+enum layer3_type {
+	L3TYPE_NONE = 0x0,
+	L3TYPE_GRH = 0x1,
+	L3TYPE_IPV4 = 0x4,
+	L3TYPE_IPV4_OPTIONS = 0x5,
+	L3TYPE_IPV6 = 0x6,
+	L3TYPE_IPV6_OPTIONS = 0x7,
+	L3TYPE_ET_STOP = 0xD,
+	L3TYPE_OTHER = 0xE,
+};
+
+#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
+#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
+
+enum layer4_type {
+	L4TYPE_NONE = 0x0,
+	L4TYPE_IPSEC_ESP = 0x1,
+	L4TYPE_IPFRAG = 0x2,
+	L4TYPE_IPCOMP = 0x3,
+	L4TYPE_TCP = 0x4,
+	L4TYPE_UDP = 0x5,
+	L4TYPE_SCTP = 0x6,
+	L4TYPE_GRE = 0x7,
+	L4TYPE_ROCE_BTH = 0x8,
+	L4TYPE_OTHER = 0xE,
+};
+
+/* CPI and RSSI configuration */
+enum cpi_algorithm_type {
+	CPI_ALG_NONE = 0x0,
+	CPI_ALG_VLAN = 0x1,
+	CPI_ALG_VLAN16 = 0x2,
+	CPI_ALG_DIFF = 0x3,
+};
+
+enum rss_algorithm_type {
+	RSS_ALG_NONE = 0x00,
+	RSS_ALG_PORT = 0x01,
+	RSS_ALG_IP = 0x02,
+	RSS_ALG_TCP_IP = 0x03,
+	RSS_ALG_UDP_IP = 0x04,
+	RSS_ALG_SCTP_IP = 0x05,
+	RSS_ALG_GRE_IP = 0x06,
+	RSS_ALG_ROCE = 0x07,
+};
+
+enum rss_hash_cfg {
+	RSS_HASH_L2ETC = 0x00,
+	RSS_HASH_IP = 0x01,
+	RSS_HASH_TCP = 0x02,
+	RSS_HASH_TCP_SYN_DIS = 0x03,
+	RSS_HASH_UDP = 0x04,
+	RSS_HASH_L4ETC = 0x05,
+	RSS_HASH_ROCE = 0x06,
+	RSS_L3_BIDI = 0x07,
+	RSS_L4_BIDI = 0x08,
+};
+
+/* Completion queue entry types */
+enum cqe_type {
+	CQE_TYPE_INVALID = 0x0,
+	CQE_TYPE_RX = 0x2,
+	CQE_TYPE_RX_SPLIT = 0x3,
+	CQE_TYPE_RX_TCP = 0x4,
+	CQE_TYPE_SEND = 0x8,
+	CQE_TYPE_SEND_PTP = 0x9,
+};
+
+enum cqe_rx_tcp_status {
+	CQE_RX_STATUS_VALID_TCP_CNXT = 0x00,
+	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
+};
+
+enum cqe_send_status {
+	CQE_SEND_STATUS_GOOD = 0x00,
+	CQE_SEND_STATUS_DESC_FAULT = 0x01,
+	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
+	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
+	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
+	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
+	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
+	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
+	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
+	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
+	CQE_SEND_STATUS_DATA_FAULT = 0x86,
+	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
+	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
+	CQE_SEND_STATUS_MEM_FAULT = 0x89,
+	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
+	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
+};
+
+enum cqe_rx_tcp_end_reason {
+	CQE_RX_TCP_END_FIN_FLAG_DET = 0,
+	CQE_RX_TCP_END_INVALID_FLAG = 1,
+	CQE_RX_TCP_END_TIMEOUT = 2,
+	CQE_RX_TCP_END_OUT_OF_SEQ = 3,
+	CQE_RX_TCP_END_PKT_ERR = 4,
+	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
+};
+
+/* Packet protocol level error enumeration */
+enum cqe_rx_err_level {
+	CQE_RX_ERRLVL_RE = 0x0,
+	CQE_RX_ERRLVL_L2 = 0x1,
+	CQE_RX_ERRLVL_L3 = 0x2,
+	CQE_RX_ERRLVL_L4 = 0x3,
+};
+
+/* Packet protocol level error type enumeration */
+enum cqe_rx_err_opcode {
+	CQE_RX_ERR_RE_NONE = 0x0,
+	CQE_RX_ERR_RE_PARTIAL = 0x1,
+	CQE_RX_ERR_RE_JABBER = 0x2,
+	CQE_RX_ERR_RE_FCS = 0x7,
+	CQE_RX_ERR_RE_TERMINATE = 0x9,
+	CQE_RX_ERR_RE_RX_CTL = 0xb,
+	CQE_RX_ERR_PREL2_ERR = 0x1f,
+	CQE_RX_ERR_L2_FRAGMENT = 0x20,
+	CQE_RX_ERR_L2_OVERRUN = 0x21,
+	CQE_RX_ERR_L2_PFCS = 0x22,
+	CQE_RX_ERR_L2_PUNY = 0x23,
+	CQE_RX_ERR_L2_MAL = 0x24,
+	CQE_RX_ERR_L2_OVERSIZE = 0x25,
+	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
+	CQE_RX_ERR_L2_LENMISM = 0x27,
+	CQE_RX_ERR_L2_PCLP = 0x28,
+	CQE_RX_ERR_IP_NOT = 0x41,
+	CQE_RX_ERR_IP_CHK = 0x42,
+	CQE_RX_ERR_IP_MAL = 0x43,
+	CQE_RX_ERR_IP_MALD = 0x44,
+	CQE_RX_ERR_IP_HOP = 0x45,
+	CQE_RX_ERR_L3_ICRC = 0x46,
+	CQE_RX_ERR_L3_PCLP = 0x47,
+	CQE_RX_ERR_L4_MAL = 0x61,
+	CQE_RX_ERR_L4_CHK = 0x62,
+	CQE_RX_ERR_UDP_LEN = 0x63,
+	CQE_RX_ERR_L4_PORT = 0x64,
+	CQE_RX_ERR_TCP_FLAG = 0x65,
+	CQE_RX_ERR_TCP_OFFSET = 0x66,
+	CQE_RX_ERR_L4_PCLP = 0x67,
+	CQE_RX_ERR_RBDR_TRUNC = 0x70,
+};
+
+enum send_l4_csum_type {
+	SEND_L4_CSUM_DISABLE = 0x00,
+	SEND_L4_CSUM_UDP = 0x01,
+	SEND_L4_CSUM_TCP = 0x02,
+};
+
+enum send_crc_alg {
+	SEND_CRCALG_CRC32 = 0x00,
+	SEND_CRCALG_CRC32C = 0x01,
+	SEND_CRCALG_ICRC = 0x02,
+};
+
+enum send_load_type {
+	SEND_LD_TYPE_LDD = 0x00,
+	SEND_LD_TYPE_LDT = 0x01,
+	SEND_LD_TYPE_LDWB = 0x02,
+};
+
+enum send_mem_alg_type {
+	SEND_MEMALG_SET = 0x00,
+	SEND_MEMALG_ADD = 0x08,
+	SEND_MEMALG_SUB = 0x09,
+	SEND_MEMALG_ADDLEN = 0x0A,
+	SEND_MEMALG_SUBLEN = 0x0B,
+};
+
+enum send_mem_dsz_type {
+	SEND_MEMDSZ_B64 = 0x00,
+	SEND_MEMDSZ_B32 = 0x01,
+	SEND_MEMDSZ_B8 = 0x03,
+};
+
+enum sq_subdesc_type {
+	SQ_DESC_TYPE_INVALID = 0x00,
+	SQ_DESC_TYPE_HEADER = 0x01,
+	SQ_DESC_TYPE_CRC = 0x02,
+	SQ_DESC_TYPE_IMMEDIATE = 0x03,
+	SQ_DESC_TYPE_GATHER = 0x04,
+	SQ_DESC_TYPE_MEMORY = 0x05,
+};
+
+enum l3_type_t {
+	L3_NONE		= 0x00,
+	L3_IPV4		= 0x04,
+	L3_IPV4_OPT	= 0x05,
+	L3_IPV6		= 0x06,
+	L3_IPV6_OPT	= 0x07,
+	L3_ET_STOP	= 0x0D,
+	L3_OTHER	= 0x0E
+};
+
+enum l4_type_t {
+	L4_NONE		= 0x00,
+	L4_IPSEC_ESP	= 0x01,
+	L4_IPFRAG	= 0x02,
+	L4_IPCOMP	= 0x03,
+	L4_TCP		= 0x04,
+	L4_UDP_PASS1	= 0x05,
+	L4_GRE		= 0x07,
+	L4_UDP_PASS2	= 0x08,
+	L4_UDP_GENEVE	= 0x09,
+	L4_UDP_VXLAN	= 0x0A,
+	L4_NVGRE	= 0x0C,
+	L4_OTHER	= 0x0E
+};
+
+enum vlan_strip {
+	NO_STRIP = 0x0,
+	STRIP_FIRST_VLAN = 0x1,
+	STRIP_SECOND_VLAN = 0x2,
+	STRIP_RESERV = 0x3
+};
+
+enum rbdr_state {
+	RBDR_FIFO_STATE_INACTIVE = 0,
+	RBDR_FIFO_STATE_ACTIVE   = 1,
+	RBDR_FIFO_STATE_RESET    = 2,
+	RBDR_FIFO_STATE_FAIL     = 3
+};
+
+enum rq_cache_allocation {
+	RQ_CACHE_ALLOC_OFF      = 0,
+	RQ_CACHE_ALLOC_ALL      = 1,
+	RQ_CACHE_ALLOC_FIRST    = 2,
+	RQ_CACHE_ALLOC_TWO      = 3,
+};
+
+enum cq_rx_errlvl_e {
+	CQ_ERRLVL_MAC,
+	CQ_ERRLVL_L2,
+	CQ_ERRLVL_L3,
+	CQ_ERRLVL_L4,
+};
+
+enum cq_rx_errop_e {
+	CQ_RX_ERROP_RE_NONE = 0x0,
+	CQ_RX_ERROP_RE_PARTIAL = 0x1,
+	CQ_RX_ERROP_RE_JABBER = 0x2,
+	CQ_RX_ERROP_RE_FCS = 0x7,
+	CQ_RX_ERROP_RE_TERMINATE = 0x9,
+	CQ_RX_ERROP_RE_RX_CTL = 0xb,
+	CQ_RX_ERROP_PREL2_ERR = 0x1f,
+	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
+	CQ_RX_ERROP_L2_OVERRUN = 0x21,
+	CQ_RX_ERROP_L2_PFCS = 0x22,
+	CQ_RX_ERROP_L2_PUNY = 0x23,
+	CQ_RX_ERROP_L2_MAL = 0x24,
+	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
+	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
+	CQ_RX_ERROP_L2_LENMISM = 0x27,
+	CQ_RX_ERROP_L2_PCLP = 0x28,
+	CQ_RX_ERROP_IP_NOT = 0x41,
+	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
+	CQ_RX_ERROP_IP_MAL = 0x43,
+	CQ_RX_ERROP_IP_MALD = 0x44,
+	CQ_RX_ERROP_IP_HOP = 0x45,
+	CQ_RX_ERROP_L3_ICRC = 0x46,
+	CQ_RX_ERROP_L3_PCLP = 0x47,
+	CQ_RX_ERROP_L4_MAL = 0x61,
+	CQ_RX_ERROP_L4_CHK = 0x62,
+	CQ_RX_ERROP_UDP_LEN = 0x63,
+	CQ_RX_ERROP_L4_PORT = 0x64,
+	CQ_RX_ERROP_TCP_FLAG = 0x65,
+	CQ_RX_ERROP_TCP_OFFSET = 0x66,
+	CQ_RX_ERROP_L4_PCLP = 0x67,
+	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
+};
+
+enum cq_tx_errop_e {
+	CQ_TX_ERROP_GOOD = 0x0,
+	CQ_TX_ERROP_DESC_FAULT = 0x10,
+	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
+	CQ_TX_ERROP_SUBDC_ERR = 0x12,
+	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
+	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
+	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
+	CQ_TX_ERROP_LOCK_VIOL = 0x83,
+	CQ_TX_ERROP_DATA_FAULT = 0x84,
+	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
+	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
+	CQ_TX_ERROP_MEM_FAULT = 0x87,
+	CQ_TX_ERROP_CK_OVERLAP = 0x88,
+	CQ_TX_ERROP_CK_OFLOW = 0x89,
+	CQ_TX_ERROP_ENUM_LAST = 0x8a,
+};
+
+enum rq_sq_stats_reg_offset {
+	RQ_SQ_STATS_OCTS = 0x0,
+	RQ_SQ_STATS_PKTS = 0x1,
+};
+
+enum nic_stat_vnic_rx_e {
+	RX_OCTS = 0,
+	RX_UCAST,
+	RX_BCAST,
+	RX_MCAST,
+	RX_RED,
+	RX_RED_OCTS,
+	RX_ORUN,
+	RX_ORUN_OCTS,
+	RX_FCS,
+	RX_L2ERR,
+	RX_DRP_BCAST,
+	RX_DRP_MCAST,
+	RX_DRP_L3BCAST,
+	RX_DRP_L3MCAST,
+};
+
+enum nic_stat_vnic_tx_e {
+	TX_OCTS = 0,
+	TX_UCAST,
+	TX_BCAST,
+	TX_MCAST,
+	TX_DROP,
+};
+
+#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
+
+typedef uint64_t nicvf_phys_addr_t;
+
+#ifndef __BYTE_ORDER__
+#error __BYTE_ORDER__ not defined
+#endif
+
+/* vNIC HW Structures */
+
+#define NICVF_CQE_RBPTR_WORD         6
+#define NICVF_CQE_RX2_RBPTR_WORD     7
+
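+/* Each CQE word is declared twice, once per bit order, so the same
+ * bitfield names can be used on big- and little-endian hosts.
+ */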
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t cqe_type:4;
+		uint64_t stdn_fault:1;
+		uint64_t rsvd0:1;
+		uint64_t rq_qs:7;
+		uint64_t rq_idx:3;
+		uint64_t rsvd1:12;
+		uint64_t rss_alg:4;
+		uint64_t rsvd2:4;
+		uint64_t rb_cnt:4;
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t l4_type:4;
+		uint64_t l3_type:4;
+		uint64_t l2_present:1;
+		uint64_t err_level:3;
+		uint64_t err_opcode:8;
+#else
+		uint64_t err_opcode:8;
+		uint64_t err_level:3;
+		uint64_t l2_present:1;
+		uint64_t l3_type:4;
+		uint64_t l4_type:4;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+		uint64_t rb_cnt:4;
+		uint64_t rsvd2:4;
+		uint64_t rss_alg:4;
+		uint64_t rsvd1:12;
+		uint64_t rq_idx:3;
+		uint64_t rq_qs:7;
+		uint64_t rsvd0:1;
+		uint64_t stdn_fault:1;
+		uint64_t cqe_type:4;
+#endif
+	};
+} cqe_rx_word0_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t pkt_len:16;
+		uint64_t l2_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l4_ptr:8;
+		uint64_t cq_pkt_len:8;
+		uint64_t align_pad:3;
+		uint64_t rsvd3:1;
+		uint64_t chan:12;
+#else
+		uint64_t chan:12;
+		uint64_t rsvd3:1;
+		uint64_t align_pad:3;
+		uint64_t cq_pkt_len:8;
+		uint64_t l4_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l2_ptr:8;
+		uint64_t pkt_len:16;
+#endif
+	};
+} cqe_rx_word1_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t rss_tag:32;
+		uint64_t vlan_tci:16;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+#else
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan_tci:16;
+		uint64_t rss_tag:32;
+#endif
+	};
+} cqe_rx_word2_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb3_sz;
+		uint16_t rb2_sz;
+		uint16_t rb1_sz;
+		uint16_t rb0_sz;
+#else
+		uint16_t rb0_sz;
+		uint16_t rb1_sz;
+		uint16_t rb2_sz;
+		uint16_t rb3_sz;
+#endif
+	};
+} cqe_rx_word3_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb7_sz;
+		uint16_t rb6_sz;
+		uint16_t rb5_sz;
+		uint16_t rb4_sz;
+#else
+		uint16_t rb4_sz;
+		uint16_t rb5_sz;
+		uint16_t rb6_sz;
+		uint16_t rb7_sz;
+#endif
+	};
+} cqe_rx_word4_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb11_sz;
+		uint16_t rb10_sz;
+		uint16_t rb9_sz;
+		uint16_t rb8_sz;
+#else
+		uint16_t rb8_sz;
+		uint16_t rb9_sz;
+		uint16_t rb10_sz;
+		uint16_t rb11_sz;
+#endif
+	};
+} cqe_rx_word5_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t rsvd2:3;
+		uint64_t inner_l2:1;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l3type:4;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t rsvd1:8;
+		uint64_t rsvd0:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t inner_l4ptr:8;
+#else
+		uint64_t inner_l4ptr:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t rsvd0:8;
+		uint64_t rsvd1:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t inner_l3type:4;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l2:1;
+		uint64_t rsvd2:3;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+#endif
+	};
+} cqe_rx2_word6_t;
+
+struct cqe_rx_t {
+	cqe_rx_word0_t word0;
+	cqe_rx_word1_t word1;
+	cqe_rx_word2_t word2;
+	cqe_rx_word3_t word3;
+	cqe_rx_word4_t word4;
+	cqe_rx_word5_t word5;
+	cqe_rx2_word6_t word6; /* valid only if NIC_PF_RX_CFG[CQE_RX2_ENA] is set */
+};
+
+struct cqe_rx_tcp_err_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:60;
+
+	uint64_t   rsvd1:4; /* W1 */
+	uint64_t   partial_first:1;
+	uint64_t   rsvd2:27;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd3:24;
+#else
+	uint64_t   rsvd0:60;
+	uint64_t   cqe_type:4;
+
+	uint64_t   rsvd3:24;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd2:27;
+	uint64_t   partial_first:1;
+	uint64_t   rsvd1:4;
+#endif
+};
+
+struct cqe_rx_tcp_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:52;
+	uint64_t   cq_tcp_status:8;
+
+	uint64_t   rsvd1:32; /* W1 */
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_err_bytes:16;
+#else
+	uint64_t   cq_tcp_status:8;
+	uint64_t   rsvd0:52;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   tcp_err_bytes:16;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd1:32; /* W1 */
+#endif
+};
+
+struct cqe_send_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd1:4;
+	uint64_t   rsvd2:10;
+	uint64_t   sq_qs:7;
+	uint64_t   sq_idx:3;
+	uint64_t   rsvd3:8;
+	uint64_t   send_status:8;
+
+	uint64_t   ptp_timestamp:64; /* W1 */
+#else
+	uint64_t   send_status:8;
+	uint64_t   rsvd3:8;
+	uint64_t   sq_idx:3;
+	uint64_t   sq_qs:7;
+	uint64_t   rsvd2:10;
+	uint64_t   rsvd1:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd0:4;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   ptp_timestamp:64;
+#endif
+};
+
+struct cq_entry_type_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t cqe_type:4;
+	uint64_t __pad:60;
+#else
+	uint64_t __pad:60;
+	uint64_t cqe_type:4;
+#endif
+};
+
+union cq_entry_t {
+	uint64_t u[64];
+	struct cq_entry_type_t type;
+	struct cqe_rx_t rx_hdr;
+	struct cqe_rx_tcp_t rx_tcp_hdr;
+	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
+	struct cqe_send_t cqe_send;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
+
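+/* One receive buffer ring entry. Bits [48:7] of the buffer physical
+ * address live in buf_addr; the low 7 bits are zero, i.e. buffers are
+ * expected to be 128-byte aligned. full_addr aliases the whole word.
+ */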
+struct rbdr_entry_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	union {
+		struct {
+			uint64_t   rsvd0:15;
+			uint64_t   buf_addr:42;
+			uint64_t   cache_align:7;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#else
+	union {
+		struct {
+			uint64_t   cache_align:7;
+			uint64_t   buf_addr:42;
+			uint64_t   rsvd0:15;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#endif
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
+
+/* TCP reassembly context */
+struct rbe_tcp_cnxt_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   tcp_pkt_cnt:12;
+	uint64_t   rsvd1:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   ptr_bytes:16;
+	uint64_t   rsvd2:24;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd0:54;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   tcp_status:4;
+#else
+	uint64_t   tcp_status:4;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   rsvd0:54;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd2:24;
+	uint64_t   ptr_bytes:16;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   rsvd1:4;
+	uint64_t   tcp_pkt_cnt:12;
+#endif
+};
+
+/* Always Big endian */
+struct rx_hdr_t {
+	uint64_t   opaque:32;
+	uint64_t   rss_flow:8;
+	uint64_t   skip_length:6;
+	uint64_t   disable_rss:1;
+	uint64_t   disable_tcp_reassembly:1;
+	uint64_t   nodrop:1;
+	uint64_t   dest_alg:2;
+	uint64_t   rsvd0:2;
+	uint64_t   dest_rq:11;
+};
+
+struct sq_crc_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    rsvd1:32;
+	uint64_t    crc_ival:32;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_alg:2;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_len:16;
+#else
+	uint64_t    crc_len:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_alg:2;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_ival:32;
+	uint64_t    rsvd1:32;
+#endif
+};
+
+struct sq_gather_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    ld_type:2;
+	uint64_t    rsvd0:42;
+	uint64_t    size:16;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    size:16;
+	uint64_t    rsvd0:42;
+	uint64_t    ld_type:2;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+/* SQ immediate subdescriptor */
+struct sq_imm_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    rsvd0:46;
+	uint64_t    len:14;
+
+	uint64_t    data:64; /* W1 */
+#else
+	uint64_t    len:14;
+	uint64_t    rsvd0:46;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    data:64; /* W1 */
+#endif
+};
+
+struct sq_mem_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    mem_alg:4;
+	uint64_t    mem_dsz:2;
+	uint64_t    wmem:1;
+	uint64_t    rsvd0:21;
+	uint64_t    offset:32;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    offset:32;
+	uint64_t    rsvd0:21;
+	uint64_t    wmem:1;
+	uint64_t    mem_dsz:2;
+	uint64_t    mem_alg:4;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+struct sq_hdr_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4;
+	uint64_t    tso:1;
+	uint64_t    post_cqe:1; /* Post a CQE even when there is no error */
+	uint64_t    dont_send:1;
+	uint64_t    tstmp:1;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    csum_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    rsvd0:2;
+	uint64_t    l4_offset:8;
+	uint64_t    l3_offset:8;
+	uint64_t    rsvd1:4;
+	uint64_t    tot_len:20; /* W0 */
+
+	uint64_t    rsvd2:24;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    tso_start:8;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_max_paysize:14; /* W1 */
+#else
+	uint64_t    tot_len:20;
+	uint64_t    rsvd1:4;
+	uint64_t    l3_offset:8;
+	uint64_t    l4_offset:8;
+	uint64_t    rsvd0:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_l4:2;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    tstmp:1;
+	uint64_t    dont_send:1;
+	uint64_t    post_cqe:1; /* Post a CQE even when there is no error */
+	uint64_t    tso:1;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    tso_max_paysize:14;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_start:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    rsvd2:24; /* W1 */
+#endif
+};
+
+/* Each sq entry is 128 bits wide */
+union sq_entry_t {
+	uint64_t buff[2];
+	struct sq_hdr_subdesc hdr;
+	struct sq_imm_subdesc imm;
+	struct sq_gather_subdesc gather;
+	struct sq_crc_subdesc crc;
+	struct sq_mem_subdesc mem;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
+
+/* Queue config register formats */
+struct rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_2_63:62;
+	uint64_t ena:1;
+	uint64_t reserved_0:1;
+#else
+	uint64_t reserved_0:1;
+	uint64_t ena:1;
+	uint64_t reserved_2_63:62;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct cq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_43_63:21;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t caching:1;
+	uint64_t reserved_35_39:5;
+	uint64_t qsize:3;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_0_15:16;
+#else
+	uint64_t reserved_0_15:16;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:3;
+	uint64_t reserved_35_39:5;
+	uint64_t caching:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_43_63:21;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct sq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_20_63:44;
+	uint64_t ena:1;
+	uint64_t reserved_18_18:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_11_15:5;
+	uint64_t qsize:3;
+	uint64_t reserved_3_7:5;
+	uint64_t tstmp_bgx_intf:3;
+#else
+	uint64_t tstmp_bgx_intf:3;
+	uint64_t reserved_3_7:5;
+	uint64_t qsize:3;
+	uint64_t reserved_11_15:5;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t reserved_18_18:1;
+	uint64_t ena:1;
+	uint64_t reserved_20_63:44;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct rbdr_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_45_63:19;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_36_41:6;
+	uint64_t qsize:4;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_12_15:4;
+	uint64_t lines:12;
+#else
+	uint64_t lines:12;
+	uint64_t reserved_12_15:4;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:4;
+	uint64_t reserved_36_41:6;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_45_63:19;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_qs_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_32_63:32;
+	uint64_t ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t sq_ins_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t lock_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t be:1;
+	uint64_t reserved_7_15:9;
+	uint64_t vnic:7;
+#else
+	uint64_t vnic:7;
+	uint64_t reserved_7_15:9;
+	uint64_t be:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t lock_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t sq_ins_ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t ena:1;
+	uint64_t reserved_32_63:32;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved1:1;
+	uint64_t reserved0:34;
+	uint64_t strip_pre_l2:1;
+	uint64_t caching:2;
+	uint64_t cq_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_strt_idx:1;
+#else
+	uint64_t rbdr_strt_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t cq_qs:7;
+	uint64_t caching:2;
+	uint64_t strip_pre_l2:1;
+	uint64_t reserved0:34;
+	uint64_t reserved1:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_drop_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t rbdr_red:1;
+	uint64_t cq_red:1;
+	uint64_t reserved3:14;
+	uint64_t rbdr_pass:8;
+	uint64_t rbdr_drop:8;
+	uint64_t reserved2:8;
+	uint64_t cq_pass:8;
+	uint64_t cq_drop:8;
+	uint64_t reserved1:8;
+#else
+	uint64_t reserved1:8;
+	uint64_t cq_drop:8;
+	uint64_t cq_pass:8;
+	uint64_t reserved2:8;
+	uint64_t rbdr_drop:8;
+	uint64_t rbdr_pass:8;
+	uint64_t reserved3:14;
+	uint64_t cq_red:1;
+	uint64_t rbdr_red:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+#endif /* _THUNDERX_NICVF_HW_DEFS_H */
diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
new file mode 100644
index 0000000..715c7c3
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.c
@@ -0,0 +1,416 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "nicvf_plat.h"
+
+static const char *mbox_message[NIC_MBOX_MSG_MAX] =  {
+	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
+	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
+	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
+	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
+	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
+	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
+	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
+	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
+	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
+	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
+	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
+	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
+	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
+	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
+	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
+	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
+	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
+	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
+	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
+	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
+	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
+	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
+};
+
+static inline const char *
+nicvf_mbox_msg_str(int msg)
+{
+	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
+	/* Map undefined messages to NIC_MBOX_MSG_INVALID */
+	if (mbox_message[msg] == NULL)
+		msg = 0;
+	return mbox_message[msg];
+}
+
+static inline void
+nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	uint64_t *mbx_data;
+	uint64_t mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (uint64_t *)mbx;
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		nicvf_reg_write(nic, mbx_addr, *mbx_data);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+	nicvf_mbox_log("msg sent %s (VF%d)",
+			nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
+}
+
+static inline void
+nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+	/* Messages without an ack are racy! */
+	nicvf_delay_us(1000);
+}
+
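+/* Synchronous mailbox send: each of up to 'retry' attempts polls for the
+ * ACK/NACK that nicvf_handle_mbx_intr() records from the interrupt path.
+ */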
+static inline int
+nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	long timeout;
+	long sleep = 10;
+	int i, retry = 5;
+
+	for (i = 0; i < retry; i++) {
+		nic->pf_acked = false;
+		nic->pf_nacked = false;
+		nicvf_smp_wmb();
+
+		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+		/* Give some time to get PF response */
+		nicvf_delay_us(1000);
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+		while (timeout > 0) {
+			/* Periodic poll happens from nicvf_interrupt() */
+			nicvf_smp_rmb();
+
+			if (nic->pf_nacked)
+				return -EINVAL;
+			if (nic->pf_acked)
+				return 0;
+
+			nicvf_delay_us(1000);
+			timeout -= sleep;
+		}
+		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
+				mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
+				nic->vf_id, i, retry);
+	}
+	return -EBUSY;
+}
+
+
+int
+nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	struct nic_mbx mbx;
+	uint64_t *mbx_data = (uint64_t *)&mbx;
+	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	size_t i;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+
+	/* Overwrite the message so we won't receive it again */
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
+
+	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
+			nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
+
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
+				(struct ether_addr *)nic->mac_addr);
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_NACK:
+		nic->pf_nacked = true;
+		break;
+	case NIC_MBOX_MSG_RSS_SIZE:
+		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
+		nic->pf_acked = true;
+		break;
+	default:
+		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
+				mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
+		break;
+	}
+	nicvf_smp_wmb();
+
+	return mbx.msg.msg;
+}
+
+/*
+ * Checks whether the VF can communicate with the PF and also gets
+ * the VNIC number this VF is associated with.
+ */
+int
+nicvf_mbox_check_pf_ready(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
+{
+	struct nic_mbx mbx = { .msg = {0} };
+	int i;
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
+	mbx.mac.vf_id = nic->vf_id;
+	for (i = 0; i < NICVF_MAC_ADDR_SIZE; i++)
+		mbx.mac.mac_addr[i] = mac[i];
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
+	mbx.cpi_cfg.vf_id = nic->vf_id;
+	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
+	mbx.cpi_cfg.rq_cnt = qcnt;
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_get_rss_size(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
+	mbx.rss_size.vf_id = nic->vf_id;
+
+	/* Result will be stored in nic->rss_info.rss_size */
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_rss(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+	size_t tot_len = rss->rss_size;
+	size_t cur_len;
+	size_t cur_idx = 0;
+	size_t i;
+
+	mbx.rss_cfg.vf_id = nic->vf_id;
+	mbx.rss_cfg.hash_bits = rss->hash_bits;
+	mbx.rss_cfg.tbl_len = 0;
+	mbx.rss_cfg.tbl_offset = 0;
+
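+	/* The indirection table does not fit into a single 128-bit mailbox
+	 * message, so it is sent in chunks of RSS_IND_TBL_LEN_PER_MBX_MSG
+	 * entries, with follow-up chunks flagged as RSS_CFG_CONT.
+	 */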
+	while (cur_idx < tot_len) {
+		cur_len = nicvf_min(tot_len - cur_idx,
+				(size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
+		mbx.msg.msg = (cur_idx > 0) ?
+			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
+		mbx.rss_cfg.tbl_offset = cur_idx;
+		mbx.rss_cfg.tbl_len = cur_len;
+		for (i = 0; i < cur_len; i++)
+			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
+
+		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
+			return NICVF_ERR_RSS_TBL_UPDATE;
+	}
+
+	return 0;
+}
+
+int
+nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+		     struct pf_rq_cfg *pf_rq_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = pf_rq_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
+	mbx.sq.qs_num = nic->vf_id;
+	mbx.sq.sq_num = qidx;
+	mbx.sq.sqs_mode = nic->sqs_mode;
+	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	qs_cfg->be = 1;
+#endif
+	/* Send a mailbox msg to PF to config Qset */
+	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
+	mbx.qs.num = nic->vf_id;
+	mbx.qs.cfg = qs_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct pf_rq_drop_cfg *drop_cfg;
+
+	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
+	drop_cfg->value = 0;
+	if (enable) {
+		drop_cfg->cq_red = 1;
+		drop_cfg->cq_drop = 2;
+	}
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
+	mbx.frs.max_frs = mtu;
+	mbx.frs.vf_id = nic->vf_id;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_sync(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	/* Make sure all packets in the pipeline are written back into mem */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
+	mbx.rq.cfg = 0;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = 0;
+	if (enable)
+		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
+	mbx.lbk.vf_id = nic->vf_id;
+	mbx.lbk.enable = enable;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
+			       uint16_t sq_stat_mask)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
+	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
+	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
+	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
+	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_shutdown(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+	nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_cfg_done(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
new file mode 100644
index 0000000..7c0c6a9
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.h
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_MBOX__
+#define __THUNDERX_NICVF_MBOX__
+
+#include <stdint.h>
+
+#include "nicvf_plat.h"
+
+/* PF <--> VF mailbox communication.
+ * Two 64-bit registers are shared between the PF and each VF;
+ * writing into the second register marks the end of a message.
+ */
+
+/* PF <--> VF mailbox communication */
+#define	NIC_PF_VF_MAILBOX_SIZE		2
+#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
+
+/* Mailbox message types */
+#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
+#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
+#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
+#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
+#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
+#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
+#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure RQ drop thresholds */
+#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
+#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
+#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
+#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
+#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
+#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
+#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
+#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
+#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
+#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
+#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
+
+/* Get vNIC VF configuration */
+struct nic_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    node_id;
+	bool	   tns_mode:1;
+	bool	   sqs_mode:1;
+	bool	   loopback_supported:1;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	uint8_t    msg;
+	uint8_t    num;
+	uint8_t    sqs_count;
+	uint64_t   cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    rq_num;
+	uint64_t   cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    sq_num;
+	bool       sqs_mode;
+	uint64_t   cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    rq_cnt;
+	uint8_t    cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    hash_bits;
+	uint8_t    tbl_len;
+	uint8_t    tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	uint8_t    msg;
+	uint8_t    link_up;
+	uint8_t    duplex;
+	uint32_t   speed;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	bool	   enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	uint8_t    msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	uint16_t   rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	uint8_t    tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	uint16_t   rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	uint16_t   sq_stat_mask;
+};
+
+struct nic_mbx {
+/* 128 bit shared memory between PF and each VF */
+union {
+	struct { uint8_t msg; }	msg;
+	struct nic_cfg_msg	nic_cfg;
+	struct qs_cfg_msg	qs;
+	struct rq_cfg_msg	rq;
+	struct sq_cfg_msg	sq;
+	struct set_mac_msg	mac;
+	struct set_frs_msg	frs;
+	struct cpi_cfg_msg	cpi_cfg;
+	struct rss_sz_msg	rss_size;
+	struct rss_cfg_msg	rss_cfg;
+	struct bgx_link_status  link_status;
+	struct set_loopback	lbk;
+	struct reset_stat_cfg	reset_stat;
+};
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
+
+int nicvf_handle_mbx_intr(struct nicvf *nic);
+int nicvf_mbox_check_pf_ready(struct nicvf *nic);
+int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
+int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct pf_rq_cfg *pf_rq_cfg);
+int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
+int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
+int nicvf_mbox_get_rss_size(struct nicvf *nic);
+int nicvf_mbox_config_rss(struct nicvf *nic);
+int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
+int nicvf_mbox_rq_sync(struct nicvf *nic);
+int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
+int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
+void nicvf_mbox_shutdown(struct nicvf *nic);
+void nicvf_mbox_cfg_done(struct nicvf *nic);
+
+#endif /* __THUNDERX_NICVF_MBOX__ */
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
new file mode 100644
index 0000000..83c1844
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -0,0 +1,132 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_H
+#define _THUNDERX_NICVF_H
+
+/* Platform/OS/arch specific abstractions */
+
+/* log */
+#include <rte_log.h>
+#include "../nicvf_logs.h"
+
+#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
+
+#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
+
+/* delay */
+#include <rte_cycles.h>
+#define nicvf_delay_us(x) rte_delay_us(x)
+
+/* barrier */
+#include <rte_atomic.h>
+#define nicvf_smp_wmb() rte_smp_wmb()
+#define nicvf_smp_rmb() rte_smp_rmb()
+
+/* utils */
+#include <rte_common.h>
+#define nicvf_min(x, y) RTE_MIN(x, y)
+
+/* byte order */
+#include <rte_byteorder.h>
+#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
+#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
+
+/* Constants */
+#include <rte_ether.h>
+#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
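+/* Prefetch for store into L1 with the "keep" (temporal) retention hint;
+ * maps to the AArch64 prfm pstl1keep instruction. */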
+#define nicvf_prefetch_store_keep(_ptr) ({\
+	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	asm volatile(
+		    "str %x[val], [%x[addr]]"
+		    :
+		    : [val] "r" (val), [addr] "r" (addr));
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	uint64_t val;
+
+	asm volatile(
+		    "ldr %x[val], [%x[addr]]"
+		    : [val] "=r" (val)
+		    : [addr] "r" (addr));
+	return val;
+}
+
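+/* Read two consecutive 64-bit words from addr with a single ldp instruction */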
+#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
+			asm volatile(			\
+			"ldp %x[x1], %x[x0], [%x[p1]]"	\
+			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
+			: [p1]"r"(addr)			\
+			); })
+
+#else /* non-optimized fallbacks for building on non-arm64 architectures */
+
+#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	*(volatile uint64_t *)addr = val;
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	return	*(volatile uint64_t *)addr;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
+do {							\
+	reg1 = nicvf_addr_read((uintptr_t)addr);	\
+	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
+} while (0)
+
+#endif
+
+#include "nicvf_hw.h"
+#include "nicvf_mbox.h"
+
+#endif /* _THUNDERX_NICVF_H */
-- 
2.5.5


* [PATCH v2 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-31 16:53     ` Stephen Hemminger
  2016-05-29 16:46   ` [PATCH v2 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
                     ` (7 subsequent siblings)
  9 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Introduce driver initialization and enable the build infrastructure for
the nicvf pmd driver.

By default, it is enabled only for the defconfig_arm64-thunderx-*
config, as it is an inbuilt NIC device.
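
As a usage sketch (generic rte_eal/rte_ethdev calls of this DPDK
generation, nothing nicvf-specific; the EAL arguments are whatever the
target platform needs), an application sees ports probed by the
registered driver like this:

	#include <stdio.h>
	#include <rte_eal.h>
	#include <rte_ethdev.h>

	int
	main(int argc, char **argv)
	{
		uint8_t port, nb_ports;
		struct rte_eth_dev_info info;

		if (rte_eal_init(argc, argv) < 0) /* probes registered PMDs */
			return -1;

		nb_ports = rte_eth_dev_count(); /* ports claimed by PMDs */
		for (port = 0; port < nb_ports; port++) {
			rte_eth_dev_info_get(port, &info);
			printf("port %u driver %s\n", port, info.driver_name);
		}
		return 0;
	}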

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 config/common_base                                 |  10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
 drivers/net/Makefile                               |   1 +
 drivers/net/thunderx/Makefile                      |  63 +++++
 drivers/net/thunderx/nicvf_ethdev.c                | 273 +++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
 drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
 drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
 mk/rte.app.mk                                      |   2 +
 10 files changed, 618 insertions(+)
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

diff --git a/config/common_base b/config/common_base
index 47c26f6..ad5686b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -259,6 +259,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
 CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
 
 #
+# Compile burst-oriented Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index fe5e987..7940bbd 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,3 +34,13 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+
+#
+# Compile Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 6ba7658..0e29a33 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -50,6 +50,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
+DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
new file mode 100644
index 0000000..eb9f100
--- /dev/null
+++ b/drivers/net/thunderx/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_thunderx_nicvf.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
+
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
new file mode 100644
index 0000000..7b9b6f6
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -0,0 +1,273 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <sys/queue.h>
+#include <sys/timerfd.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+
+#include "nicvf_logs.h"
+
+static struct itimerspec alarm_time = {
+	.it_interval = {
+		.tv_sec = 0,
+		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
+	},
+	.it_value = {
+		.tv_sec = 0,
+		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
+	},
+};
+
+static void
+nicvf_interrupt(struct rte_intr_handle *hdl __rte_unused, void *arg)
+{
+	struct nicvf *nic = (struct nicvf *)arg;
+
+	nicvf_reg_poll_interrupts(nic);
+}
+
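+/*
+ * Mailbox/interrupt poll mechanism: the timerfd armed below expires
+ * every NICVF_INTR_POLL_INTERVAL_MS; the EAL interrupt thread then
+ * invokes nicvf_interrupt() on each expiry.
+ */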
+static int
+nicvf_periodic_alarm_start(struct nicvf *nic)
+{
+	int ret = -EBUSY;
+
+	nic->intr_handle.type = RTE_INTR_HANDLE_ALARM;
+	nic->intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
+	if (nic->intr_handle.fd == -1)
+		goto error;
+	ret = rte_intr_callback_register(&nic->intr_handle,
+				nicvf_interrupt, nic);
+	ret |= timerfd_settime(nic->intr_handle.fd, 0, &alarm_time, NULL);
+error:
+	return ret;
+}
+
+static int
+nicvf_periodic_alarm_stop(struct nicvf *nic)
+{
+	int ret;
+
+	ret = rte_intr_callback_unregister(&nic->intr_handle,
+				nicvf_interrupt, nic);
+	ret |= close(nic->intr_handle.fd);
+	return ret;
+}
+
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops nicvf_eth_dev_ops = {
+};
+
+static int
+nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int ret;
+	struct rte_pci_device *pci_dev;
+	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &nicvf_eth_dev_ops;
+
+	pci_dev = eth_dev->pci_dev;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	nic->device_id = pci_dev->id.device_id;
+	nic->vendor_id = pci_dev->id.vendor_id;
+	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	nic->eth_dev = eth_dev;
+
+	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!nic->reg_base) {
+		PMD_INIT_LOG(ERR, "Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	nicvf_disable_all_interrupts(nic);
+
+	ret = nicvf_periodic_alarm_start(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start periodic alarm");
+		goto fail;
+	}
+
+	ret = nicvf_mbox_check_pf_ready(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
+		goto alarm_fail;
+	} else {
+		PMD_INIT_LOG(INFO,
+			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
+			nic->node, nic->vf_id,
+			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
+			nic->sqs_mode ? "true" : "false",
+			nic->loopback_supported ? "true" : "false"
+			);
+	}
+
+	if (nic->sqs_mode) {
+		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, detaching...");
+		/* Detach the port by returning a positive error number */
+		ret = ENOTSUP;
+		goto alarm_fail;
+	}
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
+		ret = -ENOMEM;
+		goto alarm_fail;
+	}
+	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
+		eth_random_addr(&nic->mac_addr[0]);
+
+	ether_addr_copy((struct ether_addr *)nic->mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set mac addr");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_base_init(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_mbox_get_rss_size(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get rss table size");
+		goto malloc_fail;
+	}
+
+	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
+		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
+		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
+		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
+
+	return 0;
+
+malloc_fail:
+	rte_free(eth_dev->data->mac_addrs);
+alarm_fail:
+	nicvf_periodic_alarm_stop(nic);
+fail:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_nicvf_map[] = {
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
+	},
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct eth_driver rte_nicvf_pmd = {
+	.pci_drv = {
+		.name = "rte_nicvf_pmd",
+		.id_table = pci_id_nicvf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = nicvf_eth_dev_init,
+	.dev_private_size = sizeof(struct nicvf),
+};
+
+static int
+rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
+			THUNDERX_NICVF_PMD_VERSION);
+
+	rte_eth_driver_register(&rte_nicvf_pmd);
+	return 0;
+}
+
+static struct rte_driver rte_nicvf_driver = {
+	.name = "nicvf_driver",
+	.type = PMD_PDEV,
+	.init = rte_nicvf_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_nicvf_driver);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
new file mode 100644
index 0000000..d4d2071
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -0,0 +1,48 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_ETHDEV_H__
+#define __THUNDERX_NICVF_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+#define THUNDERX_NICVF_PMD_VERSION      "1.0"
+
+#define NICVF_INTR_POLL_INTERVAL_MS	50
+
+static inline struct nicvf *
+nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
new file mode 100644
index 0000000..0667d46
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_logs.h
@@ -0,0 +1,83 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_LOGS__
+#define __THUNDERX_NICVF_LOGS__
+
+#include <assert.h>
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_RX_ASSERT(x) assert(x)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_RX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_TX_ASSERT(x) assert(x)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_TX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
+#define PMD_MBOX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_MBOX_FUNC_TRACE() PMD_MBOX_LOG(DEBUG, ">>")
+#else
+#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
+#define PMD_MBOX_FUNC_TRACE() do { } while (0)
+#endif
+
+#endif /* __THUNDERX_NICVF_LOGS__ */
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
new file mode 100644
index 0000000..c52545d
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -0,0 +1,124 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_STRUCT_H
+#define _THUNDERX_NICVF_STRUCT_H
+
+#include <stdint.h>
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_ethdev.h>
+#include <rte_memory.h>
+
+struct nicvf_rbdr {
+	uint64_t rbdr_status;
+	uint64_t rbdr_door;
+	struct rbdr_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	uint32_t buffsz;
+	uint32_t tail;
+	uint32_t next_tail;
+	uint32_t head;
+	uint32_t qlen_mask;
+} __rte_cache_aligned;
+
+struct nicvf_txq {
+	union sq_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	struct rte_mbuf **txbuffs;
+	uint64_t sq_head;
+	uint64_t sq_door;
+	struct rte_mempool *pool;
+	struct nicvf *nic;
+	void (*pool_free)(struct nicvf_txq *sq);
+	uint32_t head;
+	uint32_t tail;
+	int32_t xmit_bufs;
+	uint32_t qlen_mask;
+	uint32_t txq_flags;
+	uint16_t queue_id;
+	uint16_t tx_free_thresh;
+} __rte_cache_aligned;
+
+struct nicvf_rxq {
+	uint64_t mbuf_phys_off;
+	uint64_t cq_status;
+	uint64_t cq_door;
+	nicvf_phys_addr_t phys;
+	union cq_entry_t *desc;
+	struct nicvf_rbdr *shared_rbdr;
+	struct nicvf *nic;
+	struct rte_mempool *pool;
+	uint32_t head;
+	uint32_t qlen_mask;
+	int32_t available_space;
+	int32_t recv_buffers;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t precharge_cnt;
+	uint8_t rx_drop_en;
+	uint8_t  port_id;
+	uint8_t  rbptr_offset;
+} __rte_cache_aligned;
+
+struct nicvf {
+	uint8_t vf_id;
+	uint8_t node;
+	uintptr_t reg_base;
+	bool tns_mode;
+	bool sqs_mode;
+	bool loopback_supported;
+	bool pf_acked:1;
+	bool pf_nacked:1;
+	uint64_t hwcap;
+	uint8_t link_up;
+	uint8_t	duplex;
+	uint32_t speed;
+	uint32_t msg_enable;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+	struct nicvf_rbdr *rbdr;
+	struct nicvf_rss_reta_info rss_info;
+	struct rte_eth_dev *eth_dev;
+	struct rte_intr_handle intr_handle;
+	uint8_t cpi_alg;
+	uint16_t mtu;
+	bool vlan_filter_en;
+	uint8_t mac_addr[ETHER_ADDR_LEN];
+} __rte_cache_aligned;
+
+#endif /* _THUNDERX_NICVF_STRUCT_H */
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
new file mode 100644
index 0000000..349c6e1
--- /dev/null
+++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
@@ -0,0 +1,4 @@
+DPDK_16.04 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index b84b56d..1d8d8cd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -102,6 +102,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD)        += -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lz
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lm
 # QAT / AESNI GCM PMDs are dependent on libcrypto (from openssl)
 # for calculating HMAC precomputes
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
@@ -150,6 +151,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
 
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
-- 
2.5.5


* [PATCH v2 03/20] thunderx/nicvf: add link status and link update support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
                     ` (6 subsequent siblings)
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Extend the nicvf_interrupt function to respond to the
NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from the PF and update
struct rte_eth_link accordingly.
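
As an illustrative sketch of the consuming side (written against the
rte_eth_dev_cb_fn prototype of this DPDK generation; the function names
are hypothetical), an application can pick up the LSC events this
driver raises as follows:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	link_event_cb(uint8_t port_id, enum rte_eth_event_type event,
		      void *cb_arg)
	{
		struct rte_eth_link link;

		(void)event;
		(void)cb_arg;
		/* Non-blocking read of the state the PMD just published */
		rte_eth_link_get_nowait(port_id, &link);
		printf("port %u link %s, %u Mbps\n", port_id,
		       link.link_status ? "up" : "down", link.link_speed);
	}

	/* Register once; dev_conf.intr_conf.lsc should be set at configure
	 * time so the PMD also updates dev_link from its poll handler. */
	static void
	setup_lsc(uint8_t port_id)
	{
		rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
					      link_event_cb, NULL);
	}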

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 55 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_ethdev.h |  4 +++
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 7b9b6f6..9f4ca65 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,35 @@
 
 #include "nicvf_logs.h"
 
+static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+
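+/* struct rte_eth_link is 8-byte aligned and fits in 64 bits, so a single
+ * cmpset publishes the whole link state atomically to readers. */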
+static inline int
+nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
+			       struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+		*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline void
+nicvf_set_eth_link_status(struct nicvf *nic, struct rte_eth_link *link)
+{
+	link->link_status = nic->link_up;
+	link->link_duplex = ETH_LINK_AUTONEG;
+	if (nic->duplex == NICVF_HALF_DUPLEX)
+		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+	else if (nic->duplex == NICVF_FULL_DUPLEX)
+		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = nic->speed;
+	link->link_autoneg = ETH_LINK_SPEED_AUTONEG;
+}
+
 static struct itimerspec alarm_time = {
 	.it_interval = {
 		.tv_sec = 0,
@@ -85,7 +114,13 @@ nicvf_interrupt(struct rte_intr_handle *hdl __rte_unused, void *arg)
 {
 	struct nicvf *nic = (struct nicvf *)arg;
 
-	nicvf_reg_poll_interrupts(nic);
+	if (nicvf_reg_poll_interrupts(nic) == NIC_MBOX_MSG_BGX_LINK_CHANGE) {
+		if (nic->eth_dev->data->dev_conf.intr_conf.lsc)
+			nicvf_set_eth_link_status(nic,
+					&nic->eth_dev->data->dev_link);
+		_rte_eth_dev_callback_process(nic->eth_dev,
+				RTE_ETH_EVENT_INTR_LSC);
+	}
 }
 
 static int
@@ -115,8 +150,26 @@ nicvf_periodic_alarm_stop(struct nicvf *nic)
 	return ret;
 }
 
+/*
+ * Return 0 if the link status changed, -1 if it did not.
+ */
+static int
+nicvf_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	struct rte_eth_link link;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&link, 0, sizeof(link));
+	nicvf_set_eth_link_status(nic, &link);
+	return nicvf_atomic_write_link_status(dev, &link);
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.link_update              = nicvf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index d4d2071..8189856 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -38,6 +38,10 @@
 #define THUNDERX_NICVF_PMD_VERSION      "1.0"
 
 #define NICVF_INTR_POLL_INTERVAL_MS	50
+#define NICVF_HALF_DUPLEX		0x00
+#define NICVF_FULL_DUPLEX		0x01
+#define NICVF_UNKNOWN_DUPLEX		0xff
+
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5


* [PATCH v2 04/20] thunderx/nicvf: add get_reg and get_reg_length support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (2 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
                     ` (5 subsequent siblings)
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 9f4ca65..96ab1b6 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -70,6 +70,9 @@
 #include "nicvf_logs.h"
 
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
+static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
+			      struct rte_dev_reg_info *regs);
 
 static inline int
 nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
@@ -167,9 +170,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 	return nicvf_atomic_write_link_status(dev, &link);
 }
 
+static int
+nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
+{
+	return nicvf_reg_get_count();
+}
+
+static int
+nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
+{
+	uint64_t *data = regs->data;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (data == NULL)
+		return -EINVAL;
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+		(regs->length == (uint32_t)nicvf_reg_get_count())) {
+		regs->version = nic->vendor_id << 16 | nic->device_id;
+		nicvf_reg_dump(nic, data);
+		return 0;
+	}
+	return -ENOTSUP;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.link_update              = nicvf_dev_link_update,
+	.get_reg_length           = nicvf_dev_get_reg_length,
+	.get_reg                  = nicvf_dev_get_regs,
 };
 
 static int
-- 
2.5.5


* [PATCH v2 05/20] thunderx/nicvf: add dev_configure support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (3 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
                     ` (4 subsequent siblings)
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 79 +++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 96ab1b6..887cf8e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,7 @@
 
 #include "nicvf_logs.h"
 
+static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
@@ -195,8 +196,86 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &conf->rxmode;
+	struct rte_eth_txmode *txmode = &conf->txmode;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!rte_eal_has_hugepages()) {
+		PMD_INIT_LOG(INFO, "Huge pages are not configured");
+		return -EINVAL;
+	}
+
+	if (txmode->mq_mode) {
+		PMD_INIT_LOG(INFO, "Tx mq_mode DCB or VMDq not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc) {
+		PMD_INIT_LOG(NOTICE, "Can't disable hw crc strip");
+		rxmode->hw_strip_crc = 1;
+	}
+
+	if (rxmode->hw_ip_checksum) {
+		PMD_INIT_LOG(NOTICE, "Rxcksum not supported");
+		rxmode->hw_ip_checksum = 0;
+	}
+
+	if (rxmode->split_hdr_size) {
+		PMD_INIT_LOG(INFO, "Rxmode does not support split header");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_filter) {
+		PMD_INIT_LOG(INFO, "VLAN filter not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_extend) {
+		PMD_INIT_LOG(INFO, "VLAN extended not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro) {
+		PMD_INIT_LOG(INFO, "LRO not supported");
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	if (conf->dcb_capability_en) {
+		PMD_INIT_LOG(INFO, "DCB enable not supported");
+		return -EINVAL;
+	}
+
+	if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+		PMD_INIT_LOG(INFO, "Flow director not supported");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
+		dev->data->port_id, nicvf_hw_cap(nic));
+
+	return 0;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
-- 
2.5.5


* [PATCH v2 06/20] thunderx/nicvf: add dev_infos_get support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (4 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 47 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h | 17 ++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 887cf8e..f5858af 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,6 +71,8 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_info_get(struct rte_eth_dev *dev,
+			       struct rte_eth_dev_info *dev_info);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -196,6 +198,50 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->min_rx_bufsize = ETHER_MIN_MTU;
+	dev_info->max_rx_pktlen = NIC_HW_MAX_FRS;
+	dev_info->max_rx_queues = (uint16_t)MAX_RCV_QUEUES_PER_QS;
+	dev_info->max_tx_queues = (uint16_t)MAX_SND_QUEUES_PER_QS;
+	dev_info->max_mac_addrs = 1;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	dev_info->reta_size = nic->rss_info.rss_size;
+	dev_info->hash_key_size = RSS_HASH_KEY_BYTE_SIZE;
+	dev_info->flow_type_rss_offloads = NICVF_RSS_OFFLOAD_PASS1;
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
+		.txq_flags =
+			ETH_TXQ_FLAGS_NOMULTSEGS  |
+			ETH_TXQ_FLAGS_NOREFCOUNT  |
+			ETH_TXQ_FLAGS_NOMULTMEMP  |
+			ETH_TXQ_FLAGS_NOVLANOFFL  |
+			ETH_TXQ_FLAGS_NOXSUMSCTP,
+	};
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -277,6 +323,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.dev_infos_get            = nicvf_dev_info_get,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 8189856..e31657d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -42,6 +42,23 @@
 #define NICVF_FULL_DUPLEX		0x01
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
+#define NICVF_RSS_OFFLOAD_PASS1 ( \
+	ETH_RSS_PORT | \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NICVF_RSS_OFFLOAD_TUNNEL ( \
+	ETH_RSS_VXLAN | \
+	ETH_RSS_GENEVE | \
+	ETH_RSS_NVGRE)
+
+#define NICVF_DEFAULT_RX_FREE_THRESH    224
+#define NICVF_DEFAULT_TX_FREE_THRESH    224
+#define NICVF_TX_FREE_MPOOL_THRESH      16
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5


* [PATCH v2 07/20] thunderx/nicvf: add rx_queue_setup/release support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (5 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 141 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |   2 +
 2 files changed, 143 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index f5858af..8fa3256 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -73,6 +73,11 @@ static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_rxconf *rx_conf,
+				    struct rte_mempool *mp);
+static void nicvf_dev_rx_queue_release(void *rx_queue);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -198,6 +203,140 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
+					NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rxq->phys = rz->phys_addr;
+	rxq->desc = rz->addr;
+	rxq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static void
+nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
+{
+	rxq->head = 0;
+	rxq->available_space = 0;
+	rxq->recv_buffers = 0;
+}
+
+static void
+nicvf_dev_rx_queue_release(void *rx_queue)
+{
+	struct nicvf_rxq *rxq = rx_queue;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq)
+		rte_free(rxq);
+}
+
+static int
+nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	uint16_t rx_free_thresh;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		nic->node, socket_id);
+
+	/* Mempool memory should be contiguous */
+	if (mp->nb_mem_chunks != 1) {
+		PMD_INIT_LOG(ERR, "Non contiguous mempool, check huge page sz");
+		return -EINVAL;
+	}
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Rx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Round up nb_desc to an available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_cq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value nb_desc beyond available hw cq qsize");
+		return -EINVAL;
+	}
+
+	/* Check rx_free_thresh upper bound */
+	rx_free_thresh = (uint16_t)((rx_conf->rx_free_thresh) ?
+				rx_conf->rx_free_thresh :
+				NICVF_DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh > NICVF_MAX_RX_FREE_THRESH ||
+		rx_free_thresh >= nb_desc * .75) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh greater than expected %d",
+				rx_free_thresh);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->rx_queues[qidx] != NULL) {
+		PMD_RX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_rx_queue_release(dev->data->rx_queues[qidx]);
+		dev->data->rx_queues[qidx] = NULL;
+	}
+
+	/* Allocate rxq memory */
+	rxq = rte_zmalloc_socket("ethdev rx queue", sizeof(struct nicvf_rxq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate rxq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	rxq->nic = nic;
+	rxq->pool = mp;
+	rxq->queue_id = qidx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rx_drop_en = rx_conf->rx_drop_en;
+	rxq->cq_status = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_STATUS;
+	rxq->cq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_DOOR;
+	rxq->precharge_cnt = 0;
+	rxq->rbptr_offset = NICVF_CQE_RBPTR_WORD;
+
+	/* Alloc completion queue */
+	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
+		nicvf_dev_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	nicvf_rx_queue_reset(rxq);
+
+	PMD_RX_LOG(DEBUG, "[%d] rxq=%p pool=%s nb_desc=(%d/%d) phy=%" PRIx64,
+			qidx, rxq, mp->name, nb_desc,
+			rte_mempool_count(mp), rxq->phys);
+
+	dev->data->rx_queues[qidx] = rxq;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -324,6 +463,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
+	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index e31657d..afb875a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -59,6 +59,8 @@
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
 #define NICVF_TX_FREE_MPOOL_THRESH      16
+#define NICVF_MAX_RX_FREE_THRESH        1024
+#define NICVF_MAX_TX_FREE_THRESH        1024
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5


* [PATCH v2 08/20] thunderx/nicvf: add tx_queue_setup/release support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (6 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-05-29 16:46   ` [PATCH v2 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 179 ++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8fa3256..3b7cdde 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -78,6 +78,10 @@ static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    const struct rte_eth_rxconf *rx_conf,
 				    struct rte_mempool *mp);
 static void nicvf_dev_rx_queue_release(void *rx_queue);
+static int nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_txconf *tx_conf);
+static void nicvf_dev_tx_queue_release(void *sq);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -226,6 +230,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		nic->node, socket_id);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Round up nb_desc to an available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				tx_conf->tx_free_thresh :
+				NICVF_DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) ||
+		tx_free_thresh > NICVF_MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			"tx_free_thresh must be less than the number of TX "
+			"descriptors. (tx_free_thresh=%u port=%d "
+			"queue=%d)", (unsigned int)tx_free_thresh,
+			(int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+				txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!is_single_pool) {
+		txq->tx_free_thresh = (uint16_t)
+		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
+				NICVF_TX_FREE_MPOOL_THRESH :
+				tx_conf->tx_free_thresh);
+	}
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+				nb_desc * sizeof(struct rte_mbuf *),
+				RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+			qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -465,6 +642,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get            = nicvf_dev_info_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
+	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
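
For context, a minimal application-side sketch (assumed code, not part of the
patch; port_id and ret are placeholders) of configuring a TX queue so that the
setup handler above detects the single-pool case from txq_flags:

	struct rte_eth_txconf txconf = {
		/* one mempool, refcnt always 1: single-pool free path */
		.txq_flags = ETH_TXQ_FLAGS_NOMULTMEMP |
			     ETH_TXQ_FLAGS_NOREFCOUNT,
		.tx_free_thresh = 0,	/* 0 selects the PMD default */
	};

	/* 512 descriptors; the PMD rounds this up to a valid SQ size */
	ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     &txconf);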

* [PATCH v2 09/20] thunderx/nicvf: add rss and reta query and update support
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (7 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
@ 2016-05-29 16:46   ` Jerin Jacob
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  9 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:46 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 182 ++++++++++++++++++++++++++++++++++++
 1 file changed, 182 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3b7cdde..87286fe 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -73,6 +73,16 @@ static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_reta_update(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_reta_entry64 *reta_conf,
+				 uint16_t reta_size);
+static int nicvf_dev_reta_query(struct rte_eth_dev *dev,
+				struct rte_eth_rss_reta_entry64 *reta_conf,
+				uint16_t reta_size);
+static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
+static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				       struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -207,6 +217,174 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static inline uint64_t
+nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
+{
+	uint64_t nic_rss = 0;
+
+	if (ethdev_rss & ETH_RSS_IPV4)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_IPV6)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_PORT)
+		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (ethdev_rss & ETH_RSS_VXLAN)
+			nic_rss |= RSS_TUN_VXLAN_ENA;
+
+		if (ethdev_rss & ETH_RSS_GENEVE)
+			nic_rss |= RSS_TUN_GENEVE_ENA;
+
+		if (ethdev_rss & ETH_RSS_NVGRE)
+			nic_rss |= RSS_TUN_NVGRE_ENA;
+	}
+
+	return nic_rss;
+}
+
+static inline uint64_t
+nicvf_rss_nic_to_ethdev(struct nicvf *nic, uint64_t nic_rss)
+{
+	uint64_t ethdev_rss = 0;
+
+	if (nic_rss & RSS_IP_ENA)
+		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
+				ETH_RSS_NONFRAG_IPV6_TCP);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
+				ETH_RSS_NONFRAG_IPV6_UDP);
+
+	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
+		ethdev_rss |= ETH_RSS_PORT;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (nic_rss & RSS_TUN_VXLAN_ENA)
+			ethdev_rss |= ETH_RSS_VXLAN;
+
+		if (nic_rss & RSS_TUN_GENEVE_ENA)
+			ethdev_rss |= ETH_RSS_GENEVE;
+
+		if (nic_rss & RSS_TUN_NVGRE_ENA)
+			ethdev_rss |= ETH_RSS_NVGRE;
+	}
+	return ethdev_rss;
+}
+
+static int
+nicvf_dev_reta_query(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of the configured hash lookup "
+			"table (%d) doesn't match the number the hardware "
+			"can support (%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] =
+					tbl[(i * RTE_RETA_GROUP_SIZE) + j];
+	}
+
+	return 0;
+}
+
+static int
+nicvf_dev_reta_update(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of the configured hash lookup "
+			"table (%d) doesn't match the number the hardware "
+			"can support (%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				tbl[(i * RTE_RETA_GROUP_SIZE) + j] =
+					reta_conf[i].reta[j];
+	}
+
+	return nicvf_rss_reta_update(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+static int
+nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (rss_conf->rss_key)
+		nicvf_rss_get_key(nic, rss_conf->rss_key);
+
+	rss_conf->rss_key_len = RSS_HASH_KEY_BYTE_SIZE;
+	rss_conf->rss_hf = nicvf_rss_nic_to_ethdev(nic, nicvf_rss_get_cfg(nic));
+	return 0;
+}
+
+static int
+nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t nic_rss;
+
+	if (rss_conf->rss_key &&
+		rss_conf->rss_key_len != RSS_HASH_KEY_BYTE_SIZE) {
+		RTE_LOG(ERR, PMD, "Hash key size mismatch %d",
+				rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key)
+		nicvf_rss_set_key(nic, rss_conf->rss_key);
+
+	nic_rss = nicvf_rss_ethdev_to_nic(nic, rss_conf->rss_hf);
+	nicvf_rss_set_cfg(nic, nic_rss);
+	return 0;
+}
+
 static int
 nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 		    uint32_t desc_cnt)
@@ -640,6 +818,10 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.reta_update              = nicvf_dev_reta_update,
+	.reta_query               = nicvf_dev_reta_query,
+	.rss_hash_update          = nicvf_dev_rss_hash_update,
+	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
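
A hedged usage sketch of the reta_update path from the application side
(assumed code; port_id, ret and nb_rxq are placeholders). The 128 entries
match NIC_MAX_RSS_IDR_TBL_SIZE and are spread round-robin over the RX queues:

	struct rte_eth_rss_reta_entry64 reta[2];	/* 2 x 64 = 128 entries */
	unsigned int i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < 128; i++) {
		reta[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_rxq;
	}
	ret = rte_eth_dev_rss_reta_update(port_id, reta, 128);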

* [PATCH v2 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (20 preceding siblings ...)
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
@ 2016-05-29 16:53 ` Jerin Jacob
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
  2016-05-29 16:57 ` [PATCH v2 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
  23 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:53 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 53 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |  2 ++
 2 files changed, 55 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 87286fe..b0f3f5d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,8 +71,10 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
 static int nicvf_dev_reta_update(struct rte_eth_dev *dev,
 				 struct rte_eth_rss_reta_entry64 *reta_conf,
 				 uint16_t reta_size);
@@ -193,6 +195,49 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint32_t buffsz, frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (frame_size > NIC_HW_MAX_FRS)
+		return -EINVAL;
+
+	if (frame_size < NIC_HW_MIN_FRS)
+		return -EINVAL;
+
+	buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+	/*
+	 * Refuse an mtu that requires scattered packet support
+	 * when that feature has not been enabled beforehand.
+	 */
+	if (!dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz))
+		return -EINVAL;
+
+	/* Check <seg size> * <max_segs> >= max_frame */
+	if (dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
+		return -EINVAL;
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	if (nicvf_mbox_update_hw_max_frs(nic, frame_size))
+		return -EINVAL;
+
+	/* Update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)frame_size;
+	nic->mtu = mtu;
+	return 0;
+}
+
+static int
 nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
 {
 	return nicvf_reg_get_count();
@@ -217,6 +262,12 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+/* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
+static void
+nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
 static inline uint64_t
 nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
@@ -817,7 +868,9 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index afb875a..b1af468 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -62,6 +62,8 @@
 #define NICVF_MAX_RX_FREE_THRESH        1024
 #define NICVF_MAX_TX_FREE_THRESH        1024
 
+#define VLAN_TAG_SIZE                   4	/* 802.3ac tag */
+
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
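
As a worked example of the frame-size check above: a standard 1500-byte mtu
gives frame_size = 1500 + ETHER_HDR_LEN (14) + ETHER_CRC_LEN (4) = 1518 bytes;
adding the 2 * VLAN_TAG_SIZE = 8 byte allowance, each rx buffer must hold at
least 1526 bytes when scattered rx is off. A one-line application sketch
(port_id and ret assumed):

	/* fails with -EINVAL if the configured rx buffers are too small */
	ret = rte_eth_dev_set_mtu(port_id, 1500);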

* [PATCH v2 11/20] thunderx/nicvf: add stats support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (21 preceding siblings ...)
  2016-05-29 16:53 ` [PATCH v2 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
@ 2016-05-29 16:54 ` Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
                     ` (4 more replies)
  2016-05-29 16:57 ` [PATCH v2 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
  23 siblings, 5 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:54 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 69 +++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b0f3f5d..817ad37 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,6 +71,9 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_stats_get(struct rte_eth_dev *dev,
+				struct rte_eth_stats *stat);
+static void nicvf_dev_stats_reset(struct rte_eth_dev *dev);
 static void nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
@@ -262,6 +265,70 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t qidx;
+	struct nicvf_hw_rx_qstats rx_qstats;
+	struct nicvf_hw_tx_qstats tx_qstats;
+	struct nicvf_hw_stats port_stats;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	/* Reading per RX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_rx_qstats(nic, &rx_qstats, qidx);
+		stats->q_ibytes[qidx] = rx_qstats.q_rx_bytes;
+		stats->q_ipackets[qidx] = rx_qstats.q_rx_packets;
+	}
+
+	/* Reading per TX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_tx_qstats(nic, &tx_qstats, qidx);
+		stats->q_obytes[qidx] = tx_qstats.q_tx_bytes;
+		stats->q_opackets[qidx] = tx_qstats.q_tx_packets;
+	}
+
+	nicvf_hw_get_stats(nic, &port_stats);
+	stats->ibytes = port_stats.rx_bytes;
+	stats->ipackets = port_stats.rx_ucast_frames;
+	stats->ipackets += port_stats.rx_bcast_frames;
+	stats->ipackets += port_stats.rx_mcast_frames;
+	stats->ierrors = port_stats.rx_l2_errors;
+	stats->imissed = port_stats.rx_drop_red;
+	stats->imissed += port_stats.rx_drop_overrun;
+	stats->imissed += port_stats.rx_drop_bcast;
+	stats->imissed += port_stats.rx_drop_mcast;
+	stats->imissed += port_stats.rx_drop_l3_bcast;
+	stats->imissed += port_stats.rx_drop_l3_mcast;
+
+	stats->obytes = port_stats.tx_bytes_ok;
+	stats->opackets = port_stats.tx_ucast_frames_ok;
+	stats->opackets += port_stats.tx_bcast_frames_ok;
+	stats->opackets += port_stats.tx_mcast_frames_ok;
+	stats->oerrors = port_stats.tx_drops;
+}
+
+static void
+nicvf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	int i;
+	uint16_t rxqs = 0, txqs = 0;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rxqs |= (0x3 << (i * 2));
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		txqs |= (0x3 << (i * 2));
+
+	nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, rxqs, txqs);
+}
+
 /* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
 static void
 nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
@@ -868,6 +935,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.stats_get                = nicvf_dev_stats_get,
+	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
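
A short sketch (assumed application code; port_id is a placeholder) of the new
stats ops; note that only the first RTE_ETHDEV_QUEUE_STAT_CNTRS queues get
per-queue counters:

	struct rte_eth_stats st;

	rte_eth_stats_get(port_id, &st);
	printf("rx=%" PRIu64 " missed=%" PRIu64 " tx=%" PRIu64 "\n",
	       st.ipackets, st.imissed, st.opackets);
	rte_eth_stats_reset(port_id);	/* resets counters via the VF mailbox */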

* [PATCH v2 12/20] thunderx/nicvf: add single and multi segment tx functions
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
@ 2016-05-29 16:54   ` Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:54 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/Makefile       |   2 +
 drivers/net/thunderx/nicvf_ethdev.c |   5 +-
 drivers/net/thunderx/nicvf_rxtx.c   | 256 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
 4 files changed, 355 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h

diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
index eb9f100..9079b5b 100644
--- a/drivers/net/thunderx/Makefile
+++ b/drivers/net/thunderx/Makefile
@@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
 
+CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 817ad37..4c53cb9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -66,7 +66,7 @@
 #include "base/nicvf_plat.h"
 
 #include "nicvf_ethdev.h"
-
+#include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
@@ -671,6 +671,9 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
 				NICVF_TX_FREE_MPOOL_THRESH :
 				tx_conf->tx_free_thresh);
+		txq->pool_free = nicvf_multi_pool_free_xmited_buffers;
+	} else {
+		txq->pool_free = nicvf_single_pool_free_xmited_buffers;
 	}
 
 	/* Allocate software ring */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
new file mode 100644
index 0000000..3cf7193
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -0,0 +1,256 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_log.h>
+#include <rte_mbuf.h>
+#include <rte_prefetch.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+#include "nicvf_rxtx.h"
+#include "nicvf_logs.h"
+
+static inline void __hot
+fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Use a local sqe copy to avoid reads from sq desc memory */
+	union sq_entry_t sqe;
+	uint64_t ol_flags;
+
+	/* Fill SQ header descriptor */
+	sqe.buff[0] = 0;
+	sqe.hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+	/* Number of sub-descriptors following this one */
+	sqe.hdr.subdesc_cnt = pkt->nb_segs;
+	sqe.hdr.tot_len = pkt->pkt_len;
+
+	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
+	if (unlikely(ol_flags)) {
+		/* L4 cksum */
+		if (ol_flags & PKT_TX_TCP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
+		else if (ol_flags & PKT_TX_UDP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
+		else
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
+		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
+
+		/* L3 cksum */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			sqe.hdr.csum_l3 = 1;
+			sqe.hdr.l3_offset = pkt->l2_len;
+		}
+	}
+
+	entry->buff[0] = sqe.buff[0];
+}
+
+void __hot
+nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	int j = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	void *obj_p[NICVF_MAX_TX_FREE_THRESH] __rte_cache_aligned;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head])
+			obj_p[j++] = txbuffs[head];
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	rte_mempool_put_bulk(sq->pool, obj_p, j);
+	sq->head = curr_head;
+	sq->xmit_bufs -= j;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+void __hot
+nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	uint32_t n = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head]) {
+			rte_pktmbuf_free_seg(txbuffs[head]);
+			n++;
+		}
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	sq->head = curr_head;
+	sq->xmit_bufs -= n;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline uint32_t __hot
+nicvf_free_tx_desc(struct nicvf_txq *sq)
+{
+	return ((sq->head - sq->tail - 1) & sq->qlen_mask);
+}
+
+/* Send Header + Packet */
+#define TX_DESC_PER_PKT 2
+
+static inline uint32_t __hot
+nicvf_free_xmited_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint32_t free_desc = nicvf_free_tx_desc(sq);
+
+	if (free_desc < nb_pkts * TX_DESC_PER_PKT ||
+			sq->xmit_bufs > sq->tx_free_thresh) {
+
+		if (unlikely(sq->pool == NULL))
+			sq->pool = tx_pkts[0]->pool;
+
+		sq->pool_free(sq);
+		/* Freed now; check the number of free descs again */
+		free_desc = nicvf_free_tx_desc(sq);
+	}
+	return free_desc;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+	uint32_t free_desc;
+	uint32_t tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt;
+	uint32_t qlen_mask = sq->qlen_mask;
+
+	tail = sq->tail;
+	free_desc = nicvf_free_xmited_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts && (int)free_desc >= TX_DESC_PER_PKT; i++) {
+		pkt = tx_pkts[i];
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+		free_desc -= TX_DESC_PER_PKT;
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += i;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, i * TX_DESC_PER_PKT);
+	return i;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	int i, k;
+	uint32_t used_desc, next_used_desc, used_bufs, free_desc, tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt, *seg;
+	uint32_t qlen_mask = sq->qlen_mask;
+	uint16_t nb_segs;
+
+	tail = sq->tail;
+	used_desc = 0;
+	used_bufs = 0;
+
+	free_desc = nicvf_free_xmited_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		pkt = tx_pkts[i];
+
+		nb_segs = pkt->nb_segs;
+
+		next_used_desc = used_desc + nb_segs + 1;
+		if (next_used_desc > free_desc)
+			break;
+		used_desc = next_used_desc;
+		used_bufs += nb_segs;
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		seg = pkt->next;
+		for (k = 1; k < nb_segs; k++) {
+			txbuffs[tail] = seg;
+			fill_sq_desc_gather(desc_ptr + tail, seg);
+			tail = (tail + 1) & qlen_mask;
+			seg = seg->next;
+		}
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += used_bufs;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, used_desc);
+	return nb_pkts;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
new file mode 100644
index 0000000..3c51432
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -0,0 +1,93 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_RXTX_H__
+#define __THUNDERX_NICVF_RXTX_H__
+
+#include <rte_ethdev.h>
+
+#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+
+#ifndef __hot
+#define __hot	__attribute__((hot))
+#endif
+
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return (i & ~3) + 3 - (i & 3);
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Use a local sqe copy to avoid reads from sq desc memory */
+	union sq_entry_t sqe;
+
+	/* Fill the SQ gather entry */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+	sqe.gather.ld_type = NIC_SEND_LD_TYPE_E_LDT;
+	sqe.gather.size = pkt->data_len;
+	sqe.gather.addr = rte_mbuf_data_dma_addr(pkt);
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+#else
+
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return i;
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |
+			 (uint64_t)NIC_SEND_LD_TYPE_E_LDT << 58 |
+			 pkt->data_len;
+	entry->buff[1] = rte_mbuf_data_dma_addr(pkt);
+}
+#endif
+
+uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
+uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
+				  uint16_t pkts);
+
+void nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq);
+void nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq);
+
+#endif /* __THUNDERX_NICVF_RXTX_H__  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
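
The free-slot computation used by nicvf_free_tx_desc() above is worth a worked
example; a self-contained sketch of the same power-of-two ring arithmetic
(ring_free is a hypothetical name, not in the patch):

	/* qlen_mask = 7 (8-deep ring), head = 2, tail = 6:
	 * (2 - 6 - 1) & 7 = (uint32_t)-5 & 7 = 3 free slots; one slot is
	 * always kept empty so a full ring is distinguishable from an
	 * empty one. */
	static inline uint32_t
	ring_free(uint32_t head, uint32_t tail, uint32_t qlen_mask)
	{
		return (head - tail - 1) & qlen_mask;
	}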

* [PATCH v2 13/20] thunderx/nicvf: add single and multi segment rx functions
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
@ 2016-05-29 16:54   ` Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:54 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.h |  33 ++++
 drivers/net/thunderx/nicvf_rxtx.c   | 317 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |   5 +
 3 files changed, 355 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b1af468..59fa19c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -70,4 +70,37 @@ nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+static inline uint64_t
+nicvf_mempool_phy_offset(struct rte_mempool *mp)
+{
+	struct rte_mempool_memhdr *hdr;
+
+	hdr = STAILQ_FIRST(&mp->mem_list);
+	assert(hdr != NULL);
+	return (uint64_t)((uintptr_t)hdr->addr - hdr->phys_addr);
+}
+
+static inline uint16_t
+nicvf_mbuff_meta_length(struct rte_mbuf *mbuf)
+{
+	return (uint16_t)((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+}
+
+/*
+ * Simple phy2virt functions assuming mbufs are in a single huge page
+ * V = P + offset
+ * P = V - offset
+ */
+static inline uintptr_t
+nicvf_mbuff_phy2virt(phys_addr_t phy, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(phy + mbuf_phys_off);
+}
+
+static inline phys_addr_t
+nicvf_mbuff_virt2phy(uintptr_t virt, uint64_t mbuf_phys_off)
+{
+	return (phys_addr_t)(virt - mbuf_phys_off);
+}
+
 #endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 3cf7193..80c0018 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -254,3 +254,320 @@ nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nicvf_addr_write(sq->sq_door, used_desc);
 	return nb_pkts;
 }
+
+static const uint32_t ptype_table[16][16] __rte_cache_aligned = {
+	[L3_NONE][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_NONE][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_NONE][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_NONE][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_NONE][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_NONE][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4][L4_NONE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV4][L4_IPFRAG] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4][L4_IPCOMP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_TCP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[L3_IPV4][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_GRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4][L4_NVGRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4_OPT][L4_NONE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_L3_IPV4,
+	[L3_IPV4_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_TCP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV4_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_GRE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6][L4_NONE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV6][L4_IPFRAG] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6][L4_IPCOMP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_TCP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[L3_IPV6][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_GRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6][L4_NVGRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6_OPT][L4_NONE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_L3_IPV4,
+	[L3_IPV6_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_TCP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV6_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_GRE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_ET_STOP][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_ET_STOP][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_ET_STOP][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_ET_STOP][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_ET_STOP][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_ET_STOP][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_OTHER][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_OTHER][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_OTHER][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_OTHER][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_OTHER][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_OTHER][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+};
+
+static inline uint32_t __hot
+nicvf_rx_classify_pkt(cqe_rx_word0_t cqe_rx_w0)
+{
+	return ptype_table[cqe_rx_w0.l3_type][cqe_rx_w0.l4_type];
+}
+
+static inline int __hot
+nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
+{
+	int i;
+	uint32_t ltail, next_tail;
+	struct nicvf_rbdr *rbdr = rxq->shared_rbdr;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	struct rbdr_entry_t *desc = rbdr->desc;
+	uint32_t qlen_mask = rbdr->qlen_mask;
+	uintptr_t door = rbdr->rbdr_door;
+	void *obj_p[NICVF_MAX_RX_FREE_THRESH] __rte_cache_aligned;
+
+	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
+		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		return 0;
+	}
+
+	NICVF_RX_ASSERT((unsigned int)to_fill <= (qlen_mask -
+		(nicvf_addr_read(rbdr->rbdr_status) & NICVF_RBDR_COUNT_MASK)));
+
+	next_tail = __atomic_fetch_add(&rbdr->next_tail, to_fill,
+					__ATOMIC_ACQUIRE);
+	ltail = next_tail;
+	for (i = 0; i < to_fill; i++) {
+		struct rbdr_entry_t *entry = desc + (ltail & qlen_mask);
+
+		entry->full_addr = nicvf_mbuff_virt2phy((uintptr_t)obj_p[i],
+							mbuf_phys_off);
+		ltail++;
+	}
+
+	while (__atomic_load_n(&rbdr->tail, __ATOMIC_RELAXED) != next_tail)
+		rte_pause();
+
+	__atomic_store_n(&rbdr->tail, ltail, __ATOMIC_RELEASE);
+	nicvf_addr_write(door, to_fill);
+	return to_fill;
+}
+
+static inline int32_t __hot
+nicvf_rx_pkts_to_process(struct nicvf_rxq *rxq, uint16_t nb_pkts,
+			 int32_t available_space)
+{
+	if (unlikely(available_space < nb_pkts))
+		rxq->available_space = nicvf_addr_read(rxq->cq_status)
+						& NICVF_CQ_CQE_COUNT_MASK;
+
+	return RTE_MIN(nb_pkts, rxq->available_space);
+}
+
+static inline void __hot
+nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
+		 struct rte_mbuf *pkt)
+{
+	if (likely(cqe_rx_w0.rss_alg)) {
+		pkt->hash.rss = cqe_rx_w2.rss_tag;
+		pkt->ol_flags |= PKT_RX_RSS_HASH;
+	}
+}
+
+uint16_t __hot
+nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint32_t i, to_process;
+	struct cqe_rx_t *cqe_rx;
+	struct rte_mbuf *pkt;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	cqe_rx_word3_t cqe_rx_w3;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t rb0_ptr, mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t cqe_head = rxq->head & cqe_mask;
+	int32_t available_space = rxq->available_space;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cqe_rx = (struct cqe_rx_t *)&desc[cqe_head];
+		NICVF_RX_ASSERT(((struct cq_entry_type_t *)cqe_rx)->cqe_type
+						 == CQE_TYPE_RX);
+
+		NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+		NICVF_LOAD_PAIR(cqe_rx_w2.u64, cqe_rx_w3.u64, &cqe_rx->word2);
+		rb0_ptr = *((uint64_t *)cqe_rx + rbptr_offset);
+		pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+				(rb0_ptr - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+		pkt->ol_flags = 0;
+		pkt->port = port_id;
+		pkt->data_len = cqe_rx_w3.rb0_sz;
+		pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+		pkt->nb_segs = 1;
+		pkt->pkt_len = cqe_rx_w3.rb0_sz;
+		pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+
+		nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+		rte_mbuf_refcnt_set(pkt, 1);
+		rx_pkts[i] = pkt;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(pkt);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += to_process;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
+						rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
+
+static inline uint16_t __hot
+nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
+			uint64_t mbuf_phys_off, uint8_t port_id,
+			struct rte_mbuf **rx_pkt, uint8_t rbptr_offset)
+{
+	struct rte_mbuf *pkt, *seg, *prev;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	uint16_t *rb_sz, nb_segs, seg_idx;
+	uint64_t *rb_ptr;
+
+	NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+	NICVF_RX_ASSERT(cqe_rx_w0.cqe_type == CQE_TYPE_RX);
+	cqe_rx_w2 = cqe_rx->word2;
+	rb_sz = &cqe_rx->word3.rb0_sz;
+	rb_ptr = (uint64_t *)cqe_rx + rbptr_offset;
+	nb_segs = cqe_rx_w0.rb_cnt;
+	pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[0] - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+	pkt->ol_flags = 0;
+	pkt->port = port_id;
+	pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = cqe_rx_w1.pkt_len;
+	pkt->data_len = rb_sz[nicvf_frag_num(0)];
+	rte_mbuf_refcnt_set(pkt, 1);
+	pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+	nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+
+	*rx_pkt = pkt;
+	prev = pkt;
+	for (seg_idx = 1; seg_idx < nb_segs; seg_idx++) {
+		seg = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[seg_idx], mbuf_phys_off);
+
+		prev->next = seg;
+		seg->data_len = rb_sz[nicvf_frag_num(seg_idx)];
+		seg->port = port_id;
+		seg->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(seg, 1);
+
+		prev = seg;
+	}
+	prev->next = NULL;
+	return nb_segs;
+}
+
+uint16_t __hot
+nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t i, to_process, cqe_head, buffers_consumed = 0;
+	int32_t available_space = rxq->available_space;
+	uint16_t nb_segs;
+	const uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	cqe_head = rxq->head & cqe_mask;
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		nb_segs = nicvf_process_cq_mseg_entry(cqe_rx, mbuf_phys_off,
+				port_id, rx_pkts + i, rbptr_offset);
+		buffers_consumed += nb_segs;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(rx_pkts[i]);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += buffers_consumed;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -=
+				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 3c51432..1e355b6 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -33,6 +33,7 @@
 #ifndef __THUNDERX_NICVF_RXTX_H__
 #define __THUNDERX_NICVF_RXTX_H__
 
+#include <rte_byteorder.h>
 #include <rte_ethdev.h>
 
 #define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
@@ -83,6 +84,10 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
+uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
+
 uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
 uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
 				  uint16_t pkts);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
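
A minimal receive loop (assumed application code; port_id is a placeholder)
exercising the single-segment path; packet_type and hash.rss come pre-filled
from nicvf_recv_pkts() as shown above:

	struct rte_mbuf *pkts[32];
	uint16_t n, i;

	n = rte_eth_rx_burst(port_id, 0, pkts, 32);
	for (i = 0; i < n; i++)
		rte_pktmbuf_free(pkts[i]);	/* buffers return to the pool
						 * and are later re-posted to
						 * the RBDR by the PMD */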

* [PATCH v2 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
@ 2016-05-29 16:54   ` Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 16/20] thunderx/nicvf: add tx " Jerin Jacob
  4 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:54 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  9 ++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  2 ++
 3 files changed, 52 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 4c53cb9..56e0b5c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -314,6 +314,45 @@ nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->oerrors = port_stats.tx_drops;
 }
 
+static const uint32_t *
+nicvf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	size_t copied;
+	static uint32_t ptypes[32];
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	static const uint32_t ptypes_pass1[] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+	};
+	static const uint32_t ptypes_pass2[] = {
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+	};
+	static const uint32_t ptypes_end = RTE_PTYPE_UNKNOWN;
+
+	copied = sizeof(ptypes_pass1);
+	memcpy(ptypes, ptypes_pass1, copied);
+	if (nicvf_hw_version(nic) == NICVF_PASS2) {
+		memcpy((char *)ptypes + copied, ptypes_pass2,
+			sizeof(ptypes_pass2));
+		copied += sizeof(ptypes_pass2);
+	}
+
+	memcpy((char *)ptypes + copied, &ptypes_end, sizeof(ptypes_end));
+	if (dev->rx_pkt_burst == nicvf_recv_pkts ||
+		dev->rx_pkt_burst == nicvf_recv_pkts_multiseg)
+		return ptypes;
+
+	return NULL;
+}
+
 static void
 nicvf_dev_stats_reset(struct rte_eth_dev *dev)
 {
@@ -942,6 +981,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.dev_supported_ptypes_get = nicvf_dev_supported_ptypes_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
@@ -949,6 +989,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.rx_queue_count           = nicvf_dev_rx_queue_count,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
 	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 80c0018..8031685 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -571,3 +571,12 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	return to_process;
 }
+
+uint32_t
+nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+
+	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
+	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 1e355b6..44cef06 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -84,6 +84,8 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
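
Both new ops are reachable through the generic ethdev calls; a small sketch
(assumed application code, port_id is a placeholder):

	/* pending CQEs on rx queue 0 */
	uint32_t in_cq = rte_eth_rx_queue_count(port_id, 0);

	/* number of ptypes the active rx burst function can report */
	int nb_ptypes = rte_eth_dev_get_supported_ptypes(port_id,
				RTE_PTYPE_ALL_MASK, NULL, 0);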

* [PATCH v2 15/20] thunderx/nicvf: add rx queue start and stop support
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
                     ` (2 preceding siblings ...)
  2016-05-29 16:54   ` [PATCH v2 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
@ 2016-05-29 16:54   ` Jerin Jacob
  2016-05-29 16:54   ` [PATCH v2 16/20] thunderx/nicvf: add tx " Jerin Jacob
  4 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:54 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 175 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
 drivers/net/thunderx/nicvf_rxtx.h   |   1 +
 3 files changed, 194 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 56e0b5c..6a3c01b 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -88,6 +88,8 @@ static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				       struct rte_eth_rss_conf *rss_conf);
+static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -616,6 +618,54 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+
+static inline int
+nicvf_configure_cpi(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint16_t qidx, qcnt;
+	int ret;
+
+	/* Count started rx queues */
+	for (qidx = qcnt = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		if (dev->data->rx_queue_state[qidx] ==
+		    RTE_ETH_QUEUE_STATE_STARTED)
+			qcnt++;
+
+	nic->cpi_alg = CPI_ALG_NONE;
+	ret = nicvf_mbox_config_cpi(nic, qcnt);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure CPI %d", ret);
+
+	return ret;
+}
+
+static int
+nicvf_configure_rss_reta(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	unsigned int idx, qmap_size;
+	uint8_t qmap[RTE_MAX_QUEUES_PER_PORT];
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	/* Prepare queue map */
+	for (idx = 0, qmap_size = 0; idx < dev->data->nb_rx_queues; idx++) {
+		if (dev->data->rx_queue_state[idx] ==
+				RTE_ETH_QUEUE_STATE_STARTED)
+			qmap[qmap_size++] = idx;
+	}
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = qmap[idx % qmap_size];
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 static void
 nicvf_dev_tx_queue_release(void *sq)
 {
@@ -741,6 +791,33 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	return 0;
 }
 
+static inline void
+nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+{
+	uint32_t rxq_cnt;
+	uint32_t nb_pkts, released_pkts = 0;
+	uint32_t refill_cnt = 0;
+	struct rte_eth_dev *dev = rxq->nic->eth_dev;
+	struct rte_mbuf *rx_pkts[NICVF_MAX_RX_FREE_THRESH];
+
+	if (dev->rx_pkt_burst == NULL)
+		return;
+
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, rxq->queue_id))) {
+		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
+					NICVF_MAX_RX_FREE_THRESH);
+		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
+		while (nb_pkts) {
+			rte_pktmbuf_free_seg(rx_pkts[--nb_pkts]);
+			released_pkts++;
+		}
+	}
+
+	refill_cnt += nicvf_dev_rbdr_refill(dev, rxq->queue_id);
+	PMD_DRV_LOG(INFO, "free_cnt=%d  refill_cnt=%d",
+		    released_pkts, refill_cnt);
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -749,6 +826,69 @@ nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 	rxq->recv_buffers = 0;
 }
 
+static inline int
+nicvf_start_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	/* Update the shared rbdr pointer for this rxq */
+	rxq = dev->data->rx_queues[qidx];
+	rxq->shared_rbdr = nic->rbdr;
+
+	ret = nicvf_qset_rq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rq %d %d", qidx, ret);
+		goto config_rq_error;
+	}
+	ret = nicvf_qset_cq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure cq %d %d", qidx, ret);
+		goto config_cq_error;
+	}
+
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+
+config_cq_error:
+	nicvf_qset_cq_reclaim(nic, qidx);
+config_rq_error:
+	nicvf_qset_rq_reclaim(nic, qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret, other_error;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_rq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim rq %d %d", qidx, ret);
+
+	other_error = ret;
+	rxq = dev->data->rx_queues[qidx];
+	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_reset(rxq);
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim cq %d %d", qidx, ret);
+
+	other_error |= ret;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return other_error;
+}
+
 static void
 nicvf_dev_rx_queue_release(void *rx_queue)
 {
@@ -761,6 +901,39 @@ nicvf_dev_rx_queue_release(void *rx_queue)
 }
 
 static int
+nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	ret = nicvf_start_rx_queue(dev, qidx);
+	if (ret)
+		return ret;
+
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		return ret;
+
+	return nicvf_configure_rss_reta(dev);
+}
+
+static int
+nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	ret = nicvf_stop_rx_queue(dev, qidx);
+	ret |= nicvf_configure_cpi(dev);
+	ret |= nicvf_configure_rss_reta(dev);
+	return ret;
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -987,6 +1160,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
+	.rx_queue_start           = nicvf_dev_rx_queue_start,
+	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 8031685..e8c605d 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -580,3 +580,21 @@ nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
 	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
+
+uint32_t
+nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+	uint32_t to_process;
+	uint32_t rx_free;
+
+	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
+	to_process = rxq->recv_buffers;
+	while (rxq->recv_buffers > 0) {
+		rx_free = RTE_MIN(rxq->recv_buffers, NICVF_MAX_RX_FREE_THRESH);
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rx_free);
+	}
+
+	assert(rxq->recv_buffers == 0);
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 44cef06..3484928 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -85,6 +85,7 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 #endif
 
 uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
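
For reference, these per-queue ops are driven from the application through the
generic ethdev calls. A minimal sketch (a hedged example, not part of the
patch; assumes a configured and started port):

	#include <rte_ethdev.h>

	/* Stop and restart one rx queue; on nicvf this reclaims and
	 * reconfigures the RQ/CQ and refreshes the CPI and RSS RETA maps. */
	static int
	restart_rx_queue(uint8_t port_id, uint16_t qidx)
	{
		int ret;

		ret = rte_eth_dev_rx_queue_stop(port_id, qidx);
		if (ret != 0)
			return ret;
		return rte_eth_dev_rx_queue_start(port_id, qidx);
	}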

* [PATCH v2 16/20] thunderx/nicvf: add tx queue start and stop support
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
                     ` (3 preceding siblings ...)
  2016-05-29 16:54   ` [PATCH v2 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
@ 2016-05-29 16:54   ` Jerin Jacob
  4 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:54 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 68 +++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 6a3c01b..3d45986 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -90,6 +90,8 @@ static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				       struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -618,6 +620,52 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+static inline int
+nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] ==
+	    RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	txq = dev->data->tx_queues[qidx];
+	txq->pool = NULL;
+	ret = nicvf_qset_sq_config(nicvf_pmd_priv(dev), qidx, txq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure sq %d %d", qidx, ret);
+		goto config_sq_error;
+	}
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return ret;
+
+config_sq_error:
+	nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim sq %d %d", qidx, ret);
+
+	txq = dev->data->tx_queues[qidx];
+	nicvf_tx_queue_release_mbufs(txq);
+	nicvf_tx_queue_reset(txq);
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return ret;
+}
 
 static inline int
 nicvf_configure_cpi(struct rte_eth_dev *dev)
@@ -934,6 +982,24 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
 }
 
 static int
+nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	return nicvf_start_tx_queue(dev, qidx);
+}
+
+static int
+nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	return nicvf_stop_tx_queue(dev, qidx);
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -1162,6 +1228,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_start           = nicvf_dev_rx_queue_start,
 	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
+	.tx_queue_start           = nicvf_dev_tx_queue_start,
+	.tx_queue_stop            = nicvf_dev_tx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
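
As on the rx side, these ops back the generic per-queue ethdev calls. A short
hedged sketch (not part of the patch):

	#include <rte_ethdev.h>

	/* Toggle one tx queue; both calls return 0 on success or a negative
	 * errno, e.g. -EINVAL for an out-of-range queue index. */
	static int
	toggle_tx_queue(uint8_t port_id, uint16_t qidx, int enable)
	{
		if (enable)
			return rte_eth_dev_tx_queue_start(port_id, qidx);
		return rte_eth_dev_tx_queue_stop(port_id, qidx);
	}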

* [PATCH v2 17/20] thunderx/nicvf: add device start, stop and close support
  2016-05-07 15:16 [PATCH 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                   ` (22 preceding siblings ...)
  2016-05-29 16:54 ` [PATCH v2 11/20] thunderx/nicvf: add stats support Jerin Jacob
@ 2016-05-29 16:57 ` Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 18/20] thunderx/config: set max numa node to two Jerin Jacob
                     ` (2 more replies)
  23 siblings, 3 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 468 ++++++++++++++++++++++++++++++++++++
 1 file changed, 468 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3d45986..dd04fd2 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -70,7 +70,10 @@
 #include "nicvf_logs.h"
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
+static int nicvf_dev_start(struct rte_eth_dev *dev);
+static void nicvf_dev_stop(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_close(struct rte_eth_dev *dev);
 static void nicvf_dev_stats_get(struct rte_eth_dev *dev,
 				struct rte_eth_stats *stat);
 static void nicvf_dev_stats_reset(struct rte_eth_dev *dev);
@@ -592,6 +595,82 @@ nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+{
+	struct nicvf_rbdr *rbdr;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+
+	assert(nic->rbdr == NULL);
+	rbdr = rte_zmalloc_socket("rbdr", sizeof(struct nicvf_rbdr),
+				  RTE_CACHE_LINE_SIZE, nic->node);
+	if (rbdr == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr");
+		return -ENOMEM;
+	}
+
+	ring_size = sizeof(struct rbdr_entry_t) * desc_cnt;
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+				   NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rbdr->phys = rz->phys_addr;
+	rbdr->tail = 0;
+	rbdr->next_tail = 0;
+	rbdr->desc = rz->addr;
+	rbdr->buffsz = buffsz;
+	rbdr->qlen_mask = desc_cnt - 1;
+	rbdr->rbdr_status =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_STATUS0;
+	rbdr->rbdr_door =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_DOOR;
+
+	nic->rbdr = rbdr;
+	return 0;
+}
+
+static void
+nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+{
+	uint16_t qidx;
+	void *obj;
+	struct nicvf_rxq *rxq;
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		if (rxq->precharge_cnt) {
+			obj = (void *)nicvf_mbuff_phy2virt(phy,
+							   rxq->mbuf_phys_off);
+			rte_mempool_put(rxq->pool, obj);
+			rxq->precharge_cnt--;
+			break;
+		}
+	}
+}
+
+static inline void
+nicvf_rbdr_release_mbufs(struct nicvf *nic)
+{
+	uint32_t qlen_mask, head;
+	struct rbdr_entry_t *entry;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	qlen_mask = rbdr->qlen_mask;
+	head = rbdr->head;
+	while (head != rbdr->tail) {
+		entry = rbdr->desc + head;
+		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		head++;
+		head = head & qlen_mask;
+	}
+}
+
 static inline void
 nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
 {
@@ -688,6 +767,31 @@ nicvf_configure_cpi(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static inline int
+nicvf_configure_rss(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t rsshf;
+	int ret = -EINVAL;
+
+	rsshf = nicvf_rss_ethdev_to_nic(nic,
+			dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
+	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
+		    dev->data->dev_conf.rxmode.mq_mode,
+		    nic->eth_dev->data->nb_rx_queues,
+		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+
+	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+		ret = nicvf_rss_term(nic);
+	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		ret = nicvf_rss_config(nic,
+				       nic->eth_dev->data->nb_rx_queues, rsshf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
+
+	return ret;
+}
+
 static int
 nicvf_configure_rss_reta(struct rte_eth_dev *dev)
 {
@@ -732,6 +836,48 @@ nicvf_dev_tx_queue_release(void *sq)
 	}
 }
 
+static void
+nicvf_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct nicvf_txq *txq;
+	size_t i;
+	bool multiseg = false;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if ((txq->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
+			multiseg = true;
+			break;
+		}
+	}
+
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (multiseg) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts;
+	}
+
+	if (txq->pool_free == nicvf_single_pool_free_xmited_buffers)
+		PMD_DRV_LOG(DEBUG, "Using single-mempool tx free method");
+	else
+		PMD_DRV_LOG(DEBUG, "Using multi-mempool tx free method");
+}
+
+static void
+nicvf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts;
+	}
+}
+
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
@@ -1135,6 +1281,317 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static nicvf_phys_addr_t
+rbdr_rte_mempool_get(void *opaque)
+{
+	uint16_t qidx;
+	uintptr_t mbuf;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv((struct rte_eth_dev *)opaque);
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		/* Maintain equal buffer count across all pools */
+		if (rxq->precharge_cnt >= rxq->qlen_mask)
+			continue;
+		rxq->precharge_cnt++;
+		mbuf = (uintptr_t)rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf)
+			return nicvf_mbuff_virt2phy(mbuf, rxq->mbuf_phys_off);
+	}
+	return 0;
+}
+
+static int
+nicvf_dev_start(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	uint32_t buffsz = 0, rbdrsz = 0;
+	uint32_t total_rxq_desc, nb_rbdr_desc, exp_buffs;
+	uint64_t mbuf_phys_off = 0;
+	struct nicvf_rxq *rxq;
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct rte_mbuf *mbuf;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+	uint16_t mtu;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Userspace process exited without proper shutdown in last run */
+	if (nicvf_qset_rbdr_active(nic, 0))
+		nicvf_dev_stop(dev);
+
+	/*
+	 * The ThunderX nicvf PMD can support more than one pool per port only when
+	 * 1) the data payload size is the same across all the pools in a given port
+	 * AND
+	 * 2) all mbuffs in the pools are from the same hugepage
+	 * AND
+	 * 3) the mbuff metadata size is the same across all the pools in a given
+	 *    port.
+	 *
+	 * This is to support existing applications that use multiple pools per
+	 * port. But the purpose of using multiple pools for QoS is not addressed.
+	 *
+	 */
+
+	/* Validate RBDR buff size */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		mbp_priv = rte_mempool_get_priv(rxq->pool);
+		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+		if (buffsz % 128) {
+			PMD_INIT_LOG(ERR, "rxbuf size must be a multiple of 128");
+			return -EINVAL;
+		}
+		if (rbdrsz == 0)
+			rbdrsz = buffsz;
+		if (rbdrsz != buffsz) {
+			PMD_INIT_LOG(ERR, "buffsz not same, qid=%d (%d/%d)",
+				     qidx, rbdrsz, buffsz);
+			return -EINVAL;
+		}
+	}
+
+	/* Validate mempool attributes */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
+		mbuf = rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "Failed allocate mbuf qid=%d pool=%s",
+				     qidx, rxq->pool->name);
+			return -ENOMEM;
+		}
+		rxq->mbuf_phys_off -= nicvf_mbuff_meta_length(mbuf);
+		rxq->mbuf_phys_off -= RTE_PKTMBUF_HEADROOM;
+		rte_pktmbuf_free(mbuf);
+
+		if (mbuf_phys_off == 0)
+			mbuf_phys_off = rxq->mbuf_phys_off;
+		if (mbuf_phys_off != rxq->mbuf_phys_off) {
+			PMD_INIT_LOG(ERR, "pool params not same,%s %" PRIx64,
+				     rxq->pool->name, mbuf_phys_off);
+			return -EINVAL;
+		}
+	}
+
+	/* Check the level of buffers in the pool */
+	total_rxq_desc = 0;
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		/* Count total numbers of rxq descs */
+		total_rxq_desc += rxq->qlen_mask + 1;
+		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
+		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		if (rte_mempool_count(rxq->pool) < exp_buffs) {
+			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
+				     rxq->pool->name,
+				     rte_mempool_count(rxq->pool),
+				     exp_buffs);
+			return -ENOENT;
+		}
+	}
+
+	/* Check RBDR desc overflow */
+	ret = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	if (ret == 0) {
+		PMD_INIT_LOG(ERR, "Reached RBDR desc limit, reduce nr desc");
+		return -ENOMEM;
+	}
+
+	/* Enable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable qset %d", ret);
+		return ret;
+	}
+
+	/* Allocate RBDR and RBDR ring desc */
+	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
+		goto qset_reclaim;
+	}
+
+	/* Enable and configure RBDR registers */
+	ret = nicvf_qset_rbdr_config(nic, 0);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rbdr %d", ret);
+		goto qset_rbdr_free;
+	}
+
+	/* Fill rte_mempool buffers in RBDR pool and precharge it */
+	ret = nicvf_qset_rbdr_precharge(nic, 0, rbdr_rte_mempool_get,
+					dev, total_rxq_desc);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fill rbdr %d", ret);
+		goto qset_rbdr_reclaim;
+	}
+
+	PMD_DRV_LOG(INFO, "Filled %d out of %d entries in RBDR",
+		     nic->rbdr->tail, nb_rbdr_desc);
+
+	/* Configure RX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		ret = nicvf_start_rx_queue(dev, qidx);
+		if (ret)
+			goto start_rxq_error;
+	}
+
+	/* Configure VLAN Strip */
+	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
+
+	/* Configure TX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+		ret = nicvf_start_tx_queue(dev, qidx);
+		if (ret)
+			goto start_txq_error;
+	}
+
+	/* Configure CPI algorithm */
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		goto start_txq_error;
+
+	/* Configure RSS */
+	ret = nicvf_configure_rss(dev);
+	if (ret)
+		goto qset_rss_error;
+
+	/* Configure loopback */
+	ret = nicvf_loopback_config(nic, dev->data->dev_conf.lpbk_mode);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure loopback %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Reset all statistics counters attached to this port */
+	ret = nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, 0xFFFF, 0xFFFF);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to reset stat counters %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Setup scatter mode if needed by jumbo */
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * VLAN_TAG_SIZE > buffsz)
+		dev->data->scattered_rx = 1;
+	if (rx_conf->enable_scatter)
+		dev->data->scattered_rx = 1;
+
+	/* Setup MTU based on max_rx_pkt_len or default */
+	mtu = dev->data->dev_conf.rxmode.jumbo_frame ?
+		dev->data->dev_conf.rxmode.max_rx_pkt_len
+			-  ETHER_HDR_LEN - ETHER_CRC_LEN
+		: ETHER_MTU;
+
+	if (nicvf_dev_set_mtu(dev, mtu)) {
+		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
+		return -EBUSY;
+	}
+
+	/* Configure callbacks based on scatter mode */
+	nicvf_set_tx_function(dev);
+	nicvf_set_rx_function(dev);
+
+	/* Done; let the PF turn the BGX's RX and TX switches to the ON position */
+	nicvf_mbox_cfg_done(nic);
+	return 0;
+
+qset_rss_error:
+	nicvf_rss_term(nic);
+start_txq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+start_rxq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+qset_rbdr_reclaim:
+	nicvf_qset_rbdr_reclaim(nic, 0);
+	nicvf_rbdr_release_mbufs(nic);
+qset_rbdr_free:
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+qset_reclaim:
+	nicvf_qset_reclaim(nic);
+	return ret;
+}
+
+static void
+nicvf_dev_stop(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Let the PF turn the BGX's RX and TX switches to the OFF position */
+	nicvf_mbox_shutdown(nic);
+
+	/* Disable loopback */
+	ret = nicvf_loopback_config(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable loopback %d", ret);
+
+	/* Disable VLAN Strip */
+	nicvf_vlan_hw_strip(nic, 0);
+
+	/* Reclaim sq */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+
+	/* Reclaim rq */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+
+	/* Reclaim RBDR */
+	ret = nicvf_qset_rbdr_reclaim(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim RBDR %d", ret);
+
+	/* Move all charged buffers in RBDR back to pool */
+	if (nic->rbdr != NULL)
+		nicvf_rbdr_release_mbufs(nic);
+
+	/* Reclaim CPI configuration */
+	if (!nic->sqs_mode) {
+		ret = nicvf_mbox_config_cpi(nic, 0);
+		if (ret)
+			PMD_INIT_LOG(ERR, "Failed to reclaim CPI config");
+	}
+
+	/* Disable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable qset %d", ret);
+
+	/* Disable all interrupts */
+	nicvf_disable_all_interrupts(nic);
+
+	/* Free RBDR SW structure */
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+}
+
+static void
+nicvf_dev_close(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	nicvf_dev_stop(dev);
+	nicvf_periodic_alarm_stop(nic);
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -1215,7 +1672,10 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
+	.dev_start                = nicvf_dev_start,
+	.dev_stop                 = nicvf_dev_stop,
 	.link_update              = nicvf_dev_link_update,
+	.dev_close                = nicvf_dev_close,
 	.stats_get                = nicvf_dev_stats_get,
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
@@ -1250,6 +1710,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &nicvf_eth_dev_ops;
 
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		nicvf_set_tx_function(eth_dev);
+		nicvf_set_rx_function(eth_dev);
+		return 0;
+	}
+
 	pci_dev = eth_dev->pci_dev;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
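
For context, a minimal sketch of the port lifecycle that exercises the ops
added by this patch (a hedged example, not part of the patch; error handling
trimmed, 'pool' is an already-created mbuf mempool):

	#include <string.h>
	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mempool.h>

	static int
	run_port(uint8_t port, struct rte_mempool *pool)
	{
		struct rte_eth_conf conf;
		int ret;

		memset(&conf, 0, sizeof(conf));
		ret = rte_eth_dev_configure(port, 1, 1, &conf);
		if (ret < 0)
			return ret;
		ret = rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(),
					     NULL, pool);
		if (ret < 0)
			return ret;
		ret = rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(),
					     NULL);
		if (ret < 0)
			return ret;
		ret = rte_eth_dev_start(port);	/* -> nicvf_dev_start() */
		if (ret < 0)
			return ret;
		/* ... rx/tx burst processing ... */
		rte_eth_dev_stop(port);		/* -> nicvf_dev_stop() */
		rte_eth_dev_close(port);	/* -> nicvf_dev_close() */
		return 0;
	}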

* [PATCH v2 18/20] thunderx/config: set max numa node to two
  2016-05-29 16:57 ` [PATCH v2 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
@ 2016-05-29 16:57   ` Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
  2 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:57 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 config/defconfig_arm64-thunderx-linuxapp-gcc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index 7940bbd..cc12cee 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,6 +34,7 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+CONFIG_RTE_MAX_NUMA_NODES=2
 
 #
 # Compile Cavium Thunderx NICVF PMD driver
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v2 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-05-29 16:57 ` [PATCH v2 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 18/20] thunderx/config: set max numa node to two Jerin Jacob
@ 2016-05-29 16:57   ` Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
  2 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Slawomir Rosek

Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and release notes

Changed "*" to "P" in overview.rst to mark partially supported
features, as "*" was creating alignment issues with the Sphinx table

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/nics/index.rst              |   1 +
 doc/guides/nics/overview.rst           |  96 ++++-----
 doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_16_07.rst |   1 +
 4 files changed, 404 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 0b13698..ddf75f4 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
     nfp
     qede
     szedata2
+    thunderx
     virtio
     vhost
     vmxnet3
diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
index 0bd8fae..df28510 100644
--- a/doc/guides/nics/overview.rst
+++ b/doc/guides/nics/overview.rst
@@ -74,40 +74,40 @@ Most of these differences are summarized below.
 
 .. table:: Features availability in networking drivers
 
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
-   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s v v v v x
-                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h i i m e
-                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e o r r x n
-                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d s t t n v
-                        c x x i e 0       . v v   f e e e e k k k k     e         v   a t i i e i
-                        k   v n           . f f       . v v   . v v               f   t   o o t r
-                        e   f g           .   .       . f f   . f f                   a     . 3 t
-                        t                 v   v       v   v   v   v                   2     v
-                                          e   e       e   e   e   e                         e
-                                          c   c       c   c   c   c                         c
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s t v v v v x
+                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h h i i m e
+                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e u o r r x n
+                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d n s t t n v
+                        c x x i e 0       . v v   f e e e e k k k k     e         v   a d t i i e i
+                        k   v n           . f f       . v v   . v v               f   t e   o o t r
+                        e   f g           .   .       . f f   . f f                   a r     . 3 t
+                        t                 v   v       v   v   v   v                   2 x     v
+                                          e   e       e   e   e   e                           e
+                                          c   c       c   c   c   c                           c
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    Speed capabilities
-   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y
-   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y
-   Queue status event                                                                   Y
+   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y Y
+   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y Y
+   Queue status event                                                                     Y
    Rx interrupt                   Y     Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y   Y Y
-   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y
-   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y   Y
+   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y Y   Y Y
+   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y                 Y
+   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y Y   Y
    LRO                                              Y Y Y Y
    TSO                          Y   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y   Y Y
-   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y   Y Y
-   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
-   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y       Y Y
-   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y
-   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
-   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
+   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y Y   Y Y
+   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y Y   Y Y
+   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
+   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y         Y Y
+   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y     Y
+   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
+   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
    VMDq                                 Y Y     Y   Y Y     Y Y
-   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y
+   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y     Y
    DCB                                  Y Y     Y   Y Y
-   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
+   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
    Ethertype filter                     Y Y     Y   Y Y
    N-tuple filter                               Y   Y Y
    SYN filter                                   Y   Y Y
@@ -118,37 +118,37 @@ Most of these differences are summarized below.
    Flow control                 Y Y     Y Y     Y   Y Y                         Y Y
    Rate limitation                                  Y Y
    Traffic mirroring                    Y Y         Y Y
-   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
-   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
+   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     Y
+   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     P
    QinQ offload                   Y     Y   Y   Y Y Y   Y
-   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
-   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
+   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
    Inner L3 checksum                Y   Y   Y       Y   Y           Y
    Inner L4 checksum                Y   Y   Y       Y   Y           Y
-   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y
+   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y     Y
    Timesync                             Y Y     Y   Y Y
-   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y   Y Y
+   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y Y
+   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y   Y
+   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y Y   Y Y
    EEPROM dump                                  Y   Y Y
-   Registers dump                               Y Y Y Y Y Y
-   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
-   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y       Y Y
-   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
+   Registers dump                               Y Y Y Y Y Y                             Y
+   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                         Y Y
+   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y         Y Y
+   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y   Y Y
    Other kdrv                                                       Y Y               Y
-   ARMv7                                                                      Y           Y Y
-   ARMv8                                                                      Y           Y Y
+   ARMv7                                                                      Y             Y Y
+   ARMv8                                                                      Y         Y   Y Y
    Power8                                                           Y Y       Y
    TILE-Gx                                                                    Y
-   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y         Y Y Y
-   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y   Y
+   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y           Y Y Y
+   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y   Y Y Y
+   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y Y   Y
    Design doc
    Perf doc
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
 
 .. Note::
 
-   Features marked with "*" are partially supported. Refer to the appropriate
+   Features marked with "P" are partially supported. Refer to the appropriate
    NIC guide in the following sections for details.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
new file mode 100644
index 0000000..e38f260
--- /dev/null
+++ b/doc/guides/nics/thunderx.rst
@@ -0,0 +1,354 @@
+..  BSD LICENSE
+    Copyright (C) Cavium networks Ltd. 2016.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Cavium networks nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ThunderX NICVF Poll Mode Driver
+===============================
+
+The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
+support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
+as well as their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Cavium Networks Official Website
+<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the ThunderX PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Scatter and gather for TX and RX
+- VLAN stripping
+- SR-IOV VF
+- NUMA support
+
+Supported ThunderX SoCs
+-----------------------
+- CN88xx
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.
+  By default it is enabled only for the defconfig_arm64-thunderx-* config.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX`` (default ``n``)
+
+  Toggle display of PF mailbox related run-time check messages.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the ThunderX NICVF PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-thunderx-linuxapp-gcc install
+
+Linux
+-----
+
+.. _thunderx_testpmd_example:
+
+Running testpmd
+~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with ThunderX NIC VF device
+managed by ``librte_pmd_thunderx_nicvf`` in the Linux operating system.
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+   .. _thunderx_vfio_noiommu:
+
+#. Enable **VFIO-NOIOMMU** mode (optional):
+
+   .. code-block:: console
+
+      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+
+   .. note::
+
+      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
+      See also :ref:`SR-IOV: Prerequisites and sample Application Notes <thunderx_sriov_example>`.
+
+#. Bind the ThunderX NIC VF device to ``vfio-pci`` loaded in the previous step:
+
+   Setup VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
+        -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx \
+        --port-topology=loop
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+
+      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
+
+      ...
+      EAL:   probe driver: 177d:11 rte_nicvf_pmd
+      EAL:   using IOMMU type 1 (Type 1)
+      EAL:   PCI memory mapped at 0x3ffade50000
+      EAL: Trying to map BAR 4 that contains the MSI-X table.
+           Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
+      EAL:   PCI memory mapped at 0x3ffadc60000
+      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
+      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
+           loopback_supported=true
+      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
+      Interactive-mode selected
+      Configuring Port 0 (socket 0)
+      ...
+
+      PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
+      Port 0: A6:C6:D9:17:78:01
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+.. _thunderx_sriov_example:
+
+SR-IOV: Prerequisites and sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
+automatically to a virtual function (VF) and present it as a PCIe-like SR-IOV device.
+This section provides instructions on configuring SR-IOV under Linux.
+
+#. Verify PF devices capabilities using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
+              ...
+              Kernel driver in use: thunder-nic
+              ...
+
+   .. note::
+
+      If the ``thunder-nic`` driver is not in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
+
+#. Verify VF devices capabilities and drivers using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+   .. note::
+
+      If the ``thunder-nicvf`` driver is not in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
+
+#. Verify PF/VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
+      0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
+      ...
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind VF devices to ``vfio-pci`` using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.1
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Verify VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.1 'Device 0011' drv=vfio-pci unused=
+      0002:01:00.2 'Device 0011' drv=vfio-pci unused=
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      ...
+
+#. Pass VF device to VM context (PCIe Passthrough):
+
+   The VF devices may be passed through to the guest VM using qemu,
+   virt-manager, virsh, etc.
+   ``librte_pmd_thunderx_nicvf`` or ``thunder-nicvf`` should be used to bind
+   the VF devices in the guest VM in :ref:`VFIO-NOIOMMU <thunderx_vfio_noiommu>` mode.
+
+   Example qemu guest launch command:
+
+   .. code-block:: console
+
+      sudo qemu-system-aarch64 -name vm1 \
+      -machine virt,gic_version=3,accel=kvm,usb=off \
+      -cpu host -m 4096 \
+      -smp 4,sockets=1,cores=8,threads=1 \
+      -nographic -nodefaults \
+      -kernel <kernel image> \
+      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
+      -device vfio-pci,host=0002:01:00.1 \
+      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw  \
+      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
+      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
+      -device virtio-net-device,netdev=net0 \
+      -serial stdio \
+      -mem-path /dev/huge
+
+#. Refer to the section :ref:`Running testpmd <thunderx_testpmd_example>` for
+   instructions on how to launch the ``testpmd`` application.
+
+Limitations
+-----------
+
+CRC striping
+~~~~~~~~~~~~
+
+The ThunderX SoC family NICs strip the CRC for every packet coming into the
+host interface. So, the CRC will be stripped even when the
+``rxmode.hw_strip_crc`` member is set to 0 in ``struct rte_eth_conf``.
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support a maximum frame size of 9K (9200 bytes).
+This value is fixed and cannot be changed. So, even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
+lower than 9200, frames up to 9200 bytes can still reach the host interface.
+
+Maximum packet segments
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support up to 12 segments per packet when working
+in scatter/gather mode. So, setting the MTU will fail with ``EINVAL`` when the
+resulting frame size does not fit in the maximum number of segments.
+
+Limited VFs
+~~~~~~~~~~~
+
+The ThunderX SoC family NICs have 128 VFs, and each VF has 8 RX and 8 TX
+queues. The current driver implementation has a one-to-one mapping between a
+physical port and a VF, hence only a limited number of VFs can be used.
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 30e78d4..29b8b52 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -47,6 +47,7 @@ New Features
   * Dropped specific Xen Dom0 code.
   * Dropped specific anonymous mempool code in testpmd.
 
+* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**
 
 Resolved Issues
 ---------------
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread
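
As a quick illustration of the MTU limitation documented above, an application
would typically hit it through rte_eth_dev_set_mtu() (a hedged sketch, not
part of the patch):

	#include <rte_ethdev.h>

	static int
	set_jumbo_mtu(uint8_t port_id)
	{
		/* Dispatches to the PMD's mtu_set op; per the limitation
		 * above it can fail with -EINVAL when the frame would need
		 * more than the 12 segments the hardware supports. */
		return rte_eth_dev_set_mtu(port_id, 9000);
	}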

* [PATCH v2 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-05-29 16:57 ` [PATCH v2 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 18/20] thunderx/config: set max numa node to two Jerin Jacob
  2016-05-29 16:57   ` [PATCH v2 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
@ 2016-05-29 16:57   ` Jerin Jacob
  2 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-05-29 16:57 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, john.mcnamara, Jerin Jacob,
	Maciej Czekaj

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e8558f..625423f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -336,6 +336,12 @@ M: Sony Chacko <sony.chacko@qlogic.com>
 F: drivers/net/qede/
 F: doc/guides/nics/qede.rst
 
+Cavium ThunderX nicvf
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
+F: drivers/net/thunderx/
+F: doc/guides/nics/thunderx.rst
+
 RedHat virtio
 M: Huawei Xie <huawei.xie@intel.com>
 M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* Re: [PATCH v2 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-29 16:46   ` [PATCH v2 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
@ 2016-05-31 16:53     ` Stephen Hemminger
  2016-06-01  9:14       ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Stephen Hemminger @ 2016-05-31 16:53 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, bruce.richardson, john.mcnamara,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

On Sun, 29 May 2016 22:16:46 +0530
Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:

> +
> +static struct itimerspec alarm_time = {
> +	.it_interval = {
> +		.tv_sec = 0,
> +		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
> +	},
> +	.it_value = {
> +		.tv_sec = 0,
> +		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
> +	},
> +};
> +
> +static void
> +nicvf_interrupt(struct rte_intr_handle *hdl __rte_unused, void *arg)
> +{
> +	struct nicvf *nic = (struct nicvf *)arg;
> +
> +	nicvf_reg_poll_interrupts(nic);
> +}
> +
> +static int
> +nicvf_periodic_alarm_start(struct nicvf *nic)
> +{
> +	int ret = -EBUSY;
> +
> +	nic->intr_handle.type = RTE_INTR_HANDLE_ALARM;
> +	nic->intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
> +	if (nic->intr_handle.fd == -1)
> +		goto error;
> +	ret = rte_intr_callback_register(&nic->intr_handle,
> +				nicvf_interrupt, nic);
> +	ret |= timerfd_settime(nic->intr_handle.fd, 0, &alarm_time, NULL);
> +error:
> +	return ret;
> +}
> +
> +static int
> +nicvf_periodic_alarm_stop(struct nicvf *nic)
> +{
> +	int ret;
> +
> +	ret = rte_intr_callback_unregister(&nic->intr_handle,
> +				nicvf_interrupt, nic);
> +	ret |= close(nic->intr_handle.fd);
> +	return ret;
> +}

It would be good to have real link status interrupts, or just to report that
the device does not support Link State interrupts and let the application
poll. Having another thing going on (a timerfd callback) seems like it would
add more complexity to the already complex thread structure of DPDK.

Also, timerfd() doesn't exist on all OSes.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v2 02/20] thunderx/nicvf: add pmd skeleton
  2016-05-31 16:53     ` Stephen Hemminger
@ 2016-06-01  9:14       ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-01  9:14 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, thomas.monjalon, bruce.richardson, john.mcnamara,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

On Tue, May 31, 2016 at 09:53:55AM -0700, Stephen Hemminger wrote:
> On Sun, 29 May 2016 22:16:46 +0530
> Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> 
> > +
> > +static struct itimerspec alarm_time = {
> > +	.it_interval = {
> > +		.tv_sec = 0,
> > +		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
> > +	},
> > +	.it_value = {
> > +		.tv_sec = 0,
> > +		.tv_nsec = NICVF_INTR_POLL_INTERVAL_MS * 1000000,
> > +	},
> > +};
> > +
> > +static void
> > +nicvf_interrupt(struct rte_intr_handle *hdl __rte_unused, void *arg)
> > +{
> > +	struct nicvf *nic = (struct nicvf *)arg;
> > +
> > +	nicvf_reg_poll_interrupts(nic);
> > +}
> > +
> > +static int
> > +nicvf_periodic_alarm_start(struct nicvf *nic)
> > +{
> > +	int ret = -EBUSY;
> > +
> > +	nic->intr_handle.type = RTE_INTR_HANDLE_ALARM;
> > +	nic->intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
> > +	if (nic->intr_handle.fd == -1)
> > +		goto error;
> > +	ret = rte_intr_callback_register(&nic->intr_handle,
> > +				nicvf_interrupt, nic);
> > +	ret |= timerfd_settime(nic->intr_handle.fd, 0, &alarm_time, NULL);
> > +error:
> > +	return ret;
> > +}
> > +
> > +static int
> > +nicvf_periodic_alarm_stop(struct nicvf *nic)
> > +{
> > +	int ret;
> > +
> > +	ret = rte_intr_callback_unregister(&nic->intr_handle,
> > +				nicvf_interrupt, nic);
> > +	ret |= close(nic->intr_handle.fd);
> > +	return ret;
> > +}
> 
> It would be good to have real link status interrupts, or just to report that
> the device does not support link state interrupts and let the application
> poll. Having another thing going on (a timerfd callback) adds more complexity
> to an already complex thread structure in DPDK.

But we would still need some polling infrastructure for the 'async' mbox
events and error events from the PF. So I think I can change to the
rte_eal_alarm* infrastructure, as the bonding PMD does. Will fix it in v3.
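
Roughly, that change could look like the sketch below: a single-shot
rte_eal_alarm re-armed from its own callback, reusing the names from the
v2 patch (the exact v3 code may differ):

#include <rte_alarm.h>

static void
nicvf_interrupt(void *arg)
{
	struct nicvf *nic = arg;

	nicvf_reg_poll_interrupts(nic);
	/* EAL alarms are single-shot, so re-arm to emulate a periodic timer */
	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
			  nicvf_interrupt, nic);
}

static int
nicvf_periodic_alarm_start(struct nicvf *nic)
{
	/* rte_eal_alarm_set() takes the interval in microseconds */
	return rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
				 nicvf_interrupt, nic);
}

static int
nicvf_periodic_alarm_stop(struct nicvf *nic)
{
	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
}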

> 
> Also, the timerfd API doesn't exist on all OSes.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-05-29 16:46 ` [PATCH v2 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                     ` (8 preceding siblings ...)
  2016-05-29 16:46   ` [PATCH v2 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
@ 2016-06-07 16:40   ` Jerin Jacob
  2016-06-07 16:40     ` [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
                       ` (20 more replies)
  9 siblings, 21 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob

This patch set provides the initial version of DPDK PMD for the
built-in NIC device in Cavium ThunderX SoC family.

Implemented features and the ThunderX nicvf PMD documentation are added
in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
respectively in this patch set.

These patches were checked using checkpatch.sh with the following
additional ignore options:
    options="$options --ignore=CAMELCASE,BRACKET_SPACE"
CAMELCASE - To accommodate PRIx64
BRACKET_SPACE - To accommodate AT&T inline assembly in two places

This patch set is based on DPDK 16.07-RC1
and tested with today's git HEAD change-set
ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with the
following dependent patch:

http://dpdk.org/dev/patchwork/patch/11826/
ethdev: add tunnel and port RSS offload types

V1->V2

http://dpdk.org/dev/patchwork/patch/12609/
-- added const qualifiers to the constant struct tables
-- remove multiple blank lines
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12610/
-- removed DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
-- add const for table structs
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12614/
-- s/DEFAULT_*/NICVF_DEFAULT_*/gc
http://dpdk.org/dev/patchwork/patch/12615/
-- Fix typos
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12616/
-- removed redundant txq->tail = 0 and txq->head = 0
http://dpdk.org/dev/patchwork/patch/12627/
-- fixed the documentation changes

-- fixed TAB+space occurrences in functions
-- rebased to c8c33ad7f94c59d1c0676af0cfd61207b3e808db

V2->V3

http://dpdk.org/dev/patchwork/patch/13060/
-- Changed the polling infrastructure to use rte_eal_alarm* instead of the timerfd_create API
-- rebased to ca173a909538a2f1082cd0dcb4d778a97dab69c3

Jerin Jacob (20):
  thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  thunderx/nicvf: add pmd skeleton
  thunderx/nicvf: add link status and link update support
  thunderx/nicvf: add get_reg and get_reg_length support
  thunderx/nicvf: add dev_configure support
  thunderx/nicvf: add dev_infos_get support
  thunderx/nicvf: add rx_queue_setup/release support
  thunderx/nicvf: add tx_queue_setup/release support
  thunderx/nicvf: add rss and reta query and update support
  thunderx/nicvf: add mtu_set and promiscuous_enable support
  thunderx/nicvf: add stats support
  thunderx/nicvf: add single and multi segment tx functions
  thunderx/nicvf: add single and multi segment rx functions
  thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
    support
  thunderx/nicvf: add rx queue start and stop support
  thunderx/nicvf: add tx queue start and stop support
  thunderx/nicvf: add device start,stop and close support
  thunderx/config: set max numa node to two
  thunderx/nicvf: updated driver documentation and release notes
  maintainers: claim responsibility for the ThunderX nicvf PMD

 MAINTAINERS                                        |    6 +
 config/common_base                                 |   10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |   11 +
 doc/guides/nics/index.rst                          |    1 +
 doc/guides/nics/overview.rst                       |   96 +-
 doc/guides/nics/thunderx.rst                       |  354 ++++
 doc/guides/rel_notes/release_16_07.rst             |    1 +
 drivers/net/Makefile                               |    1 +
 drivers/net/thunderx/Makefile                      |   65 +
 drivers/net/thunderx/base/nicvf_hw.c               |  908 ++++++++++
 drivers/net/thunderx/base/nicvf_hw.h               |  240 +++
 drivers/net/thunderx/base/nicvf_hw_defs.h          | 1216 +++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c             |  416 +++++
 drivers/net/thunderx/base/nicvf_mbox.h             |  232 +++
 drivers/net/thunderx/base/nicvf_plat.h             |  132 ++
 drivers/net/thunderx/nicvf_ethdev.c                | 1839 ++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  106 ++
 drivers/net/thunderx/nicvf_logs.h                  |   83 +
 drivers/net/thunderx/nicvf_rxtx.c                  |  600 +++++++
 drivers/net/thunderx/nicvf_rxtx.h                  |  101 ++
 drivers/net/thunderx/nicvf_struct.h                |  124 ++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |    4 +
 mk/rte.app.mk                                      |    2 +
 23 files changed, 6500 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

-- 
2.5.5

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:18       ` Ferruh Yigit
                         ` (2 more replies)
  2016-06-07 16:40     ` [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
                       ` (19 subsequent siblings)
  20 siblings, 3 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Adds the hardware-specific API for the ThunderX nicvf inbuilt NIC device
under the drivers/net/thunderx/base directory.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c      |  908 +++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h      |  240 ++++++
 drivers/net/thunderx/base/nicvf_hw_defs.h | 1216 +++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c    |  416 ++++++++++
 drivers/net/thunderx/base/nicvf_mbox.h    |  232 ++++++
 drivers/net/thunderx/base/nicvf_plat.h    |  132 ++++
 6 files changed, 3144 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
new file mode 100644
index 0000000..24fe77d
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -0,0 +1,908 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <math.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#include "nicvf_plat.h"
+
+struct nicvf_reg_info {
+	uint32_t offset;
+	const char *name;
+};
+
+#define NICVF_REG_INFO(reg) {reg, #reg}
+
+static const struct nicvf_reg_info nicvf_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_VF_CFG),
+	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
+	NICVF_REG_INFO(NIC_VF_INT),
+	NICVF_REG_INFO(NIC_VF_INT_W1S),
+	NICVF_REG_INFO(NIC_VF_ENA_W1C),
+	NICVF_REG_INFO(NIC_VF_ENA_W1S),
+	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
+	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
+};
+
+static const struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
+	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
+	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
+	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
+	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
+	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
+	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
+	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 16,  "NIC_VNIC_STAT_TX_BCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 24,  "NIC_VNIC_STAT_TX_MCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 32,  "NIC_VNIC_STAT_TX_DROP"},
+	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
+	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
+	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
+	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
+	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
+};
+
+static const struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
+};
+
+int
+nicvf_base_init(struct nicvf *nic)
+{
+	nic->hwcap = 0;
+	if (nic->subsystem_device_id == 0)
+		return NICVF_ERR_BASE_INIT;
+
+	if (nicvf_hw_version(nic) == NICVF_PASS2)
+		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
+
+	return NICVF_OK;
+}
+
+/* dump on stdout if data is NULL, otherwise copy registers into data */
+int
+nicvf_reg_dump(struct nicvf *nic,  uint64_t *data)
+{
+	uint32_t i, q;
+	bool dump_stdout;
+
+	dump_stdout = data ? 0 : 1;
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_reg_tbl[i].name,
+				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_multi_reg_tbl[i].name,
+				nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset);
+
+	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_cq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_sq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rbdr_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q);
+	return 0;
+}
+
+int
+nicvf_reg_get_count(void)
+{
+	int nr_regs;
+
+	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
+			MAX_CMP_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
+			MAX_RCV_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
+			MAX_SND_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
+			MAX_RCV_BUF_DESC_RINGS_PER_QS;
+
+	return nr_regs;
+}
+
+static int
+nicvf_qset_config_internal(struct nicvf *nic, bool enable)
+{
+	int ret;
+	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
+
+	pf_qs_cfg.ena = enable ? 1 : 0;
+	pf_qs_cfg.vnic = nic->vf_id;
+	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
+	return ret ? NICVF_ERR_SET_QS : 0;
+}
+
+/* Requests PF to assign and enable Qset */
+int
+nicvf_qset_config(struct nicvf *nic)
+{
+	/* Enable Qset */
+	return nicvf_qset_config_internal(nic, true);
+}
+
+int
+nicvf_qset_reclaim(struct nicvf *nic)
+{
+	/* Disable Qset */
+	return nicvf_qset_config_internal(nic, false);
+}
+
+static int
+cmpfunc(const void *a, const void *b)
+{
+	return (*(const uint32_t *)a - *(const uint32_t *)b);
+}
+
+static uint32_t
+nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
+{
+	uint32_t i;
+
+	qsort(list, entries, sizeof(uint32_t), cmpfunc);
+	for (i = 0; i < entries; i++)
+		if (val <= list[i])
+			break;
+	/* Not in the list */
+	if (i >= entries)
+		return 0;
+	else
+		return list[i];
+}
+
+static void
+nicvf_handle_qset_err_intr(struct nicvf *nic)
+{
+	uint16_t qidx;
+	uint64_t status;
+
+	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
+	nicvf_reg_dump(nic, NULL);
+
+	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_CQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_CQ_WR_FULL)
+			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
+		if (status & NICVF_CQ_WR_DISABLE)
+			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
+		if (status & NICVF_CQ_WR_FAULT)
+			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_SQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_SQ_ERR_STOPPED)
+			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
+		if (status & NICVF_SQ_ERR_SEND)
+			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
+		if (status & NICVF_SQ_ERR_DPE)
+			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(nic,
+					NIC_QSET_RBDR_0_1_STATUS0, qidx);
+		status &= NICVF_RBDR_FIFO_STATE_MASK;
+		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
+
+		if (status == RBDR_FIFO_STATE_FAIL)
+			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
+	}
+
+	nicvf_disable_all_interrupts(nic);
+	abort();
+}
+
+/*
+ * Handle the "mbox" and "queue-set error" interrupts of interest to the PMD.
+ * This function is not re-entrant.
+ * The caller should provide proper serialization.
+ */
+int
+nicvf_reg_poll_interrupts(struct nicvf *nic)
+{
+	int msg = 0;
+	uint64_t intr;
+
+	intr = nicvf_reg_read(nic, NIC_VF_INT);
+	if (intr & NICVF_INTR_MBOX_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
+		msg = nicvf_handle_mbx_intr(nic);
+	}
+	if (intr & NICVF_INTR_QS_ERR_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
+		nicvf_handle_qset_err_intr(nic);
+	}
+	return msg;
+}
+
+static int
+nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
+		    uint32_t bit_pos, uint32_t bits, uint64_t val)
+{
+	uint64_t bit_mask;
+	uint64_t reg_val;
+	int timeout = 10;
+
+	bit_mask = (1ULL << bits) - 1;
+	bit_mask = (bit_mask << bit_pos);
+
+	while (timeout) {
+		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
+		if (((reg_val & bit_mask) >> bit_pos) == val)
+			return NICVF_OK;
+		nicvf_delay_us(2000);
+		timeout--;
+	}
+	return NICVF_ERR_REG_POLL;
+}
+
+int
+nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t status;
+	int timeout = 10;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	/* Save head and tail pointers for freeing up buffers */
+	if (rbdr) {
+		rbdr->head = nicvf_queue_reg_read(nic,
+					NIC_QSET_RBDR_0_1_HEAD,
+					qidx) >> 3;
+		rbdr->tail = nicvf_queue_reg_read(nic,
+					NIC_QSET_RBDR_0_1_TAIL,
+					qidx) >> 3;
+		rbdr->next_tail = rbdr->tail;
+	}
+
+	/* Reset RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+				NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+				62, 2, 0x00))
+		return NICVF_ERR_RBDR_DISABLE;
+
+	while (1) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_PRFCH_STATUS,	qidx);
+		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
+			break;
+		nicvf_delay_us(2000);
+		timeout--;
+		if (!timeout)
+			return NICVF_ERR_RBDR_PREFETCH;
+	}
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			NICVF_RBDR_RESET);
+	if (nicvf_qset_poll_reg(nic, qidx,
+				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return NICVF_ERR_RBDR_RESET1;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_qset_poll_reg(nic, qidx,
+				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return NICVF_ERR_RBDR_RESET2;
+
+	return NICVF_OK;
+}
+
+static int
+nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
+{
+	int val;
+
+	val = ((uint32_t)log2(len) - len_shift);
+	assert(val >= 0);
+	assert(val <= 6);
+	return val;
+}
+
+int
+nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
+{
+	int ret;
+	uint64_t head, tail;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	struct rbdr_cfg rbdr_cfg = {.value = 0};
+
+	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set descriptor base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
+
+	/* Enable RBDR  & set queue size */
+	rbdr_cfg.reserved_45_63 = 0;
+	rbdr_cfg.ena = 1;
+	rbdr_cfg.reset = 0;
+	rbdr_cfg.ldwb = 0;
+	rbdr_cfg.reserved_36_41 = 0;
+	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
+					RBDR_SIZE_SHIFT);
+	rbdr_cfg.reserved_25_31 = 0;
+	rbdr_cfg.avg_con = 0;
+	rbdr_cfg.reserved_12_15 = 0;
+	rbdr_cfg.lines = rbdr->buffsz / 128;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
+
+	/* Verify proper RBDR reset */
+	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
+
+	if (head | tail)
+		return NICVF_ERR_RBDR_RESET;
+
+	return NICVF_OK;
+}
+
+uint32_t
+nicvf_qsize_rbdr_roundup(uint32_t val)
+{
+	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
+				RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
+				RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
+				RBDR_QUEUE_SZ_512K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			  rbdr_pool_get_handler handler,
+			  void *opaque, uint32_t max_buffs)
+{
+	struct rbdr_entry_t *desc, *desc0;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	uint32_t count;
+	nicvf_phys_addr_t phy;
+
+	assert(rbdr != NULL);
+	desc = rbdr->desc;
+	count = 0;
+	/* Don't fill beyond the max number of descriptors */
+	while (count < (rbdr->qlen_mask)) {
+		if (count >= max_buffs)
+			break;
+		desc0 = desc + count;
+		phy = handler(opaque);
+		if (phy) {
+			desc0->full_addr = phy;
+			count++;
+		} else {
+			break;
+		}
+	}
+	nicvf_smp_wmb();
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
+	rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
+	rbdr->next_tail = rbdr->tail;
+	nicvf_smp_rmb();
+	return 0;
+}
+
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
+{
+	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+}
+
+int
+nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t head, tail;
+	struct sq_cfg sq_cfg;
+
+	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
+
+	/* Disable send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
+
+	/* Check if SQ is stopped */
+	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
+				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
+		return NICVF_ERR_SQ_DISABLE;
+
+	/* Reset send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
+	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
+	if (head | tail)
+		return  NICVF_ERR_SQ_RESET;
+
+	return 0;
+}
+
+int
+nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
+{
+	int ret;
+	struct sq_cfg sq_cfg = {.value = 0};
+
+	ret = nicvf_qset_sq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Send a mailbox msg to PF to config SQ */
+	if (nicvf_mbox_sq_config(nic, qidx))
+		return  NICVF_ERR_SQ_PF_CFG;
+
+	/* Set queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
+
+	/* Enable send queue  & set queue size */
+	sq_cfg.ena = 1;
+	sq_cfg.reset = 0;
+	sq_cfg.ldwb = 0;
+	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1, SND_QSIZE_SHIFT);
+	sq_cfg.tstmp_bgx_intf = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
+
+	/* Ring doorbell so that H/W restarts processing SQEs */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
+
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_sq_roundup(uint32_t val)
+{
+	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
+				SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
+				SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
+				SND_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
+	return nicvf_mbox_rq_sync(nic);
+}
+
+int
+nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
+	struct rq_cfg rq_cfg = {.value = 0};
+
+	if (nicvf_qset_rq_reclaim(nic, qidx))
+		return NICVF_ERR_RQ_CLAIM;
+
+	pf_rq_cfg.strip_pre_l2 = 0;
+	/* First cache line of RBDR data will be allocated into L2C */
+	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
+	pf_rq_cfg.cq_qs = nic->vf_id;
+	pf_rq_cfg.cq_idx = qidx;
+	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_cont_idx = 0;
+	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_strt_idx = 0;
+
+	/* Send a mailbox msg to PF to config RQ */
+	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
+		return NICVF_ERR_RQ_PF_CFG;
+
+	/* Select Rx backpressure */
+	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_BP_CFG;
+
+	/* Send a mailbox msg to PF to config RQ drop */
+	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_DROP_CFG;
+
+	/* Enable Receive queue */
+	rq_cfg.ena = 1;
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
+
+	return 0;
+}
+
+int
+nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t tail, head;
+
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
+		return NICVF_ERR_CQ_DISABLE;
+
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
+	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
+	if (head | tail)
+		return  NICVF_ERR_CQ_RESET;
+
+	/* Disable timer threshold (doesn't get reset upon CQ reset) */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+int
+nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	int ret;
+	struct cq_cfg cq_cfg = {.value = 0};
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set completion queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
+
+	cq_cfg.ena = 1;
+	cq_cfg.reset = 0;
+	/* Writes of CQE will be allocated into L2C */
+	cq_cfg.caching = 1;
+	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
+	cq_cfg.avg_con = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
+
+	/* Set threshold value for interrupt generation */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_cq_roundup(uint32_t val)
+{
+	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
+				CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
+				CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
+				CMP_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+
+void
+nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
+{
+	uint64_t val;
+
+	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
+	if (enable)
+		val |= (STRIP_FIRST_VLAN << 25);
+	else
+		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
+
+	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
+}
+
+void
+nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_cpu_to_be_64(*keyptr);
+		nicvf_reg_write(nic, addr, val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_reg_read(nic, addr);
+		*keyptr = nicvf_be_to_cpu_64(val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
+{
+	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
+}
+
+uint64_t
+nicvf_rss_get_cfg(struct nicvf *nic)
+{
+	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
+}
+
+int
+nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		rss->ind_tbl[idx] = tbl[idx];
+
+	if (nicvf_mbox_config_rss(nic))
+		return NICVF_ERR_RSS_TBL_UPDATE;
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		tbl[idx] = rss->ind_tbl[idx];
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_config(struct nicvf *nic, uint32_t  qcnt, uint64_t cfg)
+{
+	uint32_t idx;
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+	};
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	if (cfg == 0)
+		return -EINVAL;
+
+	/* Update default RSS key and cfg */
+	nicvf_rss_set_key(nic, default_key);
+	nicvf_rss_set_cfg(nic, cfg);
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = idx % qcnt;
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_rss_term(struct nicvf *nic)
+{
+	uint32_t idx;
+	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	nicvf_rss_set_cfg(nic, 0);
+	/* Redirect the output to the 0th queue */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		disable_rss[idx] = 0;
+
+	return nicvf_rss_reta_update(nic, disable_rss,
+				NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_loopback_config(struct nicvf *nic, bool enable)
+{
+	if (enable && nic->loopback_supported == 0)
+		return NICVF_ERR_LOOPBACK_CFG;
+
+	return nicvf_mbox_loopback_config(nic, enable);
+}
+
+void
+nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
+{
+	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
+	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
+	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
+	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
+	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
+	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
+	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
+	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
+	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
+	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
+	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
+	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
+	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
+	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
+
+	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
+	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
+	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
+	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
+	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
+}
+
+void
+nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_rx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
+	qstats->q_rx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
+}
+
+void
+nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_tx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
+	qstats->q_tx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
+}
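
An aside on the intended use of the queue-size helpers above: a caller first
rounds its requested ring size up to the nearest supported value, and the
nicvf_qset_*_config() functions then encode that size into the CFG register
via nicvf_qsize_regbit(). A minimal sketch, with a made-up request of 1200
completion queue entries:

static uint32_t
example_pick_cq_size(void)
{
	/* 1200 is not a supported size, so nicvf_qsize_cq_roundup() returns
	 * the next entry of its sorted list, CMP_QUEUE_SZ_2K (2048); anything
	 * above CMP_QUEUE_SZ_64K would yield 0 (unsupported). */
	uint32_t qlen = nicvf_qsize_cq_roundup(1200);

	/* nicvf_qset_cq_config() later computes the CFG qsize encoding as
	 * nicvf_qsize_regbit(2048, CMP_QSIZE_SHIFT) = log2(2048) - 10 = 1. */
	return qlen;
}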
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
new file mode 100644
index 0000000..32357cc
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -0,0 +1,240 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_H
+#define _THUNDERX_NICVF_HW_H
+
+#include <stdint.h>
+
+#include "nicvf_hw_defs.h"
+
+#define	PCI_VENDOR_ID_CAVIUM			0x177D
+#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
+#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
+
+#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define NICVF_GET_RX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | (reg << 3))
+#define NICVF_GET_TX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | (reg << 3))
+
+#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
+#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
+
+#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
+
+enum nicvf_tns_mode {
+	NIC_TNS_BYPASS_MODE = 0,
+	NIC_TNS_MODE,
+};
+
+enum nicvf_err_e {
+	NICVF_OK = 0,
+	NICVF_ERR_SET_QS = -8191,/* -8191 */
+	NICVF_ERR_RESET_QS,      /* -8190 */
+	NICVF_ERR_REG_POLL,      /* -8189 */
+	NICVF_ERR_RBDR_RESET,    /* -8188 */
+	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
+	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
+	NICVF_ERR_RBDR_RESET1,   /* -8185 */
+	NICVF_ERR_RBDR_RESET2,   /* -8184 */
+	NICVF_ERR_RQ_CLAIM,      /* -8183 */
+	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
+	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
+	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
+	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
+	NICVF_ERR_CQ_RESET,	 /* -8178 */
+	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
+	NICVF_ERR_SQ_RESET,	 /* -8176 */
+	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
+	NICVF_ERR_RSS_TBL_UPDATE,/* -8174 */
+	NICVF_ERR_RSS_GET_SZ,    /* -8173 */
+	NICVF_ERR_BASE_INIT,     /* -8172 */
+	NICVF_ERR_LOOPBACK_CFG,  /* -8171 */
+};
+
+typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
+
+struct nicvf_hw_rx_qstats {
+	uint64_t q_rx_bytes;
+	uint64_t q_rx_packets;
+};
+
+struct nicvf_hw_tx_qstats {
+	uint64_t q_tx_bytes;
+	uint64_t q_tx_packets;
+};
+
+struct nicvf_hw_stats {
+	uint64_t rx_bytes;
+	uint64_t rx_ucast_frames;
+	uint64_t rx_bcast_frames;
+	uint64_t rx_mcast_frames;
+	uint64_t rx_fcs_errors;
+	uint64_t rx_l2_errors;
+	uint64_t rx_drop_red;
+	uint64_t rx_drop_red_bytes;
+	uint64_t rx_drop_overrun;
+	uint64_t rx_drop_overrun_bytes;
+	uint64_t rx_drop_bcast;
+	uint64_t rx_drop_mcast;
+	uint64_t rx_drop_l3_bcast;
+	uint64_t rx_drop_l3_mcast;
+
+	uint64_t tx_bytes_ok;
+	uint64_t tx_ucast_frames_ok;
+	uint64_t tx_bcast_frames_ok;
+	uint64_t tx_mcast_frames_ok;
+	uint64_t tx_drops;
+};
+
+struct nicvf_rss_reta_info {
+	uint8_t hash_bits;
+	uint16_t rss_size;
+	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+};
+
+/* Common structs used in DPDK and base layer are defined in DPDK layer */
+#include "../nicvf_struct.h"
+
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
+
+static inline void
+nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+{
+	nicvf_addr_write(nic->reg_base + offset, val);
+}
+
+static inline uint64_t
+nicvf_reg_read(struct nicvf *nic, uint32_t offset)
+{
+	return nicvf_addr_read(nic->reg_base + offset);
+}
+
+static inline uintptr_t
+nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
+{
+	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
+}
+
+static inline void
+nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
+		      uint64_t val)
+{
+	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
+}
+
+static inline uint64_t
+nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
+{
+	return	nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
+}
+
+static inline void
+nicvf_disable_all_interrupts(struct nicvf *nic)
+{
+	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
+	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
+}
+
+static inline uint32_t
+nicvf_hw_version(struct nicvf *nic)
+{
+	return nic->subsystem_device_id;
+}
+
+static inline uint64_t
+nicvf_hw_cap(struct nicvf *nic)
+{
+	return nic->hwcap;
+}
+
+int nicvf_base_init(struct nicvf *nic);
+
+int nicvf_reg_get_count(void);
+int nicvf_reg_poll_interrupts(struct nicvf *nic);
+int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
+
+int nicvf_qset_config(struct nicvf *nic);
+int nicvf_qset_reclaim(struct nicvf *nic);
+
+int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			      rbdr_pool_get_handler handler, void *opaque,
+			      uint32_t max_buffs);
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_txq *txq);
+int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
+uint32_t nicvf_qsize_cq_roundup(uint32_t val);
+uint32_t nicvf_qsize_sq_roundup(uint32_t val);
+
+void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
+
+int nicvf_rss_config(struct nicvf *nic, uint32_t  qcnt, uint64_t cfg);
+int nicvf_rss_term(struct nicvf *nic);
+
+int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+
+void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
+void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
+
+void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
+uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
+
+int nicvf_loopback_config(struct nicvf *nic, bool enable);
+
+void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
+void nicvf_hw_get_rx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
+void nicvf_hw_get_tx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
+
+#endif /* _THUNDERX_NICVF_HW_H */
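
To make the register addressing above concrete: nicvf_qset_base() places each
queue's register block (qidx << NIC_Q_NUM_SHIFT) bytes into the VF BAR, and
the VNIC statistics registers are laid out 8 bytes apart, hence the (reg << 3)
in the stats macros. A small sketch (RX_OCTS is assumed here to be the
index-0 stat enumerator from nicvf_hw_defs.h):

static void
example_reg_reads(struct nicvf *nic)
{
	/* per-queue register: addr = reg_base + (qidx << 18) + offset;
	 * for qidx 3 and NIC_QSET_SQ_0_7_HEAD (0x010828) this is
	 * reg_base + 0x0C0000 + 0x010828 */
	uint64_t sq_head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, 3);

	/* NICVF_GET_RX_STATS(RX_OCTS) reads NIC_VNIC_RX_STAT_0_13 + (0 << 3),
	 * the register the dump table labels NIC_VNIC_STAT_RX_OCTS */
	uint64_t rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);

	(void)sq_head;
	(void)rx_bytes;
}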
diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
new file mode 100644
index 0000000..ef9354b
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -0,0 +1,1216 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_DEFS_H
+#define _THUNDERX_NICVF_HW_DEFS_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+/* Virtual function register offsets */
+
+#define NIC_VF_CFG                      (0x000020)
+#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
+#define NIC_VF_INT                      (0x000200)
+#define NIC_VF_INT_W1S                  (0x000220)
+#define NIC_VF_ENA_W1C                  (0x000240)
+#define NIC_VF_ENA_W1S                  (0x000260)
+
+#define NIC_VNIC_RSS_CFG                (0x0020E0)
+#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
+#define NIC_VNIC_TX_STAT_0_4            (0x004000)
+#define NIC_VNIC_RX_STAT_0_13           (0x004100)
+#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
+
+#define NIC_QSET_CQ_0_7_CFG             (0x010400)
+#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
+#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
+#define NIC_QSET_CQ_0_7_BASE            (0x010420)
+#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
+#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
+#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
+#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
+#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
+#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
+
+#define NIC_QSET_RQ_0_7_CFG             (0x010600)
+#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
+#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
+
+#define NIC_QSET_SQ_0_7_CFG             (0x010800)
+#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
+#define NIC_QSET_SQ_0_7_BASE            (0x010820)
+#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
+#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
+#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
+#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
+#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
+#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
+#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
+
+#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
+#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
+#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
+#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
+#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
+#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
+#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
+#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
+#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
+
+/* vNIC HW Constants */
+
+#define NIC_Q_NUM_SHIFT                 18
+
+#define MAX_QUEUE_SET                   128
+#define MAX_RCV_QUEUES_PER_QS           8
+#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
+#define MAX_SND_QUEUES_PER_QS           8
+#define MAX_CMP_QUEUES_PER_QS           8
+
+#define NICVF_INTR_CQ_SHIFT             0
+#define NICVF_INTR_SQ_SHIFT             8
+#define NICVF_INTR_RBDR_SHIFT           16
+#define NICVF_INTR_PKT_DROP_SHIFT       20
+#define NICVF_INTR_TCP_TIMER_SHIFT      21
+#define NICVF_INTR_MBOX_SHIFT           22
+#define NICVF_INTR_QS_ERR_SHIFT         23
+
+#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
+#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
+
+#define NICVF_CQ_WR_FULL                (1ULL << 26)
+#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
+#define NICVF_CQ_WR_FAULT               (1ULL << 24)
+#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
+					 NICVF_CQ_WR_DISABLE |\
+					 NICVF_CQ_WR_FAULT)
+#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
+
+#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
+#define NICVF_SQ_ERR_SEND               (1ULL << 20)
+#define NICVF_SQ_ERR_DPE                (1ULL << 19)
+#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
+					 NICVF_SQ_ERR_SEND |\
+					 NICVF_SQ_ERR_DPE)
+#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
+
+#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
+#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
+#define NICVF_RBDR_COUNT_MASK           (0x7FFFF)
+
+/* Queue reset */
+#define NICVF_CQ_RESET                  (1ULL << 41)
+#define NICVF_SQ_RESET                  (1ULL << 17)
+#define NICVF_RBDR_RESET                (1ULL << 43)
+
+/* RSS constants */
+#define NIC_MAX_RSS_HASH_BITS           (8)
+#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key */
+#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key */
+
+#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
+#define RSS_IP_ENA                      (1 << 1)
+#define RSS_TCP_ENA                     (1 << 2)
+#define RSS_TCP_SYN_ENA                 (1 << 3)
+#define RSS_UDP_ENA                     (1 << 4)
+#define RSS_L4_EXTENDED_ENA             (1 << 5)
+#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
+#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
+#define RSS_TUN_VXLAN_ENA               (1 << 9)
+#define RSS_TUN_GENEVE_ENA              (1 << 10)
+#define RSS_TUN_NVGRE_ENA               (1 << 11)
+
+#define RBDR_QUEUE_SZ_8K                (8 * 1024)
+#define RBDR_QUEUE_SZ_16K               (16 * 1024)
+#define RBDR_QUEUE_SZ_32K               (32 * 1024)
+#define RBDR_QUEUE_SZ_64K               (64 * 1024)
+#define RBDR_QUEUE_SZ_128K              (128 * 1024)
+#define RBDR_QUEUE_SZ_256K              (256 * 1024)
+#define RBDR_QUEUE_SZ_512K              (512 * 1024)
+
+#define RBDR_SIZE_SHIFT                 (13) /* 8k */
+
+#define SND_QUEUE_SZ_1K                 (1 * 1024)
+#define SND_QUEUE_SZ_2K                 (2 * 1024)
+#define SND_QUEUE_SZ_4K                 (4 * 1024)
+#define SND_QUEUE_SZ_8K                 (8 * 1024)
+#define SND_QUEUE_SZ_16K                (16 * 1024)
+#define SND_QUEUE_SZ_32K                (32 * 1024)
+#define SND_QUEUE_SZ_64K                (64 * 1024)
+
+#define SND_QSIZE_SHIFT                 (10) /* 1k */
+
+#define CMP_QUEUE_SZ_1K                 (1 * 1024)
+#define CMP_QUEUE_SZ_2K                 (2 * 1024)
+#define CMP_QUEUE_SZ_4K                 (4 * 1024)
+#define CMP_QUEUE_SZ_8K                 (8 * 1024)
+#define CMP_QUEUE_SZ_16K                (16 * 1024)
+#define CMP_QUEUE_SZ_32K                (32 * 1024)
+#define CMP_QUEUE_SZ_64K                (64 * 1024)
+
+#define CMP_QSIZE_SHIFT                 (10) /* 1k */
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS			64
+#define NIC_HW_MAX_FRS			9200 /* 9216 max packet including FCS */
+#define NIC_HW_MAX_SEGS			12
+
+/* Descriptor alignments */
+#define NICVF_RBDR_BASE_ALIGN_BYTES	128 /* 7 bits */
+#define NICVF_CQ_BASE_ALIGN_BYTES	512 /* 9 bits */
+#define NICVF_SQ_BASE_ALIGN_BYTES	128 /* 7 bits */
+
+/* vNIC HW Enumerations */
+
+enum nic_send_ld_type_e {
+	NIC_SEND_LD_TYPE_E_LDD = 0x0,
+	NIC_SEND_LD_TYPE_E_LDT = 0x1,
+	NIC_SEND_LD_TYPE_E_LDWB = 0x2,
+	NIC_SEND_LD_TYPE_E_ENUM_LAST = 0x3,
+};
+
+enum ether_type_algorithm {
+	ETYPE_ALG_NONE = 0x0,
+	ETYPE_ALG_SKIP = 0x1,
+	ETYPE_ALG_ENDPARSE = 0x2,
+	ETYPE_ALG_VLAN = 0x3,
+	ETYPE_ALG_VLAN_STRIP = 0x4,
+};
+
+enum layer3_type {
+	L3TYPE_NONE = 0x0,
+	L3TYPE_GRH = 0x1,
+	L3TYPE_IPV4 = 0x4,
+	L3TYPE_IPV4_OPTIONS = 0x5,
+	L3TYPE_IPV6 = 0x6,
+	L3TYPE_IPV6_OPTIONS = 0x7,
+	L3TYPE_ET_STOP = 0xD,
+	L3TYPE_OTHER = 0xE,
+};
+
+#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
+#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
+
+enum layer4_type {
+	L4TYPE_NONE = 0x0,
+	L4TYPE_IPSEC_ESP = 0x1,
+	L4TYPE_IPFRAG = 0x2,
+	L4TYPE_IPCOMP = 0x3,
+	L4TYPE_TCP = 0x4,
+	L4TYPE_UDP = 0x5,
+	L4TYPE_SCTP = 0x6,
+	L4TYPE_GRE = 0x7,
+	L4TYPE_ROCE_BTH = 0x8,
+	L4TYPE_OTHER = 0xE,
+};
+
+/* CPI and RSSI configuration */
+enum cpi_algorithm_type {
+	CPI_ALG_NONE = 0x0,
+	CPI_ALG_VLAN = 0x1,
+	CPI_ALG_VLAN16 = 0x2,
+	CPI_ALG_DIFF = 0x3,
+};
+
+enum rss_algorithm_type {
+	RSS_ALG_NONE = 0x00,
+	RSS_ALG_PORT = 0x01,
+	RSS_ALG_IP = 0x02,
+	RSS_ALG_TCP_IP = 0x03,
+	RSS_ALG_UDP_IP = 0x04,
+	RSS_ALG_SCTP_IP = 0x05,
+	RSS_ALG_GRE_IP = 0x06,
+	RSS_ALG_ROCE = 0x07,
+};
+
+enum rss_hash_cfg {
+	RSS_HASH_L2ETC = 0x00,
+	RSS_HASH_IP = 0x01,
+	RSS_HASH_TCP = 0x02,
+	RSS_HASH_TCP_SYN_DIS = 0x03,
+	RSS_HASH_UDP = 0x04,
+	RSS_HASH_L4ETC = 0x05,
+	RSS_HASH_ROCE = 0x06,
+	RSS_L3_BIDI = 0x07,
+	RSS_L4_BIDI = 0x08,
+};
+
+/* Completion queue entry types */
+enum cqe_type {
+	CQE_TYPE_INVALID = 0x0,
+	CQE_TYPE_RX = 0x2,
+	CQE_TYPE_RX_SPLIT = 0x3,
+	CQE_TYPE_RX_TCP = 0x4,
+	CQE_TYPE_SEND = 0x8,
+	CQE_TYPE_SEND_PTP = 0x9,
+};
+
+enum cqe_rx_tcp_status {
+	CQE_RX_STATUS_VALID_TCP_CNXT = 0x00,
+	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
+};
+
+enum cqe_send_status {
+	CQE_SEND_STATUS_GOOD = 0x00,
+	CQE_SEND_STATUS_DESC_FAULT = 0x01,
+	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
+	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
+	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
+	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
+	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
+	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
+	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
+	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
+	CQE_SEND_STATUS_DATA_FAULT = 0x86,
+	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
+	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
+	CQE_SEND_STATUS_MEM_FAULT = 0x89,
+	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
+	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
+};
+
+enum cqe_rx_tcp_end_reason {
+	CQE_RX_TCP_END_FIN_FLAG_DET = 0,
+	CQE_RX_TCP_END_INVALID_FLAG = 1,
+	CQE_RX_TCP_END_TIMEOUT = 2,
+	CQE_RX_TCP_END_OUT_OF_SEQ = 3,
+	CQE_RX_TCP_END_PKT_ERR = 4,
+	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
+};
+
+/* Packet protocol level error enumeration */
+enum cqe_rx_err_level {
+	CQE_RX_ERRLVL_RE = 0x0,
+	CQE_RX_ERRLVL_L2 = 0x1,
+	CQE_RX_ERRLVL_L3 = 0x2,
+	CQE_RX_ERRLVL_L4 = 0x3,
+};
+
+/* Packet protocol level error type enumeration */
+enum cqe_rx_err_opcode {
+	CQE_RX_ERR_RE_NONE = 0x0,
+	CQE_RX_ERR_RE_PARTIAL = 0x1,
+	CQE_RX_ERR_RE_JABBER = 0x2,
+	CQE_RX_ERR_RE_FCS = 0x7,
+	CQE_RX_ERR_RE_TERMINATE = 0x9,
+	CQE_RX_ERR_RE_RX_CTL = 0xb,
+	CQE_RX_ERR_PREL2_ERR = 0x1f,
+	CQE_RX_ERR_L2_FRAGMENT = 0x20,
+	CQE_RX_ERR_L2_OVERRUN = 0x21,
+	CQE_RX_ERR_L2_PFCS = 0x22,
+	CQE_RX_ERR_L2_PUNY = 0x23,
+	CQE_RX_ERR_L2_MAL = 0x24,
+	CQE_RX_ERR_L2_OVERSIZE = 0x25,
+	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
+	CQE_RX_ERR_L2_LENMISM = 0x27,
+	CQE_RX_ERR_L2_PCLP = 0x28,
+	CQE_RX_ERR_IP_NOT = 0x41,
+	CQE_RX_ERR_IP_CHK = 0x42,
+	CQE_RX_ERR_IP_MAL = 0x43,
+	CQE_RX_ERR_IP_MALD = 0x44,
+	CQE_RX_ERR_IP_HOP = 0x45,
+	CQE_RX_ERR_L3_ICRC = 0x46,
+	CQE_RX_ERR_L3_PCLP = 0x47,
+	CQE_RX_ERR_L4_MAL = 0x61,
+	CQE_RX_ERR_L4_CHK = 0x62,
+	CQE_RX_ERR_UDP_LEN = 0x63,
+	CQE_RX_ERR_L4_PORT = 0x64,
+	CQE_RX_ERR_TCP_FLAG = 0x65,
+	CQE_RX_ERR_TCP_OFFSET = 0x66,
+	CQE_RX_ERR_L4_PCLP = 0x67,
+	CQE_RX_ERR_RBDR_TRUNC = 0x70,
+};
+
+enum send_l4_csum_type {
+	SEND_L4_CSUM_DISABLE = 0x00,
+	SEND_L4_CSUM_UDP = 0x01,
+	SEND_L4_CSUM_TCP = 0x02,
+};
+
+enum send_crc_alg {
+	SEND_CRCALG_CRC32 = 0x00,
+	SEND_CRCALG_CRC32C = 0x01,
+	SEND_CRCALG_ICRC = 0x02,
+};
+
+enum send_load_type {
+	SEND_LD_TYPE_LDD = 0x00,
+	SEND_LD_TYPE_LDT = 0x01,
+	SEND_LD_TYPE_LDWB = 0x02,
+};
+
+enum send_mem_alg_type {
+	SEND_MEMALG_SET = 0x00,
+	SEND_MEMALG_ADD = 0x08,
+	SEND_MEMALG_SUB = 0x09,
+	SEND_MEMALG_ADDLEN = 0x0A,
+	SEND_MEMALG_SUBLEN = 0x0B,
+};
+
+enum send_mem_dsz_type {
+	SEND_MEMDSZ_B64 = 0x00,
+	SEND_MEMDSZ_B32 = 0x01,
+	SEND_MEMDSZ_B8 = 0x03,
+};
+
+enum sq_subdesc_type {
+	SQ_DESC_TYPE_INVALID = 0x00,
+	SQ_DESC_TYPE_HEADER = 0x01,
+	SQ_DESC_TYPE_CRC = 0x02,
+	SQ_DESC_TYPE_IMMEDIATE = 0x03,
+	SQ_DESC_TYPE_GATHER = 0x04,
+	SQ_DESC_TYPE_MEMORY = 0x05,
+};
+
+enum l3_type_t {
+	L3_NONE		= 0x00,
+	L3_IPV4		= 0x04,
+	L3_IPV4_OPT	= 0x05,
+	L3_IPV6		= 0x06,
+	L3_IPV6_OPT	= 0x07,
+	L3_ET_STOP	= 0x0D,
+	L3_OTHER	= 0x0E
+};
+
+enum l4_type_t {
+	L4_NONE		= 0x00,
+	L4_IPSEC_ESP	= 0x01,
+	L4_IPFRAG	= 0x02,
+	L4_IPCOMP	= 0x03,
+	L4_TCP		= 0x04,
+	L4_UDP_PASS1	= 0x05,
+	L4_GRE		= 0x07,
+	L4_UDP_PASS2	= 0x08,
+	L4_UDP_GENEVE	= 0x09,
+	L4_UDP_VXLAN	= 0x0A,
+	L4_NVGRE	= 0x0C,
+	L4_OTHER	= 0x0E
+};
+
+enum vlan_strip {
+	NO_STRIP = 0x0,
+	STRIP_FIRST_VLAN = 0x1,
+	STRIP_SECOND_VLAN = 0x2,
+	STRIP_RESERV = 0x3
+};
+
+enum rbdr_state {
+	RBDR_FIFO_STATE_INACTIVE = 0,
+	RBDR_FIFO_STATE_ACTIVE   = 1,
+	RBDR_FIFO_STATE_RESET    = 2,
+	RBDR_FIFO_STATE_FAIL     = 3
+};
+
+enum rq_cache_allocation {
+	RQ_CACHE_ALLOC_OFF      = 0,
+	RQ_CACHE_ALLOC_ALL      = 1,
+	RQ_CACHE_ALLOC_FIRST    = 2,
+	RQ_CACHE_ALLOC_TWO      = 3,
+};
+
+enum cq_rx_errlvl_e {
+	CQ_ERRLVL_MAC,
+	CQ_ERRLVL_L2,
+	CQ_ERRLVL_L3,
+	CQ_ERRLVL_L4,
+};
+
+enum cq_rx_errop_e {
+	CQ_RX_ERROP_RE_NONE = 0x0,
+	CQ_RX_ERROP_RE_PARTIAL = 0x1,
+	CQ_RX_ERROP_RE_JABBER = 0x2,
+	CQ_RX_ERROP_RE_FCS = 0x7,
+	CQ_RX_ERROP_RE_TERMINATE = 0x9,
+	CQ_RX_ERROP_RE_RX_CTL = 0xb,
+	CQ_RX_ERROP_PREL2_ERR = 0x1f,
+	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
+	CQ_RX_ERROP_L2_OVERRUN = 0x21,
+	CQ_RX_ERROP_L2_PFCS = 0x22,
+	CQ_RX_ERROP_L2_PUNY = 0x23,
+	CQ_RX_ERROP_L2_MAL = 0x24,
+	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
+	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
+	CQ_RX_ERROP_L2_LENMISM = 0x27,
+	CQ_RX_ERROP_L2_PCLP = 0x28,
+	CQ_RX_ERROP_IP_NOT = 0x41,
+	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
+	CQ_RX_ERROP_IP_MAL = 0x43,
+	CQ_RX_ERROP_IP_MALD = 0x44,
+	CQ_RX_ERROP_IP_HOP = 0x45,
+	CQ_RX_ERROP_L3_ICRC = 0x46,
+	CQ_RX_ERROP_L3_PCLP = 0x47,
+	CQ_RX_ERROP_L4_MAL = 0x61,
+	CQ_RX_ERROP_L4_CHK = 0x62,
+	CQ_RX_ERROP_UDP_LEN = 0x63,
+	CQ_RX_ERROP_L4_PORT = 0x64,
+	CQ_RX_ERROP_TCP_FLAG = 0x65,
+	CQ_RX_ERROP_TCP_OFFSET = 0x66,
+	CQ_RX_ERROP_L4_PCLP = 0x67,
+	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
+};
+
+enum cq_tx_errop_e {
+	CQ_TX_ERROP_GOOD = 0x0,
+	CQ_TX_ERROP_DESC_FAULT = 0x10,
+	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
+	CQ_TX_ERROP_SUBDC_ERR = 0x12,
+	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
+	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
+	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
+	CQ_TX_ERROP_LOCK_VIOL = 0x83,
+	CQ_TX_ERROP_DATA_FAULT = 0x84,
+	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
+	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
+	CQ_TX_ERROP_MEM_FAULT = 0x87,
+	CQ_TX_ERROP_CK_OVERLAP = 0x88,
+	CQ_TX_ERROP_CK_OFLOW = 0x89,
+	CQ_TX_ERROP_ENUM_LAST = 0x8a,
+};
+
+enum rq_sq_stats_reg_offset {
+	RQ_SQ_STATS_OCTS = 0x0,
+	RQ_SQ_STATS_PKTS = 0x1,
+};
+
+enum nic_stat_vnic_rx_e {
+	RX_OCTS = 0,
+	RX_UCAST,
+	RX_BCAST,
+	RX_MCAST,
+	RX_RED,
+	RX_RED_OCTS,
+	RX_ORUN,
+	RX_ORUN_OCTS,
+	RX_FCS,
+	RX_L2ERR,
+	RX_DRP_BCAST,
+	RX_DRP_MCAST,
+	RX_DRP_L3BCAST,
+	RX_DRP_L3MCAST,
+};
+
+enum nic_stat_vnic_tx_e {
+	TX_OCTS = 0,
+	TX_UCAST,
+	TX_BCAST,
+	TX_MCAST,
+	TX_DROP,
+};
+
+#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
+
+typedef uint64_t nicvf_phys_addr_t;
+
+#ifndef __BYTE_ORDER__
+#error __BYTE_ORDER__ not defined
+#endif
+
+/* vNIC HW Structures */
+
+#define NICVF_CQE_RBPTR_WORD         6
+#define NICVF_CQE_RX2_RBPTR_WORD     7
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t cqe_type:4;
+		uint64_t stdn_fault:1;
+		uint64_t rsvd0:1;
+		uint64_t rq_qs:7;
+		uint64_t rq_idx:3;
+		uint64_t rsvd1:12;
+		uint64_t rss_alg:4;
+		uint64_t rsvd2:4;
+		uint64_t rb_cnt:4;
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t l4_type:4;
+		uint64_t l3_type:4;
+		uint64_t l2_present:1;
+		uint64_t err_level:3;
+		uint64_t err_opcode:8;
+#else
+		uint64_t err_opcode:8;
+		uint64_t err_level:3;
+		uint64_t l2_present:1;
+		uint64_t l3_type:4;
+		uint64_t l4_type:4;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+		uint64_t rb_cnt:4;
+		uint64_t rsvd2:4;
+		uint64_t rss_alg:4;
+		uint64_t rsvd1:12;
+		uint64_t rq_idx:3;
+		uint64_t rq_qs:7;
+		uint64_t rsvd0:1;
+		uint64_t stdn_fault:1;
+		uint64_t cqe_type:4;
+#endif
+	};
+} cqe_rx_word0_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t pkt_len:16;
+		uint64_t l2_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l4_ptr:8;
+		uint64_t cq_pkt_len:8;
+		uint64_t align_pad:3;
+		uint64_t rsvd3:1;
+		uint64_t chan:12;
+#else
+		uint64_t chan:12;
+		uint64_t rsvd3:1;
+		uint64_t align_pad:3;
+		uint64_t cq_pkt_len:8;
+		uint64_t l4_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l2_ptr:8;
+		uint64_t pkt_len:16;
+#endif
+	};
+} cqe_rx_word1_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t rss_tag:32;
+		uint64_t vlan_tci:16;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+#else
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan_tci:16;
+		uint64_t rss_tag:32;
+#endif
+	};
+} cqe_rx_word2_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb3_sz;
+		uint16_t rb2_sz;
+		uint16_t rb1_sz;
+		uint16_t rb0_sz;
+#else
+		uint16_t rb0_sz;
+		uint16_t rb1_sz;
+		uint16_t rb2_sz;
+		uint16_t rb3_sz;
+#endif
+	};
+} cqe_rx_word3_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb7_sz;
+		uint16_t rb6_sz;
+		uint16_t rb5_sz;
+		uint16_t rb4_sz;
+#else
+		uint16_t rb4_sz;
+		uint16_t rb5_sz;
+		uint16_t rb6_sz;
+		uint16_t rb7_sz;
+#endif
+	};
+} cqe_rx_word4_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb11_sz;
+		uint16_t rb10_sz;
+		uint16_t rb9_sz;
+		uint16_t rb8_sz;
+#else
+		uint16_t rb8_sz;
+		uint16_t rb9_sz;
+		uint16_t rb10_sz;
+		uint16_t rb11_sz;
+#endif
+	};
+} cqe_rx_word5_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t rsvd2:3;
+		uint64_t inner_l2:1;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l3type:4;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t rsvd1:8;
+		uint64_t rsvd0:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t inner_l4ptr:8;
+#else
+		uint64_t inner_l4ptr:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t rsvd0:8;
+		uint64_t rsvd1:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t inner_l3type:4;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l2:1;
+		uint64_t rsvd2:3;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+#endif
+	};
+} cqe_rx2_word6_t;
+
+struct cqe_rx_t {
+	cqe_rx_word0_t word0;
+	cqe_rx_word1_t word1;
+	cqe_rx_word2_t word2;
+	cqe_rx_word3_t word3;
+	cqe_rx_word4_t word4;
+	cqe_rx_word5_t word5;
+	cqe_rx2_word6_t word6; /* if NIC_PF_RX_CFG[CQE_RX2_ENA] set */
+};
+
+struct cqe_rx_tcp_err_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:60;
+
+	uint64_t   rsvd1:4; /* W1 */
+	uint64_t   partial_first:1;
+	uint64_t   rsvd2:27;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd3:24;
+#else
+	uint64_t   rsvd0:60;
+	uint64_t   cqe_type:4;
+
+	uint64_t   rsvd3:24;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd2:27;
+	uint64_t   partial_first:1;
+	uint64_t   rsvd1:4;
+#endif
+};
+
+struct cqe_rx_tcp_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:52;
+	uint64_t   cq_tcp_status:8;
+
+	uint64_t   rsvd1:32; /* W1 */
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_err_bytes:16;
+#else
+	uint64_t   cq_tcp_status:8;
+	uint64_t   rsvd0:52;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   tcp_err_bytes:16;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd1:32; /* W1 */
+#endif
+};
+
+struct cqe_send_t {
+#if defined(__BIG_ENDIAN_BITFIELD)
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd1:4;
+	uint64_t   rsvd2:10;
+	uint64_t   sq_qs:7;
+	uint64_t   sq_idx:3;
+	uint64_t   rsvd3:8;
+	uint64_t   send_status:8;
+
+	uint64_t   ptp_timestamp:64; /* W1 */
+#elif defined(__LITTLE_ENDIAN_BITFIELD)
+	uint64_t   send_status:8;
+	uint64_t   rsvd3:8;
+	uint64_t   sq_idx:3;
+	uint64_t   sq_qs:7;
+	uint64_t   rsvd2:10;
+	uint64_t   rsvd1:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd0:4;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   ptp_timestamp:64;
+#endif
+};
+
+struct cq_entry_type_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t cqe_type:4;
+	uint64_t __pad:60;
+#else
+	uint64_t __pad:60;
+	uint64_t cqe_type:4;
+#endif
+};
+
+union cq_entry_t {
+	uint64_t u[64];
+	struct cq_entry_type_t type;
+	struct cqe_rx_t rx_hdr;
+	struct cqe_rx_tcp_t rx_tcp_hdr;
+	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
+	struct cqe_send_t cqe_send;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
+
+struct rbdr_entry_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	union {
+		struct {
+			uint64_t   rsvd0:15;
+			uint64_t   buf_addr:42;
+			uint64_t   cache_align:7;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#else
+	union {
+		struct {
+			uint64_t   cache_align:7;
+			uint64_t   buf_addr:42;
+			uint64_t   rsvd0:15;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#endif
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
+
+/* TCP reassembly context */
+struct rbe_tcp_cnxt_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   tcp_pkt_cnt:12;
+	uint64_t   rsvd1:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   ptr_bytes:16;
+	uint64_t   rsvd2:24;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd0:54;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   tcp_status:4;
+#else
+	uint64_t   tcp_status:4;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   rsvd0:54;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd2:24;
+	uint64_t   ptr_bytes:16;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   rsvd1:4;
+	uint64_t   tcp_pkt_cnt:12;
+#endif
+};
+
+/* Always Big endian */
+struct rx_hdr_t {
+	uint64_t   opaque:32;
+	uint64_t   rss_flow:8;
+	uint64_t   skip_length:6;
+	uint64_t   disable_rss:1;
+	uint64_t   disable_tcp_reassembly:1;
+	uint64_t   nodrop:1;
+	uint64_t   dest_alg:2;
+	uint64_t   rsvd0:2;
+	uint64_t   dest_rq:11;
+};
+
+struct sq_crc_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    rsvd1:32;
+	uint64_t    crc_ival:32;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_alg:2;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_len:16;
+#else
+	uint64_t    crc_len:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_alg:2;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_ival:32;
+	uint64_t    rsvd1:32;
+#endif
+};
+
+struct sq_gather_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    ld_type:2;
+	uint64_t    rsvd0:42;
+	uint64_t    size:16;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    size:16;
+	uint64_t    rsvd0:42;
+	uint64_t    ld_type:2;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+/* SQ immediate subdescriptor */
+struct sq_imm_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    rsvd0:46;
+	uint64_t    len:14;
+
+	uint64_t    data:64; /* W1 */
+#else
+	uint64_t    len:14;
+	uint64_t    rsvd0:46;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    data:64; /* W1 */
+#endif
+};
+
+struct sq_mem_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    mem_alg:4;
+	uint64_t    mem_dsz:2;
+	uint64_t    wmem:1;
+	uint64_t    rsvd0:21;
+	uint64_t    offset:32;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    offset:32;
+	uint64_t    rsvd0:21;
+	uint64_t    wmem:1;
+	uint64_t    mem_dsz:2;
+	uint64_t    mem_alg:4;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+struct sq_hdr_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4;
+	uint64_t    tso:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    dont_send:1;
+	uint64_t    tstmp:1;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    csum_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    rsvd0:2;
+	uint64_t    l4_offset:8;
+	uint64_t    l3_offset:8;
+	uint64_t    rsvd1:4;
+	uint64_t    tot_len:20; /* W0 */
+
+	uint64_t    rsvd2:24;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    tso_start:8;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_max_paysize:14; /* W1 */
+#else
+	uint64_t    tot_len:20;
+	uint64_t    rsvd1:4;
+	uint64_t    l3_offset:8;
+	uint64_t    l4_offset:8;
+	uint64_t    rsvd0:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_l4:2;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    tstmp:1;
+	uint64_t    dont_send:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    tso:1;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    tso_max_paysize:14;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_start:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    rsvd2:24; /* W1 */
+#endif
+};
+
+/* Each sq entry is 128 bits wide */
+union sq_entry_t {
+	uint64_t buff[2];
+	struct sq_hdr_subdesc hdr;
+	struct sq_imm_subdesc imm;
+	struct sq_gather_subdesc gather;
+	struct sq_crc_subdesc crc;
+	struct sq_mem_subdesc mem;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
+
+/* Queue config register formats */
+struct rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_2_63:62;
+	uint64_t ena:1;
+	uint64_t reserved_0:1;
+#else
+	uint64_t reserved_0:1;
+	uint64_t ena:1;
+	uint64_t reserved_2_63:62;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct cq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_43_63:21;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t caching:1;
+	uint64_t reserved_35_39:5;
+	uint64_t qsize:3;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_0_15:16;
+#else
+	uint64_t reserved_0_15:16;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:3;
+	uint64_t reserved_35_39:5;
+	uint64_t caching:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_43_63:21;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct sq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_20_63:44;
+	uint64_t ena:1;
+	uint64_t reserved_18_18:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_11_15:5;
+	uint64_t qsize:3;
+	uint64_t reserved_3_7:5;
+	uint64_t tstmp_bgx_intf:3;
+#else
+	uint64_t tstmp_bgx_intf:3;
+	uint64_t reserved_3_7:5;
+	uint64_t qsize:3;
+	uint64_t reserved_11_15:5;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t reserved_18_18:1;
+	uint64_t ena:1;
+	uint64_t reserved_20_63:44;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct rbdr_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_45_63:19;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_36_41:6;
+	uint64_t qsize:4;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_12_15:4;
+	uint64_t lines:12;
+#else
+	uint64_t lines:12;
+	uint64_t reserved_12_15:4;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:4;
+	uint64_t reserved_36_41:6;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_45_63:19;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_qs_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_32_63:32;
+	uint64_t ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t sq_ins_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t lock_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t be:1;
+	uint64_t reserved_7_15:9;
+	uint64_t vnic:7;
+#else
+	uint64_t vnic:7;
+	uint64_t reserved_7_15:9;
+	uint64_t be:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t lock_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t sq_ins_ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t ena:1;
+	uint64_t reserved_32_63:32;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved1:1;
+	uint64_t reserved0:34;
+	uint64_t strip_pre_l2:1;
+	uint64_t caching:2;
+	uint64_t cq_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_strt_idx:1;
+#else
+	uint64_t rbdr_strt_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t cq_qs:7;
+	uint64_t caching:2;
+	uint64_t strip_pre_l2:1;
+	uint64_t reserved0:34;
+	uint64_t reserved1:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_drop_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t rbdr_red:1;
+	uint64_t cq_red:1;
+	uint64_t reserved3:14;
+	uint64_t rbdr_pass:8;
+	uint64_t rbdr_drop:8;
+	uint64_t reserved2:8;
+	uint64_t cq_pass:8;
+	uint64_t cq_drop:8;
+	uint64_t reserved1:8;
+#else
+	uint64_t reserved1:8;
+	uint64_t cq_drop:8;
+	uint64_t cq_pass:8;
+	uint64_t reserved2:8;
+	uint64_t rbdr_drop:8;
+	uint64_t rbdr_pass:8;
+	uint64_t reserved3:14;
+	uint64_t cq_red:1;
+	uint64_t rbdr_red:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+#endif /* _THUNDERX_NICVF_HW_DEFS_H */
diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
new file mode 100644
index 0000000..715c7c3
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.c
@@ -0,0 +1,416 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "nicvf_plat.h"
+
+static const char *mbox_message[NIC_MBOX_MSG_MAX] =  {
+	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
+	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
+	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
+	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
+	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
+	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
+	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
+	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
+	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
+	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
+	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
+	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
+	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
+	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
+	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
+	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
+	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
+	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
+	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
+	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
+	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
+	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
+};
+
+static inline const char *
+nicvf_mbox_msg_str(int msg)
+{
+	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
+	/* undefined messages */
+	if (mbox_message[msg] == NULL)
+		msg = 0;
+	return mbox_message[msg];
+}
+
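+/* Copy the message into the two shared mailbox registers, one 64-bit word
+ * at a time; per the mailbox protocol, the write into the last register
+ * signals end of message to the PF.
+ */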
+static inline void
+nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	uint64_t *mbx_data;
+	uint64_t mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (uint64_t *)mbx;
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		nicvf_reg_write(nic, mbx_addr, *mbx_data);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+	nicvf_mbox_log("msg sent %s (VF%d)",
+			nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
+}
+
+static inline void
+nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+	/* Messages without an ACK are racy! */
+	nicvf_delay_us(1000);
+}
+
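+/* Synchronous send: post the message and busy-wait until the periodic
+ * interrupt poll (nicvf_interrupt()) flags an ACK or NACK from the PF,
+ * retrying up to 'retry' times before giving up.
+ */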
+static inline int
+nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	long timeout;
+	long sleep = 10;
+	int i, retry = 5;
+
+	for (i = 0; i < retry; i++) {
+		nic->pf_acked = false;
+		nic->pf_nacked = false;
+		nicvf_smp_wmb();
+
+		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+		/* Give the PF some time to respond */
+		nicvf_delay_us(1000);
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+		while (timeout > 0) {
+			/* Periodic poll happens from nicvf_interrupt() */
+			nicvf_smp_rmb();
+
+			if (nic->pf_nacked)
+				return -EINVAL;
+			if (nic->pf_acked)
+				return 0;
+
+			nicvf_delay_us(1000);
+			timeout -= sleep;
+		}
+		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
+				mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
+				nic->vf_id, i, retry);
+	}
+	return -EBUSY;
+}
+
+
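+/* Read one message from the shared mailbox registers, dispatch it and
+ * return the message id; called from the periodic interrupt poll.
+ */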
+int
+nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	struct nic_mbx mbx;
+	uint64_t *mbx_data = (uint64_t *)&mbx;
+	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	size_t i;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+
+	/* Overwrite the message so we won't receive it again */
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
+
+	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
+			nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
+
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
+				(struct ether_addr *)nic->mac_addr);
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_NACK:
+		nic->pf_nacked = true;
+		break;
+	case NIC_MBOX_MSG_RSS_SIZE:
+		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
+		nic->pf_acked = true;
+		break;
+	default:
+		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
+				mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
+		break;
+	}
+	nicvf_smp_wmb();
+
+	return mbx.msg.msg;
+}
+
+/*
+ * Check whether the VF is able to communicate with the PF
+ * and also get the VNIC number this VF is associated with.
+ */
+int
+nicvf_mbox_check_pf_ready(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
+{
+	struct nic_mbx mbx = { .msg = {0} };
+	int i;
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
+	mbx.mac.vf_id = nic->vf_id;
+	for (i = 0; i < 6; i++)
+		mbx.mac.mac_addr[i] = mac[i];
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
+	mbx.cpi_cfg.vf_id = nic->vf_id;
+	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
+	mbx.cpi_cfg.rq_cnt = qcnt;
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_get_rss_size(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
+	mbx.rss_size.vf_id = nic->vf_id;
+
+	/* Result will be stored in nic->rss_info.rss_size */
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_rss(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+	size_t tot_len = rss->rss_size;
+	size_t cur_len;
+	size_t cur_idx = 0;
+	size_t i;
+
+	mbx.rss_cfg.vf_id = nic->vf_id;
+	mbx.rss_cfg.hash_bits = rss->hash_bits;
+	mbx.rss_cfg.tbl_len = 0;
+	mbx.rss_cfg.tbl_offset = 0;
+
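+	/* The indirection table may not fit in a single mailbox message, so
+	 * send it in chunks of RSS_IND_TBL_LEN_PER_MBX_MSG entries; every
+	 * chunk after the first uses the RSS_CFG_CONT message type.
+	 */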
+	while (cur_idx < tot_len) {
+		cur_len = nicvf_min(tot_len - cur_idx,
+				(size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
+		mbx.msg.msg = (cur_idx > 0) ?
+			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
+		mbx.rss_cfg.tbl_offset = cur_idx;
+		mbx.rss_cfg.tbl_len = cur_len;
+		for (i = 0; i < cur_len; i++)
+			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
+
+		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
+			return NICVF_ERR_RSS_TBL_UPDATE;
+	}
+
+	return 0;
+}
+
+int
+nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+		     struct pf_rq_cfg *pf_rq_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = pf_rq_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
+	mbx.sq.qs_num = nic->vf_id;
+	mbx.sq.sq_num = qidx;
+	mbx.sq.sqs_mode = nic->sqs_mode;
+	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
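+	/* Assumption: the QS_CFG BE bit tells the hardware to interpret the
+	 * Qset data structures in the VF's (big endian) byte order.
+	 */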
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	qs_cfg->be = 1;
+#endif
+	/* Send a mailbox msg to PF to config Qset */
+	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
+	mbx.qs.num = nic->vf_id;
+	mbx.qs.cfg = qs_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct pf_rq_drop_cfg *drop_cfg;
+
+	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
+	drop_cfg->value = 0;
+	if (enable) {
+		drop_cfg->cq_red = 1;
+		drop_cfg->cq_drop = 2;
+	}
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
+	mbx.frs.max_frs = mtu;
+	mbx.frs.vf_id = nic->vf_id;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_sync(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	/* Make sure all packets in the pipeline are written back to memory */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
+	mbx.rq.cfg = 0;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = 0;
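+	/* Field layout is not spelled out here; bits 63 and 62 are presumed
+	 * to be the RBDR and CQ backpressure enables, with the VF id used as
+	 * the backpressure ID in the low bits.
+	 */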
+	if (enable)
+		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
+	mbx.lbk.vf_id = nic->vf_id;
+	mbx.lbk.enable = enable;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
+			       uint16_t sq_stat_mask)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
+	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
+	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
+	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
+	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_shutdown(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+	nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_cfg_done(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
new file mode 100644
index 0000000..7c0c6a9
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.h
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_MBOX__
+#define __THUNDERX_NICVF_MBOX__
+
+#include <stdint.h>
+
+#include "nicvf_plat.h"
+
+/* PF <--> VF mailbox communication
+ * Two 64-bit registers are shared between the PF and each VF.
+ * Writing into the second register signals the end of a message.
+ */
+
+/* PF <--> VF mailbox communication */
+#define	NIC_PF_VF_MAILBOX_SIZE		2
+#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
+
+/* Mailbox message types */
+#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
+#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
+#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
+#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
+#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
+#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
+#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure receive queue drop */
+#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
+#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
+#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
+#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
+#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
+#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
+#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
+#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
+#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
+#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
+#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
+
+/* Get vNIC VF configuration */
+struct nic_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    node_id;
+	bool	   tns_mode:1;
+	bool	   sqs_mode:1;
+	bool	   loopback_supported:1;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	uint8_t    msg;
+	uint8_t    num;
+	uint8_t    sqs_count;
+	uint64_t   cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    rq_num;
+	uint64_t   cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    sq_num;
+	bool       sqs_mode;
+	uint64_t   cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    rq_cnt;
+	uint8_t    cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    hash_bits;
+	uint8_t    tbl_len;
+	uint8_t    tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	uint8_t    msg;
+	uint8_t    link_up;
+	uint8_t    duplex;
+	uint32_t   speed;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	bool	   enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	uint8_t    msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	uint16_t   rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	uint8_t    tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	uint16_t   rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	uint16_t   sq_stat_mask;
+};
+
+/* 128-bit shared memory between the PF and each VF */
+struct nic_mbx {
+union {
+	struct { uint8_t msg; }	msg;
+	struct nic_cfg_msg	nic_cfg;
+	struct qs_cfg_msg	qs;
+	struct rq_cfg_msg	rq;
+	struct sq_cfg_msg	sq;
+	struct set_mac_msg	mac;
+	struct set_frs_msg	frs;
+	struct cpi_cfg_msg	cpi_cfg;
+	struct rss_sz_msg	rss_size;
+	struct rss_cfg_msg	rss_cfg;
+	struct bgx_link_status  link_status;
+	struct set_loopback	lbk;
+	struct reset_stat_cfg	reset_stat;
+};
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
+
+int nicvf_handle_mbx_intr(struct nicvf *nic);
+int nicvf_mbox_check_pf_ready(struct nicvf *nic);
+int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
+int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct pf_rq_cfg *pf_rq_cfg);
+int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
+int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
+int nicvf_mbox_get_rss_size(struct nicvf *nic);
+int nicvf_mbox_config_rss(struct nicvf *nic);
+int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
+int nicvf_mbox_rq_sync(struct nicvf *nic);
+int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
+int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
+void nicvf_mbox_shutdown(struct nicvf *nic);
+void nicvf_mbox_cfg_done(struct nicvf *nic);
+
+#endif /* __THUNDERX_NICVF_MBOX__ */
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
new file mode 100644
index 0000000..83c1844
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -0,0 +1,132 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_H
+#define _THUNDERX_NICVF_H
+
+/* Platform/OS/arch specific abstractions */
+
+/* log */
+#include <rte_log.h>
+#include "../nicvf_logs.h"
+
+#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
+
+#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
+
+/* delay */
+#include <rte_cycles.h>
+#define nicvf_delay_us(x) rte_delay_us(x)
+
+/* barrier */
+#include <rte_atomic.h>
+#define nicvf_smp_wmb() rte_smp_wmb()
+#define nicvf_smp_rmb() rte_smp_rmb()
+
+/* utils */
+#include <rte_common.h>
+#define nicvf_min(x, y) RTE_MIN(x, y)
+
+/* byte order */
+#include <rte_byteorder.h>
+#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
+#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
+
+/* Constants */
+#include <rte_ether.h>
+#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
+#define nicvf_prefetch_store_keep(_ptr) ({\
+	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	asm volatile(
+		    "str %x[val], [%x[addr]]"
+		    :
+		    : [val] "r" (val), [addr] "r" (addr));
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	uint64_t val;
+
+	asm volatile(
+		    "ldr %x[val], [%x[addr]]"
+		    : [val] "=r" (val)
+		    : [addr] "r" (addr));
+	return val;
+}
+
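+/* Load two consecutive 64-bit words with a single ldp instruction:
+ * reg1 from addr and reg2 from addr + 8.
+ */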
+#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
+			asm volatile(			\
+			"ldp %x[x1], %x[x0], [%x[p1]]"	\
+			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
+			: [p1]"r"(addr)			\
+			); })
+
+#else /* generic fallbacks for building on non-arm64 architectures */
+
+#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	*(volatile uint64_t *)addr = val;
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	return	*(volatile uint64_t *)addr;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
+do {							\
+	reg1 = nicvf_addr_read((uintptr_t)addr);	\
+	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
+} while (0)
+
+#endif
+
+#include "nicvf_hw.h"
+#include "nicvf_mbox.h"
+
+#endif /* _THUNDERX_NICVF_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-06-07 16:40     ` [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:18       ` Ferruh Yigit
  2016-06-08 16:06       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
                       ` (18 subsequent siblings)
  20 siblings, 2 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Introduce driver initialization and enable the build infrastructure for
the nicvf PMD driver.

By default, it is enabled only for the defconfig_arm64-thunderx-*
config, as it is an inbuilt NIC device.
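
For reference, enabling the PMD on any other target amounts to setting
the option this patch adds to config/common_base, e.g.:

    CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y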

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 config/common_base                                 |  10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
 drivers/net/Makefile                               |   1 +
 drivers/net/thunderx/Makefile                      |  63 ++++++
 drivers/net/thunderx/nicvf_ethdev.c                | 251 +++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
 drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
 drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
 mk/rte.app.mk                                      |   2 +
 10 files changed, 596 insertions(+)
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

diff --git a/config/common_base b/config/common_base
index 47c26f6..ad5686b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -259,6 +259,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
 CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
 
 #
+# Compile burst-oriented Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index fe5e987..7940bbd 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,3 +34,13 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+
+#
+# Compile Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 6ba7658..0e29a33 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -50,6 +50,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
+DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
new file mode 100644
index 0000000..eb9f100
--- /dev/null
+++ b/drivers/net/thunderx/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_thunderx_nicvf.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
+
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
new file mode 100644
index 0000000..45bfc13
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -0,0 +1,251 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <sys/queue.h>
+#include <sys/timerfd.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+
+#include "nicvf_logs.h"
+
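+/* Alarm callback: poll the VF interrupt status registers (including
+ * mailbox events from the PF) and re-arm the alarm, emulating a
+ * periodic interrupt.
+ */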
+static void
+nicvf_interrupt(void *arg)
+{
+	struct nicvf *nic = (struct nicvf *)arg;
+
+	nicvf_reg_poll_interrupts(nic);
+
+	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+				nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_start(struct nicvf *nic)
+{
+	return rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+					nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_stop(struct nicvf *nic)
+{
+	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
+}
+
+/* Initialize and register driver with DPDK application */
+static const struct eth_dev_ops nicvf_eth_dev_ops = {
+};
+
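+/* Per-port init: map BAR0, start the interrupt-poll alarm, handshake
+ * with the PF over the mailbox (READY message) and program the MAC
+ * address.
+ */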
+static int
+nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int ret;
+	struct rte_pci_device *pci_dev;
+	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &nicvf_eth_dev_ops;
+
+	pci_dev = eth_dev->pci_dev;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	nic->device_id = pci_dev->id.device_id;
+	nic->vendor_id = pci_dev->id.vendor_id;
+	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	nic->eth_dev = eth_dev;
+
+	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!nic->reg_base) {
+		PMD_INIT_LOG(ERR, "Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	nicvf_disable_all_interrupts(nic);
+
+	ret = nicvf_periodic_alarm_start(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start period alarm");
+		goto fail;
+	}
+
+	ret = nicvf_mbox_check_pf_ready(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
+		goto alarm_fail;
+	} else {
+		PMD_INIT_LOG(INFO,
+			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
+			nic->node, nic->vf_id,
+			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
+			nic->sqs_mode ? "true" : "false",
+			nic->loopback_supported ? "true" : "false"
+			);
+	}
+
+	if (nic->sqs_mode) {
+		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, Detaching...");
+		/* Detach port by returning Postive error number */
+		ret = ENOTSUP;
+		goto alarm_fail;
+	}
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
+		ret = -ENOMEM;
+		goto alarm_fail;
+	}
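+	/* Use a random MAC address if the PF did not provide a valid one */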
+	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
+		eth_random_addr(&nic->mac_addr[0]);
+
+	ether_addr_copy((struct ether_addr *)nic->mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set mac addr");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_base_init(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_mbox_get_rss_size(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get rss table size");
+		goto malloc_fail;
+	}
+
+	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
+		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
+		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
+		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
+
+	return 0;
+
+malloc_fail:
+	rte_free(eth_dev->data->mac_addrs);
+alarm_fail:
+	nicvf_periodic_alarm_stop(nic);
+fail:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_nicvf_map[] = {
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
+	},
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct eth_driver rte_nicvf_pmd = {
+	.pci_drv = {
+		.name = "rte_nicvf_pmd",
+		.id_table = pci_id_nicvf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = nicvf_eth_dev_init,
+	.dev_private_size = sizeof(struct nicvf),
+};
+
+static int
+rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
+			THUNDERX_NICVF_PMD_VERSION);
+
+	rte_eth_driver_register(&rte_nicvf_pmd);
+	return 0;
+}
+
+static struct rte_driver rte_nicvf_driver = {
+	.name = "nicvf_driver",
+	.type = PMD_PDEV,
+	.init = rte_nicvf_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_nicvf_driver);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
new file mode 100644
index 0000000..d4d2071
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -0,0 +1,48 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_ETHDEV_H__
+#define __THUNDERX_NICVF_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+#define THUNDERX_NICVF_PMD_VERSION      "1.0"
+
+#define NICVF_INTR_POLL_INTERVAL_MS	50
+
+static inline struct nicvf *
+nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
new file mode 100644
index 0000000..0667d46
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_logs.h
@@ -0,0 +1,83 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_LOGS__
+#define __THUNDERX_NICVF_LOGS__
+
+#include <assert.h>
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_RX_ASSERT(x) assert(x)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_RX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_TX_ASSERT(x) assert(x)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_TX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
+#define PMD_MBOX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_MBOX_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
+#define PMD_MBOX_FUNC_TRACE() do { } while (0)
+#endif
+
+#endif /* __THUNDERX_NICVF_LOGS__ */
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
new file mode 100644
index 0000000..c52545d
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -0,0 +1,124 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_STRUCT_H
+#define _THUNDERX_NICVF_STRUCT_H
+
+#include <stdint.h>
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_ethdev.h>
+#include <rte_memory.h>
+
+struct nicvf_rbdr {
+	uint64_t rbdr_status;
+	uint64_t rbdr_door;
+	struct rbdr_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	uint32_t buffsz;
+	uint32_t tail;
+	uint32_t next_tail;
+	uint32_t head;
+	uint32_t qlen_mask;
+} __rte_cache_aligned;
+
+struct nicvf_txq {
+	union sq_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	struct rte_mbuf **txbuffs;
+	uint64_t sq_head;
+	uint64_t sq_door;
+	struct rte_mempool *pool;
+	struct nicvf *nic;
+	void (*pool_free)(struct nicvf_txq *sq);
+	uint32_t head;
+	uint32_t tail;
+	int32_t xmit_bufs;
+	uint32_t qlen_mask;
+	uint32_t txq_flags;
+	uint16_t queue_id;
+	uint16_t tx_free_thresh;
+} __rte_cache_aligned;
+
+struct nicvf_rxq {
+	uint64_t mbuf_phys_off;
+	uint64_t cq_status;
+	uint64_t cq_door;
+	nicvf_phys_addr_t phys;
+	union cq_entry_t *desc;
+	struct nicvf_rbdr *shared_rbdr;
+	struct nicvf *nic;
+	struct rte_mempool *pool;
+	uint32_t head;
+	uint32_t qlen_mask;
+	int32_t available_space;
+	int32_t recv_buffers;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t precharge_cnt;
+	uint8_t rx_drop_en;
+	uint8_t  port_id;
+	uint8_t  rbptr_offset;
+} __rte_cache_aligned;
+
+struct nicvf {
+	uint8_t vf_id;
+	uint8_t node;
+	uintptr_t reg_base;
+	bool tns_mode;
+	bool sqs_mode;
+	bool loopback_supported;
+	bool pf_acked:1;
+	bool pf_nacked:1;
+	uint64_t hwcap;
+	uint8_t link_up;
+	uint8_t	duplex;
+	uint32_t speed;
+	uint32_t msg_enable;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+	struct nicvf_rbdr *rbdr;
+	struct nicvf_rss_reta_info rss_info;
+	struct rte_eth_dev *eth_dev;
+	struct rte_intr_handle intr_handle;
+	uint8_t cpi_alg;
+	uint16_t mtu;
+	bool vlan_filter_en;
+	uint8_t mac_addr[ETHER_ADDR_LEN];
+} __rte_cache_aligned;
+
+#endif /* _THUNDERX_NICVF_STRUCT_H */
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
new file mode 100644
index 0000000..349c6e1
--- /dev/null
+++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
@@ -0,0 +1,4 @@
+DPDK_16.04 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index b84b56d..1d8d8cd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -102,6 +102,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD)        += -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lz
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lm
 # QAT / AESNI GCM PMDs are dependent on libcrypto (from openssl)
 # for calculating HMAC precomputes
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
@@ -150,6 +151,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
 
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 03/20] thunderx/nicvf: add link status and link update support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-06-07 16:40     ` [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-06-07 16:40     ` [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-07 16:40     ` [PATCH v3 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
                       ` (17 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Extend the nicvf_interrupt function to respond to the
NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from the PF and update
struct rte_eth_link accordingly.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 55 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_ethdev.h |  4 +++
 2 files changed, 58 insertions(+), 1 deletion(-)
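
Usage note for reviewers (illustrative, not part of the patch): once
.link_update is wired up, an application sees the link through the
standard ethdev query. A minimal sketch, with an assumed port id:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	show_link(uint8_t port_id)
	{
		struct rte_eth_link link;

		/* Non-blocking query; dispatched to nicvf_dev_link_update() */
		rte_eth_link_get_nowait(port_id, &link);
		if (link.link_status)
			printf("Port %u up at %u Mbps\n", port_id, link.link_speed);
		else
			printf("Port %u down\n", port_id);
	}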

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 45bfc13..5d28eea 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,12 +69,47 @@
 
 #include "nicvf_logs.h"
 
+static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+
+static inline int
+nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
+			       struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+		*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline void
+nicvf_set_eth_link_status(struct nicvf *nic, struct rte_eth_link *link)
+{
+	link->link_status = nic->link_up;
+	link->link_duplex = ETH_LINK_AUTONEG;
+	if (nic->duplex == NICVF_HALF_DUPLEX)
+		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+	else if (nic->duplex == NICVF_FULL_DUPLEX)
+		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = nic->speed;
+	link->link_autoneg = ETH_LINK_SPEED_AUTONEG;
+}
+
 static void
 nicvf_interrupt(void *arg)
 {
 	struct nicvf *nic = (struct nicvf *)arg;
 
-	nicvf_reg_poll_interrupts(nic);
+	if (nicvf_reg_poll_interrupts(nic) == NIC_MBOX_MSG_BGX_LINK_CHANGE) {
+		if (nic->eth_dev->data->dev_conf.intr_conf.lsc)
+			nicvf_set_eth_link_status(nic,
+					&nic->eth_dev->data->dev_link);
+		_rte_eth_dev_callback_process(nic->eth_dev,
+				RTE_ETH_EVENT_INTR_LSC);
+	}
 
 	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
 				nicvf_interrupt, nic);
@@ -93,8 +128,26 @@ nicvf_periodic_alarm_stop(struct nicvf *nic)
 	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
 }
 
+/*
+ * Returns 0 if the link status changed, -1 if it did not
+ */
+static int
+nicvf_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	struct rte_eth_link link;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&link, 0, sizeof(link));
+	nicvf_set_eth_link_status(nic, &link);
+	return nicvf_atomic_write_link_status(dev, &link);
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.link_update              = nicvf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index d4d2071..8189856 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -38,6 +38,10 @@
 #define THUNDERX_NICVF_PMD_VERSION      "1.0"
 
 #define NICVF_INTR_POLL_INTERVAL_MS	50
+#define NICVF_HALF_DUPLEX		0x00
+#define NICVF_FULL_DUPLEX		0x01
+#define NICVF_UNKNOWN_DUPLEX		0xff
+
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 04/20] thunderx/nicvf: add get_reg and get_reg_length support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (2 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:16       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
                       ` (16 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
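
Usage sketch for reviewers (illustrative, not part of the patch): a full
register dump via the ethdev ethtool-style API. This driver reports each
register as a 64-bit word and treats length == 0 as a full-dump request:

	#include <stdlib.h>
	#include <rte_ethdev.h>

	static int
	dump_regs(uint8_t port_id)
	{
		struct rte_dev_reg_info info = { .length = 0 /* full dump */ };
		int ret, count = rte_eth_dev_get_reg_length(port_id);

		if (count <= 0)
			return -1;
		info.data = calloc(count, sizeof(uint64_t));
		if (info.data == NULL)
			return -1;
		ret = rte_eth_dev_get_reg_info(port_id, &info);
		free(info.data);
		return ret;
	}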

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5d28eea..34b4735 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -70,6 +70,9 @@
 #include "nicvf_logs.h"
 
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
+static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
+			      struct rte_dev_reg_info *regs);
 
 static inline int
 nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
@@ -145,9 +148,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 	return nicvf_atomic_write_link_status(dev, &link);
 }
 
+static int
+nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
+{
+	return nicvf_reg_get_count();
+}
+
+static int
+nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
+{
+	uint64_t *data = regs->data;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (data == NULL)
+		return -EINVAL;
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+		(regs->length == (uint32_t)nicvf_reg_get_count())) {
+		regs->version = nic->vendor_id << 16 | nic->device_id;
+		nicvf_reg_dump(nic, data);
+		return 0;
+	}
+	return -ENOTSUP;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.link_update              = nicvf_dev_link_update,
+	.get_reg_length           = nicvf_dev_get_reg_length,
+	.get_reg                  = nicvf_dev_get_regs,
 };
 
 static int
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 05/20] thunderx/nicvf: add dev_configure support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (3 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:21       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
                       ` (15 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 79 +++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)
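
For reference, a sketch of a port configuration that passes every check
below (RSS rx mq_mode, CRC strip forced on, no VLAN/LRO/FDIR); the queue
counts are illustrative:

	#include <rte_ethdev.h>

	static int
	configure_port(uint8_t port_id)
	{
		struct rte_eth_conf conf = {
			.rxmode = {
				.mq_mode = ETH_MQ_RX_RSS,
				.hw_strip_crc = 1,	/* hw cannot disable it */
			},
			.rx_adv_conf = {
				.rss_conf = {
					.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
				},
			},
		};

		/* 2 rx and 2 tx queues; fails if hugepages are not mounted */
		return rte_eth_dev_configure(port_id, 2, 2, &conf);
	}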

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 34b4735..37cc7f4 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,7 @@
 
 #include "nicvf_logs.h"
 
+static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
@@ -173,8 +174,86 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &conf->rxmode;
+	struct rte_eth_txmode *txmode = &conf->txmode;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!rte_eal_has_hugepages()) {
+		PMD_INIT_LOG(INFO, "Huge page is not configured");
+		return -EINVAL;
+	}
+
+	if (txmode->mq_mode) {
+		PMD_INIT_LOG(INFO, "Tx mq_mode DCB or VMDq not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc) {
+		PMD_INIT_LOG(NOTICE, "Can't disable hw crc strip");
+		rxmode->hw_strip_crc = 1;
+	}
+
+	if (rxmode->hw_ip_checksum) {
+		PMD_INIT_LOG(NOTICE, "Rxcksum not supported");
+		rxmode->hw_ip_checksum = 0;
+	}
+
+	if (rxmode->split_hdr_size) {
+		PMD_INIT_LOG(INFO, "Rxmode does not support split header");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_filter) {
+		PMD_INIT_LOG(INFO, "VLAN filter not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_extend) {
+		PMD_INIT_LOG(INFO, "VLAN extended not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro) {
+		PMD_INIT_LOG(INFO, "LRO not supported");
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	if (conf->dcb_capability_en) {
+		PMD_INIT_LOG(INFO, "DCB enable not supported");
+		return -EINVAL;
+	}
+
+	if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+		PMD_INIT_LOG(INFO, "Flow director not supported");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
+		dev->data->port_id, nicvf_hw_cap(nic));
+
+	return 0;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 06/20] thunderx/nicvf: add dev_infos_get support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (4 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:23       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
                       ` (14 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 47 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h | 17 ++++++++++++++
 2 files changed, 64 insertions(+)
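
A short sketch of consuming these values from an application; the
defaults reported here are also what the queue setup ops fall back to
when rx_conf/tx_conf is NULL:

	#include <inttypes.h>
	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	print_limits(uint8_t port_id)
	{
		struct rte_eth_dev_info info;

		rte_eth_dev_info_get(port_id, &info);
		printf("rxq max %u, txq max %u, reta %u, rss hf 0x%" PRIx64 "\n",
			info.max_rx_queues, info.max_tx_queues,
			info.reta_size, info.flow_type_rss_offloads);
	}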

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 37cc7f4..5d91cfd 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,6 +71,8 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_info_get(struct rte_eth_dev *dev,
+			       struct rte_eth_dev_info *dev_info);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -174,6 +176,50 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->min_rx_bufsize = ETHER_MIN_MTU;
+	dev_info->max_rx_pktlen = NIC_HW_MAX_FRS;
+	dev_info->max_rx_queues = (uint16_t)MAX_RCV_QUEUES_PER_QS;
+	dev_info->max_tx_queues = (uint16_t)MAX_SND_QUEUES_PER_QS;
+	dev_info->max_mac_addrs = 1;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	dev_info->reta_size = nic->rss_info.rss_size;
+	dev_info->hash_key_size = RSS_HASH_KEY_BYTE_SIZE;
+	dev_info->flow_type_rss_offloads = NICVF_RSS_OFFLOAD_PASS1;
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
+		.txq_flags =
+			ETH_TXQ_FLAGS_NOMULTSEGS  |
+			ETH_TXQ_FLAGS_NOREFCOUNT  |
+			ETH_TXQ_FLAGS_NOMULTMEMP  |
+			ETH_TXQ_FLAGS_NOVLANOFFL  |
+			ETH_TXQ_FLAGS_NOXSUMSCTP,
+	};
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -255,6 +301,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.dev_infos_get            = nicvf_dev_info_get,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 8189856..e31657d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -42,6 +42,23 @@
 #define NICVF_FULL_DUPLEX		0x01
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
+#define NICVF_RSS_OFFLOAD_PASS1 ( \
+	ETH_RSS_PORT | \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NICVF_RSS_OFFLOAD_TUNNEL ( \
+	ETH_RSS_VXLAN | \
+	ETH_RSS_GENEVE | \
+	ETH_RSS_NVGRE)
+
+#define NICVF_DEFAULT_RX_FREE_THRESH    224
+#define NICVF_DEFAULT_TX_FREE_THRESH    224
+#define NICVF_TX_FREE_MPOOL_THRESH      16
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 07/20] thunderx/nicvf: add rx_queue_setup/release support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (5 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:42       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
                       ` (13 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 141 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |   2 +
 2 files changed, 143 insertions(+)
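
Usage sketch (sizes illustrative): the op below requires the mempool to
be carved from a single contiguous memzone (nb_mem_chunks == 1), which a
normal hugepage-backed pktmbuf pool satisfies:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	static int
	setup_rxq(uint8_t port_id, uint16_t qidx)
	{
		int node = rte_eth_dev_socket_id(port_id);
		struct rte_mempool *mp;

		mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				RTE_MBUF_DEFAULT_BUF_SIZE, node);
		if (mp == NULL)
			return -1;

		/* NULL rx_conf selects the defaults from dev_infos_get() */
		return rte_eth_rx_queue_setup(port_id, qidx, 1024, node, NULL, mp);
	}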

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5d91cfd..6150ef0 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -73,6 +73,11 @@ static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_rxconf *rx_conf,
+				    struct rte_mempool *mp);
+static void nicvf_dev_rx_queue_release(void *rx_queue);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -176,6 +181,140 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
+					NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rxq->phys = rz->phys_addr;
+	rxq->desc = rz->addr;
+	rxq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static void
+nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
+{
+	rxq->head = 0;
+	rxq->available_space = 0;
+	rxq->recv_buffers = 0;
+}
+
+static void
+nicvf_dev_rx_queue_release(void *rx_queue)
+{
+	struct nicvf_rxq *rxq = rx_queue;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq)
+		rte_free(rxq);
+}
+
+static int
+nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	uint16_t rx_free_thresh;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+			nic->node, socket_id);
+
+	/* Mempool memory should be contiguous */
+	if (mp->nb_mem_chunks != 1) {
+		PMD_INIT_LOG(ERR, "Non contiguous mempool, check huge page sz");
+		return -EINVAL;
+	}
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Rx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_cq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value nb_desc beyond available hw cq qsize");
+		return -EINVAL;
+	}
+
+	/* Check rx_free_thresh upper bound */
+	rx_free_thresh = (uint16_t)((rx_conf->rx_free_thresh) ?
+				rx_conf->rx_free_thresh :
+				NICVF_DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh > NICVF_MAX_RX_FREE_THRESH ||
		rx_free_thresh >= nb_desc * 0.75) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh greater than expected %d",
+				rx_free_thresh);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->rx_queues[qidx] != NULL) {
+		PMD_RX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_rx_queue_release(dev->data->rx_queues[qidx]);
+		dev->data->rx_queues[qidx] = NULL;
+	}
+
+	/* Allocate rxq memory */
+	rxq = rte_zmalloc_socket("ethdev rx queue", sizeof(struct nicvf_rxq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate rxq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	rxq->nic = nic;
+	rxq->pool = mp;
+	rxq->queue_id = qidx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rx_drop_en = rx_conf->rx_drop_en;
+	rxq->cq_status = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_STATUS;
+	rxq->cq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_DOOR;
+	rxq->precharge_cnt = 0;
+	rxq->rbptr_offset = NICVF_CQE_RBPTR_WORD;
+
+	/* Alloc completion queue */
+	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
+		nicvf_dev_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	nicvf_rx_queue_reset(rxq);
+
+	PMD_RX_LOG(DEBUG, "[%d] rxq=%p pool=%s nb_desc=(%d/%d) phy=%" PRIx64,
+			qidx, rxq, mp->name, nb_desc,
+			rte_mempool_count(mp), rxq->phys);
+
+	dev->data->rx_queues[qidx] = rxq;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -302,6 +441,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
+	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index e31657d..afb875a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -59,6 +59,8 @@
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
 #define NICVF_TX_FREE_MPOOL_THRESH      16
+#define NICVF_MAX_RX_FREE_THRESH        1024
+#define NICVF_MAX_TX_FREE_THRESH        1024
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 08/20] thunderx/nicvf: add tx_queue_setup/release support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (6 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:24       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
                       ` (12 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 179 ++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)
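
Usage sketch (values illustrative): advertising single-mempool,
no-refcount mbufs via txq_flags keeps the larger requested free
threshold; otherwise the op below drops it to NICVF_TX_FREE_MPOOL_THRESH:

	#include <rte_ethdev.h>

	static int
	setup_txq(uint8_t port_id, uint16_t qidx)
	{
		struct rte_eth_txconf conf = {
			.tx_free_thresh = 224,
			.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
				     ETH_TXQ_FLAGS_NOREFCOUNT |
				     ETH_TXQ_FLAGS_NOMULTMEMP,
		};

		return rte_eth_tx_queue_setup(port_id, qidx, 1024,
				rte_eth_dev_socket_id(port_id), &conf);
	}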

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 6150ef0..0891a26 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -78,6 +78,10 @@ static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    const struct rte_eth_rxconf *rx_conf,
 				    struct rte_mempool *mp);
 static void nicvf_dev_rx_queue_release(void *rx_queue);
+static int nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+				    uint16_t nb_desc, unsigned int socket_id,
+				    const struct rte_eth_txconf *tx_conf);
+static void nicvf_dev_tx_queue_release(void *sq);
 static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
 static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
 			      struct rte_dev_reg_info *regs);
@@ -204,6 +208,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+			nic->node, socket_id);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				tx_conf->tx_free_thresh :
+				NICVF_DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) ||
+		tx_free_thresh > NICVF_MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			"tx_free_thresh must be less than the number of TX "
+			"descriptors. (tx_free_thresh=%u port=%d "
+			"queue=%d)", (unsigned int)tx_free_thresh,
+			(int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+				txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!is_single_pool) {
+		txq->tx_free_thresh = (uint16_t)
+		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
+				NICVF_TX_FREE_MPOOL_THRESH :
+				tx_conf->tx_free_thresh);
+	}
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+				nb_desc * sizeof(struct rte_mbuf *),
+				RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+			qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -443,6 +620,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get            = nicvf_dev_info_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
+	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 09/20] thunderx/nicvf: add rss and reta query and update support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (7 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:45       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
                       ` (11 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 182 ++++++++++++++++++++++++++++++++++++
 1 file changed, 182 insertions(+)
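
Reviewer sketch: redirecting the whole indirection table to queue 0.
Both ops below insist on the full hardware table size
(NIC_MAX_RSS_IDR_TBL_SIZE); the sketch takes the size from dev_infos_get:

	#include <stdint.h>
	#include <string.h>
	#include <rte_ethdev.h>

	static int
	reta_all_to_queue0(uint8_t port_id)
	{
		struct rte_eth_rss_reta_entry64 reta[8];	/* room for 512 entries */
		struct rte_eth_dev_info info;
		uint16_t i;

		rte_eth_dev_info_get(port_id, &info);
		if (info.reta_size > 8 * RTE_RETA_GROUP_SIZE)
			return -1;
		memset(reta, 0, sizeof(reta));	/* all entries -> queue 0 */
		for (i = 0; i < info.reta_size / RTE_RETA_GROUP_SIZE; i++)
			reta[i].mask = UINT64_MAX;	/* touch all 64 slots */

		return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
	}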

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 0891a26..efe0e05 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -73,6 +73,16 @@ static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_reta_update(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_reta_entry64 *reta_conf,
+				 uint16_t reta_size);
+static int nicvf_dev_reta_query(struct rte_eth_dev *dev,
+				struct rte_eth_rss_reta_entry64 *reta_conf,
+				uint16_t reta_size);
+static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
+static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				       struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -185,6 +195,174 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static inline uint64_t
+nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
+{
+	uint64_t nic_rss = 0;
+
+	if (ethdev_rss & ETH_RSS_IPV4)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_IPV6)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_PORT)
+		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (ethdev_rss & ETH_RSS_VXLAN)
+			nic_rss |= RSS_TUN_VXLAN_ENA;
+
+		if (ethdev_rss & ETH_RSS_GENEVE)
+			nic_rss |= RSS_TUN_GENEVE_ENA;
+
+		if (ethdev_rss & ETH_RSS_NVGRE)
+			nic_rss |= RSS_TUN_NVGRE_ENA;
+	}
+
+	return nic_rss;
+}
+
+static inline uint64_t
+nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
+{
+	uint64_t ethdev_rss = 0;
+
+	if (nic_rss & RSS_IP_ENA)
+		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
+				ETH_RSS_NONFRAG_IPV6_TCP);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
+				ETH_RSS_NONFRAG_IPV6_UDP);
+
+	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
+		ethdev_rss |= ETH_RSS_PORT;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (nic_rss & RSS_TUN_VXLAN_ENA)
+			ethdev_rss |= ETH_RSS_VXLAN;
+
+		if (nic_rss & RSS_TUN_GENEVE_ENA)
+			ethdev_rss |= ETH_RSS_GENEVE;
+
+		if (nic_rss & RSS_TUN_NVGRE_ENA)
+			ethdev_rss |= ETH_RSS_NVGRE;
+	}
+	return ethdev_rss;
+}
+
+static int
+nicvf_dev_reta_query(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can supported "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] = tbl[j];
+	}
+
+	return 0;
+}
+
+static int
+nicvf_dev_reta_update(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can supported "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				tbl[j] = reta_conf[i].reta[j];
+	}
+
+	return nicvf_rss_reta_update(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+static int
+nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (rss_conf->rss_key)
+		nicvf_rss_get_key(nic, rss_conf->rss_key);
+
+	rss_conf->rss_key_len =  RSS_HASH_KEY_BYTE_SIZE;
+	rss_conf->rss_hf = nicvf_rss_nic_to_ethdev(nic, nicvf_rss_get_cfg(nic));
+	return 0;
+}
+
+static int
+nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t nic_rss;
+
+	if (rss_conf->rss_key &&
+		rss_conf->rss_key_len != RSS_HASH_KEY_BYTE_SIZE) {
+		RTE_LOG(ERR, PMD, "Hash key size mismatch %d",
+				rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key)
+		nicvf_rss_set_key(nic, rss_conf->rss_key);
+
+	nic_rss = nicvf_rss_ethdev_to_nic(nic, rss_conf->rss_hf);
+	nicvf_rss_set_cfg(nic, nic_rss);
+	return 0;
+}
+
 static int
 nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 		    uint32_t desc_cnt)
@@ -618,6 +796,10 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.reta_update              = nicvf_dev_reta_update,
+	.reta_query               = nicvf_dev_reta_query,
+	.rss_hash_update          = nicvf_dev_rss_hash_update,
+	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (8 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:48       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 11/20] thunderx/nicvf: add stats support Jerin Jacob
                       ` (10 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 53 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |  2 ++
 2 files changed, 55 insertions(+)
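
Note for reviewers: the check below operates on the full frame size,
i.e. mtu + ETHER_HDR_LEN (14) + ETHER_CRC_LEN (4), bounded by
NIC_HW_MIN_FRS/NIC_HW_MAX_FRS and, with scatter off, by the mbuf data
room. A jumbo sketch (9000 is illustrative and assumes the pool buffers
or scattered rx can hold the resulting 9018-byte frame):

	#include <rte_ethdev.h>

	static int
	set_jumbo_mtu(uint8_t port_id)
	{
		/* also flips dev_conf.rxmode.jumbo_frame on, as frame > 1518 */
		return rte_eth_dev_set_mtu(port_id, 9000);
	}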

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index efe0e05..7a931ec 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,8 +71,10 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
+static int nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
 static int nicvf_dev_reta_update(struct rte_eth_dev *dev,
 				 struct rte_eth_rss_reta_entry64 *reta_conf,
 				 uint16_t reta_size);
@@ -171,6 +173,49 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint32_t buffsz, frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (frame_size > NIC_HW_MAX_FRS)
+		return -EINVAL;
+
+	if (frame_size < NIC_HW_MIN_FRS)
+		return -EINVAL;
+
+	buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+	/*
+	 * Refuse mtu that requires the support of scattered packets
+	 * when this feature has not been enabled before.
+	 */
+	if (!dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz))
+		return -EINVAL;
+
+	/* check <seg size> * <max_seg>  >= max_frame */
+	if (dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
+		return -EINVAL;
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	if (nicvf_mbox_update_hw_max_frs(nic, frame_size))
+		return -EINVAL;
+
+	/* Update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)frame_size;
+	nic->mtu = mtu;
+	return 0;
+}
+
+static int
 nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
 {
 	return nicvf_reg_get_count();
@@ -195,6 +240,12 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+/* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
+static void
+nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
 static inline uint64_t
 nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
@@ -795,7 +846,9 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index afb875a..b1af468 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -62,6 +62,8 @@
 #define NICVF_MAX_RX_FREE_THRESH        1024
 #define NICVF_MAX_TX_FREE_THRESH        1024
 
+#define VLAN_TAG_SIZE                   4	/* 802.3ac tag */
+
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 11/20] thunderx/nicvf: add stats support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (9 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 16:53       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
                       ` (9 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 69 +++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 7a931ec..35fad4c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -71,6 +71,9 @@
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_stats_get(struct rte_eth_dev *dev,
+				struct rte_eth_stats *stat);
+static void nicvf_dev_stats_reset(struct rte_eth_dev *dev);
 static void nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused);
 static void nicvf_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
@@ -240,6 +243,70 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t qidx;
+	struct nicvf_hw_rx_qstats rx_qstats;
+	struct nicvf_hw_tx_qstats tx_qstats;
+	struct nicvf_hw_stats port_stats;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	/* Reading per RX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_rx_qstats(nic, &rx_qstats, qidx);
+		stats->q_ibytes[qidx] = rx_qstats.q_rx_bytes;
+		stats->q_ipackets[qidx] = rx_qstats.q_rx_packets;
+	}
+
+	/* Reading per TX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_tx_qstats(nic, &tx_qstats, qidx);
+		stats->q_obytes[qidx] = tx_qstats.q_tx_bytes;
+		stats->q_opackets[qidx] = tx_qstats.q_tx_packets;
+	}
+
+	nicvf_hw_get_stats(nic, &port_stats);
+	stats->ibytes = port_stats.rx_bytes;
+	stats->ipackets = port_stats.rx_ucast_frames;
+	stats->ipackets += port_stats.rx_bcast_frames;
+	stats->ipackets += port_stats.rx_mcast_frames;
+	stats->ierrors = port_stats.rx_l2_errors;
+	stats->imissed = port_stats.rx_drop_red;
+	stats->imissed += port_stats.rx_drop_overrun;
+	stats->imissed += port_stats.rx_drop_bcast;
+	stats->imissed += port_stats.rx_drop_mcast;
+	stats->imissed += port_stats.rx_drop_l3_bcast;
+	stats->imissed += port_stats.rx_drop_l3_mcast;
+
+	stats->obytes = port_stats.tx_bytes_ok;
+	stats->opackets = port_stats.tx_ucast_frames_ok;
+	stats->opackets += port_stats.tx_bcast_frames_ok;
+	stats->opackets += port_stats.tx_mcast_frames_ok;
+	stats->oerrors = port_stats.tx_drops;
+}
+
+static void
+nicvf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	int i;
+	uint16_t rxqs = 0, txqs = 0;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rxqs |= (0x3 << (i * 2));
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		txqs |= (0x3 << (i * 2));
+
+	nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, rxqs, txqs);
+}
+
 /* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
 static void
 nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
@@ -846,6 +913,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.stats_get                = nicvf_dev_stats_get,
+	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
-- 
2.5.5
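
On the reset path above, each queue appears to own two bits in the
rxqs/txqs masks (one per hardware counter, packets and bytes being the
natural reading of the 0x3 per-queue mask), while 0x3FFF and 0x1F select
the port-level rx and tx counters. A small sketch of the bitmap
construction, under that assumption:

#include <stdint.h>
#include <stdio.h>

/* Build the per-queue reset mask: queue i owns bits (2*i) and (2*i + 1). */
static uint16_t queue_stat_reset_mask(int nb_queues)
{
	uint16_t mask = 0;
	int i;

	for (i = 0; i < nb_queues; i++)
		mask |= (uint16_t)(0x3 << (i * 2));
	return mask;
}

int main(void)
{
	/* 3 queues -> bits 0..5 set -> prints 0x3f */
	printf("0x%x\n", queue_stat_reset_mask(3));
	return 0;
}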

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (10 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 11/20] thunderx/nicvf: add stats support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:11       ` Ferruh Yigit
  2016-06-08 12:51       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
                       ` (8 subsequent siblings)
  20 siblings, 2 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/Makefile       |   2 +
 drivers/net/thunderx/nicvf_ethdev.c |   5 +-
 drivers/net/thunderx/nicvf_rxtx.c   | 256 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
 4 files changed, 355 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h

diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
index eb9f100..9079b5b 100644
--- a/drivers/net/thunderx/Makefile
+++ b/drivers/net/thunderx/Makefile
@@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
 
+CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 35fad4c..b273149 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -66,7 +66,7 @@
 #include "base/nicvf_plat.h"
 
 #include "nicvf_ethdev.h"
-
+#include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
@@ -649,6 +649,9 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
 				NICVF_TX_FREE_MPOOL_THRESH :
 				tx_conf->tx_free_thresh);
+		txq->pool_free = nicvf_multi_pool_free_xmited_buffers;
+	} else {
+		txq->pool_free = nicvf_single_pool_free_xmited_buffers;
 	}
 
 	/* Allocate software ring */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
new file mode 100644
index 0000000..3cf7193
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -0,0 +1,256 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_log.h>
+#include <rte_mbuf.h>
+#include <rte_prefetch.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+#include "nicvf_rxtx.h"
+#include "nicvf_logs.h"
+
+static inline void __hot
+fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Use a local sqe to avoid a read back from SQ descriptor memory */
+	union sq_entry_t sqe;
+	uint64_t ol_flags;
+
+	/* Fill SQ header descriptor */
+	sqe.buff[0] = 0;
+	sqe.hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+	/* Number of sub-descriptors following this one */
+	sqe.hdr.subdesc_cnt = pkt->nb_segs;
+	sqe.hdr.tot_len = pkt->pkt_len;
+
+	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
+	if (unlikely(ol_flags)) {
+		/* L4 cksum */
+		if (ol_flags & PKT_TX_TCP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
+		else if (ol_flags & PKT_TX_UDP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
+		else
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
+		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
+
+		/* L3 cksum */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			sqe.hdr.csum_l3 = 1;
+			sqe.hdr.l3_offset = pkt->l2_len;
+		}
+	}
+
+	entry->buff[0] = sqe.buff[0];
+}
+
+void __hot
+nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	int j = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	void *obj_p[NICVF_MAX_TX_FREE_THRESH] __rte_cache_aligned;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head])
+			obj_p[j++] = txbuffs[head];
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	rte_mempool_put_bulk(sq->pool, obj_p, j);
+	sq->head = curr_head;
+	sq->xmit_bufs -= j;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+void __hot
+nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	uint32_t n = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head]) {
+			rte_pktmbuf_free_seg(txbuffs[head]);
+			n++;
+		}
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	sq->head = curr_head;
+	sq->xmit_bufs -= n;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline uint32_t __hot
+nicvf_free_tx_desc(struct nicvf_txq *sq)
+{
+	return ((sq->head - sq->tail - 1) & sq->qlen_mask);
+}
+
+/* Send Header + Packet */
+#define TX_DESC_PER_PKT 2
+
+static inline uint32_t __hot
+nicvf_free_xmitted_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint32_t free_desc = nicvf_free_tx_desc(sq);
+
+	if (free_desc < nb_pkts * TX_DESC_PER_PKT ||
+			sq->xmit_bufs > sq->tx_free_thresh) {
+
+		if (unlikely(sq->pool == NULL))
+			sq->pool = tx_pkts[0]->pool;
+
+		sq->pool_free(sq);
+		/* Buffers freed; re-read the number of free descriptors */
+		free_desc = nicvf_free_tx_desc(sq);
+	}
+	return free_desc;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+	uint32_t free_desc;
+	uint32_t tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt;
+	uint32_t qlen_mask = sq->qlen_mask;
+
+	tail = sq->tail;
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts && (int)free_desc >= TX_DESC_PER_PKT; i++) {
+		pkt = tx_pkts[i];
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+		free_desc -= TX_DESC_PER_PKT;
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += i;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, i * TX_DESC_PER_PKT);
+	return i;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	int i, k;
+	uint32_t used_desc, next_used_desc, used_bufs, free_desc, tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt, *seg;
+	uint32_t qlen_mask = sq->qlen_mask;
+	uint16_t nb_segs;
+
+	tail = sq->tail;
+	used_desc = 0;
+	used_bufs = 0;
+
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		pkt = tx_pkts[i];
+
+		nb_segs = pkt->nb_segs;
+
+		next_used_desc = used_desc + nb_segs + 1;
+		if (next_used_desc > free_desc)
+			break;
+		used_desc = next_used_desc;
+		used_bufs += nb_segs;
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		seg = pkt->next;
+		for (k = 1; k < nb_segs; k++) {
+			txbuffs[tail] = seg;
+			fill_sq_desc_gather(desc_ptr + tail, seg);
+			tail = (tail + 1) & qlen_mask;
+			seg = seg->next;
+		}
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += used_bufs;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, used_desc);
+	return i;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
new file mode 100644
index 0000000..3c51432
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -0,0 +1,93 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_RXTX_H__
+#define __THUNDERX_NICVF_RXTX_H__
+
+#include <rte_ethdev.h>
+
+#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+
+#ifndef __hot
+#define __hot	__attribute__((hot))
+#endif
+
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return (i & ~3) + 3 - (i & 3);
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Use a local sqe to avoid a read back from SQ descriptor memory */
+	union sq_entry_t sqe;
+
+	/* Fill the SQ gather entry */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+	sqe.gather.ld_type = NIC_SEND_LD_TYPE_E_LDT;
+	sqe.gather.size = pkt->data_len;
+	sqe.gather.addr = rte_mbuf_data_dma_addr(pkt);
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+#else
+
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return i;
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |
+			 (uint64_t)NIC_SEND_LD_TYPE_E_LDT << 58 |
+			 pkt->data_len;
+	entry->buff[1] = rte_mbuf_data_dma_addr(pkt);
+}
+#endif
+
+uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
+uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
+				  uint16_t pkts);
+
+void nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq);
+void nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq);
+
+#endif /* __THUNDERX_NICVF_RXTX_H__  */
-- 
2.5.5
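
The tx path above relies on standard power-of-two ring arithmetic: with
qlen_mask = ring_size - 1, the free-slot count is
(head - tail - 1) & qlen_mask, which deliberately keeps one slot unused
so a full ring is distinguishable from an empty one. A standalone sketch
of that math (illustrative only, not driver code):

#include <stdint.h>
#include <stdio.h>

static uint32_t ring_free_slots(uint32_t head, uint32_t tail,
				uint32_t qlen_mask)
{
	/* head = consumer (HW completion), tail = producer (SW enqueue) */
	return (head - tail - 1) & qlen_mask;
}

int main(void)
{
	uint32_t mask = 1024 - 1;

	printf("%u\n", ring_free_slots(0, 0, mask));  /* empty: 1023 free */
	printf("%u\n", ring_free_slots(10, 9, mask)); /* full: 0 free */
	return 0;
}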

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 13/20] thunderx/nicvf: add single and multi segment rx functions
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (11 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 17:04       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
                       ` (7 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.h |  33 ++++
 drivers/net/thunderx/nicvf_rxtx.c   | 317 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |   5 +
 3 files changed, 355 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b1af468..59fa19c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -70,4 +70,37 @@ nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+static inline uint64_t
+nicvf_mempool_phy_offset(struct rte_mempool *mp)
+{
+	struct rte_mempool_memhdr *hdr;
+
+	hdr = STAILQ_FIRST(&mp->mem_list);
+	assert(hdr != NULL);
+	return (uint64_t)((uintptr_t)hdr->addr - hdr->phys_addr);
+}
+
+static inline uint16_t
+nicvf_mbuff_meta_length(struct rte_mbuf *mbuf)
+{
+	return (uint16_t)((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+}
+
+/*
+ * Simple phy2virt functions assuming mbufs are in a single huge page
+ * V = P + offset
+ * P = V - offset
+ */
+static inline uintptr_t
+nicvf_mbuff_phy2virt(phys_addr_t phy, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(phy + mbuf_phys_off);
+}
+
+static inline uintptr_t
+nicvf_mbuff_virt2phy(uintptr_t virt, uint64_t mbuf_phys_off)
+{
+	return (phys_addr_t)(virt - mbuf_phys_off);
+}
+
 #endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 3cf7193..80c0018 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -254,3 +254,320 @@ nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nicvf_addr_write(sq->sq_door, used_desc);
 	return nb_pkts;
 }
+
+static const uint32_t ptype_table[16][16] __rte_cache_aligned = {
+	[L3_NONE][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_NONE][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_NONE][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_NONE][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_NONE][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_NONE][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4][L4_NONE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV4][L4_IPFRAG] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4][L4_IPCOMP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_TCP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[L3_IPV4][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_GRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4][L4_NVGRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4_OPT][L4_NONE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_L3_IPV4,
+	[L3_IPV4_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_TCP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV4_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_GRE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6][L4_NONE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV6][L4_IPFRAG] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6][L4_IPCOMP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_TCP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[L3_IPV6][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_GRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6][L4_NVGRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6_OPT][L4_NONE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_L3_IPV4,
+	[L3_IPV6_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_TCP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV6_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_GRE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_ET_STOP][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_ET_STOP][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_ET_STOP][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_ET_STOP][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_ET_STOP][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_ET_STOP][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_OTHER][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_OTHER][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_OTHER][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_OTHER][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_OTHER][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_OTHER][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+};
+
+static inline uint32_t __hot
+nicvf_rx_classify_pkt(cqe_rx_word0_t cqe_rx_w0)
+{
+	return ptype_table[cqe_rx_w0.l3_type][cqe_rx_w0.l4_type];
+}
+
+static inline int __hot
+nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
+{
+	int i;
+	uint32_t ltail, next_tail;
+	struct nicvf_rbdr *rbdr = rxq->shared_rbdr;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	struct rbdr_entry_t *desc = rbdr->desc;
+	uint32_t qlen_mask = rbdr->qlen_mask;
+	uintptr_t door = rbdr->rbdr_door;
+	void *obj_p[NICVF_MAX_RX_FREE_THRESH] __rte_cache_aligned;
+
+	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
+		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		return 0;
+	}
+
+	NICVF_RX_ASSERT((unsigned int)to_fill <= (qlen_mask -
+		(nicvf_addr_read(rbdr->rbdr_status) & NICVF_RBDR_COUNT_MASK)));
+
+	next_tail = __atomic_fetch_add(&rbdr->next_tail, to_fill,
+					__ATOMIC_ACQUIRE);
+	ltail = next_tail;
+	for (i = 0; i < to_fill; i++) {
+		struct rbdr_entry_t *entry = desc + (ltail & qlen_mask);
+
+		entry->full_addr = nicvf_mbuff_virt2phy((uintptr_t)obj_p[i],
+							mbuf_phys_off);
+		ltail++;
+	}
+
+	while (__atomic_load_n(&rbdr->tail, __ATOMIC_RELAXED) != next_tail)
+		rte_pause();
+
+	__atomic_store_n(&rbdr->tail, ltail, __ATOMIC_RELEASE);
+	nicvf_addr_write(door, to_fill);
+	return to_fill;
+}
+
+static inline int32_t __hot
+nicvf_rx_pkts_to_process(struct nicvf_rxq *rxq, uint16_t nb_pkts,
+			 int32_t available_space)
+{
+	if (unlikely(available_space < nb_pkts))
+		rxq->available_space = nicvf_addr_read(rxq->cq_status)
+						& NICVF_CQ_CQE_COUNT_MASK;
+
+	return RTE_MIN(nb_pkts, rxq->available_space);
+}
+
+static inline void __hot
+nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
+		 struct rte_mbuf *pkt)
+{
+	if (likely(cqe_rx_w0.rss_alg)) {
+		pkt->hash.rss = cqe_rx_w2.rss_tag;
+		pkt->ol_flags |= PKT_RX_RSS_HASH;
+	}
+}
+
+uint16_t __hot
+nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint32_t i, to_process;
+	struct cqe_rx_t *cqe_rx;
+	struct rte_mbuf *pkt;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	cqe_rx_word3_t cqe_rx_w3;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t rb0_ptr, mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t cqe_head = rxq->head & cqe_mask;
+	int32_t available_space = rxq->available_space;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cqe_rx = (struct cqe_rx_t *)&desc[cqe_head];
+		NICVF_RX_ASSERT(((struct cq_entry_type_t *)cqe_rx)->cqe_type
+						 == CQE_TYPE_RX);
+
+		NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+		NICVF_LOAD_PAIR(cqe_rx_w2.u64, cqe_rx_w3.u64, &cqe_rx->word2);
+		rb0_ptr = *((uint64_t *)cqe_rx + rbptr_offset);
+		pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+				(rb0_ptr - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+		pkt->ol_flags = 0;
+		pkt->port = port_id;
+		pkt->data_len = cqe_rx_w3.rb0_sz;
+		pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+		pkt->nb_segs = 1;
+		pkt->pkt_len = cqe_rx_w3.rb0_sz;
+		pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+
+		nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+		rte_mbuf_refcnt_set(pkt, 1);
+		rx_pkts[i] = pkt;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(pkt);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += to_process;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
+						rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
+
+static inline uint16_t __hot
+nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
+			uint64_t mbuf_phys_off, uint8_t port_id,
+			struct rte_mbuf **rx_pkt, uint8_t rbptr_offset)
+{
+	struct rte_mbuf *pkt, *seg, *prev;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	uint16_t *rb_sz, nb_segs, seg_idx;
+	uint64_t *rb_ptr;
+
+	NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+	NICVF_RX_ASSERT(cqe_rx_w0.cqe_type == CQE_TYPE_RX);
+	cqe_rx_w2 = cqe_rx->word2;
+	rb_sz = &cqe_rx->word3.rb0_sz;
+	rb_ptr = (uint64_t *)cqe_rx + rbptr_offset;
+	nb_segs = cqe_rx_w0.rb_cnt;
+	pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[0] - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+	pkt->ol_flags = 0;
+	pkt->port = port_id;
+	pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = cqe_rx_w1.pkt_len;
+	pkt->data_len = rb_sz[nicvf_frag_num(0)];
+	rte_mbuf_refcnt_set(pkt, 1);
+	pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+	nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+
+	*rx_pkt = pkt;
+	prev = pkt;
+	for (seg_idx = 1; seg_idx < nb_segs; seg_idx++) {
+		seg = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[seg_idx], mbuf_phys_off);
+
+		prev->next = seg;
+		seg->data_len = rb_sz[nicvf_frag_num(seg_idx)];
+		seg->port = port_id;
+		seg->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(seg, 1);
+
+		prev = seg;
+	}
+	prev->next = NULL;
+	return nb_segs;
+}
+
+uint16_t __hot
+nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t i, to_process, cqe_head, buffers_consumed = 0;
+	int32_t available_space = rxq->available_space;
+	uint16_t nb_segs;
+	const uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	cqe_head = rxq->head & cqe_mask;
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		nb_segs = nicvf_process_cq_mseg_entry(cqe_rx, mbuf_phys_off,
+				port_id, rx_pkts + i, rbptr_offset);
+		buffers_consumed += nb_segs;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(rx_pkts[i]);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += buffers_consumed;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -=
+				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 3c51432..1e355b6 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -33,6 +33,7 @@
 #ifndef __THUNDERX_NICVF_RXTX_H__
 #define __THUNDERX_NICVF_RXTX_H__
 
+#include <rte_byteorder.h>
 #include <rte_ethdev.h>
 
 #define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
@@ -83,6 +84,10 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
+uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
+
 uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
 uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
 				  uint16_t pkts);
-- 
2.5.5
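
The rx fast path above depends on the phy2virt trick documented in
nicvf_ethdev.h: when every mbuf of a pool sits in one physically
contiguous zone, a single constant offset converts addresses both ways.
A sketch with hypothetical base addresses (illustration only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical virtual/physical bases of one hugepage-backed pool. */
	uintptr_t virt_base = 0x7f0000000000ULL;
	uint64_t  phys_base = 0x1c0000000ULL;
	uint64_t  off = virt_base - phys_base;    /* V = P + off */

	uint64_t  pkt_phys = phys_base + 0x2480;  /* e.g. a CQE rb pointer */
	uintptr_t pkt_virt = (uintptr_t)(pkt_phys + off);

	printf("virt = %p\n", (void *)pkt_virt);
	printf("phys = 0x%llx\n",
	       (unsigned long long)(pkt_virt - off)); /* P = V - off */
	return 0;
}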

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (12 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 17:17       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
                       ` (6 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  9 ++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  2 ++
 3 files changed, 52 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b273149..5da07da 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -292,6 +292,45 @@ nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->oerrors = port_stats.tx_drops;
 }
 
+static const uint32_t *
+nicvf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	size_t copied;
+	static uint32_t ptypes[32];
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	static const uint32_t ptypes_pass1[] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+	};
+	static const uint32_t ptypes_pass2[] = {
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+	};
+	static const uint32_t ptypes_end = RTE_PTYPE_UNKNOWN;
+
+	copied = sizeof(ptypes_pass1);
+	memcpy(ptypes, ptypes_pass1, copied);
+	if (nicvf_hw_version(nic) == NICVF_PASS2) {
+		memcpy((char *)ptypes + copied, ptypes_pass2,
+			sizeof(ptypes_pass2));
+		copied += sizeof(ptypes_pass2);
+	}
+
+	memcpy((char *)ptypes + copied, &ptypes_end, sizeof(ptypes_end));
+	if (dev->rx_pkt_burst == nicvf_recv_pkts ||
+		dev->rx_pkt_burst == nicvf_recv_pkts_multiseg)
+		return ptypes;
+
+	return NULL;
+}
+
 static void
 nicvf_dev_stats_reset(struct rte_eth_dev *dev)
 {
@@ -920,6 +959,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.dev_supported_ptypes_get = nicvf_dev_supported_ptypes_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
@@ -927,6 +967,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.rx_queue_count           = nicvf_dev_rx_queue_count,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
 	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 80c0018..8031685 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -571,3 +571,12 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	return to_process;
 }
+
+uint32_t
+nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+
+	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
+	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 1e355b6..44cef06 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -84,6 +84,8 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
-- 
2.5.5
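
nicvf_dev_supported_ptypes_get() above uses a simple splice-and-terminate
pattern: copy the baseline (pass1) list, append the tunnel entries only on
pass2 hardware, then write the RTE_PTYPE_UNKNOWN terminator. A reduced
sketch of that pattern with placeholder values (not the real ptype
constants):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PT_UNKNOWN 0u /* stands in for the RTE_PTYPE_UNKNOWN terminator */

static size_t build_list(uint32_t *out, int is_pass2)
{
	static const uint32_t base[] = { 1, 2, 3 };  /* pass1 entries */
	static const uint32_t extra[] = { 10, 11 };  /* pass2-only entries */
	size_t n;

	memcpy(out, base, sizeof(base));
	n = sizeof(base) / sizeof(base[0]);
	if (is_pass2) {
		memcpy(out + n, extra, sizeof(extra));
		n += sizeof(extra) / sizeof(extra[0]);
	}
	out[n++] = PT_UNKNOWN; /* terminator */
	return n;
}

int main(void)
{
	uint32_t list[8];

	/* pass2 hardware: 3 + 2 entries + terminator -> prints 6 */
	printf("%zu\n", build_list(list, 1));
	return 0;
}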

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 15/20] thunderx/nicvf: add rx queue start and stop support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (13 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 17:42       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 16/20] thunderx/nicvf: add tx " Jerin Jacob
                       ` (5 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 175 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
 drivers/net/thunderx/nicvf_rxtx.h   |   1 +
 3 files changed, 194 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5da07da..ba32803 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -88,6 +88,8 @@ static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				       struct rte_eth_rss_conf *rss_conf);
+static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -594,6 +596,54 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+
+static inline int
+nicvf_configure_cpi(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint16_t qidx, qcnt;
+	int ret;
+
+	/* Count started rx queues */
+	for (qidx = qcnt = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		if (dev->data->rx_queue_state[qidx] ==
+		    RTE_ETH_QUEUE_STATE_STARTED)
+			qcnt++;
+
+	nic->cpi_alg = CPI_ALG_NONE;
+	ret = nicvf_mbox_config_cpi(nic, qcnt);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure CPI %d", ret);
+
+	return ret;
+}
+
+static int
+nicvf_configure_rss_reta(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	unsigned int idx, qmap_size;
+	uint8_t qmap[RTE_MAX_QUEUES_PER_PORT];
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	/* Prepare queue map */
+	for (idx = 0, qmap_size = 0; idx < dev->data->nb_rx_queues; idx++) {
+		if (dev->data->rx_queue_state[idx] ==
+				RTE_ETH_QUEUE_STATE_STARTED)
+			qmap[qmap_size++] = idx;
+	}
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = qmap[idx % qmap_size];
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 static void
 nicvf_dev_tx_queue_release(void *sq)
 {
@@ -719,6 +769,33 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	return 0;
 }
 
+static inline void
+nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+{
+	uint32_t rxq_cnt;
+	uint32_t nb_pkts, released_pkts = 0;
+	uint32_t refill_cnt = 0;
+	struct rte_eth_dev *dev = rxq->nic->eth_dev;
+	struct rte_mbuf *rx_pkts[NICVF_MAX_RX_FREE_THRESH];
+
+	if (dev->rx_pkt_burst == NULL)
+		return;
+
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, rxq->queue_id))) {
+		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
+					NICVF_MAX_RX_FREE_THRESH);
+		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
+		while (nb_pkts) {
+			rte_pktmbuf_free_seg(rx_pkts[--nb_pkts]);
+			released_pkts++;
+		}
+	}
+
+	refill_cnt += nicvf_dev_rbdr_refill(dev, rxq->queue_id);
+	PMD_DRV_LOG(INFO, "free_cnt=%d  refill_cnt=%d",
+		    released_pkts, refill_cnt);
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -727,6 +804,69 @@ nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 	rxq->recv_buffers = 0;
 }
 
+static inline int
+nicvf_start_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	/* Update rbdr pointer to all rxq */
+	rxq = dev->data->rx_queues[qidx];
+	rxq->shared_rbdr = nic->rbdr;
+
+	ret = nicvf_qset_rq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rq %d %d", qidx, ret);
+		goto config_rq_error;
+	}
+	ret = nicvf_qset_cq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure cq %d %d", qidx, ret);
+		goto config_cq_error;
+	}
+
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+
+config_cq_error:
+	nicvf_qset_cq_reclaim(nic, qidx);
+config_rq_error:
+	nicvf_qset_rq_reclaim(nic, qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret, other_error;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_rq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim rq %d %d", qidx, ret);
+
+	other_error = ret;
+	rxq = dev->data->rx_queues[qidx];
+	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_reset(rxq);
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim cq %d %d", qidx, ret);
+
+	other_error |= ret;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return other_error;
+}
+
 static void
 nicvf_dev_rx_queue_release(void *rx_queue)
 {
@@ -739,6 +879,39 @@ nicvf_dev_rx_queue_release(void *rx_queue)
 }
 
 static int
+nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	ret = nicvf_start_rx_queue(dev, qidx);
+	if (ret)
+		return ret;
+
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		return ret;
+
+	return nicvf_configure_rss_reta(dev);
+}
+
+static int
+nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	ret = nicvf_stop_rx_queue(dev, qidx);
+	ret |= nicvf_configure_cpi(dev);
+	ret |= nicvf_configure_rss_reta(dev);
+	return ret;
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -965,6 +1138,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
+	.rx_queue_start           = nicvf_dev_rx_queue_start,
+	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 8031685..e8c605d 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -580,3 +580,21 @@ nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
 	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
+
+uint32_t
+nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+	uint32_t to_process;
+	uint32_t rx_free;
+
+	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
+	to_process = rxq->recv_buffers;
+	while (rxq->recv_buffers > 0) {
+		rx_free = RTE_MIN(rxq->recv_buffers, NICVF_MAX_RX_FREE_THRESH);
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rx_free);
+	}
+
+	assert(rxq->recv_buffers == 0);
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index 44cef06..3484928 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -85,6 +85,7 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 #endif
 
 uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
-- 
2.5.5
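
Starting or stopping an rx queue above also rebuilds the default RSS RETA:
the started queues are collected into qmap[] and spread round-robin over
the indirection table. A sketch of that fill loop, with a shortened table
size for readability (the real table has NIC_MAX_RSS_IDR_TBL_SIZE entries):

#include <stdint.h>
#include <stdio.h>

#define RETA_SIZE 8 /* shortened for the example */

int main(void)
{
	uint8_t qmap[] = { 0, 2, 5 }; /* e.g. queues 0, 2 and 5 started */
	unsigned int qmap_size = sizeof(qmap) / sizeof(qmap[0]);
	uint8_t reta[RETA_SIZE];
	unsigned int idx;

	/* Same round-robin assignment as nicvf_configure_rss_reta() */
	for (idx = 0; idx < RETA_SIZE; idx++) {
		reta[idx] = qmap[idx % qmap_size];
		printf("reta[%u] = %u\n", idx, reta[idx]);
	}
	return 0;
}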

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 16/20] thunderx/nicvf: add tx queue start and stop support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (14 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 17:46       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
                       ` (4 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 68 +++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ba32803..baa2e7a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -90,6 +90,8 @@ static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				       struct rte_eth_rss_conf *rss_conf);
 static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+static int nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
 static int nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 				    uint16_t nb_desc, unsigned int socket_id,
 				    const struct rte_eth_rxconf *rx_conf,
@@ -596,6 +598,52 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+static inline int
+nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] ==
+	    RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	txq = dev->data->tx_queues[qidx];
+	txq->pool = NULL;
+	ret = nicvf_qset_sq_config(nicvf_pmd_priv(dev), qidx, txq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure sq %d %d", qidx, ret);
+		goto config_sq_error;
+	}
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return ret;
+
+config_sq_error:
+	nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim sq %d %d", qidx, ret);
+
+	txq = dev->data->tx_queues[qidx];
+	nicvf_tx_queue_release_mbufs(txq);
+	nicvf_tx_queue_reset(txq);
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return ret;
+}
 
 static inline int
 nicvf_configure_cpi(struct rte_eth_dev *dev)
@@ -912,6 +960,24 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
 }
 
 static int
+nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	return nicvf_start_tx_queue(dev, qidx);
+}
+
+static int
+nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	return nicvf_stop_tx_queue(dev, qidx);
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -1140,6 +1206,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_start           = nicvf_dev_rx_queue_start,
 	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
+	.tx_queue_start           = nicvf_dev_tx_queue_start,
+	.tx_queue_stop            = nicvf_dev_tx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
-- 
2.5.5
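
For context, these ops are reached through the generic ethdev calls. A
hypothetical application snippet (assumes a DPDK 16.07 build environment
and an already configured, started port; error handling reduced to a
message):

#include <stdio.h>
#include <rte_ethdev.h>

static void toggle_txq(uint8_t port_id, uint16_t qidx)
{
	/* Dispatches into the PMD's .tx_queue_stop/.tx_queue_start ops
	 * added by this patch; both return 0 on success. */
	if (rte_eth_dev_tx_queue_stop(port_id, qidx) != 0)
		printf("tx queue %u stop failed\n", qidx);
	if (rte_eth_dev_tx_queue_start(port_id, qidx) != 0)
		printf("tx queue %u start failed\n", qidx);
}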

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 17/20] thunderx/nicvf: add device start, stop and close support
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (15 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 16/20] thunderx/nicvf: add tx " Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:25       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 18/20] thunderx/config: set max numa node to two Jerin Jacob
                       ` (3 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 468 ++++++++++++++++++++++++++++++++++++
 1 file changed, 468 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index baa2e7a..2a9ac77 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -70,7 +70,10 @@
 #include "nicvf_logs.h"
 
 static int nicvf_dev_configure(struct rte_eth_dev *dev);
+static int nicvf_dev_start(struct rte_eth_dev *dev);
+static void nicvf_dev_stop(struct rte_eth_dev *dev);
 static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+static void nicvf_dev_close(struct rte_eth_dev *dev);
 static void nicvf_dev_stats_get(struct rte_eth_dev *dev,
 				struct rte_eth_stats *stat);
 static void nicvf_dev_stats_reset(struct rte_eth_dev *dev);
@@ -570,6 +573,82 @@ nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+{
+	struct nicvf_rbdr *rbdr;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+
+	assert(nic->rbdr == NULL);
+	rbdr = rte_zmalloc_socket("rbdr", sizeof(struct nicvf_rbdr),
+				  RTE_CACHE_LINE_SIZE, nic->node);
+	if (rbdr == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr");
+		return -ENOMEM;
+	}
+
+	ring_size = sizeof(struct rbdr_entry_t) * desc_cnt;
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+				   NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rbdr->phys = rz->phys_addr;
+	rbdr->tail = 0;
+	rbdr->next_tail = 0;
+	rbdr->desc = rz->addr;
+	rbdr->buffsz = buffsz;
+	rbdr->qlen_mask = desc_cnt - 1;
+	rbdr->rbdr_status =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_STATUS0;
+	rbdr->rbdr_door =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_DOOR;
+
+	nic->rbdr = rbdr;
+	return 0;
+}
+
+static void
+nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+{
+	uint16_t qidx;
+	void *obj;
+	struct nicvf_rxq *rxq;
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		if (rxq->precharge_cnt) {
+			obj = (void *)nicvf_mbuff_phy2virt(phy,
+							   rxq->mbuf_phys_off);
+			rte_mempool_put(rxq->pool, obj);
+			rxq->precharge_cnt--;
+			break;
+		}
+	}
+}
+
+static inline void
+nicvf_rbdr_release_mbufs(struct nicvf *nic)
+{
+	uint32_t qlen_mask, head;
+	struct rbdr_entry_t *entry;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	qlen_mask = rbdr->qlen_mask;
+	head = rbdr->head;
+	while (head != rbdr->tail) {
+		entry = rbdr->desc + head;
+		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		head++;
+		head = head & qlen_mask;
+	}
+}
+
 static inline void
 nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
 {
@@ -666,6 +745,31 @@ nicvf_configure_cpi(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static inline int
+nicvf_configure_rss(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t rsshf;
+	int ret = -EINVAL;
+
+	rsshf = nicvf_rss_ethdev_to_nic(nic,
+			dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
+	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
+		    dev->data->dev_conf.rxmode.mq_mode,
+		    nic->eth_dev->data->nb_rx_queues,
+		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+
+	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+		ret = nicvf_rss_term(nic);
+	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		ret = nicvf_rss_config(nic,
+				       nic->eth_dev->data->nb_rx_queues, rsshf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
+
+	return ret;
+}
+
 static int
 nicvf_configure_rss_reta(struct rte_eth_dev *dev)
 {
@@ -710,6 +814,48 @@ nicvf_dev_tx_queue_release(void *sq)
 	}
 }
 
+static void
+nicvf_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct nicvf_txq *txq = NULL;
+	size_t i;
+	bool multiseg = false;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if ((txq->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
+			multiseg = true;
+			break;
+		}
+	}
+
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (multiseg) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts;
+	}
+
+	/* No tx queue was set up yet; nothing to inspect */
+	if (txq == NULL)
+		return;
+
+	if (txq->pool_free == nicvf_single_pool_free_xmited_buffers)
+		PMD_DRV_LOG(DEBUG, "Using single-mempool tx free method");
+	else
+		PMD_DRV_LOG(DEBUG, "Using multi-mempool tx free method");
+}
+
+static void
+nicvf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts;
+	}
+}
+
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
@@ -1113,6 +1259,317 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static nicvf_phys_addr_t
+rbdr_rte_mempool_get(void *opaque)
+{
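+	/*
+	 * Precharge callback for nicvf_qset_rbdr_precharge(): pick an rx
+	 * queue whose pool is not yet fully charged, allocate one mbuf from
+	 * it and return the buffer's physical address, or 0 on failure.
+	 */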
+	uint16_t qidx;
+	uintptr_t mbuf;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv((struct rte_eth_dev *)opaque);
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		/* Maintain equal buffer count across all pools */
+		if (rxq->precharge_cnt >= rxq->qlen_mask)
+			continue;
+		rxq->precharge_cnt++;
+		mbuf = (uintptr_t)rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf)
+			return nicvf_mbuff_virt2phy(mbuf, rxq->mbuf_phys_off);
+	}
+	return 0;
+}
+
+static int
+nicvf_dev_start(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	uint32_t buffsz = 0, rbdrsz = 0;
+	uint32_t total_rxq_desc, nb_rbdr_desc, exp_buffs;
+	uint64_t mbuf_phys_off = 0;
+	struct nicvf_rxq *rxq;
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct rte_mbuf *mbuf;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+	uint16_t mtu;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Userspace process exited without proper shutdown in last run */
+	if (nicvf_qset_rbdr_active(nic, 0))
+		nicvf_dev_stop(dev);
+
+	/*
+	 * The ThunderX nicvf PMD can support more than one pool per port
+	 * only when:
+	 * 1) the data payload size is the same across all the pools in the
+	 *    given port
+	 * AND
+	 * 2) all mbufs in the pools come from the same hugepage
+	 * AND
+	 * 3) the mbuf metadata size is the same across all the pools in the
+	 *    given port
+	 *
+	 * This is to support existing applications that use multiple pools
+	 * per port. However, using multiple pools for QoS purposes is not
+	 * addressed.
+	 */
+
+	/* Validate RBDR buff size */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		mbp_priv = rte_mempool_get_priv(rxq->pool);
+		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+		if (buffsz % 128) {
+			PMD_INIT_LOG(ERR, "rxbuf size must be multiply of 128");
+			return -EINVAL;
+		}
+		if (rbdrsz == 0)
+			rbdrsz = buffsz;
+		if (rbdrsz != buffsz) {
+			PMD_INIT_LOG(ERR, "buffsz not same, qid=%d (%d/%d)",
+				     qidx, rbdrsz, buffsz);
+			return -EINVAL;
+		}
+	}
+
+	/* Validate mempool attributes */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
+		mbuf = rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "Failed allocate mbuf qid=%d pool=%s",
+				     qidx, rxq->pool->name);
+			return -ENOMEM;
+		}
+		rxq->mbuf_phys_off -= nicvf_mbuff_meta_length(mbuf);
+		rxq->mbuf_phys_off -= RTE_PKTMBUF_HEADROOM;
+		rte_pktmbuf_free(mbuf);
+
+		if (mbuf_phys_off == 0)
+			mbuf_phys_off = rxq->mbuf_phys_off;
+		if (mbuf_phys_off != rxq->mbuf_phys_off) {
+			PMD_INIT_LOG(ERR, "pool params not same,%s %" PRIx64,
+				     rxq->pool->name, mbuf_phys_off);
+			return -EINVAL;
+		}
+	}
+
+	/* Check the level of buffers in the pool */
+	total_rxq_desc = 0;
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		/* Count total numbers of rxq descs */
+		total_rxq_desc += rxq->qlen_mask + 1;
+		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
+		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		if (rte_mempool_count(rxq->pool) < exp_buffs) {
+			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
+				     rxq->pool->name,
+				     rte_mempool_count(rxq->pool),
+				     exp_buffs);
+			return -ENOENT;
+		}
+	}
+
+	/* Check RBDR desc overflow */
+	ret = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	if (ret == 0) {
+		PMD_INIT_LOG(ERR, "Reached RBDR desc limit, reduce nr desc");
+		return -ENOMEM;
+	}
+
+	/* Enable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable qset %d", ret);
+		return ret;
+	}
+
+	/* Allocate RBDR and RBDR ring desc */
+	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
+		goto qset_reclaim;
+	}
+
+	/* Enable and configure RBDR registers */
+	ret = nicvf_qset_rbdr_config(nic, 0);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rbdr %d", ret);
+		goto qset_rbdr_free;
+	}
+
+	/* Fill rte_mempool buffers in RBDR pool and precharge it */
+	ret = nicvf_qset_rbdr_precharge(nic, 0, rbdr_rte_mempool_get,
+					dev, total_rxq_desc);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fill rbdr %d", ret);
+		goto qset_rbdr_reclaim;
+	}
+
+	PMD_DRV_LOG(INFO, "Filled %d out of %d entries in RBDR",
+		     nic->rbdr->tail, nb_rbdr_desc);
+
+	/* Configure RX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		ret = nicvf_start_rx_queue(dev, qidx);
+		if (ret)
+			goto start_rxq_error;
+	}
+
+	/* Configure VLAN Strip */
+	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
+
+	/* Configure TX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+		ret = nicvf_start_tx_queue(dev, qidx);
+		if (ret)
+			goto start_txq_error;
+	}
+
+	/* Configure CPI algorithm */
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		goto start_txq_error;
+
+	/* Configure RSS */
+	ret = nicvf_configure_rss(dev);
+	if (ret)
+		goto qset_rss_error;
+
+	/* Configure loopback */
+	ret = nicvf_loopback_config(nic, dev->data->dev_conf.lpbk_mode);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure loopback %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Reset all statistics counters attached to this port */
+	ret = nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, 0xFFFF, 0xFFFF);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to reset stat counters %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Setup scatter mode if needed by jumbo */
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * VLAN_TAG_SIZE > buffsz)
+		dev->data->scattered_rx = 1;
+	if (rx_conf->enable_scatter)
+		dev->data->scattered_rx = 1;
+
+	/* Setup MTU based on max_rx_pkt_len or default */
+	mtu = dev->data->dev_conf.rxmode.jumbo_frame ?
+		dev->data->dev_conf.rxmode.max_rx_pkt_len
+			-  ETHER_HDR_LEN - ETHER_CRC_LEN
+		: ETHER_MTU;
+
+	if (nicvf_dev_set_mtu(dev, mtu)) {
+		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
+		ret = -EBUSY;
+		goto qset_rss_error;
+	}
+
+	/* Configure callbacks based on scatter mode */
+	nicvf_set_tx_function(dev);
+	nicvf_set_rx_function(dev);
+
+	/* Done; let the PF turn the BGX RX and TX switches ON */
+	nicvf_mbox_cfg_done(nic);
+	return 0;
+
+qset_rss_error:
+	nicvf_rss_term(nic);
+start_txq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+start_rxq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+qset_rbdr_reclaim:
+	nicvf_qset_rbdr_reclaim(nic, 0);
+	nicvf_rbdr_release_mbufs(nic);
+qset_rbdr_free:
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+qset_reclaim:
+	nicvf_qset_reclaim(nic);
+	return ret;
+}
+
+static void
+nicvf_dev_stop(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Let the PF turn the BGX RX and TX switches OFF */
+	nicvf_mbox_shutdown(nic);
+
+	/* Disable loopback */
+	ret = nicvf_loopback_config(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable loopback %d", ret);
+
+	/* Disable VLAN Strip */
+	nicvf_vlan_hw_strip(nic, 0);
+
+	/* Reclaim sq */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+
+	/* Reclaim rq */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+
+	/* Reclaim RBDR */
+	ret = nicvf_qset_rbdr_reclaim(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim RBDR %d", ret);
+
+	/* Move all charged buffers in RBDR back to pool */
+	if (nic->rbdr != NULL)
+		nicvf_rbdr_release_mbufs(nic);
+
+	/* Reclaim CPI configuration */
+	if (!nic->sqs_mode) {
+		ret = nicvf_mbox_config_cpi(nic, 0);
+		if (ret)
+			PMD_INIT_LOG(ERR, "Failed to reclaim CPI config");
+	}
+
+	/* Disable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable qset %d", ret);
+
+	/* Disable all interrupts */
+	nicvf_disable_all_interrupts(nic);
+
+	/* Free RBDR SW structure */
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+}
+
+static void
+nicvf_dev_close(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	nicvf_dev_stop(dev);
+	nicvf_periodic_alarm_stop(nic);
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -1193,7 +1650,10 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
+	.dev_start                = nicvf_dev_start,
+	.dev_stop                 = nicvf_dev_stop,
 	.link_update              = nicvf_dev_link_update,
+	.dev_close                = nicvf_dev_close,
 	.stats_get                = nicvf_dev_stats_get,
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
@@ -1228,6 +1688,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &nicvf_eth_dev_ops;
 
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		nicvf_set_tx_function(eth_dev);
+		nicvf_set_rx_function(eth_dev);
+		return 0;
+	}
+
 	pci_dev = eth_dev->pci_dev;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 18/20] thunderx/config: set max numa node to two
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (16 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 17:54       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
                       ` (2 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 config/defconfig_arm64-thunderx-linuxapp-gcc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index 7940bbd..cc12cee 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,6 +34,7 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+CONFIG_RTE_MAX_NUMA_NODES=2
 
 #
 # Compile Cavium Thunderx NICVF PMD driver
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (17 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 18/20] thunderx/config: set max numa node to two Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:08       ` Ferruh Yigit
  2016-06-07 16:40     ` [PATCH v3 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
  2016-06-08 12:30     ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Ferruh Yigit
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Slawomir Rosek

Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and release notes

Changed "*" to "P" in overview.rst to capture the partially supported
feature as "*" creating alignment issues with Sphinx table

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/nics/index.rst              |   1 +
 doc/guides/nics/overview.rst           |  96 ++++-----
 doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_16_07.rst |   1 +
 4 files changed, 404 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 0b13698..ddf75f4 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
     nfp
     qede
     szedata2
+    thunderx
     virtio
     vhost
     vmxnet3
diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
index 0bd8fae..df28510 100644
--- a/doc/guides/nics/overview.rst
+++ b/doc/guides/nics/overview.rst
@@ -74,40 +74,40 @@ Most of these differences are summarized below.
 
 .. table:: Features availability in networking drivers
 
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
-   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s v v v v x
-                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h i i m e
-                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e o r r x n
-                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d s t t n v
-                        c x x i e 0       . v v   f e e e e k k k k     e         v   a t i i e i
-                        k   v n           . f f       . v v   . v v               f   t   o o t r
-                        e   f g           .   .       . f f   . f f                   a     . 3 t
-                        t                 v   v       v   v   v   v                   2     v
-                                          e   e       e   e   e   e                         e
-                                          c   c       c   c   c   c                         c
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s t v v v v x
+                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h h i i m e
+                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e u o r r x n
+                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d n s t t n v
+                        c x x i e 0       . v v   f e e e e k k k k     e         v   a d t i i e i
+                        k   v n           . f f       . v v   . v v               f   t e   o o t r
+                        e   f g           .   .       . f f   . f f                   a r     . 3 t
+                        t                 v   v       v   v   v   v                   2 x     v
+                                          e   e       e   e   e   e                           e
+                                          c   c       c   c   c   c                           c
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    Speed capabilities
-   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y
-   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y
-   Queue status event                                                                   Y
+   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y Y
+   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y Y
+   Queue status event                                                                     Y
    Rx interrupt                   Y     Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y   Y Y
-   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y
-   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y   Y
+   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y Y   Y Y
+   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y                 Y
+   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y Y   Y
    LRO                                              Y Y Y Y
    TSO                          Y   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y   Y Y
-   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y   Y Y
-   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
-   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y       Y Y
-   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y
-   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
-   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
+   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y Y   Y Y
+   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y Y   Y Y
+   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
+   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y         Y Y
+   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y     Y
+   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
+   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
    VMDq                                 Y Y     Y   Y Y     Y Y
-   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y
+   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y     Y
    DCB                                  Y Y     Y   Y Y
-   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
+   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
    Ethertype filter                     Y Y     Y   Y Y
    N-tuple filter                               Y   Y Y
    SYN filter                                   Y   Y Y
@@ -118,37 +118,37 @@ Most of these differences are summarized below.
    Flow control                 Y Y     Y Y     Y   Y Y                         Y Y
    Rate limitation                                  Y Y
    Traffic mirroring                    Y Y         Y Y
-   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
-   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
+   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     Y
+   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     P
    QinQ offload                   Y     Y   Y   Y Y Y   Y
-   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
-   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
+   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
    Inner L3 checksum                Y   Y   Y       Y   Y           Y
    Inner L4 checksum                Y   Y   Y       Y   Y           Y
-   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y
+   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y     Y
    Timesync                             Y Y     Y   Y Y
-   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y   Y Y
+   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y Y
+   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y   Y
+   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y Y   Y Y
    EEPROM dump                                  Y   Y Y
-   Registers dump                               Y Y Y Y Y Y
-   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
-   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y       Y Y
-   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
+   Registers dump                               Y Y Y Y Y Y                             Y
+   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                         Y Y
+   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y         Y Y
+   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y   Y Y
    Other kdrv                                                       Y Y               Y
-   ARMv7                                                                      Y           Y Y
-   ARMv8                                                                      Y           Y Y
+   ARMv7                                                                      Y             Y Y
+   ARMv8                                                                      Y         Y   Y Y
    Power8                                                           Y Y       Y
    TILE-Gx                                                                    Y
-   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y         Y Y Y
-   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y   Y
+   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y           Y Y Y
+   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y   Y Y Y
+   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y Y   Y
    Design doc
    Perf doc
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
 
 .. Note::
 
-   Features marked with "*" are partially supported. Refer to the appropriate
+   Features marked with "P" are partially supported. Refer to the appropriate
    NIC guide in the following sections for details.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
new file mode 100644
index 0000000..e38f260
--- /dev/null
+++ b/doc/guides/nics/thunderx.rst
@@ -0,0 +1,354 @@
+..  BSD LICENSE
+    Copyright (C) Cavium networks Ltd. 2016.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Cavium networks nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ThunderX NICVF Poll Mode Driver
+===============================
+
+The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
+support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
+as well as its virtual functions (VFs) in SR-IOV context.
+
+More information can be found at `Cavium Networks Official Website
+<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the ThunderX PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Scatter/gather for TX and RX
+- VLAN stripping
+- SR-IOV VF
+- NUMA support
+
+Supported ThunderX SoCs
+-----------------------
+- CN88xx
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.
+  By default it is enabled only for the defconfig_arm64-thunderx-* configs.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX`` (default ``n``)
+
+  Toggle display of PF mailbox related run-time check messages
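+
+For example, a build with the PMD and its mailbox debugging enabled would
+carry the following lines in the target defconfig (a sketch; option names
+as listed above):
+
+.. code-block:: console
+
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=y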
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the ThunderX NICVF PMD for Linux arm64 gcc target, run the
+following “make” command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-thunderx-linuxapp-gcc install
+
+Linux
+-----
+
+.. _thunderx_testpmd_example:
+
+Running testpmd
+~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with a ThunderX NIC VF device
+managed by ``librte_pmd_thunderx_nicvf`` in the Linux operating system.
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+   .. _thunderx_vfio_noiommu:
+
+#. Enable **VFIO-NOIOMMU** mode (optional):
+
+   .. code-block:: console
+
+      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+
+   .. note::
+
+      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
+      See also :ref:`SR-IOV: Prerequisites and sample Application Notes <thunderx_sriov_example>`.
+
+#. Bind the ThunderX NIC VF device to ``vfio-pci`` loaded in the previous step:
+
+   Setup VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
+        -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx \
+        --port-topology=loop
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+
+      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
+
+      ...
+      EAL:   probe driver: 177d:11 rte_nicvf_pmd
+      EAL:   using IOMMU type 1 (Type 1)
+      EAL:   PCI memory mapped at 0x3ffade50000
+      EAL: Trying to map BAR 4 that contains the MSI-X table.
+           Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
+      EAL:   PCI memory mapped at 0x3ffadc60000
+      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
+      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
+           loopback_supported=true
+      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
+      Interactive-mode selected
+      Configuring Port 0 (socket 0)
+      ...
+
+      PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
+      Port 0: A6:C6:D9:17:78:01
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+.. _thunderx_sriov_example:
+
+SR-IOV: Prerequisites and sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
+automatically to a virtual function (VF) and present it as a PCIe-like SR-IOV device.
+This section provides instructions to configure SR-IOV with Linux OS.
+
+#. Verify PF devices capabilities using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
+              ...
+              Kernel driver in use: thunder-nic
+              ...
+
+   .. note::
+
+      Unless the ``thunder-nic`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
+
+#. Verify VF devices capabilities and drivers using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+   .. note::
+
+      Unless the ``thunder-nicvf`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
+
+#. Verify PF/VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
+      0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
+      ...
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind VF devices to ``vfio-pci`` using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.1
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Verify VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.1 'Device 0011' drv=vfio-pci unused=
+      0002:01:00.2 'Device 0011' drv=vfio-pci unused=
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      ...
+
+#. Pass VF device to VM context (PCIe Passthrough):
+
+   The VF devices may be passed through to the guest VM using qemu,
+   virt-manager, virsh, etc.
+   ``librte_pmd_thunderx_nicvf`` or ``thunder-nicvf`` should be used to bind
+   the VF devices in the guest VM in :ref:`VFIO-NOIOMMU <thunderx_vfio_noiommu>` mode.
+
+   Example qemu guest launch command:
+
+   .. code-block:: console
+
+      sudo qemu-system-aarch64 -name vm1 \
+      -machine virt,gic_version=3,accel=kvm,usb=off \
+      -cpu host -m 4096 \
+      -smp 4,sockets=1,cores=8,threads=1 \
+      -nographic -nodefaults \
+      -kernel <kernel image> \
+      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
+      -device vfio-pci,host=0002:01:00.1 \
+      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw  \
+      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
+      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
+      -device virtio-net-device,netdev=net0 \
+      -serial stdio \
+      -mem-path /dev/huge
+
+#. Refer to section :ref:`Running testpmd <thunderx_testpmd_example>` for instruction
+   how to launch ``testpmd`` application.
+
+Limitations
+-----------
+
+CRC striping
+~~~~~~~~~~~~
+
+The ThunderX SoC family NICs strip the CRC of every packet coming into the
+host interface. So, the CRC will be stripped even when the
+``rxmode.hw_strip_crc`` member is set to 0 in ``struct rte_eth_conf``.
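+
+A minimal sketch of a port configuration that acknowledges this behaviour
+(using the standard ``rte_eth_conf`` fields of this DPDK version):
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = {
+       .rxmode = {
+           .hw_strip_crc = 1, /* the hardware strips the CRC regardless */
+       },
+   };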
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support jumbo frames of up to 9K. This value
+is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
+up to 9200 bytes can still reach the host interface.
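+
+For example, an application that wants to accept the full 9200-byte frames
+explicitly could configure jumbo mode as follows (a sketch using the
+standard ``rxmode`` fields):
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = {
+       .rxmode = {
+           .jumbo_frame = 1,
+           .max_rx_pkt_len = 9200, /* matches the fixed hardware limit */
+       },
+   };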
+
+Maximum packet segments
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support up to 12 segments per packet when
+working in scatter/gather mode. So, setting the MTU fails with ``EINVAL``
+when the resulting frame size does not fit in the maximum number of
+segments (for example, with 1024-byte receive buffers a frame cannot
+exceed 12 * 1024 bytes).
+
+Limited VFs
+~~~~~~~~~~~
+
+The ThunderX SoC family NICs have 128 VFs, and each VF has 8 RX and 8 TX
+queues. The current driver implementation has a one-to-one mapping between
+a physical port and a VF, hence only a limited number of VFs can be used.
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 30e78d4..29b8b52 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -47,6 +47,7 @@ New Features
   * Dropped specific Xen Dom0 code.
   * Dropped specific anonymous mempool code in testpmd.
 
+* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**
 
 Resolved Issues
 ---------------
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v3 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (18 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
@ 2016-06-07 16:40     ` Jerin Jacob
  2016-06-08 12:30     ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Ferruh Yigit
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-07 16:40 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, Jerin Jacob, Maciej Czekaj

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e8558f..625423f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -336,6 +336,12 @@ M: Sony Chacko <sony.chacko@qlogic.com>
 F: drivers/net/qede/
 F: doc/guides/nics/qede.rst
 
+Cavium ThunderX nicvf
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
+F: drivers/net/thunderx/
+F: doc/guides/nics/thunderx.rst
+
 RedHat virtio
 M: Huawei Xie <huawei.xie@intel.com>
 M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-06-07 16:40     ` [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes Jerin Jacob
@ 2016-06-08 12:08       ` Ferruh Yigit
  2016-06-08 12:27         ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:08 UTC (permalink / raw)
  To: Jerin Jacob, dev; +Cc: thomas.monjalon, bruce.richardson, Slawomir Rosek

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
> and release notes
> 
> Changed "*" to "P" in overview.rst to capture the partially supported
> feature as "*" creating alignment issues with Sphinx table
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>
> ---
>  doc/guides/nics/index.rst              |   1 +
>  doc/guides/nics/overview.rst           |  96 ++++-----
>  doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
>  doc/guides/rel_notes/release_16_07.rst |   1 +
>  4 files changed, 404 insertions(+), 48 deletions(-)
>  create mode 100644 doc/guides/nics/thunderx.rst

Hi Jerin,

This patch doesn't apply on top of origin/rel_16_07:

Applying: thunderx/nicvf: updated driver documentation and release notes
Using index info to reconstruct a base tree...
M	doc/guides/nics/overview.rst
Falling back to patching base and 3-way merge...
Auto-merging doc/guides/nics/overview.rst
CONFLICT (content): Merge conflict in doc/guides/nics/overview.rst

Regards,
ferruh

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions
  2016-06-07 16:40     ` [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
@ 2016-06-08 12:11       ` Ferruh Yigit
  2016-06-08 12:51       ` Ferruh Yigit
  1 sibling, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:11 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  drivers/net/thunderx/Makefile       |   2 +
>  drivers/net/thunderx/nicvf_ethdev.c |   5 +-
>  drivers/net/thunderx/nicvf_rxtx.c   | 256 ++++++++++++++++++++++++++++++++++++
>  drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
>  4 files changed, 355 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
>  create mode 100644 drivers/net/thunderx/nicvf_rxtx.h

Patch is generating following checkpatch warnings:

CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#234: FILE: drivers/net/thunderx/nicvf_rxtx.c:154:
+			sq->xmit_bufs > sq->tx_free_thresh) {
+

ERROR:CODE_INDENT: code indent should use tabs where possible
#421: FILE: drivers/net/thunderx/nicvf_rxtx.h:79:
+        entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |$

WARNING:LEADING_SPACE: please, no spaces at the start of a line
#421: FILE: drivers/net/thunderx/nicvf_rxtx.h:79:
+        entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |$

ERROR:CODE_INDENT: code indent should use tabs where possible
#424: FILE: drivers/net/thunderx/nicvf_rxtx.h:82:
+        entry->buff[1] = rte_mbuf_data_dma_addr(pkt);$

WARNING:LEADING_SPACE: please, no spaces at the start of a line
#424: FILE: drivers/net/thunderx/nicvf_rxtx.h:82:
+        entry->buff[1] = rte_mbuf_data_dma_addr(pkt);$

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-07 16:40     ` [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-06-08 12:18       ` Ferruh Yigit
  2016-06-08 15:45       ` Ferruh Yigit
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
  2 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:18 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Adds hardware specific API for ThunderX nicvf inbuilt NIC device under
> drivers/net/thunderx/nicvf/base directory.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---

...

> +
> +struct pf_rq_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserverd1:1;
doesn't really matter but, as a detail, s/reserverd/reserved ? A few
more occurrences below.

> +	uint64_t reserverd0:34;
> +	uint64_t strip_pre_l2:1;
> +	uint64_t caching:2;
> +	uint64_t cq_qs:7;
> +	uint64_t cq_idx:3;
> +	uint64_t rbdr_cont_qs:7;
> +	uint64_t rbdr_cont_idx:1;
> +	uint64_t rbdr_strt_qs:7;
> +	uint64_t rbdr_strt_idx:1;
> +#else
> +	uint64_t rbdr_strt_idx:1;
> +	uint64_t rbdr_strt_qs:7;
> +	uint64_t rbdr_cont_idx:1;
> +	uint64_t rbdr_cont_qs:7;
> +	uint64_t cq_idx:3;
> +	uint64_t cq_qs:7;
> +	uint64_t caching:2;
> +	uint64_t strip_pre_l2:1;
> +	uint64_t reserverd0:34;
> +	uint64_t reserverd1:1;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +

...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton
  2016-06-07 16:40     ` [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
@ 2016-06-08 12:18       ` Ferruh Yigit
  2016-06-08 16:06       ` Ferruh Yigit
  1 sibling, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:18 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Introduce driver initialization and enable build infrastructure for
> nicvf pmd driver.
> 
> By default, It is enabled only for defconfig_arm64-thunderx-*
> config as it is an inbuilt NIC device.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  config/common_base                                 |  10 +
>  config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
>  drivers/net/Makefile                               |   1 +
>  drivers/net/thunderx/Makefile                      |  63 ++++++
>  drivers/net/thunderx/nicvf_ethdev.c                | 251 +++++++++++++++++++++
>  drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
>  drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
>  drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
>  .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
>  mk/rte.app.mk                                      |   2 +
>  10 files changed, 596 insertions(+)
>  create mode 100644 drivers/net/thunderx/Makefile
>  create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
>  create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
>  create mode 100644 drivers/net/thunderx/nicvf_logs.h
>  create mode 100644 drivers/net/thunderx/nicvf_struct.h
>  create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
> 

...

> +
> +	if (nic->sqs_mode) {
> +		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, Detaching...");
> +		/* Detach port by returning Postive error number */

s/Postive/Positive ?

...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 08/20] thunderx/nicvf: add tx_queue_setup/release support
  2016-06-07 16:40     ` [PATCH v3 08/20] thunderx/nicvf: add tx_queue_setup/release support Jerin Jacob
@ 2016-06-08 12:24       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:24 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
...

> +
> +	/* Roundup nb_desc to avilable qsize and validate max number of desc */
s/avilable/available ?

> +	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
> +	if (nb_desc == 0) {
> +		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
> +		return -EINVAL;
> +	}
...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 17/20] thunderx/nicvf: add device start, stop and close support
  2016-06-07 16:40     ` [PATCH v3 17/20] thunderx/nicvf: add device start, stop and close support Jerin Jacob
@ 2016-06-08 12:25       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:25 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
...
> +
> +	/* Userspace process exited witout proper shutdown in last run */
s/witout/without

> +	if (nicvf_qset_rbdr_active(nic, 0))
> +		nicvf_dev_stop(dev);
> +
...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-06-08 12:08       ` Ferruh Yigit
@ 2016-06-08 12:27         ` Jerin Jacob
  2016-06-08 13:18           ` Bruce Richardson
  0 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-08 12:27 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, thomas.monjalon, bruce.richardson, Slawomir Rosek

On Wed, Jun 08, 2016 at 01:08:35PM +0100, Ferruh Yigit wrote:
> On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
> > and release notes
> > 
> > Changed "*" to "P" in overview.rst to capture the partially supported
> > feature as "*" creating alignment issues with Sphinx table
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> > Acked-by: John McNamara <john.mcnamara@intel.com>
> > ---
> >  doc/guides/nics/index.rst              |   1 +
> >  doc/guides/nics/overview.rst           |  96 ++++-----
> >  doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
> >  doc/guides/rel_notes/release_16_07.rst |   1 +
> >  4 files changed, 404 insertions(+), 48 deletions(-)
> >  create mode 100644 doc/guides/nics/thunderx.rst
> 
> Hi Jerin,
> 
> This patch doesn't apply on top of origin/rel_16_07:
> 
> Applying: thunderx/nicvf: updated driver documentation and release notes
> Using index info to reconstruct a base tree...
> M	doc/guides/nics/overview.rst
> Falling back to patching base and 3-way merge...
> Auto-merging doc/guides/nics/overview.rst
> CONFLICT (content): Merge conflict in doc/guides/nics/overview.rst

Hi Ferruh,

Since the doc files keep changing, this patch set has been rebased on the
latest change-set, i.e. ca173a909538a2f1082cd0dcb4d778a97dab69c3,
not origin/rel_16_07.

> 
> Regards,
> ferruh

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-07 16:40   ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Jerin Jacob
                       ` (19 preceding siblings ...)
  2016-06-07 16:40     ` [PATCH v3 20/20] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-06-08 12:30     ` Ferruh Yigit
  2016-06-08 12:43       ` Jerin Jacob
  20 siblings, 1 reply; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:30 UTC (permalink / raw)
  To: Jerin Jacob, dev; +Cc: thomas.monjalon, bruce.richardson

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> This patch set provides the initial version of DPDK PMD for the
> built-in NIC device in Cavium ThunderX SoC family.
> 
> Implemented features and ThunderX nicvf PMD documentation added
> in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> respectively in this patch set.
> 
> These patches are checked using checkpatch.sh with following
> additional ignore option:
>     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> CAMELCASE - To accommodate PRIx64
> BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> 
> This patch set is based on DPDK 16.07-RC1
> and tested with today's git HEAD change-set
> ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
> following depended patch
> 
> http://dpdk.org/dev/patchwork/patch/11826/
> ethdev: add tunnel and port RSS offload types
> 
> V1->V2
> 
> http://dpdk.org/dev/patchwork/patch/12609/
> -- added const for the const struct tables
> -- remove multiple blank lines
> -- addressed style comments
> http://dpdk.org/dev/patchwork/patch/12610/
> -- removed DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
> -- add const for table structs
> -- addressed style comments
> http://dpdk.org/dev/patchwork/patch/12614/
> -- s/DEFAULT_*/NICVF_DEFAULT_*/gc
> http://dpdk.org/dev/patchwork/patch/12615/
> -- Fix typos
> -- addressed style comments
> http://dpdk.org/dev/patchwork/patch/12616/
> -- removed redundant txq->tail = 0 and txq->head = 0
> http://dpdk.org/dev/patchwork/patch/12627/
> -- fixed the documentation changes
> 
> -- fixed TAB+space occurrences in functions
> -- rebased to c8c33ad7f94c59d1c0676af0cfd61207b3e808db
> 
> V2->V3
> 
> http://dpdk.org/dev/patchwork/patch/13060/
> -- Changed polling infrastructure to use rte_eal_alarm* instead of timerfd_create API
> -- rebased to ca173a909538a2f1082cd0dcb4d778a97dab69c3
> 
> Jerin Jacob (20):
>   thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
>   thunderx/nicvf: add pmd skeleton
>   thunderx/nicvf: add link status and link update support
>   thunderx/nicvf: add get_reg and get_reg_length support
>   thunderx/nicvf: add dev_configure support
>   thunderx/nicvf: add dev_infos_get support
>   thunderx/nicvf: add rx_queue_setup/release support
>   thunderx/nicvf: add tx_queue_setup/release support
>   thunderx/nicvf: add rss and reta query and update support
>   thunderx/nicvf: add mtu_set and promiscuous_enable support
>   thunderx/nicvf: add stats support
>   thunderx/nicvf: add single and multi segment tx functions
>   thunderx/nicvf: add single and multi segment rx functions
>   thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
>     support
>   thunderx/nicvf: add rx queue start and stop support
>   thunderx/nicvf: add tx queue start and stop support
>   thunderx/nicvf: add device start,stop and close support
>   thunderx/config: set max numa node to two
>   thunderx/nicvf: updated driver documentation and release notes
>   maintainers: claim responsibility for the ThunderX nicvf PMD
> 

Hi Jerin,

In patch subject, as tag, other drivers are using only driver name, and
Intel drivers also has "driver/base", since base code has some special
case. For thunderx, what do you think about keeping subject as:
 "thunderx: ...."

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 12:30     ` [PATCH v3 00/20] DPDK PMD for ThunderX NIC device Ferruh Yigit
@ 2016-06-08 12:43       ` Jerin Jacob
  2016-06-08 13:15         ` Ferruh Yigit
                           ` (2 more replies)
  0 siblings, 3 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-08 12:43 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, thomas.monjalon, bruce.richardson

On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > Jerin Jacob (20):
> >   thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
> >   thunderx/nicvf: add pmd skeleton
> >   thunderx/nicvf: add link status and link update support
> >   thunderx/nicvf: add get_reg and get_reg_length support
> >   thunderx/nicvf: add dev_configure support
> >   thunderx/nicvf: add dev_infos_get support
> >   thunderx/nicvf: add rx_queue_setup/release support
> >   thunderx/nicvf: add tx_queue_setup/release support
> >   thunderx/nicvf: add rss and reta query and update support
> >   thunderx/nicvf: add mtu_set and promiscuous_enable support
> >   thunderx/nicvf: add stats support
> >   thunderx/nicvf: add single and multi segment tx functions
> >   thunderx/nicvf: add single and multi segment rx functions
> >   thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
> >     support
> >   thunderx/nicvf: add rx queue start and stop support
> >   thunderx/nicvf: add tx queue start and stop support
> >   thunderx/nicvf: add device start,stop and close support
> >   thunderx/config: set max numa node to two
> >   thunderx/nicvf: updated driver documentation and release notes
> >   maintainers: claim responsibility for the ThunderX nicvf PMD
> > 
> 
> Hi Jerin,
> 
> In the patch subject, as the tag, other drivers are using only the driver
> name, and Intel drivers also have "driver/base", since the base code is a
> special case. For thunderx, what do you think about keeping the subject as:
>  "thunderx: ...."
> 

Hi Ferruh,

We may add crypto or other built-in ThunderX HW-accelerated block drivers
to DPDK in the future.
That is the reason I thought of keeping the subject as thunderx/nicvf.
If you don't have any objection, I would like to keep it as
thunderx/nicvf or just nicvf.

> Thanks,
> ferruh

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions
  2016-06-07 16:40     ` [PATCH v3 12/20] thunderx/nicvf: add single and multi segment tx functions Jerin Jacob
  2016-06-08 12:11       ` Ferruh Yigit
@ 2016-06-08 12:51       ` Ferruh Yigit
  1 sibling, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 12:51 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  drivers/net/thunderx/Makefile       |   2 +
>  drivers/net/thunderx/nicvf_ethdev.c |   5 +-
>  drivers/net/thunderx/nicvf_rxtx.c   | 256 ++++++++++++++++++++++++++++++++++++
>  drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
>  4 files changed, 355 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
>  create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
> 
> diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile

...

> +
> +static inline uint32_t __hot
> +nicvf_free_xmittted_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,

again, although this is perfectly fine, did you intend to say xmitted
instead of xmittted?

...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 12:43       ` Jerin Jacob
@ 2016-06-08 13:15         ` Ferruh Yigit
  2016-06-08 13:22         ` Bruce Richardson
  2016-06-08 13:42         ` Thomas Monjalon
  2 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 13:15 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, bruce.richardson

On 6/8/2016 1:43 PM, Jerin Jacob wrote:
> On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
>> On 6/7/2016 5:40 PM, Jerin Jacob wrote:
>>> Jerin Jacob (20):
>>>   thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
>>>   thunderx/nicvf: add pmd skeleton
>>>   thunderx/nicvf: add link status and link update support
>>>   thunderx/nicvf: add get_reg and get_reg_length support
>>>   thunderx/nicvf: add dev_configure support
>>>   thunderx/nicvf: add dev_infos_get support
>>>   thunderx/nicvf: add rx_queue_setup/release support
>>>   thunderx/nicvf: add tx_queue_setup/release support
>>>   thunderx/nicvf: add rss and reta query and update support
>>>   thunderx/nicvf: add mtu_set and promiscuous_enable support
>>>   thunderx/nicvf: add stats support
>>>   thunderx/nicvf: add single and multi segment tx functions
>>>   thunderx/nicvf: add single and multi segment rx functions
>>>   thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
>>>     support
>>>   thunderx/nicvf: add rx queue start and stop support
>>>   thunderx/nicvf: add tx queue start and stop support
>>>   thunderx/nicvf: add device start,stop and close support
>>>   thunderx/config: set max numa node to two
>>>   thunderx/nicvf: updated driver documentation and release notes
>>>   maintainers: claim responsibility for the ThunderX nicvf PMD
>>>
>>
>> Hi Jerin,
>>
>> In patch subject, as tag, other drivers are using only driver name, and
>> Intel drivers also has "driver/base", since base code has some special
>> case. For thunderx, what do you think about keeping subject as:
>>  "thunderx: ...."
>>
> 
> Hi Ferruh,
> 
> We may add crypto or other built-in ThunderX HW-accelerated block drivers
> to DPDK in the future.
> That is the reason I thought of keeping the subject as thunderx/nicvf.
> If you don't have any objection, I would like to keep it as
> thunderx/nicvf or just nicvf.
> 

Ring has a similar problem, but we are using the same tag "ring:" for both
ring_pmd and the ring library.

For this case perhaps we can use net/thunderx, crypto/thunderx?

I am not aware of any defined convention for this case.

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 19/20] thunderx/nicvf: updated driver documentation and release notes
  2016-06-08 12:27         ` Jerin Jacob
@ 2016-06-08 13:18           ` Bruce Richardson
  0 siblings, 0 replies; 204+ messages in thread
From: Bruce Richardson @ 2016-06-08 13:18 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Ferruh Yigit, dev, thomas.monjalon, Slawomir Rosek

On Wed, Jun 08, 2016 at 05:57:16PM +0530, Jerin Jacob wrote:
> On Wed, Jun 08, 2016 at 01:08:35PM +0100, Ferruh Yigit wrote:
> > On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > > Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
> > > and release notes
> > > 
> > > Changed "*" to "P" in overview.rst to capture the partially supported
> > > feature as "*" creating alignment issues with Sphinx table
> > > 
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> > > Acked-by: John McNamara <john.mcnamara@intel.com>
> > > ---
> > >  doc/guides/nics/index.rst              |   1 +
> > >  doc/guides/nics/overview.rst           |  96 ++++-----
> > >  doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
> > >  doc/guides/rel_notes/release_16_07.rst |   1 +
> > >  4 files changed, 404 insertions(+), 48 deletions(-)
> > >  create mode 100644 doc/guides/nics/thunderx.rst
> > 
> > Hi Jerin,
> > 
> > This patch doesn't apply on top of origin/rel_16_07:
> > 
> > Applying: thunderx/nicvf: updated driver documentation and release notes
> > Using index info to reconstruct a base tree...
> > M	doc/guides/nics/overview.rst
> > Falling back to patching base and 3-way merge...
> > Auto-merging doc/guides/nics/overview.rst
> > CONFLICT (content): Merge conflict in doc/guides/nics/overview.rst
> 
> Hi Ferruh,
> 
> Since the doc files keep changing, this patch set
> has been rebased on the latest change-set, i.e. ca173a909538a2f1082cd0dcb4d778a97dab69c3,
> not origin/rel_16_07.
>

The nic overview.rst doc causes lots of conflicts when merging, and those
I just fix on apply. There is no need to do a new patch revision solely for that.

So nothing to see here, move along... :-)

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 12:43       ` Jerin Jacob
  2016-06-08 13:15         ` Ferruh Yigit
@ 2016-06-08 13:22         ` Bruce Richardson
  2016-06-08 13:32           ` Jerin Jacob
  2016-06-08 13:42         ` Thomas Monjalon
  2 siblings, 1 reply; 204+ messages in thread
From: Bruce Richardson @ 2016-06-08 13:22 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Ferruh Yigit, dev, thomas.monjalon

On Wed, Jun 08, 2016 at 06:13:21PM +0530, Jerin Jacob wrote:
> On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > > Jerin Jacob (20):
> > >   thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
> > >   thunderx/nicvf: add pmd skeleton
> > >   thunderx/nicvf: add link status and link update support
> > >   thunderx/nicvf: add get_reg and get_reg_length support
> > >   thunderx/nicvf: add dev_configure support
> > >   thunderx/nicvf: add dev_infos_get support
> > >   thunderx/nicvf: add rx_queue_setup/release support
> > >   thunderx/nicvf: add tx_queue_setup/release support
> > >   thunderx/nicvf: add rss and reta query and update support
> > >   thunderx/nicvf: add mtu_set and promiscuous_enable support
> > >   thunderx/nicvf: add stats support
> > >   thunderx/nicvf: add single and multi segment tx functions
> > >   thunderx/nicvf: add single and multi segment rx functions
> > >   thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
> > >     support
> > >   thunderx/nicvf: add rx queue start and stop support
> > >   thunderx/nicvf: add tx queue start and stop support
> > >   thunderx/nicvf: add device start,stop and close support
> > >   thunderx/config: set max numa node to two
> > >   thunderx/nicvf: updated driver documentation and release notes
> > >   maintainers: claim responsibility for the ThunderX nicvf PMD
> > > 
> > 
> > Hi Jerin,
> > 
> > In the patch subject, as the tag, other drivers are using only the driver
> > name, and Intel drivers also have "driver/base", since the base code is a
> > special case. For thunderx, what do you think about keeping the subject as:
> >  "thunderx: ...."
> > 
> 
> Hi Ferruh,
> 
> > We may add crypto or other built-in ThunderX HW-accelerated block drivers
> > to DPDK in the future.
> > That is the reason I thought of keeping the subject as thunderx/nicvf.
> > If you don't have any objection, I would like to keep it as
> > thunderx/nicvf or just nicvf.

Are you upstreaming kernel modules for this device? If so, what is the
Linux kernel module name for this device going to be? Perhaps that can
help us here.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 13:22         ` Bruce Richardson
@ 2016-06-08 13:32           ` Jerin Jacob
  2016-06-08 13:51             ` Thomas Monjalon
  0 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-08 13:32 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: Ferruh Yigit, dev, thomas.monjalon

On Wed, Jun 08, 2016 at 02:22:55PM +0100, Bruce Richardson wrote:
> On Wed, Jun 08, 2016 at 06:13:21PM +0530, Jerin Jacob wrote:
> > On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > > On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > > > Jerin Jacob (20):
> > > >   thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
> > > >   thunderx/nicvf: add pmd skeleton
> > > >   thunderx/nicvf: add link status and link update support
> > > >   thunderx/nicvf: add get_reg and get_reg_length support
> > > >   thunderx/nicvf: add dev_configure support
> > > >   thunderx/nicvf: add dev_infos_get support
> > > >   thunderx/nicvf: add rx_queue_setup/release support
> > > >   thunderx/nicvf: add tx_queue_setup/release support
> > > >   thunderx/nicvf: add rss and reta query and update support
> > > >   thunderx/nicvf: add mtu_set and promiscuous_enable support
> > > >   thunderx/nicvf: add stats support
> > > >   thunderx/nicvf: add single and multi segment tx functions
> > > >   thunderx/nicvf: add single and multi segment rx functions
> > > >   thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count
> > > >     support
> > > >   thunderx/nicvf: add rx queue start and stop support
> > > >   thunderx/nicvf: add tx queue start and stop support
> > > >   thunderx/nicvf: add device start,stop and close support
> > > >   thunderx/config: set max numa node to two
> > > >   thunderx/nicvf: updated driver documentation and release notes
> > > >   maintainers: claim responsibility for the ThunderX nicvf PMD
> > > > 
> > > 
> > > Hi Jerin,
> > > 
> > > In the patch subject, as the tag, other drivers are using only the driver
> > > name, and Intel drivers also have "driver/base", since the base code is a
> > > special case. For thunderx, what do you think about keeping the subject as:
> > >  "thunderx: ...."
> > > 
> > 
> > Hi Ferruh,
> > 
> > > We may add crypto or other built-in ThunderX HW-accelerated block drivers
> > > to DPDK in the future.
> > > That is the reason I thought of keeping the subject as thunderx/nicvf.
> > > If you don't have any objection, I would like to keep it as
> > > thunderx/nicvf or just nicvf.
> 
> Are you upstreaming kernel modules for this device? If so, what is the
> Linux kernel module name for this device going to be? Perhaps that can
> help us here.

Yes, the kernel module has been upstreamed.
The commit log tag in the Linux kernel is "net: thunderx: ......."

> 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 12:43       ` Jerin Jacob
  2016-06-08 13:15         ` Ferruh Yigit
  2016-06-08 13:22         ` Bruce Richardson
@ 2016-06-08 13:42         ` Thomas Monjalon
  2016-06-08 15:08           ` Bruce Richardson
  2 siblings, 1 reply; 204+ messages in thread
From: Thomas Monjalon @ 2016-06-08 13:42 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Ferruh Yigit, dev, bruce.richardson

2016-06-08 18:13, Jerin Jacob:
> On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > Hi Jerin,
> > 
> > In the patch subject, as the tag, other drivers are using only the driver
> > name, and Intel drivers also have "driver/base", since the base code is a
> > special case. For thunderx, what do you think about keeping the subject as:
> >  "thunderx: ...."
> > 
> 
> Hi Ferruh,
> 
> We may add crypto or other built-in ThunderX HW-accelerated block drivers
> to DPDK in the future.
> That is the reason I thought of keeping the subject as thunderx/nicvf.
> If you don't have any objection, I would like to keep it as
> thunderx/nicvf or just nicvf.

I don't like the name nicvf but I guess that's the official name?

Thus I agree the title should be thunderx/nicvf or thunderx_nicvf.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 13:32           ` Jerin Jacob
@ 2016-06-08 13:51             ` Thomas Monjalon
  0 siblings, 0 replies; 204+ messages in thread
From: Thomas Monjalon @ 2016-06-08 13:51 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Bruce Richardson, Ferruh Yigit, dev

2016-06-08 19:02, Jerin Jacob:
> On Wed, Jun 08, 2016 at 02:22:55PM +0100, Bruce Richardson wrote:
> > On Wed, Jun 08, 2016 at 06:13:21PM +0530, Jerin Jacob wrote:
> > > On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > > > On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > > > Hi Jerin,
> > > > 
> > > > In the patch subject, as the tag, other drivers are using only the driver
> > > > name, and Intel drivers also have "driver/base", since the base code is a
> > > > special case. For thunderx, what do you think about keeping the subject as:
> > > >  "thunderx: ...."
> > > > 
> > > 
> > > Hi Ferruh,
> > > 
> > > We may add crypto or other built-in ThunderX HW-accelerated block drivers
> > > to DPDK in the future.
> > > That is the reason I thought of keeping the subject as thunderx/nicvf.
> > > If you don't have any objection, I would like to keep it as
> > > thunderx/nicvf or just nicvf.
> > 
> > Are you upstreaming kernel modules for this device? If so, what is the
> > Linux kernel module name for this device going to be? Perhaps that can
> > help us here.
> 
> Yes, the kernel module has been upstreamed.
> The commit log tag in the Linux kernel is "net: thunderx: ......."

If you want to modify the conventions, we just need to agree on a patch
modifying the guidelines.
We can think about Ferruh's proposal to use net/ and crypto/ prefixes.

The most important thing is to have something short and easy to parse when
quickly browsing the git history.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 13:42         ` Thomas Monjalon
@ 2016-06-08 15:08           ` Bruce Richardson
  2016-06-09 10:49             ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Bruce Richardson @ 2016-06-08 15:08 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Jerin Jacob, Ferruh Yigit, dev

On Wed, Jun 08, 2016 at 03:42:14PM +0200, Thomas Monjalon wrote:
> 2016-06-08 18:13, Jerin Jacob:
> > On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > > Hi Jerin,
> > > 
> > > In the patch subject, as the tag, other drivers are using only the driver
> > > name, and Intel drivers also have "driver/base", since the base code is a
> > > special case. For thunderx, what do you think about keeping the subject as:
> > >  "thunderx: ...."
> > > 
> > 
> > Hi Ferruh,
> > 
> > We may add crypto or other built-in ThunderX HW-accelerated block drivers
> > to DPDK in the future.
> > That is the reason I thought of keeping the subject as thunderx/nicvf.
> > If you don't have any objection, I would like to keep it as
> > thunderx/nicvf or just nicvf.
> 
> I don't like the name nicvf but I guess that's the official name?
> 
> Thus I agree the title should be thunderx/nicvf or thunderx_nicvf.

I think I'd prefer the underscore version.

/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-07 16:40     ` [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-06-08 12:18       ` Ferruh Yigit
@ 2016-06-08 15:45       ` Ferruh Yigit
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
  2 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 15:45 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Adds hardware specific API for ThunderX nicvf inbuilt NIC device under
> drivers/net/thunderx/nicvf/base directory.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  drivers/net/thunderx/base/nicvf_hw.c      |  908 +++++++++++++++++++++
>  drivers/net/thunderx/base/nicvf_hw.h      |  240 ++++++
>  drivers/net/thunderx/base/nicvf_hw_defs.h | 1216 +++++++++++++++++++++++++++++
>  drivers/net/thunderx/base/nicvf_mbox.c    |  416 ++++++++++
>  drivers/net/thunderx/base/nicvf_mbox.h    |  232 ++++++
>  drivers/net/thunderx/base/nicvf_plat.h    |  132 ++++
>  6 files changed, 3144 insertions(+)
>  create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
>  create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
>  create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
>  create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
>  create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
>  create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
> 
> diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
> new file mode 100644
> index 0000000..24fe77d
> --- /dev/null
> +++ b/drivers/net/thunderx/base/nicvf_hw.c
> @@ -0,0 +1,908 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <unistd.h>
> +#include <math.h>
> +#include <errno.h>
> +#include <stdarg.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <assert.h>
> +
> +#include "nicvf_plat.h"
> +
> +struct nicvf_reg_info {
> +	uint32_t offset;
> +	const char *name;
> +};
> +
> +#define NICVF_REG_INFO(reg) {reg, #reg}
> +
> +static const struct nicvf_reg_info nicvf_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_VF_CFG),
> +	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
> +	NICVF_REG_INFO(NIC_VF_INT),
> +	NICVF_REG_INFO(NIC_VF_INT_W1S),
> +	NICVF_REG_INFO(NIC_VF_ENA_W1C),
> +	NICVF_REG_INFO(NIC_VF_ENA_W1S),
> +	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
> +	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
> +};
> +
> +static const struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
> +	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
> +	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
> +	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
> +	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
> +	{NIC_VNIC_TX_STAT_0_4 + 16,  "NIC_VNIC_STAT_TX_BCAST"},
> +	{NIC_VNIC_TX_STAT_0_4 + 24,  "NIC_VNIC_STAT_TX_MCAST"},
> +	{NIC_VNIC_TX_STAT_0_4 + 32,  "NIC_VNIC_STAT_TX_DROP"},
> +	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
> +	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
> +	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
> +	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
> +	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
> +	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
> +};
> +
> +static const struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
> +	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
> +};
> +
> +static const struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
> +	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
> +	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
> +};
> +
> +static const struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
> +	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
> +};
> +
> +static const struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
> +	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
> +};
> +
> +int
> +nicvf_base_init(struct nicvf *nic)
> +{
> +	nic->hwcap = 0;
> +	if (nic->subsystem_device_id == 0)
> +		return NICVF_ERR_BASE_INIT;
> +
> +	if (nicvf_hw_version(nic) == NICVF_PASS2)
> +		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
> +
> +	return NICVF_OK;
> +}
> +
> +/* dump on stdout if data is NULL */
> +int
> +nicvf_reg_dump(struct nicvf *nic,  uint64_t *data)
> +{
> +	uint32_t i, q;
> +	bool dump_stdout;
> +
> +	dump_stdout = data ? 0 : 1;
> +
> +	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
> +		if (dump_stdout)
> +			nicvf_log("%24s  = 0x%" PRIx64 "\n",
> +				nicvf_reg_tbl[i].name,
> +				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
> +		else
> +			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
> +
> +	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
> +		if (dump_stdout)
> +			nicvf_log("%24s  = 0x%" PRIx64 "\n",
> +				nicvf_multi_reg_tbl[i].name,
> +				nicvf_reg_read(nic,
> +					nicvf_multi_reg_tbl[i].offset));
> +		else
> +			*data++ = nicvf_reg_read(nic,
> +					nicvf_multi_reg_tbl[i].offset);
> +
> +	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
> +		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
> +			if (dump_stdout)
> +				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
> +					nicvf_qset_cq_reg_tbl[i].name, q,
> +					nicvf_queue_reg_read(nic,
> +					nicvf_qset_cq_reg_tbl[i].offset, q));
> +			else
> +				*data++ = nicvf_queue_reg_read(nic,
> +					nicvf_qset_cq_reg_tbl[i].offset, q);
> +
> +	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
> +		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
> +			if (dump_stdout)
> +				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
> +					nicvf_qset_rq_reg_tbl[i].name, q,
> +					nicvf_queue_reg_read(nic,
> +					nicvf_qset_rq_reg_tbl[i].offset, q));
> +			else
> +				*data++ = nicvf_queue_reg_read(nic,
> +					nicvf_qset_rq_reg_tbl[i].offset, q);
> +
> +	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
> +		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
> +			if (dump_stdout)
> +				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
> +					nicvf_qset_sq_reg_tbl[i].name, q,
> +					nicvf_queue_reg_read(nic,
> +					nicvf_qset_sq_reg_tbl[i].offset, q));
> +			else
> +				*data++ = nicvf_queue_reg_read(nic,
> +					nicvf_qset_sq_reg_tbl[i].offset, q);
> +
> +	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
> +		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
> +			if (dump_stdout)
> +				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
> +					nicvf_qset_rbdr_reg_tbl[i].name, q,
> +					nicvf_queue_reg_read(nic,
> +					nicvf_qset_rbdr_reg_tbl[i].offset, q));
> +			else
> +				*data++ = nicvf_queue_reg_read(nic,
> +					nicvf_qset_rbdr_reg_tbl[i].offset, q);
> +	return 0;
> +}
> +
> +int
> +nicvf_reg_get_count(void)
> +{
> +	int nr_regs;
> +
> +	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
> +	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
> +	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
> +			MAX_CMP_QUEUES_PER_QS;
> +	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
> +			MAX_RCV_QUEUES_PER_QS;
> +	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
> +			MAX_SND_QUEUES_PER_QS;
> +	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
> +			MAX_RCV_BUF_DESC_RINGS_PER_QS;
> +
> +	return nr_regs;
> +}
> +
> +static int
> +nicvf_qset_config_internal(struct nicvf *nic, bool enable)
> +{
> +	int ret;
> +	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
> +
> +	pf_qs_cfg.ena = enable ? 1 : 0;
> +	pf_qs_cfg.vnic = nic->vf_id;
> +	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
> +	return ret ? NICVF_ERR_SET_QS : 0;
> +}
> +
> +/* Requests PF to assign and enable Qset */
> +int
> +nicvf_qset_config(struct nicvf *nic)
> +{
> +	/* Enable Qset */
> +	return nicvf_qset_config_internal(nic, true);
> +}
> +
> +int
> +nicvf_qset_reclaim(struct nicvf *nic)
> +{
> +	/* Disable Qset */
> +	return nicvf_qset_config_internal(nic, false);
> +}
> +
> +static int
> +cmpfunc(const void *a, const void *b)
> +{
> +	return (*(const uint32_t *)a - *(const uint32_t *)b);
> +}
> +
> +static uint32_t
> +nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
> +{
> +	uint32_t i;
> +
> +	qsort(list, entries, sizeof(uint32_t), cmpfunc);
> +	for (i = 0; i < entries; i++)
> +		if (val <= list[i])
> +			break;
> +	/* Not in the list */
> +	if (i >= entries)
> +		return 0;
> +	else
> +		return list[i];
> +}
> +
> +static void
> +nicvf_handle_qset_err_intr(struct nicvf *nic)
> +{
> +	uint16_t qidx;
> +	uint64_t status;
> +
> +	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
> +	nicvf_reg_dump(nic, NULL);
> +
> +	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
> +		status = nicvf_queue_reg_read(
> +				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
> +		if (!(status & NICVF_CQ_ERR_MASK))
> +			continue;
> +
> +		if (status & NICVF_CQ_WR_FULL)
> +			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
> +		if (status & NICVF_CQ_WR_DISABLE)
> +			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
> +		if (status & NICVF_CQ_WR_FAULT)
> +			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
> +		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
> +	}
> +
> +	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
> +		status = nicvf_queue_reg_read(
> +				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
> +		if (!(status & NICVF_SQ_ERR_MASK))
> +			continue;
> +
> +		if (status & NICVF_SQ_ERR_STOPPED)
> +			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
> +		if (status & NICVF_SQ_ERR_SEND)
> +			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
> +		if (status & NICVF_SQ_ERR_DPE)
> +			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
> +		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
> +	}
> +
> +	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
> +		status = nicvf_queue_reg_read(nic,
> +					NIC_QSET_RBDR_0_1_STATUS0, qidx);
extra tab?

> +		status &= NICVF_RBDR_FIFO_STATE_MASK;
> +		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
> +
> +		if (status == RBDR_FIFO_STATE_FAIL)
> +			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
> +		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
> +	}
> +
> +	nicvf_disable_all_interrupts(nic);
> +	abort();
> +}
> +
> +/*
> + * Handle poll mode driver interested "mbox" and "queue-set error" interrupts.
> + * This function is not re-entrant.
> + * The caller should provide proper serialization.
> + */
> +int
> +nicvf_reg_poll_interrupts(struct nicvf *nic)
> +{
> +	int msg = 0;
> +	uint64_t intr;
> +
> +	intr = nicvf_reg_read(nic, NIC_VF_INT);
> +	if (intr & NICVF_INTR_MBOX_MASK) {
> +		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
> +		msg = nicvf_handle_mbx_intr(nic);
> +	}
> +	if (intr & NICVF_INTR_QS_ERR_MASK) {
> +		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
> +		nicvf_handle_qset_err_intr(nic);
> +	}
> +	return msg;
> +}
> +
> +static int
> +nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
> +		    uint32_t bit_pos, uint32_t bits, uint64_t val)
> +{
> +	uint64_t bit_mask;
> +	uint64_t reg_val;
> +	int timeout = 10;
Does it make sense to convert the hardcoded value to a macro?
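Something like this, perhaps (a rough sketch; the macro names below are
only a suggestion, not taken from the patch):

	#define NICVF_REG_POLL_TIMEOUT_CNT	10	/* poll retries */
	#define NICVF_REG_POLL_DELAY_US		2000	/* delay between retries */

and then in the function:

	int timeout = NICVF_REG_POLL_TIMEOUT_CNT;

	nicvf_delay_us(NICVF_REG_POLL_DELAY_US);

The same macros would also cover the delay and timeout values flagged
further below.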

> +
> +	bit_mask = (1ULL << bits) - 1;
> +	bit_mask = (bit_mask << bit_pos);
> +
> +	while (timeout) {
> +		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
> +		if (((reg_val & bit_mask) >> bit_pos) == val)
> +			return NICVF_OK;
> +		nicvf_delay_us(2000);
hardcoded value

> +		timeout--;
> +	}
> +	return NICVF_ERR_REG_POLL;
> +}
> +
> +int
> +nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
> +{
> +	uint64_t status;
> +	int timeout = 10;
hardcoded value

> +	struct nicvf_rbdr *rbdr = nic->rbdr;
> +
> +	/* Save head and tail pointers for freeing up buffers */
> +	if (rbdr) {
> +		rbdr->head = nicvf_queue_reg_read(nic,
> +					NIC_QSET_RBDR_0_1_HEAD,
> +					qidx) >> 3;
extra tabs; there are more occurrences like this, I won't flag them further
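For example, a single continuation tab would be enough (just a layout
suggestion):

	rbdr->head = nicvf_queue_reg_read(nic,
		NIC_QSET_RBDR_0_1_HEAD, qidx) >> 3;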

> +		rbdr->tail = nicvf_queue_reg_read(nic,
> +					NIC_QSET_RBDR_0_1_TAIL,
> +					qidx) >> 3;
> +		rbdr->next_tail = rbdr->tail;
> +	}
> +
> +	/* Reset RBDR */
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
> +				NICVF_RBDR_RESET);
> +
> +	/* Disable RBDR */
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
> +	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
> +				62, 2, 0x00))
> +		return NICVF_ERR_RBDR_DISABLE;
> +
> +	while (1) {
> +		status = nicvf_queue_reg_read(nic,
> +				NIC_QSET_RBDR_0_1_PRFCH_STATUS,	qidx);
> +		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
> +			break;
> +		nicvf_delay_us(2000);
hardcoded sleep value

> +		timeout--;
> +		if (!timeout)
> +			return NICVF_ERR_RBDR_PREFETCH;
> +	}
> +
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
> +			NICVF_RBDR_RESET);
> +	if (nicvf_qset_poll_reg(nic, qidx,
> +				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
> +		return NICVF_ERR_RBDR_RESET1;
> +
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
> +	if (nicvf_qset_poll_reg(nic, qidx,
> +				NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
> +		return NICVF_ERR_RBDR_RESET2;
> +
> +	return NICVF_OK;
> +}
> +
> +static int
> +nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
> +{
> +	int val;
> +
> +	val = ((uint32_t)log2(len) - len_shift);
> +	assert(val >= 0);
> +	assert(val <= 6);
hardcoded values in the assertions
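For example (again, the macro names are only a suggestion):

	#define NICVF_QSIZE_REGBIT_MIN	0
	#define NICVF_QSIZE_REGBIT_MAX	6

	assert(val >= NICVF_QSIZE_REGBIT_MIN);
	assert(val <= NICVF_QSIZE_REGBIT_MAX);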

> +	return val;
> +}
> +
> +int
> +nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
> +{
> +	int ret;
> +	uint64_t head, tail;
> +	struct nicvf_rbdr *rbdr = nic->rbdr;
> +	struct rbdr_cfg rbdr_cfg = {.value = 0};
> +
> +	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
> +	if (ret)
> +		return ret;
> +
> +	/* Set descriptor base address */
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
> +
> +	/* Enable RBDR  & set queue size */
> +	rbdr_cfg.reserved_45_63 = 0,

Did you intend ";" here?

> +	rbdr_cfg.ena = 1;
> +	rbdr_cfg.reset = 0;
> +	rbdr_cfg.ldwb = 0;
> +	rbdr_cfg.reserved_36_41 = 0;

No need for these 0 assignments; the zero-initialization in the declaration already does this.
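i.e. since the declaration above already has

	struct rbdr_cfg rbdr_cfg = {.value = 0};

it should be enough to set only the non-zero fields afterwards:

	rbdr_cfg.ena = 1;
	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
					RBDR_SIZE_SHIFT);
	rbdr_cfg.lines = rbdr->buffsz / 128;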

> +	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
> +					RBDR_SIZE_SHIFT);
> +	rbdr_cfg.reserved_25_31 = 0;
> +	rbdr_cfg.avg_con = 0;
> +	rbdr_cfg.reserved_12_15 = 0;
> +	rbdr_cfg.lines = rbdr->buffsz / 128;
> +
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
> +
> +	/* Verify proper RBDR reset */
> +	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
> +	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
> +
> +	if (head | tail)
> +		return NICVF_ERR_RBDR_RESET;
> +
> +	return NICVF_OK;
> +}
> +
> +uint32_t
> +nicvf_qsize_rbdr_roundup(uint32_t val)
> +{
> +	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
> +				RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
> +				RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
> +				RBDR_QUEUE_SZ_512K};
> +	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
> +}
> +
> +int
> +nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
> +			  rbdr_pool_get_handler handler,
> +			  void *opaque, uint32_t max_buffs)
> +{
> +	struct rbdr_entry_t *desc, *desc0;
> +	struct nicvf_rbdr *rbdr = nic->rbdr;
> +	uint32_t count;
> +	nicvf_phys_addr_t phy;
> +
> +	assert(rbdr != NULL);
> +	desc = rbdr->desc;
> +	count = 0;
> +	/* Don't fill beyond max numbers of desc */
> +	while (count < (rbdr->qlen_mask)) {
extra parentheses
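i.e.:

	while (count < rbdr->qlen_mask) {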

> +		if (count >= max_buffs)
> +			break;
> +		desc0 = desc + count;
> +		phy = handler(opaque);
> +		if (phy) {
> +			desc0->full_addr = phy;
> +			count++;
> +		} else {
> +			break;
> +		}
> +	}
> +	nicvf_smp_wmb();
> +	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
> +	rbdr->tail = nicvf_queue_reg_read(nic,
> +				NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
> +	rbdr->next_tail = rbdr->tail;
> +	nicvf_smp_rmb();
> +	return 0;
> +}
> +
> +int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
return type should be one line above
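i.e.:

	int
	nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)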

> +{
> +	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
> +}
> +
> +int
> +nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
> +{
> +	uint64_t head, tail;
> +	struct sq_cfg sq_cfg;
> +
> +	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
> +
> +	/* Disable send queue */
> +	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
> +
> +	/* Check if SQ is stopped */
> +	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
> +				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
> +		return NICVF_ERR_SQ_DISABLE;
> +
> +	/* Reset send queue */
> +	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
> +	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
> +	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
> +	if (head | tail)
> +		return  NICVF_ERR_SQ_RESET;
> +
> +	return 0;
> +}
> +
> +int
> +nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
> +{
> +	int ret;
> +	struct sq_cfg sq_cfg = {.value = 0};
> +
> +	ret = nicvf_qset_sq_reclaim(nic, qidx);
> +	if (ret)
> +		return ret;
> +
> +	/* Send a mailbox msg to PF to config SQ */
> +	if (nicvf_mbox_sq_config(nic, qidx))
> +		return  NICVF_ERR_SQ_PF_CFG;
> +
> +	/* Set queue base address */
> +	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
> +
> +	/* Enable send queue  & set queue size */
> +	sq_cfg.ena = 1;
> +	sq_cfg.reset = 0;
> +	sq_cfg.ldwb = 0;
> +	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1, SND_QSIZE_SHIFT);
> +	sq_cfg.tstmp_bgx_intf = 0;
> +	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
> +
> +	/* Ring doorbell so that H/W restarts processing SQEs */
> +	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
> +
> +	return 0;
> +}
> +
> +uint32_t
> +nicvf_qsize_sq_roundup(uint32_t val)
> +{
> +	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
> +				SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
> +				SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
> +				SND_QUEUE_SZ_64K};
> +	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
> +}
> +
> +int
> +nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
> +{
> +	/* Disable receive queue */
> +	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
> +	return nicvf_mbox_rq_sync(nic);
> +}
> +
> +int
> +nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
> +{
> +	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
> +	struct rq_cfg rq_cfg = {.value = 0};
> +
> +	if (nicvf_qset_rq_reclaim(nic, qidx))
> +		return NICVF_ERR_RQ_CLAIM;
> +
> +	pf_rq_cfg.strip_pre_l2 = 0;
> +	/* First cache line of RBDR data will be allocated into L2C */
> +	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
> +	pf_rq_cfg.cq_qs = nic->vf_id;
> +	pf_rq_cfg.cq_idx = qidx;
> +	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
> +	pf_rq_cfg.rbdr_cont_idx = 0;
> +	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
> +	pf_rq_cfg.rbdr_strt_idx = 0;
> +
> +	/* Send a mailbox msg to PF to config RQ */
> +	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
> +		return NICVF_ERR_RQ_PF_CFG;
> +
> +	/* Select Rx backpressure */
> +	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
> +		return NICVF_ERR_RQ_BP_CFG;
> +
> +	/* Send a mailbox msg to PF to config RQ drop */
> +	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
> +		return NICVF_ERR_RQ_DROP_CFG;
> +
> +	/* Enable Receive queue */
> +	rq_cfg.ena = 1;
> +	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
> +
> +	return 0;
> +}
> +
> +int
> +nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
> +{
> +	uint64_t tail, head;
> +
> +	/* Disable completion queue */
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
> +	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
> +		return NICVF_ERR_CQ_DISABLE;
> +
> +	/* Reset completion queue */
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
> +	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
> +	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
> +	if (head | tail)
> +		return  NICVF_ERR_CQ_RESET;
> +
> +	/* Disable timer threshold (doesn't get reset upon CQ reset) */
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
> +	return 0;
> +}
> +
> +int
> +nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
> +{
> +	int ret;
> +	struct cq_cfg cq_cfg = {.value = 0};
> +
> +	ret = nicvf_qset_cq_reclaim(nic, qidx);
> +	if (ret)
> +		return ret;
> +
> +	/* Set completion queue base address */
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
> +
> +	cq_cfg.ena = 1;
> +	cq_cfg.reset = 0;
> +	/* Writes of CQE will be allocated into L2C */
> +	cq_cfg.caching = 1;
> +	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
> +	cq_cfg.avg_con = 0;
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
> +
> +	/* Set threshold value for interrupt generation */
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
> +	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
> +	return 0;
> +}
> +
> +uint32_t
> +nicvf_qsize_cq_roundup(uint32_t val)
> +{
> +	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
> +				CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
> +				CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
> +				CMP_QUEUE_SZ_64K};
> +	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
> +}
> +
> +
> +void
> +nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
> +{
> +	uint64_t val;
> +
> +	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
> +	if (enable)
> +		val |= (STRIP_FIRST_VLAN << 25);
> +	else
> +		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
> +
> +	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
> +}
> +
> +void
> +nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
> +{
> +	int idx;
> +	uint64_t addr, val;
> +	uint64_t *keyptr = (uint64_t *)key;
> +
> +	addr = NIC_VNIC_RSS_KEY_0_4;
> +	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
> +		val = nicvf_cpu_to_be_64(*keyptr);
> +		nicvf_reg_write(nic, addr, val);
> +		addr += sizeof(uint64_t);
> +		keyptr++;
> +	}
> +}
> +
> +void
> +nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
> +{
> +	int idx;
> +	uint64_t addr, val;
> +	uint64_t *keyptr = (uint64_t *)key;
> +
> +	addr = NIC_VNIC_RSS_KEY_0_4;
> +	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
> +		val = nicvf_reg_read(nic, addr);
> +		*keyptr = nicvf_be_to_cpu_64(val);
> +		addr += sizeof(uint64_t);
> +		keyptr++;
> +	}
> +}
> +
> +void
> +nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
> +{
> +	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
> +}
> +
> +uint64_t
> +nicvf_rss_get_cfg(struct nicvf *nic)
> +{
> +	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
> +}
> +
> +int
> +nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
> +{
> +	uint32_t idx;
> +	struct nicvf_rss_reta_info *rss = &nic->rss_info;
> +
> +	/* result will be stored in nic->rss_info.rss_size */
> +	if (nicvf_mbox_get_rss_size(nic))
> +		return NICVF_ERR_RSS_GET_SZ;
> +
> +	assert(rss->rss_size > 0);
> +	rss->hash_bits = (uint8_t)log2(rss->rss_size);
> +	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
> +		rss->ind_tbl[idx] = tbl[idx];
> +
> +	if (nicvf_mbox_config_rss(nic))
> +		return NICVF_ERR_RSS_TBL_UPDATE;
> +
> +	return NICVF_OK;
> +}
> +
> +int
> +nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
> +{
> +	uint32_t idx;
> +	struct nicvf_rss_reta_info *rss = &nic->rss_info;
> +
> +	/* result will be stored in nic->rss_info.rss_size */
> +	if (nicvf_mbox_get_rss_size(nic))
> +		return NICVF_ERR_RSS_GET_SZ;
> +
> +	assert(rss->rss_size > 0);
> +	rss->hash_bits = (uint8_t)log2(rss->rss_size);
> +	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
> +		tbl[idx] = rss->ind_tbl[idx];
> +
> +	return NICVF_OK;
> +}
> +
> +int
> +nicvf_rss_config(struct nicvf *nic, uint32_t  qcnt, uint64_t cfg)
> +{
> +	uint32_t idx;
> +	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
> +	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
> +		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
> +		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
> +		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
> +		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
> +		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
> +	};
> +
> +	if (nic->cpi_alg != CPI_ALG_NONE)
> +		return -EINVAL;
> +
> +	if (cfg == 0)
> +		return -EINVAL;
> +
> +	/* Update default RSS key and cfg */
> +	nicvf_rss_set_key(nic, default_key);
> +	nicvf_rss_set_cfg(nic, cfg);
> +
> +	/* Update default RSS RETA */
> +	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
> +		default_reta[idx] = idx % qcnt;
> +
> +	return nicvf_rss_reta_update(nic, default_reta,
> +				NIC_MAX_RSS_IDR_TBL_SIZE);
> +}
> +
> +int
> +nicvf_rss_term(struct nicvf *nic)
> +{
> +	uint32_t idx;
> +	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
> +
> +	nicvf_rss_set_cfg(nic, 0);
> +	/* Redirect the output to 0th queue  */
> +	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
> +		disable_rss[idx] = 0;
> +
> +	return nicvf_rss_reta_update(nic, disable_rss,
> +				NIC_MAX_RSS_IDR_TBL_SIZE);
> +}
> +
> +int
> +nicvf_loopback_config(struct nicvf *nic, bool enable)
> +{
> +	if (enable && nic->loopback_supported == 0)
> +		return NICVF_ERR_LOOPBACK_CFG;
> +
> +	return nicvf_mbox_loopback_config(nic, enable);
> +}
> +
> +void
> +nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
> +{
> +	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
> +	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
> +	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
> +	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
> +	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
> +	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
> +	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
> +	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
> +	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
> +	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
> +	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
> +	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
> +	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
> +	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
> +
> +	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
> +	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
> +	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
> +	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
> +	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
> +}
> +
> +void
> +nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
> +		       uint16_t qidx)
> +{
> +	qstats->q_rx_bytes =
> +		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
> +	qstats->q_rx_packets =
> +		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
> +}
> +
> +void
> +nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
> +		       uint16_t qidx)
> +{
> +	qstats->q_tx_bytes =
> +		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
> +	qstats->q_tx_packets =
> +		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
> +}
> diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
> new file mode 100644
> index 0000000..32357cc
> --- /dev/null
> +++ b/drivers/net/thunderx/base/nicvf_hw.h
> @@ -0,0 +1,240 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _THUNDERX_NICVF_HW_H
> +#define _THUNDERX_NICVF_HW_H
> +
> +#include <stdint.h>
> +
> +#include "nicvf_hw_defs.h"
> +
> +#define	PCI_VENDOR_ID_CAVIUM			0x177D
> +#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
> +#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
> +#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
> +#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
> +
> +#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
> +
> +#define NICVF_GET_RX_STATS(reg) \
> +	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | (reg << 3))
> +#define NICVF_GET_TX_STATS(reg) \
> +	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | (reg << 3))
> +
> +#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
> +#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
> +
> +#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
> +
> +enum nicvf_tns_mode {
> +	NIC_TNS_BYPASS_MODE = 0,
unnecessary assignment
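The first enumerator is 0 by default, so dropping the assignment keeps
the same values:

	enum nicvf_tns_mode {
		NIC_TNS_BYPASS_MODE,
		NIC_TNS_MODE,
	};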

> +	NIC_TNS_MODE,
> +};
> +
> +enum nicvf_err_e {
> +	NICVF_OK = 0,
unnecessary assignment

> +	NICVF_ERR_SET_QS = -8191,/* -8191 */
> +	NICVF_ERR_RESET_QS,      /* -8190 */
> +	NICVF_ERR_REG_POLL,      /* -8189 */
> +	NICVF_ERR_RBDR_RESET,    /* -8188 */
> +	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
> +	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
> +	NICVF_ERR_RBDR_RESET1,   /* -8185 */
> +	NICVF_ERR_RBDR_RESET2,   /* -8184 */
> +	NICVF_ERR_RQ_CLAIM,      /* -8183 */
> +	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
> +	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
> +	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
> +	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
> +	NICVF_ERR_CQ_RESET,	 /* -8178 */
> +	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
> +	NICVF_ERR_SQ_RESET,	 /* -8176 */
> +	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
> +	NICVF_ERR_RSS_TBL_UPDATE,/* -8174 */
> +	NICVF_ERR_RSS_GET_SZ,    /* -8173 */
> +	NICVF_ERR_BASE_INIT,     /* -8172 */
> +	NICVF_ERR_LOOPBACK_CFG,  /* -8171 */
> +};
> +
> +typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
> +
> +struct nicvf_hw_rx_qstats {
> +	uint64_t q_rx_bytes;
> +	uint64_t q_rx_packets;
> +};
> +
> +struct nicvf_hw_tx_qstats {
> +	uint64_t q_tx_bytes;
> +	uint64_t q_tx_packets;
> +};
> +
> +struct nicvf_hw_stats {
> +	uint64_t rx_bytes;
> +	uint64_t rx_ucast_frames;
> +	uint64_t rx_bcast_frames;
> +	uint64_t rx_mcast_frames;
> +	uint64_t rx_fcs_errors;
> +	uint64_t rx_l2_errors;
> +	uint64_t rx_drop_red;
> +	uint64_t rx_drop_red_bytes;
> +	uint64_t rx_drop_overrun;
> +	uint64_t rx_drop_overrun_bytes;
> +	uint64_t rx_drop_bcast;
> +	uint64_t rx_drop_mcast;
> +	uint64_t rx_drop_l3_bcast;
> +	uint64_t rx_drop_l3_mcast;
> +
> +	uint64_t tx_bytes_ok;
> +	uint64_t tx_ucast_frames_ok;
> +	uint64_t tx_bcast_frames_ok;
> +	uint64_t tx_mcast_frames_ok;
> +	uint64_t tx_drops;
> +};
> +
> +struct nicvf_rss_reta_info {
> +	uint8_t hash_bits;
> +	uint16_t rss_size;
> +	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
> +};
> +
> +/* Common structs used in DPDK and base layer are defined in DPDK layer */
> +#include "../nicvf_struct.h"
> +
> +NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
> +NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
> +NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
> +
> +static inline void
> +nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
> +{
> +	nicvf_addr_write(nic->reg_base + offset, val);
> +}
> +
> +static inline uint64_t
> +nicvf_reg_read(struct nicvf *nic, uint32_t offset)
> +{
> +	return nicvf_addr_read(nic->reg_base + offset);
> +}
> +
> +static inline uintptr_t
> +nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
> +{
> +	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
> +}
> +
> +static inline void
> +nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
> +		      uint64_t val)
> +{
> +	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
> +}
> +
> +static inline uint64_t
> +nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
> +{
> +	return	nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
> +}
> +
> +static inline void
> +nicvf_disable_all_interrupts(struct nicvf *nic)
> +{
> +	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
> +	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
> +}
> +
> +static inline uint32_t
> +nicvf_hw_version(struct nicvf *nic)
> +{
> +	return nic->subsystem_device_id;
> +}
> +
> +static inline uint64_t
> +nicvf_hw_cap(struct nicvf *nic)
> +{
> +	return nic->hwcap;
> +}
> +
> +int nicvf_base_init(struct nicvf *nic);
> +
> +int nicvf_reg_get_count(void);
> +int nicvf_reg_poll_interrupts(struct nicvf *nic);
> +int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
> +
> +int nicvf_qset_config(struct nicvf *nic);
> +int nicvf_qset_reclaim(struct nicvf *nic);
> +
> +int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
> +int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
> +int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
> +			      rbdr_pool_get_handler handler, void *opaque,
> +			      uint32_t max_buffs);
> +int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
> +
> +int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
> +			 struct nicvf_rxq *rxq);
> +int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
> +
> +int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
> +			 struct nicvf_rxq *rxq);
> +int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
> +
> +int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
> +			 struct nicvf_txq *txq);
> +int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
> +
> +uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
> +uint32_t nicvf_qsize_cq_roundup(uint32_t val);
> +uint32_t nicvf_qsize_sq_roundup(uint32_t val);
> +
> +void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
> +
> +int nicvf_rss_config(struct nicvf *nic, uint32_t  qcnt, uint64_t cfg);
> +int nicvf_rss_term(struct nicvf *nic);
> +
> +int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
> +int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
> +
> +void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
> +void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
> +
> +void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
> +uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
> +
> +int nicvf_loopback_config(struct nicvf *nic, bool enable);
> +
> +void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
> +void nicvf_hw_get_rx_qstats(struct nicvf *nic,
> +			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
> +void nicvf_hw_get_tx_qstats(struct nicvf *nic,
> +			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
> +
> +#endif /* _THUNDERX_NICVF_HW_H */
> diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
> new file mode 100644
> index 0000000..ef9354b
> --- /dev/null
> +++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
> @@ -0,0 +1,1216 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _THUNDERX_NICVF_HW_DEFS_H
> +#define _THUNDERX_NICVF_HW_DEFS_H
> +
> +#include <stdint.h>
> +#include <stdbool.h>
> +
> +/* Virtual function register offsets */
> +
> +#define NIC_VF_CFG                      (0x000020)
> +#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
> +#define NIC_VF_INT                      (0x000200)
> +#define NIC_VF_INT_W1S                  (0x000220)
> +#define NIC_VF_ENA_W1C                  (0x000240)
> +#define NIC_VF_ENA_W1S                  (0x000260)
> +
> +#define NIC_VNIC_RSS_CFG                (0x0020E0)
> +#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
> +#define NIC_VNIC_TX_STAT_0_4            (0x004000)
> +#define NIC_VNIC_RX_STAT_0_13           (0x004100)
> +#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
> +
> +#define NIC_QSET_CQ_0_7_CFG             (0x010400)
> +#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
> +#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
> +#define NIC_QSET_CQ_0_7_BASE            (0x010420)
> +#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
> +#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
> +#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
> +#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
> +#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
> +#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
> +
> +#define NIC_QSET_RQ_0_7_CFG             (0x010600)
> +#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
> +#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
> +
> +#define NIC_QSET_SQ_0_7_CFG             (0x010800)
> +#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
> +#define NIC_QSET_SQ_0_7_BASE            (0x010820)
> +#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
> +#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
> +#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
> +#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
> +#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
> +#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
> +#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
> +
> +#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
> +#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
> +#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
> +#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
> +#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
> +#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
> +#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
> +#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
> +#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
> +
> +/* vNIC HW Constants */
> +
> +#define NIC_Q_NUM_SHIFT                 18
> +
> +#define MAX_QUEUE_SET                   128
> +#define MAX_RCV_QUEUES_PER_QS           8
> +#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
> +#define MAX_SND_QUEUES_PER_QS           8
> +#define MAX_CMP_QUEUES_PER_QS           8
> +
> +#define NICVF_INTR_CQ_SHIFT             0
> +#define NICVF_INTR_SQ_SHIFT             8
> +#define NICVF_INTR_RBDR_SHIFT           16
> +#define NICVF_INTR_PKT_DROP_SHIFT       20
> +#define NICVF_INTR_TCP_TIMER_SHIFT      21
> +#define NICVF_INTR_MBOX_SHIFT           22
> +#define NICVF_INTR_QS_ERR_SHIFT         23
> +
> +#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
> +#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
> +#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
> +#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
> +#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
> +#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
> +#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
> +#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
> +
> +#define NICVF_CQ_WR_FULL                (1ULL << 26)
> +#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
> +#define NICVF_CQ_WR_FAULT               (1ULL << 24)
> +#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
> +					 NICVF_CQ_WR_DISABLE |\
> +					 NICVF_CQ_WR_FAULT)
> +#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
> +
> +#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
> +#define NICVF_SQ_ERR_SEND               (1ULL << 20)
> +#define NICVF_SQ_ERR_DPE                (1ULL << 19)
> +#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
> +					 NICVF_SQ_ERR_SEND |\
> +					 NICVF_SQ_ERR_DPE)
> +#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
> +
> +#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
> +#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
> +#define NICVF_RBDR_COUNT_MASK           (0x7FFFF)
> +
> +/* Queue reset */
> +#define NICVF_CQ_RESET                  (1ULL << 41)
> +#define NICVF_SQ_RESET                  (1ULL << 17)
> +#define NICVF_RBDR_RESET                (1ULL << 43)
> +
> +/* RSS constants */
> +#define NIC_MAX_RSS_HASH_BITS           (8)
> +#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
> +#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key */
> +#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key */
> +
> +#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
> +#define RSS_IP_ENA                      (1 << 1)
> +#define RSS_TCP_ENA                     (1 << 2)
> +#define RSS_TCP_SYN_ENA                 (1 << 3)
> +#define RSS_UDP_ENA                     (1 << 4)
> +#define RSS_L4_EXTENDED_ENA             (1 << 5)
> +#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
> +#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
> +#define RSS_TUN_VXLAN_ENA               (1 << 9)
> +#define RSS_TUN_GENEVE_ENA              (1 << 10)
> +#define RSS_TUN_NVGRE_ENA               (1 << 11)
> +
> +#define RBDR_QUEUE_SZ_8K                (8 * 1024)
> +#define RBDR_QUEUE_SZ_16K               (16 * 1024)
> +#define RBDR_QUEUE_SZ_32K               (32 * 1024)
> +#define RBDR_QUEUE_SZ_64K               (64 * 1024)
> +#define RBDR_QUEUE_SZ_128K              (128 * 1024)
> +#define RBDR_QUEUE_SZ_256K              (256 * 1024)
> +#define RBDR_QUEUE_SZ_512K              (512 * 1024)
> +
> +#define RBDR_SIZE_SHIFT                 (13) /* 8k */
> +
> +#define SND_QUEUE_SZ_1K                 (1 * 1024)
> +#define SND_QUEUE_SZ_2K                 (2 * 1024)
> +#define SND_QUEUE_SZ_4K                 (4 * 1024)
> +#define SND_QUEUE_SZ_8K                 (8 * 1024)
> +#define SND_QUEUE_SZ_16K                (16 * 1024)
> +#define SND_QUEUE_SZ_32K                (32 * 1024)
> +#define SND_QUEUE_SZ_64K                (64 * 1024)
> +
> +#define SND_QSIZE_SHIFT                 (10) /* 1k */
> +
> +#define CMP_QUEUE_SZ_1K                 (1 * 1024)
> +#define CMP_QUEUE_SZ_2K                 (2 * 1024)
> +#define CMP_QUEUE_SZ_4K                 (4 * 1024)
> +#define CMP_QUEUE_SZ_8K                 (8 * 1024)
> +#define CMP_QUEUE_SZ_16K                (16 * 1024)
> +#define CMP_QUEUE_SZ_32K                (32 * 1024)
> +#define CMP_QUEUE_SZ_64K                (64 * 1024)
> +
> +#define CMP_QSIZE_SHIFT                 (10) /* 1k */
> +
> +/* Min/Max packet size */
> +#define NIC_HW_MIN_FRS			64
> +#define NIC_HW_MAX_FRS			9200 /* 9216 max packet including FCS */
> +#define NIC_HW_MAX_SEGS			12
> +
> +/* Descriptor alignments */
> +#define NICVF_RBDR_BASE_ALIGN_BYTES	128 /* 7 bits */
> +#define NICVF_CQ_BASE_ALIGN_BYTES	512 /* 9 bits */
> +#define NICVF_SQ_BASE_ALIGN_BYTES	128 /* 7 bits */
> +
> +/* vNIC HW Enumerations */
> +
> +enum nic_send_ld_type_e {
> +	NIC_SEND_LD_TYPE_E_LDD = 0x0,
> +	NIC_SEND_LD_TYPE_E_LDT = 0x1,
> +	NIC_SEND_LD_TYPE_E_LDWB = 0x2,
> +	NIC_SEND_LD_TYPE_E_ENUM_LAST = 0x3,
unnecessary assignments; C enumerators auto-increment from the previous
value, so the compiler assigns these constants anyway

> +};
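For reference, the same values fall out without the initializers, since
each enumerator continues from the previous one:

enum nic_send_ld_type_e {
	NIC_SEND_LD_TYPE_E_LDD,		/* 0x0, enumeration starts at 0 */
	NIC_SEND_LD_TYPE_E_LDT,		/* 0x1 */
	NIC_SEND_LD_TYPE_E_LDWB,	/* 0x2 */
	NIC_SEND_LD_TYPE_E_ENUM_LAST,	/* 0x3 */
};

The same remark applies to the other sequentially-numbered enums
flagged below.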
> +
> +enum ether_type_algorithm {
> +	ETYPE_ALG_NONE = 0x0,
> +	ETYPE_ALG_SKIP = 0x1,
> +	ETYPE_ALG_ENDPARSE = 0x2,
> +	ETYPE_ALG_VLAN = 0x3,
> +	ETYPE_ALG_VLAN_STRIP = 0x4,
unnecessary assignment
> +};
> +
> +enum layer3_type {
> +	L3TYPE_NONE = 0x0,
> +	L3TYPE_GRH = 0x1,
unnecessary assignment
> +	L3TYPE_IPV4 = 0x4,
> +	L3TYPE_IPV4_OPTIONS = 0x5,
> +	L3TYPE_IPV6 = 0x6,
> +	L3TYPE_IPV6_OPTIONS = 0x7,
> +	L3TYPE_ET_STOP = 0xD,
> +	L3TYPE_OTHER = 0xE,
> +};
> +
> +#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
> +#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
> +
> +enum layer4_type {
> +	L4TYPE_NONE = 0x0,
> +	L4TYPE_IPSEC_ESP = 0x1,
> +	L4TYPE_IPFRAG = 0x2,
> +	L4TYPE_IPCOMP = 0x3,
> +	L4TYPE_TCP = 0x4,
> +	L4TYPE_UDP = 0x5,
> +	L4TYPE_SCTP = 0x6,
> +	L4TYPE_GRE = 0x7,
> +	L4TYPE_ROCE_BTH = 0x8,
unnecessary assignment
> +	L4TYPE_OTHER = 0xE,
> +};
> +
> +/* CPI and RSSI configuration */
> +enum cpi_algorithm_type {
> +	CPI_ALG_NONE = 0x0,
> +	CPI_ALG_VLAN = 0x1,
> +	CPI_ALG_VLAN16 = 0x2,
> +	CPI_ALG_DIFF = 0x3,
unnecessary assignment; more instances below
> +};
> +
> +enum rss_algorithm_type {
> +	RSS_ALG_NONE = 0x00,
> +	RSS_ALG_PORT = 0x01,
> +	RSS_ALG_IP = 0x02,
> +	RSS_ALG_TCP_IP = 0x03,
> +	RSS_ALG_UDP_IP = 0x04,
> +	RSS_ALG_SCTP_IP = 0x05,
> +	RSS_ALG_GRE_IP = 0x06,
> +	RSS_ALG_ROCE = 0x07,
> +};
> +
> +enum rss_hash_cfg {
> +	RSS_HASH_L2ETC = 0x00,
> +	RSS_HASH_IP = 0x01,
> +	RSS_HASH_TCP = 0x02,
> +	RSS_HASH_TCP_SYN_DIS = 0x03,
> +	RSS_HASH_UDP = 0x04,
> +	RSS_HASH_L4ETC = 0x05,
> +	RSS_HASH_ROCE = 0x06,
> +	RSS_L3_BIDI = 0x07,
> +	RSS_L4_BIDI = 0x08,
> +};
> +
> +/* Completion queue entry types */
> +enum cqe_type {
> +	CQE_TYPE_INVALID = 0x0,
> +	CQE_TYPE_RX = 0x2,
> +	CQE_TYPE_RX_SPLIT = 0x3,
> +	CQE_TYPE_RX_TCP = 0x4,
> +	CQE_TYPE_SEND = 0x8,
> +	CQE_TYPE_SEND_PTP = 0x9,
> +};
> +
> +enum cqe_rx_tcp_status {
> +	CQE_RX_STATUS_VALID_TCP_CNXT = 0x00,
> +	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
> +};
> +
> +enum cqe_send_status {
> +	CQE_SEND_STATUS_GOOD = 0x00,
> +	CQE_SEND_STATUS_DESC_FAULT = 0x01,
> +	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
> +	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
> +	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
> +	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
> +	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
> +	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
> +	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
> +	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
> +	CQE_SEND_STATUS_DATA_FAULT = 0x86,
> +	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
> +	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
> +	CQE_SEND_STATUS_MEM_FAULT = 0x89,
> +	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
> +	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
> +};
> +
> +enum cqe_rx_tcp_end_reason {
> +	CQE_RX_TCP_END_FIN_FLAG_DET = 0,
> +	CQE_RX_TCP_END_INVALID_FLAG = 1,
> +	CQE_RX_TCP_END_TIMEOUT = 2,
> +	CQE_RX_TCP_END_OUT_OF_SEQ = 3,
> +	CQE_RX_TCP_END_PKT_ERR = 4,
> +	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
> +};
> +
> +/* Packet protocol level error enumeration */
> +enum cqe_rx_err_level {
> +	CQE_RX_ERRLVL_RE = 0x0,
> +	CQE_RX_ERRLVL_L2 = 0x1,
> +	CQE_RX_ERRLVL_L3 = 0x2,
> +	CQE_RX_ERRLVL_L4 = 0x3,
> +};
> +
> +/* Packet protocol level error type enumeration */
> +enum cqe_rx_err_opcode {
> +	CQE_RX_ERR_RE_NONE = 0x0,
> +	CQE_RX_ERR_RE_PARTIAL = 0x1,
> +	CQE_RX_ERR_RE_JABBER = 0x2,
> +	CQE_RX_ERR_RE_FCS = 0x7,
> +	CQE_RX_ERR_RE_TERMINATE = 0x9,
> +	CQE_RX_ERR_RE_RX_CTL = 0xb,
> +	CQE_RX_ERR_PREL2_ERR = 0x1f,
> +	CQE_RX_ERR_L2_FRAGMENT = 0x20,
> +	CQE_RX_ERR_L2_OVERRUN = 0x21,
> +	CQE_RX_ERR_L2_PFCS = 0x22,
> +	CQE_RX_ERR_L2_PUNY = 0x23,
> +	CQE_RX_ERR_L2_MAL = 0x24,
> +	CQE_RX_ERR_L2_OVERSIZE = 0x25,
> +	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
> +	CQE_RX_ERR_L2_LENMISM = 0x27,
> +	CQE_RX_ERR_L2_PCLP = 0x28,
> +	CQE_RX_ERR_IP_NOT = 0x41,
> +	CQE_RX_ERR_IP_CHK = 0x42,
> +	CQE_RX_ERR_IP_MAL = 0x43,
> +	CQE_RX_ERR_IP_MALD = 0x44,
> +	CQE_RX_ERR_IP_HOP = 0x45,
> +	CQE_RX_ERR_L3_ICRC = 0x46,
> +	CQE_RX_ERR_L3_PCLP = 0x47,
> +	CQE_RX_ERR_L4_MAL = 0x61,
> +	CQE_RX_ERR_L4_CHK = 0x62,
> +	CQE_RX_ERR_UDP_LEN = 0x63,
> +	CQE_RX_ERR_L4_PORT = 0x64,
> +	CQE_RX_ERR_TCP_FLAG = 0x65,
> +	CQE_RX_ERR_TCP_OFFSET = 0x66,
> +	CQE_RX_ERR_L4_PCLP = 0x67,
> +	CQE_RX_ERR_RBDR_TRUNC = 0x70,
> +};
> +
> +enum send_l4_csum_type {
> +	SEND_L4_CSUM_DISABLE = 0x00,
> +	SEND_L4_CSUM_UDP = 0x01,
> +	SEND_L4_CSUM_TCP = 0x02,
> +};
> +
> +enum send_crc_alg {
> +	SEND_CRCALG_CRC32 = 0x00,
> +	SEND_CRCALG_CRC32C = 0x01,
> +	SEND_CRCALG_ICRC = 0x02,
> +};
> +
> +enum send_load_type {
> +	SEND_LD_TYPE_LDD = 0x00,
> +	SEND_LD_TYPE_LDT = 0x01,
> +	SEND_LD_TYPE_LDWB = 0x02,
> +};
> +
> +enum send_mem_alg_type {
> +	SEND_MEMALG_SET = 0x00,
> +	SEND_MEMALG_ADD = 0x08,
> +	SEND_MEMALG_SUB = 0x09,
> +	SEND_MEMALG_ADDLEN = 0x0A,
> +	SEND_MEMALG_SUBLEN = 0x0B,
> +};
> +
> +enum send_mem_dsz_type {
> +	SEND_MEMDSZ_B64 = 0x00,
> +	SEND_MEMDSZ_B32 = 0x01,
> +	SEND_MEMDSZ_B8 = 0x03,
> +};
> +
> +enum sq_subdesc_type {
> +	SQ_DESC_TYPE_INVALID = 0x00,
> +	SQ_DESC_TYPE_HEADER = 0x01,
> +	SQ_DESC_TYPE_CRC = 0x02,
> +	SQ_DESC_TYPE_IMMEDIATE = 0x03,
> +	SQ_DESC_TYPE_GATHER = 0x04,
> +	SQ_DESC_TYPE_MEMORY = 0x05,
> +};
> +
> +enum l3_type_t {
> +	L3_NONE		= 0x00,
> +	L3_IPV4		= 0x04,
> +	L3_IPV4_OPT	= 0x05,
> +	L3_IPV6		= 0x06,
> +	L3_IPV6_OPT	= 0x07,
> +	L3_ET_STOP	= 0x0D,
> +	L3_OTHER	= 0x0E
> +};
> +
> +enum l4_type_t {
> +	L4_NONE		= 0x00,
> +	L4_IPSEC_ESP	= 0x01,
> +	L4_IPFRAG	= 0x02,
> +	L4_IPCOMP	= 0x03,
> +	L4_TCP		= 0x04,
> +	L4_UDP_PASS1	= 0x05,
> +	L4_GRE		= 0x07,
> +	L4_UDP_PASS2	= 0x08,
> +	L4_UDP_GENEVE	= 0x09,
> +	L4_UDP_VXLAN	= 0x0A,
> +	L4_NVGRE	= 0x0C,
> +	L4_OTHER	= 0x0E
> +};
> +
> +enum vlan_strip {
> +	NO_STRIP = 0x0,
> +	STRIP_FIRST_VLAN = 0x1,
> +	STRIP_SECOND_VLAN = 0x2,
> +	STRIP_RESERV = 0x3
> +};
> +
> +enum rbdr_state {
> +	RBDR_FIFO_STATE_INACTIVE = 0,
> +	RBDR_FIFO_STATE_ACTIVE   = 1,
> +	RBDR_FIFO_STATE_RESET    = 2,
> +	RBDR_FIFO_STATE_FAIL     = 3
> +};
> +
> +enum rq_cache_allocation {
> +	RQ_CACHE_ALLOC_OFF      = 0,
> +	RQ_CACHE_ALLOC_ALL      = 1,
> +	RQ_CACHE_ALLOC_FIRST    = 2,
> +	RQ_CACHE_ALLOC_TWO      = 3,
> +};
> +
> +enum cq_rx_errlvl_e {
> +	CQ_ERRLVL_MAC,
> +	CQ_ERRLVL_L2,
> +	CQ_ERRLVL_L3,
> +	CQ_ERRLVL_L4,
> +};
> +
> +enum cq_rx_errop_e {
> +	CQ_RX_ERROP_RE_NONE = 0x0,
> +	CQ_RX_ERROP_RE_PARTIAL = 0x1,
> +	CQ_RX_ERROP_RE_JABBER = 0x2,
> +	CQ_RX_ERROP_RE_FCS = 0x7,
> +	CQ_RX_ERROP_RE_TERMINATE = 0x9,
> +	CQ_RX_ERROP_RE_RX_CTL = 0xb,
> +	CQ_RX_ERROP_PREL2_ERR = 0x1f,
> +	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
> +	CQ_RX_ERROP_L2_OVERRUN = 0x21,
> +	CQ_RX_ERROP_L2_PFCS = 0x22,
> +	CQ_RX_ERROP_L2_PUNY = 0x23,
> +	CQ_RX_ERROP_L2_MAL = 0x24,
> +	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
> +	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
> +	CQ_RX_ERROP_L2_LENMISM = 0x27,
> +	CQ_RX_ERROP_L2_PCLP = 0x28,
> +	CQ_RX_ERROP_IP_NOT = 0x41,
> +	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
> +	CQ_RX_ERROP_IP_MAL = 0x43,
> +	CQ_RX_ERROP_IP_MALD = 0x44,
> +	CQ_RX_ERROP_IP_HOP = 0x45,
> +	CQ_RX_ERROP_L3_ICRC = 0x46,
> +	CQ_RX_ERROP_L3_PCLP = 0x47,
> +	CQ_RX_ERROP_L4_MAL = 0x61,
> +	CQ_RX_ERROP_L4_CHK = 0x62,
> +	CQ_RX_ERROP_UDP_LEN = 0x63,
> +	CQ_RX_ERROP_L4_PORT = 0x64,
> +	CQ_RX_ERROP_TCP_FLAG = 0x65,
> +	CQ_RX_ERROP_TCP_OFFSET = 0x66,
> +	CQ_RX_ERROP_L4_PCLP = 0x67,
> +	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
> +};
> +
> +enum cq_tx_errop_e {
> +	CQ_TX_ERROP_GOOD = 0x0,
> +	CQ_TX_ERROP_DESC_FAULT = 0x10,
> +	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
> +	CQ_TX_ERROP_SUBDC_ERR = 0x12,
> +	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
> +	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
> +	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
> +	CQ_TX_ERROP_LOCK_VIOL = 0x83,
> +	CQ_TX_ERROP_DATA_FAULT = 0x84,
> +	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
> +	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
> +	CQ_TX_ERROP_MEM_FAULT = 0x87,
> +	CQ_TX_ERROP_CK_OVERLAP = 0x88,
> +	CQ_TX_ERROP_CK_OFLOW = 0x89,
> +	CQ_TX_ERROP_ENUM_LAST = 0x8a,
> +};
> +
> +enum rq_sq_stats_reg_offset {
> +	RQ_SQ_STATS_OCTS = 0x0,
> +	RQ_SQ_STATS_PKTS = 0x1,
> +};
> +
> +enum nic_stat_vnic_rx_e {
> +	RX_OCTS = 0,
> +	RX_UCAST,
> +	RX_BCAST,
> +	RX_MCAST,
> +	RX_RED,
> +	RX_RED_OCTS,
> +	RX_ORUN,
> +	RX_ORUN_OCTS,
> +	RX_FCS,
> +	RX_L2ERR,
> +	RX_DRP_BCAST,
> +	RX_DRP_MCAST,
> +	RX_DRP_L3BCAST,
> +	RX_DRP_L3MCAST,
> +};
> +
> +enum nic_stat_vnic_tx_e {
> +	TX_OCTS = 0,
> +	TX_UCAST,
> +	TX_BCAST,
> +	TX_MCAST,
> +	TX_DROP,
> +};
> +
> +#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
> +
> +typedef uint64_t nicvf_phys_addr_t;
> +
> +#ifndef __BYTE_ORDER__
> +#error __BYTE_ORDER__ not defined
> +#endif
> +
> +/* vNIC HW Structures */
> +
> +#define NICVF_CQE_RBPTR_WORD         6
> +#define NICVF_CQE_RX2_RBPTR_WORD     7
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint64_t cqe_type:4;
> +		uint64_t stdn_fault:1;
> +		uint64_t rsvd0:1;
> +		uint64_t rq_qs:7;
> +		uint64_t rq_idx:3;
> +		uint64_t rsvd1:12;
> +		uint64_t rss_alg:4;
> +		uint64_t rsvd2:4;
> +		uint64_t rb_cnt:4;
> +		uint64_t vlan_found:1;
> +		uint64_t vlan_stripped:1;
> +		uint64_t vlan2_found:1;
> +		uint64_t vlan2_stripped:1;
> +		uint64_t l4_type:4;
> +		uint64_t l3_type:4;
> +		uint64_t l2_present:1;
> +		uint64_t err_level:3;
> +		uint64_t err_opcode:8;
> +#else
> +		uint64_t err_opcode:8;
> +		uint64_t err_level:3;
> +		uint64_t l2_present:1;
> +		uint64_t l3_type:4;
> +		uint64_t l4_type:4;
> +		uint64_t vlan2_stripped:1;
> +		uint64_t vlan2_found:1;
> +		uint64_t vlan_stripped:1;
> +		uint64_t vlan_found:1;
> +		uint64_t rb_cnt:4;
> +		uint64_t rsvd2:4;
> +		uint64_t rss_alg:4;
> +		uint64_t rsvd1:12;
> +		uint64_t rq_idx:3;
> +		uint64_t rq_qs:7;
> +		uint64_t rsvd0:1;
> +		uint64_t stdn_fault:1;
> +		uint64_t cqe_type:4;
> +#endif
> +	};
> +} cqe_rx_word0_t;
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint64_t pkt_len:16;
> +		uint64_t l2_ptr:8;
> +		uint64_t l3_ptr:8;
> +		uint64_t l4_ptr:8;
> +		uint64_t cq_pkt_len:8;
> +		uint64_t align_pad:3;
> +		uint64_t rsvd3:1;
> +		uint64_t chan:12;
> +#else
> +		uint64_t chan:12;
> +		uint64_t rsvd3:1;
> +		uint64_t align_pad:3;
> +		uint64_t cq_pkt_len:8;
> +		uint64_t l4_ptr:8;
> +		uint64_t l3_ptr:8;
> +		uint64_t l2_ptr:8;
> +		uint64_t pkt_len:16;
> +#endif
> +	};
> +} cqe_rx_word1_t;
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint64_t rss_tag:32;
> +		uint64_t vlan_tci:16;
> +		uint64_t vlan_ptr:8;
> +		uint64_t vlan2_ptr:8;
> +#else
> +		uint64_t vlan2_ptr:8;
> +		uint64_t vlan_ptr:8;
> +		uint64_t vlan_tci:16;
> +		uint64_t rss_tag:32;
> +#endif
> +	};
> +} cqe_rx_word2_t;
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint16_t rb3_sz;
> +		uint16_t rb2_sz;
> +		uint16_t rb1_sz;
> +		uint16_t rb0_sz;
> +#else
> +		uint16_t rb0_sz;
> +		uint16_t rb1_sz;
> +		uint16_t rb2_sz;
> +		uint16_t rb3_sz;
> +#endif
> +	};
> +} cqe_rx_word3_t;
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint16_t rb7_sz;
> +		uint16_t rb6_sz;
> +		uint16_t rb5_sz;
> +		uint16_t rb4_sz;
> +#else
> +		uint16_t rb4_sz;
> +		uint16_t rb5_sz;
> +		uint16_t rb6_sz;
> +		uint16_t rb7_sz;
> +#endif
> +	};
> +} cqe_rx_word4_t;
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint16_t rb11_sz;
> +		uint16_t rb10_sz;
> +		uint16_t rb9_sz;
> +		uint16_t rb8_sz;
> +#else
> +		uint16_t rb8_sz;
> +		uint16_t rb9_sz;
> +		uint16_t rb10_sz;
> +		uint16_t rb11_sz;
> +#endif
> +	};
> +} cqe_rx_word5_t;
> +
> +typedef union {
> +	uint64_t u64;
> +	struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +		uint64_t vlan_found:1;
> +		uint64_t vlan_stripped:1;
> +		uint64_t vlan2_found:1;
> +		uint64_t vlan2_stripped:1;
> +		uint64_t rsvd2:3;
> +		uint64_t inner_l2:1;
> +		uint64_t inner_l4type:4;
> +		uint64_t inner_l3type:4;
> +		uint64_t vlan_ptr:8;
> +		uint64_t vlan2_ptr:8;
> +		uint64_t rsvd1:8;
> +		uint64_t rsvd0:8;
> +		uint64_t inner_l3ptr:8;
> +		uint64_t inner_l4ptr:8;
> +#else
> +		uint64_t inner_l4ptr:8;
> +		uint64_t inner_l3ptr:8;
> +		uint64_t rsvd0:8;
> +		uint64_t rsvd1:8;
> +		uint64_t vlan2_ptr:8;
> +		uint64_t vlan_ptr:8;
> +		uint64_t inner_l3type:4;
> +		uint64_t inner_l4type:4;
> +		uint64_t inner_l2:1;
> +		uint64_t rsvd2:3;
> +		uint64_t vlan2_stripped:1;
> +		uint64_t vlan2_found:1;
> +		uint64_t vlan_stripped:1;
> +		uint64_t vlan_found:1;
> +#endif
> +	};
> +} cqe_rx2_word6_t;
> +
> +struct cqe_rx_t {
> +	cqe_rx_word0_t word0;
> +	cqe_rx_word1_t word1;
> +	cqe_rx_word2_t word2;
> +	cqe_rx_word3_t word3;
> +	cqe_rx_word4_t word4;
> +	cqe_rx_word5_t word5;
> +	cqe_rx2_word6_t word6; /* if NIC_PF_RX_CFG[CQE_RX2_ENA] set */
> +};
> +
> +struct cqe_rx_tcp_err_t {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t   cqe_type:4; /* W0 */
> +	uint64_t   rsvd0:60;
> +
> +	uint64_t   rsvd1:4; /* W1 */
> +	uint64_t   partial_first:1;
> +	uint64_t   rsvd2:27;
> +	uint64_t   rbdr_bytes:8;
> +	uint64_t   rsvd3:24;
> +#else
> +	uint64_t   rsvd0:60;
> +	uint64_t   cqe_type:4;
> +
> +	uint64_t   rsvd3:24;
> +	uint64_t   rbdr_bytes:8;
> +	uint64_t   rsvd2:27;
> +	uint64_t   partial_first:1;
> +	uint64_t   rsvd1:4;
> +#endif
> +};
> +
> +struct cqe_rx_tcp_t {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t   cqe_type:4; /* W0 */
> +	uint64_t   rsvd0:52;
> +	uint64_t   cq_tcp_status:8;
> +
> +	uint64_t   rsvd1:32; /* W1 */
> +	uint64_t   tcp_cntx_bytes:8;
> +	uint64_t   rsvd2:8;
> +	uint64_t   tcp_err_bytes:16;
> +#else
> +	uint64_t   cq_tcp_status:8;
> +	uint64_t   rsvd0:52;
> +	uint64_t   cqe_type:4; /* W0 */
> +
> +	uint64_t   tcp_err_bytes:16;
> +	uint64_t   rsvd2:8;
> +	uint64_t   tcp_cntx_bytes:8;
> +	uint64_t   rsvd1:32; /* W1 */
> +#endif
> +};
> +
> +struct cqe_send_t {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t   cqe_type:4; /* W0 */
> +	uint64_t   rsvd0:4;
> +	uint64_t   sqe_ptr:16;
> +	uint64_t   rsvd1:4;
> +	uint64_t   rsvd2:10;
> +	uint64_t   sq_qs:7;
> +	uint64_t   sq_idx:3;
> +	uint64_t   rsvd3:8;
> +	uint64_t   send_status:8;
> +
> +	uint64_t   ptp_timestamp:64; /* W1 */
> +#else
> +	uint64_t   send_status:8;
> +	uint64_t   rsvd3:8;
> +	uint64_t   sq_idx:3;
> +	uint64_t   sq_qs:7;
> +	uint64_t   rsvd2:10;
> +	uint64_t   rsvd1:4;
> +	uint64_t   sqe_ptr:16;
> +	uint64_t   rsvd0:4;
> +	uint64_t   cqe_type:4; /* W0 */
> +
> +	uint64_t   ptp_timestamp:64;
> +#endif
> +};
> +
> +struct cq_entry_type_t {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t cqe_type:4;
> +	uint64_t __pad:60;
> +#else
> +	uint64_t __pad:60;
> +	uint64_t cqe_type:4;
> +#endif
> +};
> +
> +union cq_entry_t {
> +	uint64_t u[64];
> +	struct cq_entry_type_t type;
> +	struct cqe_rx_t rx_hdr;
> +	struct cqe_rx_tcp_t rx_tcp_hdr;
> +	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
> +	struct cqe_send_t cqe_send;
> +};
> +
> +NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
> +
> +struct rbdr_entry_t {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	union {
> +		struct {
> +			uint64_t   rsvd0:15;
> +			uint64_t   buf_addr:42;
> +			uint64_t   cache_align:7;
> +		};
> +		nicvf_phys_addr_t full_addr;
> +	};
> +#else
> +	union {
> +		struct {
> +			uint64_t   cache_align:7;
> +			uint64_t   buf_addr:42;
> +			uint64_t   rsvd0:15;
> +		};
> +		nicvf_phys_addr_t full_addr;
> +	};
> +#endif
> +};
> +
> +NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
> +
> +/* TCP reassembly context */
> +struct rbe_tcp_cnxt_t {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t   tcp_pkt_cnt:12;
> +	uint64_t   rsvd1:4;
> +	uint64_t   align_hdr_bytes:4;
> +	uint64_t   align_ptr_bytes:4;
> +	uint64_t   ptr_bytes:16;
> +	uint64_t   rsvd2:24;
> +	uint64_t   cqe_type:4;
> +	uint64_t   rsvd0:54;
> +	uint64_t   tcp_end_reason:2;
> +	uint64_t   tcp_status:4;
> +#else
> +	uint64_t   tcp_status:4;
> +	uint64_t   tcp_end_reason:2;
> +	uint64_t   rsvd0:54;
> +	uint64_t   cqe_type:4;
> +	uint64_t   rsvd2:24;
> +	uint64_t   ptr_bytes:16;
> +	uint64_t   align_ptr_bytes:4;
> +	uint64_t   align_hdr_bytes:4;
> +	uint64_t   rsvd1:4;
> +	uint64_t   tcp_pkt_cnt:12;
> +#endif
> +};
> +
> +/* Always Big endian */
> +struct rx_hdr_t {
> +	uint64_t   opaque:32;
> +	uint64_t   rss_flow:8;
> +	uint64_t   skip_length:6;
> +	uint64_t   disable_rss:1;
> +	uint64_t   disable_tcp_reassembly:1;
> +	uint64_t   nodrop:1;
> +	uint64_t   dest_alg:2;
> +	uint64_t   rsvd0:2;
> +	uint64_t   dest_rq:11;
> +};
> +
> +struct sq_crc_subdesc {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t    rsvd1:32;
> +	uint64_t    crc_ival:32;
> +	uint64_t    subdesc_type:4;
> +	uint64_t    crc_alg:2;
> +	uint64_t    rsvd0:10;
> +	uint64_t    crc_insert_pos:16;
> +	uint64_t    hdr_start:16;
> +	uint64_t    crc_len:16;
> +#else
> +	uint64_t    crc_len:16;
> +	uint64_t    hdr_start:16;
> +	uint64_t    crc_insert_pos:16;
> +	uint64_t    rsvd0:10;
> +	uint64_t    crc_alg:2;
> +	uint64_t    subdesc_type:4;
> +	uint64_t    crc_ival:32;
> +	uint64_t    rsvd1:32;
> +#endif
> +};
> +
> +struct sq_gather_subdesc {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t    subdesc_type:4; /* W0 */
> +	uint64_t    ld_type:2;
> +	uint64_t    rsvd0:42;
> +	uint64_t    size:16;
> +
> +	uint64_t    rsvd1:15; /* W1 */
> +	uint64_t    addr:49;
> +#else
> +	uint64_t    size:16;
> +	uint64_t    rsvd0:42;
> +	uint64_t    ld_type:2;
> +	uint64_t    subdesc_type:4; /* W0 */
> +
> +	uint64_t    addr:49;
> +	uint64_t    rsvd1:15; /* W1 */
> +#endif
> +};
> +
> +/* SQ immediate subdescriptor */
> +struct sq_imm_subdesc {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t    subdesc_type:4; /* W0 */
> +	uint64_t    rsvd0:46;
> +	uint64_t    len:14;
> +
> +	uint64_t    data:64; /* W1 */
> +#else
> +	uint64_t    len:14;
> +	uint64_t    rsvd0:46;
> +	uint64_t    subdesc_type:4; /* W0 */
> +
> +	uint64_t    data:64; /* W1 */
> +#endif
> +};
> +
> +struct sq_mem_subdesc {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t    subdesc_type:4; /* W0 */
> +	uint64_t    mem_alg:4;
> +	uint64_t    mem_dsz:2;
> +	uint64_t    wmem:1;
> +	uint64_t    rsvd0:21;
> +	uint64_t    offset:32;
> +
> +	uint64_t    rsvd1:15; /* W1 */
> +	uint64_t    addr:49;
> +#else
> +	uint64_t    offset:32;
> +	uint64_t    rsvd0:21;
> +	uint64_t    wmem:1;
> +	uint64_t    mem_dsz:2;
> +	uint64_t    mem_alg:4;
> +	uint64_t    subdesc_type:4; /* W0 */
> +
> +	uint64_t    addr:49;
> +	uint64_t    rsvd1:15; /* W1 */
> +#endif
> +};
> +
> +struct sq_hdr_subdesc {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t    subdesc_type:4;
> +	uint64_t    tso:1;
> +	uint64_t    post_cqe:1; /* Post CQE on no error also */
> +	uint64_t    dont_send:1;
> +	uint64_t    tstmp:1;
> +	uint64_t    subdesc_cnt:8;
> +	uint64_t    csum_l4:2;
> +	uint64_t    csum_l3:1;
> +	uint64_t    csum_inner_l4:2;
> +	uint64_t    csum_inner_l3:1;
> +	uint64_t    rsvd0:2;
> +	uint64_t    l4_offset:8;
> +	uint64_t    l3_offset:8;
> +	uint64_t    rsvd1:4;
> +	uint64_t    tot_len:20; /* W0 */
> +
> +	uint64_t    rsvd2:24;
> +	uint64_t    inner_l4_offset:8;
> +	uint64_t    inner_l3_offset:8;
> +	uint64_t    tso_start:8;
> +	uint64_t    rsvd3:2;
> +	uint64_t    tso_max_paysize:14; /* W1 */
> +#else
> +	uint64_t    tot_len:20;
> +	uint64_t    rsvd1:4;
> +	uint64_t    l3_offset:8;
> +	uint64_t    l4_offset:8;
> +	uint64_t    rsvd0:2;
> +	uint64_t    csum_inner_l3:1;
> +	uint64_t    csum_inner_l4:2;
> +	uint64_t    csum_l3:1;
> +	uint64_t    csum_l4:2;
> +	uint64_t    subdesc_cnt:8;
> +	uint64_t    tstmp:1;
> +	uint64_t    dont_send:1;
> +	uint64_t    post_cqe:1; /* Post CQE on no error also */
> +	uint64_t    tso:1;
> +	uint64_t    subdesc_type:4; /* W0 */
> +
> +	uint64_t    tso_max_paysize:14;
> +	uint64_t    rsvd3:2;
> +	uint64_t    tso_start:8;
> +	uint64_t    inner_l3_offset:8;
> +	uint64_t    inner_l4_offset:8;
> +	uint64_t    rsvd2:24; /* W1 */
> +#endif
> +};
> +
> +/* Each sq entry is 128 bits wide */
> +union sq_entry_t {
> +	uint64_t buff[2];
> +	struct sq_hdr_subdesc hdr;
> +	struct sq_imm_subdesc imm;
> +	struct sq_gather_subdesc gather;
> +	struct sq_crc_subdesc crc;
> +	struct sq_mem_subdesc mem;
> +};
> +
> +NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
> +
> +/* Queue config register formats */
> +struct rq_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserved_2_63:62;
> +	uint64_t ena:1;
> +	uint64_t reserved_0:1;
> +#else
> +	uint64_t reserved_0:1;
> +	uint64_t ena:1;
> +	uint64_t reserved_2_63:62;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +struct cq_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserved_43_63:21;
> +	uint64_t ena:1;
> +	uint64_t reset:1;
> +	uint64_t caching:1;
> +	uint64_t reserved_35_39:5;
> +	uint64_t qsize:3;
> +	uint64_t reserved_25_31:7;
> +	uint64_t avg_con:9;
> +	uint64_t reserved_0_15:16;
> +#else
> +	uint64_t reserved_0_15:16;
> +	uint64_t avg_con:9;
> +	uint64_t reserved_25_31:7;
> +	uint64_t qsize:3;
> +	uint64_t reserved_35_39:5;
> +	uint64_t caching:1;
> +	uint64_t reset:1;
> +	uint64_t ena:1;
> +	uint64_t reserved_43_63:21;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +struct sq_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserved_20_63:44;
> +	uint64_t ena:1;
> +	uint64_t reserved_18_18:1;
> +	uint64_t reset:1;
> +	uint64_t ldwb:1;
> +	uint64_t reserved_11_15:5;
> +	uint64_t qsize:3;
> +	uint64_t reserved_3_7:5;
> +	uint64_t tstmp_bgx_intf:3;
> +#else
> +	uint64_t tstmp_bgx_intf:3;
> +	uint64_t reserved_3_7:5;
> +	uint64_t qsize:3;
> +	uint64_t reserved_11_15:5;
> +	uint64_t ldwb:1;
> +	uint64_t reset:1;
> +	uint64_t reserved_18_18:1;
> +	uint64_t ena:1;
> +	uint64_t reserved_20_63:44;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +struct rbdr_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserved_45_63:19;
> +	uint64_t ena:1;
> +	uint64_t reset:1;
> +	uint64_t ldwb:1;
> +	uint64_t reserved_36_41:6;
> +	uint64_t qsize:4;
> +	uint64_t reserved_25_31:7;
> +	uint64_t avg_con:9;
> +	uint64_t reserved_12_15:4;
> +	uint64_t lines:12;
> +#else
> +	uint64_t lines:12;
> +	uint64_t reserved_12_15:4;
> +	uint64_t avg_con:9;
> +	uint64_t reserved_25_31:7;
> +	uint64_t qsize:4;
> +	uint64_t reserved_36_41:6;
> +	uint64_t ldwb:1;
> +	uint64_t reset:1;
> +	uint64_t ena:1;
> +	uint64_t reserved_45_63:19;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +struct pf_qs_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserved_32_63:32;
> +	uint64_t ena:1;
> +	uint64_t reserved_27_30:4;
> +	uint64_t sq_ins_ena:1;
> +	uint64_t sq_ins_pos:6;
> +	uint64_t lock_ena:1;
> +	uint64_t lock_viol_cqe_ena:1;
> +	uint64_t send_tstmp_ena:1;
> +	uint64_t be:1;
> +	uint64_t reserved_7_15:9;
> +	uint64_t vnic:7;
> +#else
> +	uint64_t vnic:7;
> +	uint64_t reserved_7_15:9;
> +	uint64_t be:1;
> +	uint64_t send_tstmp_ena:1;
> +	uint64_t lock_viol_cqe_ena:1;
> +	uint64_t lock_ena:1;
> +	uint64_t sq_ins_pos:6;
> +	uint64_t sq_ins_ena:1;
> +	uint64_t reserved_27_30:4;
> +	uint64_t ena:1;
> +	uint64_t reserved_32_63:32;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +struct pf_rq_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t reserved1:1;
> +	uint64_t reserved0:34;
> +	uint64_t strip_pre_l2:1;
> +	uint64_t caching:2;
> +	uint64_t cq_qs:7;
> +	uint64_t cq_idx:3;
> +	uint64_t rbdr_cont_qs:7;
> +	uint64_t rbdr_cont_idx:1;
> +	uint64_t rbdr_strt_qs:7;
> +	uint64_t rbdr_strt_idx:1;
> +#else
> +	uint64_t rbdr_strt_idx:1;
> +	uint64_t rbdr_strt_qs:7;
> +	uint64_t rbdr_cont_idx:1;
> +	uint64_t rbdr_cont_qs:7;
> +	uint64_t cq_idx:3;
> +	uint64_t cq_qs:7;
> +	uint64_t caching:2;
> +	uint64_t strip_pre_l2:1;
> +	uint64_t reserved0:34;
> +	uint64_t reserved1:1;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +struct pf_rq_drop_cfg { union { struct {
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	uint64_t rbdr_red:1;
> +	uint64_t cq_red:1;
> +	uint64_t reserved3:14;
> +	uint64_t rbdr_pass:8;
> +	uint64_t rbdr_drop:8;
> +	uint64_t reserved2:8;
> +	uint64_t cq_pass:8;
> +	uint64_t cq_drop:8;
> +	uint64_t reserved1:8;
> +#else
> +	uint64_t reserved1:8;
> +	uint64_t cq_drop:8;
> +	uint64_t cq_pass:8;
> +	uint64_t reserved2:8;
> +	uint64_t rbdr_drop:8;
> +	uint64_t rbdr_pass:8;
> +	uint64_t reserved3:14;
> +	uint64_t cq_red:1;
> +	uint64_t rbdr_red:1;
> +#endif
> +	};
> +	uint64_t value;
> +}; };
> +
> +#endif /* _THUNDERX_NICVF_HW_DEFS_H */
> diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
> new file mode 100644
> index 0000000..715c7c3
> --- /dev/null
> +++ b/drivers/net/thunderx/base/nicvf_mbox.c
> @@ -0,0 +1,416 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <assert.h>
> +#include <unistd.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +
> +#include "nicvf_plat.h"
> +
> +static const char *mbox_message[NIC_MBOX_MSG_MAX] = {
> +	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
> +	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
> +	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
> +	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
> +	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
> +	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
> +	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
> +	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
> +	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
> +	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
> +	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
> +	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
> +	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
> +	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
> +	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
> +	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
> +	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
> +	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
> +	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
> +	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
> +	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
> +	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
> +};
> +
> +static inline const char *
> +nicvf_mbox_msg_str(int msg)
> +{
> +	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
> +	/* undefined messages */
> +	if (mbox_message[msg] == NULL)
> +		msg = 0;
> +	return mbox_message[msg];
> +}
> +
> +static inline void
> +nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
> +{
> +	uint64_t *mbx_data;
> +	uint64_t mbx_addr;
> +	int i;
> +
> +	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
> +	mbx_data = (uint64_t *)mbx;
> +	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
> +		nicvf_reg_write(nic, mbx_addr, *mbx_data);
> +		mbx_data++;
> +		mbx_addr += sizeof(uint64_t);
> +	}
> +	nicvf_mbox_log("msg sent %s (VF%d)",
> +			nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
> +}
> +
> +static inline void
> +nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
> +{
> +	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
> +	/* Messages without ack are racy! */
> +	nicvf_delay_us(1000);
hardcoded delay time; more instances below

> +}
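A named constant would document what the 1000 means and give a single
tuning point; a minimal sketch (the macro name is only a suggestion,
not something in this patch):

/* Time to let a PF mailbox write settle when no ack is expected */
#define NICVF_MBOX_PF_SETTLE_TIME_US	1000

	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
	/* Messages without ack are racy! */
	nicvf_delay_us(NICVF_MBOX_PF_SETTLE_TIME_US);

The same constant could also replace the bare 1000s in
nicvf_mbox_send_msg_to_pf() below.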
> +
> +static inline int
> +nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
> +{
> +	long timeout;
> +	long sleep = 10;
> +	int i, retry = 5;
> +
> +	for (i = 0; i < retry; i++) {
> +		nic->pf_acked = false;
> +		nic->pf_nacked = false;
> +		nicvf_smp_wmb();
> +
> +		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
> +		/* Give some time to get PF response */
> +		nicvf_delay_us(1000);
> +		timeout = NIC_MBOX_MSG_TIMEOUT;
> +		while (timeout > 0) {
> +			/* Periodic poll happens from nicvf_interrupt() */
> +			nicvf_smp_rmb();
> +
> +			if (nic->pf_nacked)
> +				return -EINVAL;
> +			if (nic->pf_acked)
> +				return 0;
> +
> +			nicvf_delay_us(1000);
> +			timeout -= sleep;
> +		}
> +		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
> +				mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
> +				nic->vf_id, i, retry);
> +	}
> +	return -EBUSY;
> +}
> +
> +
> +int
> +nicvf_handle_mbx_intr(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx;
> +	uint64_t *mbx_data = (uint64_t *)&mbx;
> +	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
> +	size_t i;
> +
> +	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
> +		*mbx_data = nicvf_reg_read(nic, mbx_addr);
> +		mbx_data++;
> +		mbx_addr += sizeof(uint64_t);
> +	}
> +
> +	/* Overwrite the message so we won't receive it again */
> +	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
> +
> +	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
> +			nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
> +
> +	switch (mbx.msg.msg) {
> +	case NIC_MBOX_MSG_READY:
> +		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
> +		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
> +		nic->node = mbx.nic_cfg.node_id;
> +		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
> +		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
> +		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
> +				(struct ether_addr *)nic->mac_addr);
> +		nic->pf_acked = true;
> +		break;
> +	case NIC_MBOX_MSG_ACK:
> +		nic->pf_acked = true;
> +		break;
> +	case NIC_MBOX_MSG_NACK:
> +		nic->pf_nacked = true;
> +		break;
> +	case NIC_MBOX_MSG_RSS_SIZE:
> +		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
> +		nic->pf_acked = true;
> +		break;
> +	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
> +		nic->link_up = mbx.link_status.link_up;
> +		nic->duplex = mbx.link_status.duplex;
> +		nic->speed = mbx.link_status.speed;
> +		nic->pf_acked = true;
> +		break;
> +	default:
> +		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
> +				mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
> +		break;
> +	}
> +	nicvf_smp_wmb();
> +
> +	return mbx.msg.msg;
> +}
> +
> +/*
> + * Checks if VF is able to communicate with PF
> + * and also gets the VNIC number this VF is associated to.
> + */
> +int
> +nicvf_mbox_check_pf_ready(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
> +
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_set_mac_addr(struct nicvf *nic,
> +			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
> +{
> +	struct nic_mbx mbx = { .msg = {0} };
> +	int i;
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
> +	mbx.mac.vf_id = nic->vf_id;
> +	for (i = 0; i < 6; i++)
> +		mbx.mac.mac_addr[i] = mac[i];
> +
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
> +	mbx.cpi_cfg.vf_id = nic->vf_id;
> +	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
> +	mbx.cpi_cfg.rq_cnt = qcnt;
> +
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_get_rss_size(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
> +	mbx.rss_size.vf_id = nic->vf_id;
> +
> +	/* Result will be stored in nic->rss_info.rss_size */
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_config_rss(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +	struct nicvf_rss_reta_info *rss = &nic->rss_info;
> +	size_t tot_len = rss->rss_size;
> +	size_t cur_len;
> +	size_t cur_idx = 0;
> +	size_t i;
> +
> +	mbx.rss_cfg.vf_id = nic->vf_id;
> +	mbx.rss_cfg.hash_bits = rss->hash_bits;
> +	mbx.rss_cfg.tbl_len = 0;
> +	mbx.rss_cfg.tbl_offset = 0;
> +
> +	while (cur_idx < tot_len) {
> +		cur_len = nicvf_min(tot_len - cur_idx,
> +				(size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
> +		mbx.msg.msg = (cur_idx > 0) ?
> +			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
> +		mbx.rss_cfg.tbl_offset = cur_idx;
> +		mbx.rss_cfg.tbl_len = cur_len;
> +		for (i = 0; i < cur_len; i++)
> +			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
> +
> +		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
> +			return NICVF_ERR_RSS_TBL_UPDATE;
> +	}
> +
> +	return 0;
> +}
> +
> +int
> +nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
> +		     struct pf_rq_cfg *pf_rq_cfg)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
> +	mbx.rq.qs_num = nic->vf_id;
> +	mbx.rq.rq_num = qidx;
> +	mbx.rq.cfg = pf_rq_cfg->value;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
> +	mbx.sq.qs_num = nic->vf_id;
> +	mbx.sq.sq_num = qidx;
> +	mbx.sq.sqs_mode = nic->sqs_mode;
> +	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
> +	qs_cfg->be = 1;
> +#endif
> +	/* Send a mailbox msg to PF to config Qset */
> +	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
> +	mbx.qs.num = nic->vf_id;
> +	mbx.qs.cfg = qs_cfg->value;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +	struct pf_rq_drop_cfg *drop_cfg;
> +
> +	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
> +	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
> +	mbx.rq.qs_num = nic->vf_id;
> +	mbx.rq.rq_num = qidx;
> +	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
> +	drop_cfg->value = 0;
> +	if (enable) {
> +		drop_cfg->cq_red = 1;
> +		drop_cfg->cq_drop = 2;
> +	}
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
> +	mbx.frs.max_frs = mtu;
> +	mbx.frs.vf_id = nic->vf_id;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_rq_sync(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	/* Make sure all packets in the pipeline are written back into mem */
> +	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
> +	mbx.rq.cfg = 0;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
> +	mbx.rq.qs_num = nic->vf_id;
> +	mbx.rq.rq_num = qidx;
> +	mbx.rq.cfg = 0;
> +	if (enable)
> +		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
> +	mbx.lbk.vf_id = nic->vf_id;
> +	mbx.lbk.enable = enable;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +int
> +nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
> +			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
> +			       uint16_t sq_stat_mask)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
> +	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
> +	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
> +	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
> +	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
> +	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +void
> +nicvf_mbox_shutdown(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
> +	nicvf_mbox_send_msg_to_pf(nic, &mbx);
> +}
> +
> +void
> +nicvf_mbox_cfg_done(struct nicvf *nic)
> +{
> +	struct nic_mbx mbx = { .msg = { 0 } };
> +
> +	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
> +	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
> +}
> diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
> new file mode 100644
> index 0000000..7c0c6a9
> --- /dev/null
> +++ b/drivers/net/thunderx/base/nicvf_mbox.h
> @@ -0,0 +1,232 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef __THUNDERX_NICVF_MBOX__
> +#define __THUNDERX_NICVF_MBOX__
> +
> +#include <stdint.h>
> +
> +#include "nicvf_plat.h"
> +
> +/* PF <--> VF Mailbox communication
> + * Two 64bit registers are shared between PF and VF for each VF
> + * Writing into second register means end of message.
> + */
> +
> +/* PF <--> VF mailbox communication */
> +#define	NIC_PF_VF_MAILBOX_SIZE		2
> +#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
> +
> +/* Mailbox message types */
> +#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
> +#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
> +#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
> +#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
> +#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
> +#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
> +#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
> +#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure receive queue */
> +#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
> +#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
> +#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
> +#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
> +#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
> +#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
> +#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
> +#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
> +#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
> +#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
> +#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
> +#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
> +#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
> +#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
> +#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
> +
> +/* Get vNIC VF configuration */
> +struct nic_cfg_msg {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	uint8_t    node_id;
> +	bool	   tns_mode:1;
> +	bool	   sqs_mode:1;
> +	bool	   loopback_supported:1;
> +	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
> +};
> +
> +/* Qset configuration */
> +struct qs_cfg_msg {
> +	uint8_t    msg;
> +	uint8_t    num;
> +	uint8_t    sqs_count;
> +	uint64_t   cfg;
> +};
> +
> +/* Receive queue configuration */
> +struct rq_cfg_msg {
> +	uint8_t    msg;
> +	uint8_t    qs_num;
> +	uint8_t    rq_num;
> +	uint64_t   cfg;
> +};
> +
> +/* Send queue configuration */
> +struct sq_cfg_msg {
> +	uint8_t    msg;
> +	uint8_t    qs_num;
> +	uint8_t    sq_num;
> +	bool       sqs_mode;
> +	uint64_t   cfg;
> +};
> +
> +/* Set VF's MAC address */
> +struct set_mac_msg {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
> +};
> +
> +/* Set Maximum frame size */
> +struct set_frs_msg {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	uint16_t   max_frs;
> +};
> +
> +/* Set CPI algorithm type */
> +struct cpi_cfg_msg {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	uint8_t    rq_cnt;
> +	uint8_t    cpi_alg;
> +};
> +
> +/* Get RSS table size */
> +struct rss_sz_msg {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	uint16_t   ind_tbl_size;
> +};
> +
> +/* Set RSS configuration */
> +struct rss_cfg_msg {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	uint8_t    hash_bits;
> +	uint8_t    tbl_len;
> +	uint8_t    tbl_offset;
> +#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
> +	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
> +};
> +
> +/* Physical interface link status */
> +struct bgx_link_status {
> +	uint8_t    msg;
> +	uint8_t    link_up;
> +	uint8_t    duplex;
> +	uint32_t   speed;
> +};
> +
> +/* Set interface in loopback mode */
> +struct set_loopback {
> +	uint8_t    msg;
> +	uint8_t    vf_id;
> +	bool	   enable;
> +};
> +
> +/* Reset statistics counters */
> +struct reset_stat_cfg {
> +	uint8_t    msg;
> +	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
> +	uint16_t   rx_stat_mask;
> +	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
> +	uint8_t    tx_stat_mask;
> +	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
> +	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
> +	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
> +	 * ..
> +	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
> +	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
> +	 */
> +	uint16_t   rq_stat_mask;
> +	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
> +	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
> +	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
> +	 * ..
> +	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
> +	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
> +	 */
> +	uint16_t   sq_stat_mask;
> +};
> +
> +struct nic_mbx {
> +/* 128 bit shared memory between PF and each VF */
> +union {
> +	struct { uint8_t msg; }	msg;
> +	struct nic_cfg_msg	nic_cfg;
> +	struct qs_cfg_msg	qs;
> +	struct rq_cfg_msg	rq;
> +	struct sq_cfg_msg	sq;
> +	struct set_mac_msg	mac;
> +	struct set_frs_msg	frs;
> +	struct cpi_cfg_msg	cpi_cfg;
> +	struct rss_sz_msg	rss_size;
> +	struct rss_cfg_msg	rss_cfg;
> +	struct bgx_link_status  link_status;
> +	struct set_loopback	lbk;
> +	struct reset_stat_cfg	reset_stat;
> +};
> +};
> +
> +NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
> +
> +int nicvf_handle_mbx_intr(struct nicvf *nic);
> +int nicvf_mbox_check_pf_ready(struct nicvf *nic);
> +int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
> +int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
> +			 struct pf_rq_cfg *pf_rq_cfg);
> +int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
> +int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
> +int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
> +int nicvf_mbox_set_mac_addr(struct nicvf *nic,
> +			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
> +int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
> +int nicvf_mbox_get_rss_size(struct nicvf *nic);
> +int nicvf_mbox_config_rss(struct nicvf *nic);
> +int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
> +int nicvf_mbox_rq_sync(struct nicvf *nic);
> +int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
> +int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
> +	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
> +void nicvf_mbox_shutdown(struct nicvf *nic);
> +void nicvf_mbox_cfg_done(struct nicvf *nic);
> +
> +#endif /* __THUNDERX_NICVF_MBOX__ */
> diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
> new file mode 100644
> index 0000000..83c1844
> --- /dev/null
> +++ b/drivers/net/thunderx/base/nicvf_plat.h
> @@ -0,0 +1,132 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _THUNDERX_NICVF_H
> +#define _THUNDERX_NICVF_H
> +
> +/* Platform/OS/arch specific abstractions */
> +
> +/* log */
> +#include <rte_log.h>
> +#include "../nicvf_logs.h"
> +
> +#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
> +
> +#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
> +
> +#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
> +
> +#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
Why use fprintf to stderr here instead of RTE_LOG?
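For reference, a minimal sketch of the RTE_LOG-based alternative being
suggested (the generic PMD logtype and INFO level are assumptions, not
part of the patch; <rte_log.h> is already included above):

#define nicvf_log(s, ...) RTE_LOG(INFO, PMD, s, ##__VA_ARGS__)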

> +
> +/* delay */
> +#include <rte_cycles.h>
> +#define nicvf_delay_us(x) rte_delay_us(x)
> +
> +/* barrier */
> +#include <rte_atomic.h>
> +#define nicvf_smp_wmb() rte_smp_wmb()
> +#define nicvf_smp_rmb() rte_smp_rmb()
> +
> +/* utils */
> +#include <rte_common.h>
> +#define nicvf_min(x, y) RTE_MIN(x, y)
> +
> +/* byte order */
> +#include <rte_byteorder.h>
> +#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
> +#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
> +
> +/* Constants */
> +#include <rte_ether.h>
> +#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
> +
> +/* ARM64 specific functions */
> +#if defined(RTE_ARCH_ARM64)
> +#define nicvf_prefetch_store_keep(_ptr) ({\
> +	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
> +
> +static inline void __attribute__((always_inline))
> +nicvf_addr_write(uintptr_t addr, uint64_t val)
> +{
> +	asm volatile(
> +		    "str %x[val], [%x[addr]]"
> +		    :
> +		    : [val] "r" (val), [addr] "r" (addr));
> +}
> +
> +static inline uint64_t __attribute__((always_inline))
> +nicvf_addr_read(uintptr_t addr)
> +{
> +	uint64_t val;
> +
> +	asm volatile(
> +		    "ldr %x[val], [%x[addr]]"
> +		    : [val] "=r" (val)
> +		    : [addr] "r" (addr));
> +	return val;
> +}
> +
> +#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
> +			asm volatile(			\
> +			"ldp %x[x1], %x[x0], [%x[p1]]"	\
> +			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
> +			: [p1]"r"(addr)			\
> +			); })
> +
> +#else /* non optimized functions for building on non arm64 arch */
> +
> +#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
> +
> +static inline void __attribute__((always_inline))
> +nicvf_addr_write(uintptr_t addr, uint64_t val)
> +{
> +	*(volatile uint64_t *)addr = val;
> +}
> +
> +static inline uint64_t __attribute__((always_inline))
> +nicvf_addr_read(uintptr_t addr)
> +{
> +	return	*(volatile uint64_t *)addr;
> +}
> +
> +#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
> +do {							\
> +	reg1 = nicvf_addr_read((uintptr_t)addr);	\
> +	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
> +} while (0)
> +
> +#endif
> +
> +#include "nicvf_hw.h"
> +#include "nicvf_mbox.h"
> +
> +#endif /* _THUNDERX_NICVF_H */
> 

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton
  2016-06-07 16:40     ` [PATCH v3 02/20] thunderx/nicvf: add pmd skeleton Jerin Jacob
  2016-06-08 12:18       ` Ferruh Yigit
@ 2016-06-08 16:06       ` Ferruh Yigit
  1 sibling, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:06 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Introduce driver initialization and enable build infrastructure for
> nicvf pmd driver.
> 
> +By default, it is enabled only for defconfig_arm64-thunderx-*
> +config as it is an inbuilt NIC device.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  config/common_base                                 |  10 +
>  config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
>  drivers/net/Makefile                               |   1 +
>  drivers/net/thunderx/Makefile                      |  63 ++++++
>  drivers/net/thunderx/nicvf_ethdev.c                | 251 +++++++++++++++++++++
>  drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
>  drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
>  drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
>  .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
>  mk/rte.app.mk                                      |   2 +
>  10 files changed, 596 insertions(+)
>  create mode 100644 drivers/net/thunderx/Makefile
>  create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
>  create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
>  create mode 100644 drivers/net/thunderx/nicvf_logs.h
>  create mode 100644 drivers/net/thunderx/nicvf_struct.h
>  create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
> 
> diff --git a/config/common_base b/config/common_base
> index 47c26f6..ad5686b 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -259,6 +259,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
>  CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
>  
>  #
> +# Compile burst-oriented Cavium Thunderx NICVF PMD driver
> +#
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
> +
> +#
>  # Compile burst-oriented VIRTIO PMD driver
>  #
>  CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
> diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
> index fe5e987..7940bbd 100644
> --- a/config/defconfig_arm64-thunderx-linuxapp-gcc
> +++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
> @@ -34,3 +34,13 @@
>  CONFIG_RTE_MACHINE="thunderx"
>  
>  CONFIG_RTE_CACHE_LINE_SIZE=128
> +
> +#
> +# Compile Cavium Thunderx NICVF PMD driver
> +#
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
> +CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index 6ba7658..0e29a33 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -50,6 +50,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
>  DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
> +DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
>  DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
>  DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
> diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
> new file mode 100644
> index 0000000..eb9f100
> --- /dev/null
> +++ b/drivers/net/thunderx/Makefile
> @@ -0,0 +1,63 @@
> +#   BSD LICENSE
> +#
> +#   Copyright(c) 2016 Cavium Networks. All rights reserved.
> +#   All rights reserved.
> +#
> +#   Redistribution and use in source and binary forms, with or without
> +#   modification, are permitted provided that the following conditions
> +#   are met:
> +#
> +#     * Redistributions of source code must retain the above copyright
> +#       notice, this list of conditions and the following disclaimer.
> +#     * Redistributions in binary form must reproduce the above copyright
> +#       notice, this list of conditions and the following disclaimer in
> +#       the documentation and/or other materials provided with the
> +#       distribution.
> +#     * Neither the name of Cavium Networks nor the names of its
> +#       contributors may be used to endorse or promote products derived
> +#       from this software without specific prior written permission.
> +#
> +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +#
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_pmd_thunderx_nicvf.a
> +
> +CFLAGS += $(WERROR_FLAGS)
> +
> +EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
> +
> +LIBABIVER := 1
> +
> +OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
> +$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
> +
> +VPATH += $(SRCDIR)/base
> +
> +#
> +# all source are stored in SRCS-y
> +#
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
> +
> +
> +# this lib depends upon:
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> new file mode 100644
> index 0000000..45bfc13
> --- /dev/null
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -0,0 +1,251 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <assert.h>
> +#include <stdio.h>
> +#include <stdbool.h>
> +#include <errno.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <stdarg.h>
> +#include <inttypes.h>
> +#include <netinet/in.h>
> +#include <sys/queue.h>
> +#include <sys/timerfd.h>
> +
> +#include <rte_common.h>
> +#include <rte_byteorder.h>
> +#include <rte_cycles.h>
> +#include <rte_interrupts.h>
> +#include <rte_log.h>
> +#include <rte_debug.h>
> +#include <rte_pci.h>
> +#include <rte_atomic.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_memory.h>
> +#include <rte_memzone.h>
> +#include <rte_tailq.h>
> +#include <rte_eal.h>
> +#include <rte_alarm.h>
> +#include <rte_ether.h>
> +#include <rte_ethdev.h>
> +#include <rte_malloc.h>
> +#include <rte_random.h>
> +#include <rte_dev.h>
I guess this is not part of the convention, but I think these read
better sorted alphabetically; just personal taste.

> +
> +#include "base/nicvf_plat.h"
> +
> +#include "nicvf_ethdev.h"
> +
> +#include "nicvf_logs.h"
> +
> +static void
> +nicvf_interrupt(void *arg)
> +{
> +	struct nicvf *nic = (struct nicvf *)arg;
Unnecessary cast
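For reference, the assignment works without the cast, since a void
pointer converts implicitly in C:

	struct nicvf *nic = arg;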

> +
> +	nicvf_reg_poll_interrupts(nic);
> +
> +	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
> +				nicvf_interrupt, nic);
> +}
> +
> +static int
> +nicvf_periodic_alarm_start(struct nicvf *nic)
> +{
> +	return rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
> +					nicvf_interrupt, nic);
> +}
> +
> +static int
> +nicvf_periodic_alarm_stop(struct nicvf *nic)
> +{
> +	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
> +}
> +
> +/* Initialize and register driver with DPDK Application */
> +static const struct eth_dev_ops nicvf_eth_dev_ops = {
> +};
No assignments yet; is the empty {} required?
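For what it's worth, later patches in this series do populate the
table; e.g. patch 03 adds the link_update callback, at which point the
braces carry content:

static const struct eth_dev_ops nicvf_eth_dev_ops = {
	.link_update = nicvf_dev_link_update,
};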

> +
> +static int
> +nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
> +{
> +	int ret;
> +	struct rte_pci_device *pci_dev;
> +	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	eth_dev->dev_ops = &nicvf_eth_dev_ops;
> +
> +	pci_dev = eth_dev->pci_dev;
> +	rte_eth_copy_pci_info(eth_dev, pci_dev);
> +
> +	nic->device_id = pci_dev->id.device_id;
> +	nic->vendor_id = pci_dev->id.vendor_id;
> +	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
> +	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +	nic->eth_dev = eth_dev;
> +
> +	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
> +			pci_dev->id.vendor_id, pci_dev->id.device_id,
> +			pci_dev->addr.domain, pci_dev->addr.bus,
> +			pci_dev->addr.devid, pci_dev->addr.function);
> +
> +	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
> +	if (!nic->reg_base) {
> +		PMD_INIT_LOG(ERR, "Failed to map BAR0");
> +		ret = -ENODEV;
> +		goto fail;
> +	}
> +
> +	nicvf_disable_all_interrupts(nic);
> +
> +	ret = nicvf_periodic_alarm_start(nic);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to start periodic alarm");
> +		goto fail;
> +	}
> +
> +	ret = nicvf_mbox_check_pf_ready(nic);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
> +		goto alarm_fail;
> +	} else {
> +		PMD_INIT_LOG(INFO,
> +			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
> +			nic->node, nic->vf_id,
> +			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
> +			nic->sqs_mode ? "true" : "false",
> +			nic->loopback_supported ? "true" : "false"
> +			);
> +	}
> +
> +	if (nic->sqs_mode) {
> +		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, Detaching...");
> +		/* Detach port by returning positive error number */
> +		ret = ENOTSUP;
Intended to be a negative value? Although this still looks valid, since
rte_eth_dev_init() only checks whether the return is zero or not.

> +		goto alarm_fail;
> +	}
> +
> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
> +		ret = -ENOMEM;
> +		goto alarm_fail;
> +	}
> +	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
> +		eth_random_addr(&nic->mac_addr[0]);
> +
> +	ether_addr_copy((struct ether_addr *)nic->mac_addr,
> +			&eth_dev->data->mac_addrs[0]);
> +
> +	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to set mac addr");
> +		goto malloc_fail;
> +	}
> +
> +	ret = nicvf_base_init(nic);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
> +		goto malloc_fail;
> +	}
> +
> +	ret = nicvf_mbox_get_rss_size(nic);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to get rss table size");
> +		goto malloc_fail;
> +	}
> +
> +	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
> +		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
> +		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
> +		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
> +
> +	return 0;
> +
> +malloc_fail:
> +	rte_free(eth_dev->data->mac_addrs);
> +alarm_fail:
> +	nicvf_periodic_alarm_stop(nic);
> +fail:
> +	return ret;
> +}
> +
> +static const struct rte_pci_id pci_id_nicvf_map[] = {
> +	{
> +		.vendor_id = PCI_VENDOR_ID_CAVIUM,
> +		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
> +		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
> +		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
> +	},
> +	{
> +		.vendor_id = PCI_VENDOR_ID_CAVIUM,
> +		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
> +		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
> +		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
> +	},
> +	{
> +		.vendor_id = 0,
> +	},
> +};
> +
> +static struct eth_driver rte_nicvf_pmd = {
> +	.pci_drv = {
> +		.name = "rte_nicvf_pmd",
> +		.id_table = pci_id_nicvf_map,
> +		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
> +	},
> +	.eth_dev_init = nicvf_eth_dev_init,
> +	.dev_private_size = sizeof(struct nicvf),
> +};
> +
> +static int
> +rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
> +			THUNDERX_NICVF_PMD_VERSION);
> +
> +	rte_eth_driver_register(&rte_nicvf_pmd);
> +	return 0;
> +}
> +
> +static struct rte_driver rte_nicvf_driver = {
> +	.name = "nicvf_driver",
> +	.type = PMD_PDEV,
> +	.init = rte_nicvf_pmd_init,
> +};
> +
> +PMD_REGISTER_DRIVER(rte_nicvf_driver);
> diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
> new file mode 100644
> index 0000000..d4d2071
> --- /dev/null
> +++ b/drivers/net/thunderx/nicvf_ethdev.h
> @@ -0,0 +1,48 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef __THUNDERX_NICVF_ETHDEV_H__
> +#define __THUNDERX_NICVF_ETHDEV_H__
> +
> +#include <rte_ethdev.h>
> +
> +#define THUNDERX_NICVF_PMD_VERSION      "1.0"
> +
> +#define NICVF_INTR_POLL_INTERVAL_MS	50
> +
> +static inline struct nicvf *
> +nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
> +{
> +	return eth_dev->data->dev_private;
> +}
> +
> +#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
> diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
> new file mode 100644
> index 0000000..0667d46
> --- /dev/null
> +++ b/drivers/net/thunderx/nicvf_logs.h
> @@ -0,0 +1,83 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef __THUNDERX_NICVF_LOGS__
> +#define __THUNDERX_NICVF_LOGS__
> +
> +#include <assert.h>
> +
> +#define PMD_INIT_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +
> +#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
> +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
> +#else
> +#define PMD_INIT_FUNC_TRACE() do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
> +#define PMD_RX_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +#define NICVF_RX_ASSERT(x) assert(x)
> +#else
> +#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
> +#define NICVF_RX_ASSERT(x) do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
> +#define PMD_TX_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +#define NICVF_TX_ASSERT(x) assert(x)
> +#else
> +#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
> +#define NICVF_TX_ASSERT(x) do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
> +#else
> +#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
> +#define PMD_DRV_FUNC_TRACE() do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
> +#define PMD_MBOX_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> +#define PMD_MBOX_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
> +#else
> +#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
> +#define PMD_MBOX_FUNC_TRACE() do { } while (0)
> +#endif
> +
> +#endif /* __THUNDERX_NICVF_LOGS__ */
> diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
> new file mode 100644
> index 0000000..c52545d
> --- /dev/null
> +++ b/drivers/net/thunderx/nicvf_struct.h
> @@ -0,0 +1,124 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) Cavium networks Ltd. 2016.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Cavium networks nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _THUNDERX_NICVF_STRUCT_H
> +#define _THUNDERX_NICVF_STRUCT_H
> +
> +#include <stdint.h>
> +
> +#include <rte_spinlock.h>
> +#include <rte_mempool.h>
> +#include <rte_mbuf.h>
> +#include <rte_interrupts.h>
> +#include <rte_ethdev.h>
> +#include <rte_memory.h>
> +
> +struct nicvf_rbdr {
> +	uint64_t rbdr_status;
> +	uint64_t rbdr_door;
> +	struct rbdr_entry_t *desc;
> +	nicvf_phys_addr_t phys;
> +	uint32_t buffsz;
> +	uint32_t tail;
> +	uint32_t next_tail;
> +	uint32_t head;
> +	uint32_t qlen_mask;
> +} __rte_cache_aligned;
> +
> +struct nicvf_txq {
> +	union sq_entry_t *desc;
> +	nicvf_phys_addr_t phys;
> +	struct rte_mbuf **txbuffs;
> +	uint64_t sq_head;
> +	uint64_t sq_door;
> +	struct rte_mempool *pool;
> +	struct nicvf *nic;
> +	void (*pool_free)(struct nicvf_txq *sq);
> +	uint32_t head;
> +	uint32_t tail;
> +	int32_t xmit_bufs;
> +	uint32_t qlen_mask;
> +	uint32_t txq_flags;
> +	uint16_t queue_id;
> +	uint16_t tx_free_thresh;
> +} __rte_cache_aligned;
> +
> +struct nicvf_rxq {
> +	uint64_t mbuf_phys_off;
> +	uint64_t cq_status;
> +	uint64_t cq_door;
> +	nicvf_phys_addr_t phys;
> +	union cq_entry_t *desc;
> +	struct nicvf_rbdr *shared_rbdr;
> +	struct nicvf *nic;
> +	struct rte_mempool *pool;
> +	uint32_t head;
> +	uint32_t qlen_mask;
> +	int32_t available_space;
> +	int32_t recv_buffers;
> +	uint16_t rx_free_thresh;
> +	uint16_t queue_id;
> +	uint16_t precharge_cnt;
> +	uint8_t rx_drop_en;
> +	uint8_t  port_id;
> +	uint8_t  rbptr_offset;
> +} __rte_cache_aligned;
> +
> +struct nicvf {
> +	uint8_t vf_id;
> +	uint8_t node;
> +	uintptr_t reg_base;
> +	bool tns_mode;
> +	bool sqs_mode;
> +	bool loopback_supported;
> +	bool pf_acked:1;
> +	bool pf_nacked:1;
> +	uint64_t hwcap;
> +	uint8_t link_up;
> +	uint8_t	duplex;
> +	uint32_t speed;
> +	uint32_t msg_enable;
> +	uint16_t device_id;
> +	uint16_t vendor_id;
> +	uint16_t subsystem_device_id;
> +	uint16_t subsystem_vendor_id;
> +	struct nicvf_rbdr *rbdr;
> +	struct nicvf_rss_reta_info rss_info;
> +	struct rte_eth_dev *eth_dev;
> +	struct rte_intr_handle intr_handle;
> +	uint8_t cpi_alg;
> +	uint16_t mtu;
> +	bool vlan_filter_en;
> +	uint8_t mac_addr[ETHER_ADDR_LEN];
> +} __rte_cache_aligned;
> +
> +#endif /* _THUNDERX_NICVF_STRUCT_H */
> diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
> new file mode 100644
> index 0000000..349c6e1
> --- /dev/null
> +++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
> @@ -0,0 +1,4 @@
> +DPDK_16.04 {
DPDK_16.07?
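If so, the corrected map would presumably read (assuming no symbols are
exported yet):

DPDK_16.07 {

	local: *;
};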

> +
> +	local: *;
> +};
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index b84b56d..1d8d8cd 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -102,6 +102,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD)        += -lm
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lz
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lm
>  # QAT / AESNI GCM PMDs are dependent on libcrypto (from openssl)
>  # for calculating HMAC precomputes
>  ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
> @@ -150,6 +151,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
>  
>  ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
> 

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH 03/20] thunderx/nicvf: add link status and link update support
  2016-05-07 15:16 ` [PATCH 03/20] thunderx/nicvf: add link status and link update support Jerin Jacob
@ 2016-06-08 16:10   ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:10 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 5/7/2016 4:16 PM, Jerin Jacob wrote:
> Extended the nicvf_interrupt function to respond
> NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from PF and update
> struct rte_eth_link accordingly.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 04/20] thunderx/nicvf: add get_reg and get_reg_length support
  2016-06-07 16:40     ` [PATCH v3 04/20] thunderx/nicvf: add get_reg and get_reg_length support Jerin Jacob
@ 2016-06-08 16:16       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:16 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  drivers/net/thunderx/nicvf_ethdev.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index 5d28eea..34b4735 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -70,6 +70,9 @@
>  #include "nicvf_logs.h"
>  
>  static int nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
> +static int nicvf_dev_get_reg_length(struct rte_eth_dev *dev);
> +static int nicvf_dev_get_regs(struct rte_eth_dev *dev,
> +			      struct rte_dev_reg_info *regs);

Are these declarations required? The function order seems correct.
Since these are static functions, it is possible to remove the
declarations by re-ordering the function definitions, as sketched below.
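A sketch of the suggested reordering, reusing the definitions from this
patch: keep the static handlers above the ops table so the forward
declarations can simply be dropped (nicvf_dev_get_regs would move up the
same way):

static int
nicvf_dev_get_reg_length(struct rte_eth_dev *dev __rte_unused)
{
	return nicvf_reg_get_count();
}

static const struct eth_dev_ops nicvf_eth_dev_ops = {
	.link_update              = nicvf_dev_link_update,
	.get_reg_length           = nicvf_dev_get_reg_length,
	.get_reg                  = nicvf_dev_get_regs,
};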

>  
>  static inline int
>  nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
> @@ -145,9 +148,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
>  	return nicvf_atomic_write_link_status(dev, &link);
>  }
>  
> +static int
> +nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
> +{
> +	return nicvf_reg_get_count();
> +}
> +
> +static int
> +nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
> +{
> +	uint64_t *data = regs->data;
> +	struct nicvf *nic = nicvf_pmd_priv(dev);
> +
> +	if (data == NULL)
> +		return -EINVAL;
> +
> +	/* Support only full register dump */
> +	if ((regs->length == 0) ||
> +		(regs->length == (uint32_t)nicvf_reg_get_count())) {
> +		regs->version = nic->vendor_id << 16 | nic->device_id;
> +		nicvf_reg_dump(nic, data);
> +		return 0;
> +	}
> +	return -ENOTSUP;
> +}
> +
>  /* Initialize and register driver with DPDK Application */
>  static const struct eth_dev_ops nicvf_eth_dev_ops = {
>  	.link_update              = nicvf_dev_link_update,
> +	.get_reg_length           = nicvf_dev_get_reg_length,
> +	.get_reg                  = nicvf_dev_get_regs,
>  };
>  
>  static int
> 

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 05/20] thunderx/nicvf: add dev_configure support
  2016-06-07 16:40     ` [PATCH v3 05/20] thunderx/nicvf: add dev_configure support Jerin Jacob
@ 2016-06-08 16:21       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:21 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

...
>  
> +static int nicvf_dev_configure(struct rte_eth_dev *dev);
Same as previous, I think this is not required.

...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 06/20] thunderx/nicvf: add dev_infos_get support
  2016-06-07 16:40     ` [PATCH v3 06/20] thunderx/nicvf: add dev_infos_get support Jerin Jacob
@ 2016-06-08 16:23       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:23 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 07/20] thunderx/nicvf: add rx_queue_setup/release support
  2016-06-07 16:40     ` [PATCH v3 07/20] thunderx/nicvf: add rx_queue_setup/release support Jerin Jacob
@ 2016-06-08 16:42       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:42 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 09/20] thunderx/nicvf: add rss and reta query and update support
  2016-06-07 16:40     ` [PATCH v3 09/20] thunderx/nicvf: add rss and reta query and update support Jerin Jacob
@ 2016-06-08 16:45       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:45 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support
  2016-06-07 16:40     ` [PATCH v3 10/20] thunderx/nicvf: add mtu_set and promiscuous_enable support Jerin Jacob
@ 2016-06-08 16:48       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:48 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 11/20] thunderx/nicvf: add stats support
  2016-06-07 16:40     ` [PATCH v3 11/20] thunderx/nicvf: add stats support Jerin Jacob
@ 2016-06-08 16:53       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 16:53 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 13/20] thunderx/nicvf: add single and multi segment rx functions
  2016-06-07 16:40     ` [PATCH v3 13/20] thunderx/nicvf: add single and multi segment rx functions Jerin Jacob
@ 2016-06-08 17:04       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 17:04 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support
  2016-06-07 16:40     ` [PATCH v3 14/20] thunderx/nicvf: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
@ 2016-06-08 17:17       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 17:17 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---

...

> +uint32_t
> +nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
> +{
> +	struct nicvf_rxq *rxq;
> +
> +	rxq = (struct nicvf_rxq *)dev->data->rx_queues[queue_idx];
Unnecessary cast
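Since dev->data->rx_queues[] holds void pointers, a plain assignment is
enough:

	rxq = dev->data->rx_queues[queue_idx];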

...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 15/20] thunderx/nicvf: add rx queue start and stop support
  2016-06-07 16:40     ` [PATCH v3 15/20] thunderx/nicvf: add rx queue start and stop support Jerin Jacob
@ 2016-06-08 17:42       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 17:42 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---
>  drivers/net/thunderx/nicvf_ethdev.c | 175 ++++++++++++++++++++++++++++++++++++
>  drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
>  drivers/net/thunderx/nicvf_rxtx.h   |   1 +
>  3 files changed, 194 insertions(+)
> 
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index 5da07da..ba32803 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -88,6 +88,8 @@ static int nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
>  				     struct rte_eth_rss_conf *rss_conf);
>  static int nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
>  				       struct rte_eth_rss_conf *rss_conf);
> +static int nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
> +static int nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
These declarations are not required; there are more usages like this in
other patches.

...
>  
>  static int
> +nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
> +{
> +	int ret;
> +
> +	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
> +		return -EINVAL;
This check is already done by librte_ether; see the sketch below.
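For reference, a paraphrased sketch of the librte_ether guard being
referred to (illustrative only, not the exact
rte_eth_dev_rx_queue_start() source):

int
rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	if (rx_queue_id >= dev->data->nb_rx_queues)
		return -EINVAL;

	return dev->dev_ops->rx_queue_start(dev, rx_queue_id);
}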

> +
> +	ret = nicvf_start_rx_queue(dev, qidx);
> +	if (ret)
> +		return ret;
> +
> +	ret = nicvf_configure_cpi(dev);
> +	if (ret)
> +		return ret;
> +
> +	return nicvf_configure_rss_reta(dev);
> +}
> +
> +static int
> +nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
> +{
> +	int ret;
> +
> +	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_rx_queues)
> +		return -EINVAL;
Same for this case

> +
> +	ret = nicvf_stop_rx_queue(dev, qidx);
> +	ret |= nicvf_configure_cpi(dev);
> +	ret |= nicvf_configure_rss_reta(dev);
> +	return ret;
> +}
> +
...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 16/20] thunderx/nicvf: add tx queue start and stop support
  2016-06-07 16:40     ` [PATCH v3 16/20] thunderx/nicvf: add tx " Jerin Jacob
@ 2016-06-08 17:46       ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 17:46 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

...

> +static inline int
> +nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
> +{
> +	struct nicvf_txq *txq;
> +	int ret;
> +
> +	if (dev->data->tx_queue_state[qidx] == 
> +	    RTE_ETH_QUEUE_STATE_STARTED)
Is line wrap required?
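For reference, the condition fits within the 80-column limit on a
single line:

	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)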

...
>  
>  static inline int
>  nicvf_configure_cpi(struct rte_eth_dev *dev)
> @@ -912,6 +960,24 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
>  }
>  
>  static int
> +nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
> +{
> +	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
> +		return -EINVAL;
This check is already done by librte_ether

> +
> +	return nicvf_start_tx_queue(dev, qidx);
> +}
> +
> +static int
> +nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
> +{
> +	if (qidx >= nicvf_pmd_priv(dev)->eth_dev->data->nb_tx_queues)
> +		return -EINVAL;
Same here

> +
> +	return nicvf_stop_tx_queue(dev, qidx);
> +}
> +
...

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 18/20] thunderx/config: set max numa node to two
  2016-06-07 16:40     ` [PATCH v3 18/20] thunderx/config: set max numa node to two Jerin Jacob
@ 2016-06-08 17:54       ` Ferruh Yigit
  2016-06-13 13:11         ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-08 17:54 UTC (permalink / raw)
  To: Jerin Jacob, dev; +Cc: thomas.monjalon, bruce.richardson

On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  config/defconfig_arm64-thunderx-linuxapp-gcc | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
> index 7940bbd..cc12cee 100644
> --- a/config/defconfig_arm64-thunderx-linuxapp-gcc
> +++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
> @@ -34,6 +34,7 @@
>  CONFIG_RTE_MACHINE="thunderx"
>  
>  CONFIG_RTE_CACHE_LINE_SIZE=128
> +CONFIG_RTE_MAX_NUMA_NODES=2
Isn't this a platform-level configuration? It sets the max NUMA nodes
independent of the driver, right?

Can you please add some more information on why this is required?

Also, does it make sense to separate this patch from the driver patchset?

>  
>  #
>  # Compile Cavium Thunderx NICVF PMD driver
> 

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-08 15:08           ` Bruce Richardson
@ 2016-06-09 10:49             ` Jerin Jacob
  2016-06-09 14:02               ` Thomas Monjalon
  0 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-09 10:49 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: Thomas Monjalon, Ferruh Yigit, dev

On Wed, Jun 08, 2016 at 04:08:37PM +0100, Bruce Richardson wrote:
> On Wed, Jun 08, 2016 at 03:42:14PM +0200, Thomas Monjalon wrote:
> > 2016-06-08 18:13, Jerin Jacob:
> > > On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > > > Hi Jerin,
> > > > 
> > > > In patch subject, as tag, other drivers are using only driver name, and
> > > > Intel drivers also has "driver/base", since base code has some special
> > > > case. For thunderx, what do you think about keeping subject as:
> > > >  "thunderx: ...."
> > > > 
> > > 
> > > Hi Ferruh,
> > > 
> > > We may add crypto or other builtin ThunderX HW accelerated block drivers
> > > in future to DPDK.
> > > So that is the reason why I thought of keeping the subject as thunderx/nicvf.
> > > If you don't have any objection then I would like to keep it as
> > > thunderx/nicvf or just nicvf.
> > 
> > I don't like the name nicvf but I guess that's the official name?
> > 
> > Thus I agree the title should be thunderx/nicvf or thunderx_nicvf.
> 
> I think I'd prefer the underscore version.

The underscore option looks a bit odd compared to the existing git logs.
As Ferruh suggested, "net/thunderx:" looks good to me. If there are no
objections I would like to go with "net/thunderx:"

Jerin

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-09 10:49             ` Jerin Jacob
@ 2016-06-09 14:02               ` Thomas Monjalon
  2016-06-09 14:11                 ` Bruce Richardson
  0 siblings, 1 reply; 204+ messages in thread
From: Thomas Monjalon @ 2016-06-09 14:02 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Bruce Richardson, Ferruh Yigit, dev

2016-06-09 16:19, Jerin Jacob:
> On Wed, Jun 08, 2016 at 04:08:37PM +0100, Bruce Richardson wrote:
> > On Wed, Jun 08, 2016 at 03:42:14PM +0200, Thomas Monjalon wrote:
> > > 2016-06-08 18:13, Jerin Jacob:
> > > > On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > > > > Hi Jerin,
> > > > > 
> > > > > In patch subject, as tag, other drivers are using only driver name, and
> > > > > Intel drivers also has "driver/base", since base code has some special
> > > > > case. For thunderx, what do you think about keeping subject as:
> > > > >  "thunderx: ...."
> > > > > 
> > > > 
> > > > Hi Ferruh,
> > > > 
> > > > We may add crypto or other builtin ThunderX HW accelerated block drivers
> > > > in future to DPDK.
> > > > So that is the reason why I thought of keeping the subject as thunderx/nicvf.
> > > > If you don't have any objection then I would like to keep it as
> > > > thunderx/nicvf or just nicvf.
> > > 
> > > I don't like the name nicvf but I guess that's the official name?
> > > 
> > > Thus I agree the title should be thunderx/nicvf or thunderx_nicvf.
> > 
> > I think I'd prefer the underscore version.
> 
> The underscore option looks a bit odd compared to the existing git logs.
> As Ferruh suggested, "net/thunderx:" looks good to me. If there are no
> objections I would like to go with "net/thunderx:"

net/thunderx would be easy to parse if all other drivers were using
net/ or crypto/ prefixes.
Do we want to add these prefixes to driver commits?

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 00/20] DPDK PMD for ThunderX NIC device
  2016-06-09 14:02               ` Thomas Monjalon
@ 2016-06-09 14:11                 ` Bruce Richardson
  0 siblings, 0 replies; 204+ messages in thread
From: Bruce Richardson @ 2016-06-09 14:11 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Jerin Jacob, Ferruh Yigit, dev

On Thu, Jun 09, 2016 at 04:02:17PM +0200, Thomas Monjalon wrote:
> 2016-06-09 16:19, Jerin Jacob:
> > On Wed, Jun 08, 2016 at 04:08:37PM +0100, Bruce Richardson wrote:
> > > On Wed, Jun 08, 2016 at 03:42:14PM +0200, Thomas Monjalon wrote:
> > > > 2016-06-08 18:13, Jerin Jacob:
> > > > > On Wed, Jun 08, 2016 at 01:30:28PM +0100, Ferruh Yigit wrote:
> > > > > > Hi Jerin,
> > > > > > 
> > > > > > In patch subject, as tag, other drivers are using only driver name, and
> > > > > > Intel drivers also has "driver/base", since base code has some special
> > > > > > case. For thunderx, what do you think about keeping subject as:
> > > > > >  "thunderx: ...."
> > > > > > 
> > > > > 
> > > > > Hi Ferruh,
> > > > > 
> > > > > We may add crypto or other builtin ThunderX HW accelerated block drivers
> > > > > in future to DPDK.
> > > > > So that is the reason why I thought of keeping the subject as thunderx/nicvf.
> > > > > If you don't have any objection then I would like to keep it as
> > > > > thunderx/nicvf or just nicvf.
> > > > 
> > > > I don't like the name nicvf but I guess that's the official name?
> > > > 
> > > > Thus I agree the title should be thunderx/nicvf or thunderx_nicvf.
> > > 
> > > I think I'd prefer the underscore version.
> > 
> > The underscore option looks a bit odd compared to the existing git logs.
> > As Ferruh suggested, "net/thunderx:" looks good to me. If there are no
> > objections I would like to go with "net/thunderx:"
> 
> net/thunderx would be easy to parse if all other drivers were using
> net/ or crypto/ prefixes.
> Do we want to add these prefixes to driver commits?
I would be in favour of that going forward.
It would also avoid confusion between the ring library and the ring PMD,
and between the vhost library and the vhost PMD.

/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v3 18/20] thunderx/config: set max numa node to two
  2016-06-08 17:54       ` Ferruh Yigit
@ 2016-06-13 13:11         ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:11 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, thomas.monjalon, bruce.richardson

On Wed, Jun 08, 2016 at 06:54:43PM +0100, Ferruh Yigit wrote:
> On 6/7/2016 5:40 PM, Jerin Jacob wrote:
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> >  config/defconfig_arm64-thunderx-linuxapp-gcc | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > index 7940bbd..cc12cee 100644
> > --- a/config/defconfig_arm64-thunderx-linuxapp-gcc
> > +++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > @@ -34,6 +34,7 @@
> >  CONFIG_RTE_MACHINE="thunderx"
> >  
> >  CONFIG_RTE_CACHE_LINE_SIZE=128
> > +CONFIG_RTE_MAX_NUMA_NODES=2
> Isn't this a platform-level configuration? It sets the max NUMA nodes
> independent of the driver, right?
> 

Yes

> Can you please add some more information on why this is required?
> 
> Also, does it make sense to separate this patch from the driver patchset?

Yes, I will separate this patch from the series.

Thanks, Ferruh, for the comprehensive review of this patch series.
I will send the v4, addressing all of your review comments.

Jerin

> 
> >  
> >  #
> >  # Compile Cavium Thunderx NICVF PMD driver
> > 
> 

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v4 00/19] DPDK PMD for ThunderX NIC device
  2016-06-07 16:40     ` [PATCH v3 01/20] thunderx/nicvf/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-06-08 12:18       ` Ferruh Yigit
  2016-06-08 15:45       ` Ferruh Yigit
@ 2016-06-13 13:55       ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
                           ` (20 more replies)
  2 siblings, 21 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob

This patch set provides the initial version of DPDK PMD for the
built-in NIC device in Cavium ThunderX SoC family.

Implemented features and ThunderX nicvf PMD documentation added
in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
respectively in this patch set.

These patches are checked using checkpatch.sh with following
additional ignore option:
    options="$options --ignore=CAMELCASE,BRACKET_SPACE"
CAMELCASE - To accommodate PRIx64
BRACKET_SPACE - To accommodate AT&T inline assembly in two places

This patch set is based on DPDK 16.07-RC1
and tested with git HEAD change-set
ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
following depended patch

http://dpdk.org/dev/patchwork/patch/11826/
ethdev: add tunnel and port RSS offload types

V1->V2

http://dpdk.org/dev/patchwork/patch/12609/
-- added const for the const struct tables
-- remove multiple blank lines
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12610/
-- removed DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
-- add const for table structs
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12614/
-- s/DEFAULT_*/NICVF_DEFAULT_*/gc
http://dpdk.org/dev/patchwork/patch/12615/
-- Fix typos
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12616/
-- removed redundant txq->tail = 0 and txq->head = 0
http://dpdk.org/dev/patchwork/patch/12627/
-- fixed the documentation changes

-- fixed TAB+space occurrences in functions
-- rebased to c8c33ad7f94c59d1c0676af0cfd61207b3e808db

V2->V3

http://dpdk.org/dev/patchwork/patch/13060/
-- Changed polling infrastructure to use rte_eal_alarm* instead of the
   timerfd_create API (see the sketch below)
-- rebased to ca173a909538a2f1082cd0dcb4d778a97dab69c3
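
A minimal sketch of the rte_eal_alarm-based polling pattern (illustrative
only; the 50us interval and the callback body here are assumptions, not
the driver's actual code):

    #include <rte_alarm.h>

    #define NICVF_INTR_POLL_INTERVAL_US 50  /* assumed interval */

    static void
    nicvf_interrupt(void *arg)
    {
            struct nicvf *nic = arg;

            /* Service pending mbox/qset-error interrupts, then re-arm */
            nicvf_reg_poll_interrupts(nic);
            rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_US,
                              nicvf_interrupt, nic);
    }

    /* Armed once at device start; torn down at stop with
     * rte_eal_alarm_cancel(nicvf_interrupt, nic). */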

V3->V4

Addressed Ferruh's review comments:

http://dpdk.org/dev/patchwork/patch/13314/
-- s/avilable/available
http://dpdk.org/dev/patchwork/patch/13323/
-- s/witout/without

http://dpdk.org/dev/patchwork/patch/13318/
-- s/nicvf_free_xmittted_buffers/nicvf_free_xmitted_buffers
-- fix checkpatch errors
http://dpdk.org/dev/patchwork/patch/13307/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13308/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13320/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13321/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13322/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13324/
-- addressed review comments and created a separate patch for the
platform-specific config change

-- update change log to net/thunderx: ........

Jerin Jacob (19):
  net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
  net/thunderx: add pmd skeleton
  net/thunderx: add link status and link update support
  net/thunderx: add get_reg and get_reg_length support
  net/thunderx: add dev_configure support
  net/thunderx: add dev_infos_get support
  net/thunderx: add rx_queue_setup/release support
  net/thunderx: add tx_queue_setup/release support
  net/thunderx: add rss and reta query and update support
  net/thunderx: add mtu_set and promiscuous_enable support
  net/thunderx: add stats support
  net/thunderx: add single and multi segment tx functions
  net/thunderx: add single and multi segment rx functions
  net/thunderx: add dev_supported_ptypes_get and rx_queue_count support
  net/thunderx: add rx queue start and stop support
  net/thunderx: add tx queue start and stop support
  net/thunderx: add device start, stop and close support
  net/thunderx: updated driver documentation and release notes
  maintainers: claim responsibility for the ThunderX nicvf PMD

 MAINTAINERS                                        |    6 +
 config/common_base                                 |   10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |   10 +
 doc/guides/nics/index.rst                          |    1 +
 doc/guides/nics/overview.rst                       |   96 +-
 doc/guides/nics/thunderx.rst                       |  354 ++++
 doc/guides/rel_notes/release_16_07.rst             |    1 +
 drivers/net/Makefile                               |    1 +
 drivers/net/thunderx/Makefile                      |   65 +
 drivers/net/thunderx/base/nicvf_hw.c               |  905 ++++++++++
 drivers/net/thunderx/base/nicvf_hw.h               |  240 +++
 drivers/net/thunderx/base/nicvf_hw_defs.h          | 1219 +++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c             |  418 +++++
 drivers/net/thunderx/base/nicvf_mbox.h             |  232 +++
 drivers/net/thunderx/base/nicvf_plat.h             |  132 ++
 drivers/net/thunderx/nicvf_ethdev.c                | 1789 ++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  106 ++
 drivers/net/thunderx/nicvf_logs.h                  |   83 +
 drivers/net/thunderx/nicvf_rxtx.c                  |  599 +++++++
 drivers/net/thunderx/nicvf_rxtx.h                  |  101 ++
 drivers/net/thunderx/nicvf_struct.h                |  124 ++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |    4 +
 mk/rte.app.mk                                      |    2 +
 23 files changed, 6450 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

-- 
2.5.5

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 15:09           ` Bruce Richardson
  2016-06-13 13:55         ` [PATCH v4 02/19] net/thunderx: add pmd skeleton Jerin Jacob
                           ` (19 subsequent siblings)
  20 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Adds the hardware-specific API for the ThunderX nicvf inbuilt NIC device
under the drivers/net/thunderx/base directory.
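
A minimal usage sketch of this layer (illustrative only: error handling is
trimmed, nb_rxq is a placeholder, and the struct nicvf fields are assumed
to be populated by the PMD glue added in later patches):

    struct nicvf *nic = ...;  /* reg_base, vf_id, subsystem_device_id set */

    if (nicvf_base_init(nic) != NICVF_OK)
            return -1;  /* unrecognized subsystem device id */
    if (nicvf_qset_config(nic))  /* ask PF to assign and enable our Qset */
            return -1;
    if (nicvf_rss_config(nic, nb_rxq, RSS_IP_ENA | RSS_TCP_ENA))
            return -1;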

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c      |  905 +++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h      |  240 ++++++
 drivers/net/thunderx/base/nicvf_hw_defs.h | 1219 +++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c    |  418 ++++++++++
 drivers/net/thunderx/base/nicvf_mbox.h    |  232 ++++++
 drivers/net/thunderx/base/nicvf_plat.h    |  132 ++++
 6 files changed, 3146 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
new file mode 100644
index 0000000..001b0ed
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -0,0 +1,905 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <math.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#include "nicvf_plat.h"
+
+struct nicvf_reg_info {
+	uint32_t offset;
+	const char *name;
+};
+
+#define NICVF_REG_POLL_ITER_NR   (10)
+#define NICVF_REG_POLL_DELAY_US  (2000)
+#define NICVF_REG_INFO(reg) {reg, #reg}
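+/* NICVF_REG_INFO stringifies its argument: NICVF_REG_INFO(NIC_VF_CFG)
+ * expands to {NIC_VF_CFG, "NIC_VF_CFG"}, naming each register in dumps.
+ */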
+
+static const struct nicvf_reg_info nicvf_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_VF_CFG),
+	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
+	NICVF_REG_INFO(NIC_VF_INT),
+	NICVF_REG_INFO(NIC_VF_INT_W1S),
+	NICVF_REG_INFO(NIC_VF_ENA_W1C),
+	NICVF_REG_INFO(NIC_VF_ENA_W1S),
+	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
+	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
+};
+
+static const struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
+	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
+	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
+	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
+	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
+	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
+	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
+	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 16, "NIC_VNIC_STAT_TX_BCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 24, "NIC_VNIC_STAT_TX_MCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 32, "NIC_VNIC_STAT_TX_DROP"},
+	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
+	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
+	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
+	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
+	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
+};
+
+static const struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
+};
+
+int
+nicvf_base_init(struct nicvf *nic)
+{
+	nic->hwcap = 0;
+	if (nic->subsystem_device_id == 0)
+		return NICVF_ERR_BASE_INIT;
+
+	if (nicvf_hw_version(nic) == NICVF_PASS2)
+		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
+
+	return NICVF_OK;
+}
+
+/* dump on stdout if data is NULL */
+int
+nicvf_reg_dump(struct nicvf *nic, uint64_t *data)
+{
+	uint32_t i, q;
+	bool dump_stdout;
+
+	dump_stdout = data ? 0 : 1;
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_reg_tbl[i].name,
+				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_multi_reg_tbl[i].name,
+				nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset);
+
+	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_cq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_sq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rbdr_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q);
+	return 0;
+}
+
+int
+nicvf_reg_get_count(void)
+{
+	int nr_regs;
+
+	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
+			MAX_CMP_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
+			MAX_RCV_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
+			MAX_SND_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
+			MAX_RCV_BUF_DESC_RINGS_PER_QS;
+
+	return nr_regs;
+}
+
+static int
+nicvf_qset_config_internal(struct nicvf *nic, bool enable)
+{
+	int ret;
+	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
+
+	pf_qs_cfg.ena = enable ? 1 : 0;
+	pf_qs_cfg.vnic = nic->vf_id;
+	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
+	return ret ? NICVF_ERR_SET_QS : 0;
+}
+
+/* Requests PF to assign and enable Qset */
+int
+nicvf_qset_config(struct nicvf *nic)
+{
+	/* Enable Qset */
+	return nicvf_qset_config_internal(nic, true);
+}
+
+int
+nicvf_qset_reclaim(struct nicvf *nic)
+{
+	/* Disable Qset */
+	return nicvf_qset_config_internal(nic, false);
+}
+
+static int
+cmpfunc(const void *a, const void *b)
+{
+	const uint32_t x = *(const uint32_t *)a;
+	const uint32_t y = *(const uint32_t *)b;
+
+	/* Compare without relying on unsigned-subtraction wrap-around */
+	return (x > y) - (x < y);
+}
+
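+/*
+ * Round val up to the nearest entry of list[] (sorted ascending first).
+ * Example: with the send queue size list, a request of 5000 rounds up to
+ * SND_QUEUE_SZ_8K; a request beyond the largest entry yields 0.
+ */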
+static uint32_t
+nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
+{
+	uint32_t i;
+
+	qsort(list, entries, sizeof(uint32_t), cmpfunc);
+	for (i = 0; i < entries; i++)
+		if (val <= list[i])
+			break;
+	/* Not in the list */
+	if (i >= entries)
+		return 0;
+	else
+		return list[i];
+}
+
+static void
+nicvf_handle_qset_err_intr(struct nicvf *nic)
+{
+	uint16_t qidx;
+	uint64_t status;
+
+	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
+	nicvf_reg_dump(nic, NULL);
+
+	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_CQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_CQ_WR_FULL)
+			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
+		if (status & NICVF_CQ_WR_DISABLE)
+			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
+		if (status & NICVF_CQ_WR_FAULT)
+			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_SQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_SQ_ERR_STOPPED)
+			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
+		if (status & NICVF_SQ_ERR_SEND)
+			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
+		if (status & NICVF_SQ_ERR_DPE)
+			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_STATUS0, qidx);
+		status &= NICVF_RBDR_FIFO_STATE_MASK;
+		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
+
+		if (status == RBDR_FIFO_STATE_FAIL)
+			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
+	}
+
+	nicvf_disable_all_interrupts(nic);
+	abort();
+}
+
+/*
+ * Handle poll mode driver interested "mbox" and "queue-set error" interrupts.
+ * This function is not re-entrant.
+ * The caller should provide proper serialization.
+ */
+int
+nicvf_reg_poll_interrupts(struct nicvf *nic)
+{
+	int msg = 0;
+	uint64_t intr;
+
+	intr = nicvf_reg_read(nic, NIC_VF_INT);
+	if (intr & NICVF_INTR_MBOX_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
+		msg = nicvf_handle_mbx_intr(nic);
+	}
+	if (intr & NICVF_INTR_QS_ERR_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
+		nicvf_handle_qset_err_intr(nic);
+	}
+	return msg;
+}
+
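+/*
+ * Poll the field at [bit_pos, bit_pos + bits) of a queue register until it
+ * reads val; gives up after NICVF_REG_POLL_ITER_NR tries, i.e. at most
+ * 10 * 2000us = 20ms, returning NICVF_ERR_REG_POLL.
+ */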
+static int
+nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
+		    uint32_t bit_pos, uint32_t bits, uint64_t val)
+{
+	uint64_t bit_mask;
+	uint64_t reg_val;
+	int timeout = NICVF_REG_POLL_ITER_NR;
+
+	bit_mask = (1ULL << bits) - 1;
+	bit_mask = (bit_mask << bit_pos);
+
+	while (timeout) {
+		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
+		if (((reg_val & bit_mask) >> bit_pos) == val)
+			return NICVF_OK;
+		nicvf_delay_us(NICVF_REG_POLL_DELAY_US);
+		timeout--;
+	}
+	return NICVF_ERR_REG_POLL;
+}
+
+int
+nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t status;
+	int timeout = NICVF_REG_POLL_ITER_NR;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	/* Save head and tail pointers for freeing up buffers */
+	if (rbdr) {
+		rbdr->head = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_HEAD, qidx) >> 3;
+		rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL, qidx) >> 3;
+		rbdr->next_tail = rbdr->tail;
+	}
+
+	/* Reset RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+				NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+				62, 2, 0x00))
+		return NICVF_ERR_RBDR_DISABLE;
+
+	while (1) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_PRFCH_STATUS, qidx);
+		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
+			break;
+		nicvf_delay_us(NICVF_REG_POLL_DELAY_US);
+		timeout--;
+		if (!timeout)
+			return NICVF_ERR_RBDR_PREFETCH;
+	}
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			NICVF_RBDR_RESET);
+	if (nicvf_qset_poll_reg(nic, qidx,
+			NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return NICVF_ERR_RBDR_RESET1;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_qset_poll_reg(nic, qidx,
+			NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return NICVF_ERR_RBDR_RESET2;
+
+	return NICVF_OK;
+}
+
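+/*
+ * Map a power-of-two queue length to its hardware qsize encoding, e.g. a
+ * 64K-entry send queue: log2(65536) - SND_QSIZE_SHIFT(10) = 6, which is
+ * NICVF_QSIZE_MAX_VAL.
+ */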
+static int
+nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
+{
+	int val;
+
+	val = ((uint32_t)log2(len) - len_shift);
+	assert(val >= NICVF_QSIZE_MIN_VAL);
+	assert(val <= NICVF_QSIZE_MAX_VAL);
+	return val;
+}
+
+int
+nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
+{
+	int ret;
+	uint64_t head, tail;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	struct rbdr_cfg rbdr_cfg = {.value = 0};
+
+	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set descriptor base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
+
+	/* Enable RBDR  & set queue size */
+	rbdr_cfg.ena = 1;
+	rbdr_cfg.reset = 0;
+	rbdr_cfg.ldwb = 0;
+	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
+						RBDR_SIZE_SHIFT);
+	rbdr_cfg.avg_con = 0;
+	rbdr_cfg.lines = rbdr->buffsz / 128;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
+
+	/* Verify proper RBDR reset */
+	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
+
+	if (head | tail)
+		return NICVF_ERR_RBDR_RESET;
+
+	return NICVF_OK;
+}
+
+uint32_t
+nicvf_qsize_rbdr_roundup(uint32_t val)
+{
+	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
+			RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
+			RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
+			RBDR_QUEUE_SZ_512K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			  rbdr_pool_get_handler handler,
+			  void *opaque, uint32_t max_buffs)
+{
+	struct rbdr_entry_t *desc, *desc0;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	uint32_t count;
+	nicvf_phys_addr_t phy;
+
+	assert(rbdr != NULL);
+	desc = rbdr->desc;
+	count = 0;
+	/* Don't fill beyond max numbers of desc */
+	while (count < rbdr->qlen_mask) {
+		if (count >= max_buffs)
+			break;
+		desc0 = desc + count;
+		phy = handler(opaque);
+		if (phy) {
+			desc0->full_addr = phy;
+			count++;
+		} else {
+			break;
+		}
+	}
+	nicvf_smp_wmb();
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
+	rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
+	rbdr->next_tail = rbdr->tail;
+	nicvf_smp_rmb();
+	return 0;
+}
+
+int
+nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
+{
+	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+}
+
+int
+nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t head, tail;
+	struct sq_cfg sq_cfg;
+
+	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
+
+	/* Disable send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
+
+	/* Check if SQ is stopped */
+	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
+				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
+		return NICVF_ERR_SQ_DISABLE;
+
+	/* Reset send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
+	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
+	if (head | tail)
+		return NICVF_ERR_SQ_RESET;
+
+	return 0;
+}
+
+int
+nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
+{
+	int ret;
+	struct sq_cfg sq_cfg = {.value = 0};
+
+	ret = nicvf_qset_sq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Send a mailbox msg to PF to config SQ */
+	if (nicvf_mbox_sq_config(nic, qidx))
+		return NICVF_ERR_SQ_PF_CFG;
+
+	/* Set queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
+
+	/* Enable send queue  & set queue size */
+	sq_cfg.ena = 1;
+	sq_cfg.reset = 0;
+	sq_cfg.ldwb = 0;
+	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1, SND_QSIZE_SHIFT);
+	sq_cfg.tstmp_bgx_intf = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
+
+	/* Ring doorbell so that H/W restarts processing SQEs */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
+
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_sq_roundup(uint32_t val)
+{
+	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
+			SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
+			SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
+			SND_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
+	return nicvf_mbox_rq_sync(nic);
+}
+
+int
+nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
+	struct rq_cfg rq_cfg = {.value = 0};
+
+	if (nicvf_qset_rq_reclaim(nic, qidx))
+		return NICVF_ERR_RQ_CLAIM;
+
+	pf_rq_cfg.strip_pre_l2 = 0;
+	/* First cache line of RBDR data will be allocated into L2C */
+	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
+	pf_rq_cfg.cq_qs = nic->vf_id;
+	pf_rq_cfg.cq_idx = qidx;
+	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_cont_idx = 0;
+	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_strt_idx = 0;
+
+	/* Send a mailbox msg to PF to config RQ */
+	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
+		return NICVF_ERR_RQ_PF_CFG;
+
+	/* Select Rx backpressure */
+	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_BP_CFG;
+
+	/* Send a mailbox msg to PF to config RQ drop */
+	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_DROP_CFG;
+
+	/* Enable Receive queue */
+	rq_cfg.ena = 1;
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
+
+	return 0;
+}
+
+int
+nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t tail, head;
+
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
+		return NICVF_ERR_CQ_DISABLE;
+
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
+	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
+	if (head | tail)
+		return NICVF_ERR_CQ_RESET;
+
+	/* Disable timer threshold (doesn't get reset upon CQ reset) */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+int
+nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	int ret;
+	struct cq_cfg cq_cfg = {.value = 0};
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set completion queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
+
+	cq_cfg.ena = 1;
+	cq_cfg.reset = 0;
+	/* Writes of CQE will be allocated into L2C */
+	cq_cfg.caching = 1;
+	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
+	cq_cfg.avg_con = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
+
+	/* Set threshold value for interrupt generation */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_cq_roundup(uint32_t val)
+{
+	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
+			CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
+			CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
+			CMP_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+void
+nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
+{
+	uint64_t val;
+
+	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
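+	/* Bits 26:25 of RQ_GEN_CFG hold the strip mode (enum vlan_strip),
+	 * hence the shift by 25 below.
+	 */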
+	if (enable)
+		val |= (STRIP_FIRST_VLAN << 25);
+	else
+		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
+
+	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
+}
+
+void
+nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_cpu_to_be_64(*keyptr);
+		nicvf_reg_write(nic, addr, val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_reg_read(nic, addr);
+		*keyptr = nicvf_be_to_cpu_64(val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
+{
+	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
+}
+
+uint64_t
+nicvf_rss_get_cfg(struct nicvf *nic)
+{
+	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
+}
+
+int
+nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		rss->ind_tbl[idx] = tbl[idx];
+
+	if (nicvf_mbox_config_rss(nic))
+		return NICVF_ERR_RSS_TBL_UPDATE;
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		tbl[idx] = rss->ind_tbl[idx];
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg)
+{
+	uint32_t idx;
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+	};
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	if (cfg == 0)
+		return -EINVAL;
+
+	/* Update default RSS key and cfg */
+	nicvf_rss_set_key(nic, default_key);
+	nicvf_rss_set_cfg(nic, cfg);
+
+	/* Update default RSS RETA */
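+	/* e.g. with qcnt == 4 the 256-entry table repeats 0,1,2,3,0,1,... */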
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = idx % qcnt;
+
+	return nicvf_rss_reta_update(nic, default_reta,
+			NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_rss_term(struct nicvf *nic)
+{
+	uint32_t idx;
+	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	nicvf_rss_set_cfg(nic, 0);
+	/* Redirect the output to 0th queue  */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		disable_rss[idx] = 0;
+
+	return nicvf_rss_reta_update(nic, disable_rss,
+			NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_loopback_config(struct nicvf *nic, bool enable)
+{
+	if (enable && nic->loopback_supported == 0)
+		return NICVF_ERR_LOOPBACK_CFG;
+
+	return nicvf_mbox_loopback_config(nic, enable);
+}
+
+void
+nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
+{
+	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
+	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
+	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
+	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
+	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
+	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
+	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
+	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
+	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
+	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
+	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
+	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
+	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
+	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
+
+	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
+	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
+	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
+	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
+	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
+}
+
+void
+nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_rx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
+	qstats->q_rx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
+}
+
+void
+nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_tx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
+	qstats->q_tx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
new file mode 100644
index 0000000..b9ba3f4
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -0,0 +1,240 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_H
+#define _THUNDERX_NICVF_HW_H
+
+#include <stdint.h>
+
+#include "nicvf_hw_defs.h"
+
+#define	PCI_VENDOR_ID_CAVIUM			0x177D
+#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
+#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
+
+#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define NICVF_GET_RX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | (reg << 3))
+#define NICVF_GET_TX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | (reg << 3))
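+/* Stat registers are 8 bytes apart, so (reg << 3) turns the enum index
+ * into a byte offset: e.g. RX_UCAST (1) reads NIC_VNIC_RX_STAT_0_13 + 8.
+ */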
+
+#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
+#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
+
+#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
+
+enum nicvf_tns_mode {
+	NIC_TNS_BYPASS_MODE,
+	NIC_TNS_MODE,
+};
+
+enum nicvf_err_e {
+	NICVF_OK,
+	NICVF_ERR_SET_QS = -8191, /* -8191 */
+	NICVF_ERR_RESET_QS,      /* -8190 */
+	NICVF_ERR_REG_POLL,      /* -8189 */
+	NICVF_ERR_RBDR_RESET,    /* -8188 */
+	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
+	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
+	NICVF_ERR_RBDR_RESET1,   /* -8185 */
+	NICVF_ERR_RBDR_RESET2,   /* -8184 */
+	NICVF_ERR_RQ_CLAIM,      /* -8183 */
+	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
+	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
+	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
+	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
+	NICVF_ERR_CQ_RESET,	 /* -8178 */
+	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
+	NICVF_ERR_SQ_RESET,	 /* -8176 */
+	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
+	NICVF_ERR_RSS_TBL_UPDATE,/* -8174 */
+	NICVF_ERR_RSS_GET_SZ,    /* -8173 */
+	NICVF_ERR_BASE_INIT,     /* -8172 */
+	NICVF_ERR_LOOPBACK_CFG,  /* -8171 */
+};
+
+typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
+
+struct nicvf_hw_rx_qstats {
+	uint64_t q_rx_bytes;
+	uint64_t q_rx_packets;
+};
+
+struct nicvf_hw_tx_qstats {
+	uint64_t q_tx_bytes;
+	uint64_t q_tx_packets;
+};
+
+struct nicvf_hw_stats {
+	uint64_t rx_bytes;
+	uint64_t rx_ucast_frames;
+	uint64_t rx_bcast_frames;
+	uint64_t rx_mcast_frames;
+	uint64_t rx_fcs_errors;
+	uint64_t rx_l2_errors;
+	uint64_t rx_drop_red;
+	uint64_t rx_drop_red_bytes;
+	uint64_t rx_drop_overrun;
+	uint64_t rx_drop_overrun_bytes;
+	uint64_t rx_drop_bcast;
+	uint64_t rx_drop_mcast;
+	uint64_t rx_drop_l3_bcast;
+	uint64_t rx_drop_l3_mcast;
+
+	uint64_t tx_bytes_ok;
+	uint64_t tx_ucast_frames_ok;
+	uint64_t tx_bcast_frames_ok;
+	uint64_t tx_mcast_frames_ok;
+	uint64_t tx_drops;
+};
+
+struct nicvf_rss_reta_info {
+	uint8_t hash_bits;
+	uint16_t rss_size;
+	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+};
+
+/* Common structs used in DPDK and base layer are defined in DPDK layer */
+#include "../nicvf_struct.h"
+
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
+
+static inline void
+nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+{
+	nicvf_addr_write(nic->reg_base + offset, val);
+}
+
+static inline uint64_t
+nicvf_reg_read(struct nicvf *nic, uint32_t offset)
+{
+	return nicvf_addr_read(nic->reg_base + offset);
+}
+
+static inline uintptr_t
+nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
+{
+	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
+}
+
+static inline void
+nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
+		      uint64_t val)
+{
+	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
+}
+
+static inline uint64_t
+nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
+{
+	return	nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
+}
+
+static inline void
+nicvf_disable_all_interrupts(struct nicvf *nic)
+{
+	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
+	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
+}
+
+static inline uint32_t
+nicvf_hw_version(struct nicvf *nic)
+{
+	return nic->subsystem_device_id;
+}
+
+static inline uint64_t
+nicvf_hw_cap(struct nicvf *nic)
+{
+	return nic->hwcap;
+}
+
+int nicvf_base_init(struct nicvf *nic);
+
+int nicvf_reg_get_count(void);
+int nicvf_reg_poll_interrupts(struct nicvf *nic);
+int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
+
+int nicvf_qset_config(struct nicvf *nic);
+int nicvf_qset_reclaim(struct nicvf *nic);
+
+int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			      rbdr_pool_get_handler handler, void *opaque,
+			      uint32_t max_buffs);
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_txq *txq);
+int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
+uint32_t nicvf_qsize_cq_roundup(uint32_t val);
+uint32_t nicvf_qsize_sq_roundup(uint32_t val);
+
+void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
+
+int nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg);
+int nicvf_rss_term(struct nicvf *nic);
+
+int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+
+void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
+void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
+
+void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
+uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
+
+int nicvf_loopback_config(struct nicvf *nic, bool enable);
+
+void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
+void nicvf_hw_get_rx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
+void nicvf_hw_get_tx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
+
+#endif /* _THUNDERX_NICVF_HW_H */
diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
new file mode 100644
index 0000000..7d68b2b
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -0,0 +1,1219 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_DEFS_H
+#define _THUNDERX_NICVF_HW_DEFS_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+/* Virtual function register offsets */
+
+#define NIC_VF_CFG                      (0x000020)
+#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
+#define NIC_VF_INT                      (0x000200)
+#define NIC_VF_INT_W1S                  (0x000220)
+#define NIC_VF_ENA_W1C                  (0x000240)
+#define NIC_VF_ENA_W1S                  (0x000260)
+
+#define NIC_VNIC_RSS_CFG                (0x0020E0)
+#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
+#define NIC_VNIC_TX_STAT_0_4            (0x004000)
+#define NIC_VNIC_RX_STAT_0_13           (0x004100)
+#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
+
+#define NIC_QSET_CQ_0_7_CFG             (0x010400)
+#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
+#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
+#define NIC_QSET_CQ_0_7_BASE            (0x010420)
+#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
+#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
+#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
+#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
+#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
+#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
+
+#define NIC_QSET_RQ_0_7_CFG             (0x010600)
+#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
+#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
+
+#define NIC_QSET_SQ_0_7_CFG             (0x010800)
+#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
+#define NIC_QSET_SQ_0_7_BASE            (0x010820)
+#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
+#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
+#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
+#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
+#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
+#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
+#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
+
+#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
+#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
+#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
+#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
+#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
+#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
+#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
+#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
+#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
+
+/* vNIC HW Constants */
+
+#define NIC_Q_NUM_SHIFT                 18
+
+#define MAX_QUEUE_SET                   128
+#define MAX_RCV_QUEUES_PER_QS           8
+#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
+#define MAX_SND_QUEUES_PER_QS           8
+#define MAX_CMP_QUEUES_PER_QS           8
+
+#define NICVF_INTR_CQ_SHIFT             0
+#define NICVF_INTR_SQ_SHIFT             8
+#define NICVF_INTR_RBDR_SHIFT           16
+#define NICVF_INTR_PKT_DROP_SHIFT       20
+#define NICVF_INTR_TCP_TIMER_SHIFT      21
+#define NICVF_INTR_MBOX_SHIFT           22
+#define NICVF_INTR_QS_ERR_SHIFT         23
+
+#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
+#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
+
+#define NICVF_CQ_WR_FULL                (1ULL << 26)
+#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
+#define NICVF_CQ_WR_FAULT               (1ULL << 24)
+#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
+					 NICVF_CQ_WR_DISABLE |\
+					 NICVF_CQ_WR_FAULT)
+#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
+
+#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
+#define NICVF_SQ_ERR_SEND               (1ULL << 20)
+#define NICVF_SQ_ERR_DPE                (1ULL << 19)
+#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
+					 NICVF_SQ_ERR_SEND |\
+					 NICVF_SQ_ERR_DPE)
+#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
+
+#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
+#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
+#define NICVF_RBDR_COUNT_MASK           (0x7FFFF)
+
+/* Queue reset */
+#define NICVF_CQ_RESET                  (1ULL << 41)
+#define NICVF_SQ_RESET                  (1ULL << 17)
+#define NICVF_RBDR_RESET                (1ULL << 43)
+
+/* RSS constants */
+#define NIC_MAX_RSS_HASH_BITS           (8)
+#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key */
+#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key */
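+/* RSS_HASH_KEY_SIZE counts 64-bit key words; RSS_HASH_KEY_BYTE_SIZE is the
+ * same key in bytes: 5 words * 8 bytes = 40 bytes = 320 bits.
+ */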
+
+#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
+#define RSS_IP_ENA                      (1 << 1)
+#define RSS_TCP_ENA                     (1 << 2)
+#define RSS_TCP_SYN_ENA                 (1 << 3)
+#define RSS_UDP_ENA                     (1 << 4)
+#define RSS_L4_EXTENDED_ENA             (1 << 5)
+#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
+#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
+#define RSS_TUN_VXLAN_ENA               (1 << 9)
+#define RSS_TUN_GENEVE_ENA              (1 << 10)
+#define RSS_TUN_NVGRE_ENA               (1 << 11)
+
+#define RBDR_QUEUE_SZ_8K                (8 * 1024)
+#define RBDR_QUEUE_SZ_16K               (16 * 1024)
+#define RBDR_QUEUE_SZ_32K               (32 * 1024)
+#define RBDR_QUEUE_SZ_64K               (64 * 1024)
+#define RBDR_QUEUE_SZ_128K              (128 * 1024)
+#define RBDR_QUEUE_SZ_256K              (256 * 1024)
+#define RBDR_QUEUE_SZ_512K              (512 * 1024)
+
+#define RBDR_SIZE_SHIFT                 (13) /* 8k */
+
+#define SND_QUEUE_SZ_1K                 (1 * 1024)
+#define SND_QUEUE_SZ_2K                 (2 * 1024)
+#define SND_QUEUE_SZ_4K                 (4 * 1024)
+#define SND_QUEUE_SZ_8K                 (8 * 1024)
+#define SND_QUEUE_SZ_16K                (16 * 1024)
+#define SND_QUEUE_SZ_32K                (32 * 1024)
+#define SND_QUEUE_SZ_64K                (64 * 1024)
+
+#define SND_QSIZE_SHIFT                 (10) /* 1k */
+
+#define CMP_QUEUE_SZ_1K                 (1 * 1024)
+#define CMP_QUEUE_SZ_2K                 (2 * 1024)
+#define CMP_QUEUE_SZ_4K                 (4 * 1024)
+#define CMP_QUEUE_SZ_8K                 (8 * 1024)
+#define CMP_QUEUE_SZ_16K                (16 * 1024)
+#define CMP_QUEUE_SZ_32K                (32 * 1024)
+#define CMP_QUEUE_SZ_64K                (64 * 1024)
+
+#define CMP_QSIZE_SHIFT                 (10) /* 1k */
+
+#define NICVF_QSIZE_MIN_VAL		(0)
+#define NICVF_QSIZE_MAX_VAL		(6)
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS			64
+#define NIC_HW_MAX_FRS			9200 /* 9216 max packet including FCS */
+#define NIC_HW_MAX_SEGS			12
+
+/* Descriptor alignments */
+#define NICVF_RBDR_BASE_ALIGN_BYTES	128 /* 7 bits */
+#define NICVF_CQ_BASE_ALIGN_BYTES	512 /* 9 bits */
+#define NICVF_SQ_BASE_ALIGN_BYTES	128 /* 7 bits */
+
+/* vNIC HW Enumerations */
+
+enum nic_send_ld_type_e {
+	NIC_SEND_LD_TYPE_E_LDD,
+	NIC_SEND_LD_TYPE_E_LDT,
+	NIC_SEND_LD_TYPE_E_LDWB,
+	NIC_SEND_LD_TYPE_E_ENUM_LAST,
+};
+
+enum ether_type_algorithm {
+	ETYPE_ALG_NONE,
+	ETYPE_ALG_SKIP,
+	ETYPE_ALG_ENDPARSE,
+	ETYPE_ALG_VLAN,
+	ETYPE_ALG_VLAN_STRIP,
+};
+
+enum layer3_type {
+	L3TYPE_NONE,
+	L3TYPE_GRH,
+	L3TYPE_IPV4 = 0x4,
+	L3TYPE_IPV4_OPTIONS = 0x5,
+	L3TYPE_IPV6 = 0x6,
+	L3TYPE_IPV6_OPTIONS = 0x7,
+	L3TYPE_ET_STOP = 0xD,
+	L3TYPE_OTHER = 0xE,
+};
+
+#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
+#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
+
+enum layer4_type {
+	L4TYPE_NONE,
+	L4TYPE_IPSEC_ESP,
+	L4TYPE_IPFRAG,
+	L4TYPE_IPCOMP,
+	L4TYPE_TCP,
+	L4TYPE_UDP,
+	L4TYPE_SCTP,
+	L4TYPE_GRE,
+	L4TYPE_ROCE_BTH,
+	L4TYPE_OTHER = 0xE,
+};
+
+/* CPI and RSSI configuration */
+enum cpi_algorithm_type {
+	CPI_ALG_NONE,
+	CPI_ALG_VLAN,
+	CPI_ALG_VLAN16,
+	CPI_ALG_DIFF,
+};
+
+enum rss_algorithm_type {
+	RSS_ALG_NONE,
+	RSS_ALG_PORT,
+	RSS_ALG_IP,
+	RSS_ALG_TCP_IP,
+	RSS_ALG_UDP_IP,
+	RSS_ALG_SCTP_IP,
+	RSS_ALG_GRE_IP,
+	RSS_ALG_ROCE,
+};
+
+enum rss_hash_cfg {
+	RSS_HASH_L2ETC,
+	RSS_HASH_IP,
+	RSS_HASH_TCP,
+	RSS_HASH_TCP_SYN_DIS,
+	RSS_HASH_UDP,
+	RSS_HASH_L4ETC,
+	RSS_HASH_ROCE,
+	RSS_L3_BIDI,
+	RSS_L4_BIDI,
+};
+
+/* Completion queue entry types */
+enum cqe_type {
+	CQE_TYPE_INVALID,
+	CQE_TYPE_RX = 0x2,
+	CQE_TYPE_RX_SPLIT = 0x3,
+	CQE_TYPE_RX_TCP = 0x4,
+	CQE_TYPE_SEND = 0x8,
+	CQE_TYPE_SEND_PTP = 0x9,
+};
+
+enum cqe_rx_tcp_status {
+	CQE_RX_STATUS_VALID_TCP_CNXT,
+	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
+};
+
+enum cqe_send_status {
+	CQE_SEND_STATUS_GOOD,
+	CQE_SEND_STATUS_DESC_FAULT = 0x01,
+	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
+	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
+	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
+	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
+	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
+	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
+	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
+	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
+	CQE_SEND_STATUS_DATA_FAULT = 0x86,
+	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
+	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
+	CQE_SEND_STATUS_MEM_FAULT = 0x89,
+	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
+	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
+};
+
+enum cqe_rx_tcp_end_reason {
+	CQE_RX_TCP_END_FIN_FLAG_DET,
+	CQE_RX_TCP_END_INVALID_FLAG,
+	CQE_RX_TCP_END_TIMEOUT,
+	CQE_RX_TCP_END_OUT_OF_SEQ,
+	CQE_RX_TCP_END_PKT_ERR,
+	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
+};
+
+/* Packet protocol level error enumeration */
+enum cqe_rx_err_level {
+	CQE_RX_ERRLVL_RE,
+	CQE_RX_ERRLVL_L2,
+	CQE_RX_ERRLVL_L3,
+	CQE_RX_ERRLVL_L4,
+};
+
+/* Packet protocol level error type enumeration */
+enum cqe_rx_err_opcode {
+	CQE_RX_ERR_RE_NONE,
+	CQE_RX_ERR_RE_PARTIAL,
+	CQE_RX_ERR_RE_JABBER,
+	CQE_RX_ERR_RE_FCS = 0x7,
+	CQE_RX_ERR_RE_TERMINATE = 0x9,
+	CQE_RX_ERR_RE_RX_CTL = 0xb,
+	CQE_RX_ERR_PREL2_ERR = 0x1f,
+	CQE_RX_ERR_L2_FRAGMENT = 0x20,
+	CQE_RX_ERR_L2_OVERRUN = 0x21,
+	CQE_RX_ERR_L2_PFCS = 0x22,
+	CQE_RX_ERR_L2_PUNY = 0x23,
+	CQE_RX_ERR_L2_MAL = 0x24,
+	CQE_RX_ERR_L2_OVERSIZE = 0x25,
+	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
+	CQE_RX_ERR_L2_LENMISM = 0x27,
+	CQE_RX_ERR_L2_PCLP = 0x28,
+	CQE_RX_ERR_IP_NOT = 0x41,
+	CQE_RX_ERR_IP_CHK = 0x42,
+	CQE_RX_ERR_IP_MAL = 0x43,
+	CQE_RX_ERR_IP_MALD = 0x44,
+	CQE_RX_ERR_IP_HOP = 0x45,
+	CQE_RX_ERR_L3_ICRC = 0x46,
+	CQE_RX_ERR_L3_PCLP = 0x47,
+	CQE_RX_ERR_L4_MAL = 0x61,
+	CQE_RX_ERR_L4_CHK = 0x62,
+	CQE_RX_ERR_UDP_LEN = 0x63,
+	CQE_RX_ERR_L4_PORT = 0x64,
+	CQE_RX_ERR_TCP_FLAG = 0x65,
+	CQE_RX_ERR_TCP_OFFSET = 0x66,
+	CQE_RX_ERR_L4_PCLP = 0x67,
+	CQE_RX_ERR_RBDR_TRUNC = 0x70,
+};
+
+enum send_l4_csum_type {
+	SEND_L4_CSUM_DISABLE,
+	SEND_L4_CSUM_UDP,
+	SEND_L4_CSUM_TCP,
+};
+
+enum send_crc_alg {
+	SEND_CRCALG_CRC32,
+	SEND_CRCALG_CRC32C,
+	SEND_CRCALG_ICRC,
+};
+
+enum send_load_type {
+	SEND_LD_TYPE_LDD,
+	SEND_LD_TYPE_LDT,
+	SEND_LD_TYPE_LDWB,
+};
+
+enum send_mem_alg_type {
+	SEND_MEMALG_SET,
+	SEND_MEMALG_ADD = 0x08,
+	SEND_MEMALG_SUB = 0x09,
+	SEND_MEMALG_ADDLEN = 0x0A,
+	SEND_MEMALG_SUBLEN = 0x0B,
+};
+
+enum send_mem_dsz_type {
+	SEND_MEMDSZ_B64,
+	SEND_MEMDSZ_B32,
+	SEND_MEMDSZ_B8 = 0x03,
+};
+
+enum sq_subdesc_type {
+	SQ_DESC_TYPE_INVALID,
+	SQ_DESC_TYPE_HEADER,
+	SQ_DESC_TYPE_CRC,
+	SQ_DESC_TYPE_IMMEDIATE,
+	SQ_DESC_TYPE_GATHER,
+	SQ_DESC_TYPE_MEMORY,
+};
+
+enum l3_type_t {
+	L3_NONE,
+	L3_IPV4		= 0x04,
+	L3_IPV4_OPT	= 0x05,
+	L3_IPV6		= 0x06,
+	L3_IPV6_OPT	= 0x07,
+	L3_ET_STOP	= 0x0D,
+	L3_OTHER	= 0x0E
+};
+
+enum l4_type_t {
+	L4_NONE,
+	L4_IPSEC_ESP	= 0x01,
+	L4_IPFRAG	= 0x02,
+	L4_IPCOMP	= 0x03,
+	L4_TCP		= 0x04,
+	L4_UDP_PASS1	= 0x05,
+	L4_GRE		= 0x07,
+	L4_UDP_PASS2	= 0x08,
+	L4_UDP_GENEVE	= 0x09,
+	L4_UDP_VXLAN	= 0x0A,
+	L4_NVGRE	= 0x0C,
+	L4_OTHER	= 0x0E
+};
+
+enum vlan_strip {
+	NO_STRIP,
+	STRIP_FIRST_VLAN,
+	STRIP_SECOND_VLAN,
+	STRIP_RESERV,
+};
+
+enum rbdr_state {
+	RBDR_FIFO_STATE_INACTIVE,
+	RBDR_FIFO_STATE_ACTIVE,
+	RBDR_FIFO_STATE_RESET,
+	RBDR_FIFO_STATE_FAIL,
+};
+
+enum rq_cache_allocation {
+	RQ_CACHE_ALLOC_OFF,
+	RQ_CACHE_ALLOC_ALL,
+	RQ_CACHE_ALLOC_FIRST,
+	RQ_CACHE_ALLOC_TWO,
+};
+
+enum cq_rx_errlvl_e {
+	CQ_ERRLVL_MAC,
+	CQ_ERRLVL_L2,
+	CQ_ERRLVL_L3,
+	CQ_ERRLVL_L4,
+};
+
+enum cq_rx_errop_e {
+	CQ_RX_ERROP_RE_NONE,
+	CQ_RX_ERROP_RE_PARTIAL = 0x1,
+	CQ_RX_ERROP_RE_JABBER = 0x2,
+	CQ_RX_ERROP_RE_FCS = 0x7,
+	CQ_RX_ERROP_RE_TERMINATE = 0x9,
+	CQ_RX_ERROP_RE_RX_CTL = 0xb,
+	CQ_RX_ERROP_PREL2_ERR = 0x1f,
+	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
+	CQ_RX_ERROP_L2_OVERRUN = 0x21,
+	CQ_RX_ERROP_L2_PFCS = 0x22,
+	CQ_RX_ERROP_L2_PUNY = 0x23,
+	CQ_RX_ERROP_L2_MAL = 0x24,
+	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
+	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
+	CQ_RX_ERROP_L2_LENMISM = 0x27,
+	CQ_RX_ERROP_L2_PCLP = 0x28,
+	CQ_RX_ERROP_IP_NOT = 0x41,
+	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
+	CQ_RX_ERROP_IP_MAL = 0x43,
+	CQ_RX_ERROP_IP_MALD = 0x44,
+	CQ_RX_ERROP_IP_HOP = 0x45,
+	CQ_RX_ERROP_L3_ICRC = 0x46,
+	CQ_RX_ERROP_L3_PCLP = 0x47,
+	CQ_RX_ERROP_L4_MAL = 0x61,
+	CQ_RX_ERROP_L4_CHK = 0x62,
+	CQ_RX_ERROP_UDP_LEN = 0x63,
+	CQ_RX_ERROP_L4_PORT = 0x64,
+	CQ_RX_ERROP_TCP_FLAG = 0x65,
+	CQ_RX_ERROP_TCP_OFFSET = 0x66,
+	CQ_RX_ERROP_L4_PCLP = 0x67,
+	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
+};
+
+enum cq_tx_errop_e {
+	CQ_TX_ERROP_GOOD,
+	CQ_TX_ERROP_DESC_FAULT = 0x10,
+	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
+	CQ_TX_ERROP_SUBDC_ERR = 0x12,
+	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
+	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
+	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
+	CQ_TX_ERROP_LOCK_VIOL = 0x83,
+	CQ_TX_ERROP_DATA_FAULT = 0x84,
+	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
+	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
+	CQ_TX_ERROP_MEM_FAULT = 0x87,
+	CQ_TX_ERROP_CK_OVERLAP = 0x88,
+	CQ_TX_ERROP_CK_OFLOW = 0x89,
+	CQ_TX_ERROP_ENUM_LAST = 0x8a,
+};
+
+enum rq_sq_stats_reg_offset {
+	RQ_SQ_STATS_OCTS,
+	RQ_SQ_STATS_PKTS,
+};
+
+enum nic_stat_vnic_rx_e {
+	RX_OCTS,
+	RX_UCAST,
+	RX_BCAST,
+	RX_MCAST,
+	RX_RED,
+	RX_RED_OCTS,
+	RX_ORUN,
+	RX_ORUN_OCTS,
+	RX_FCS,
+	RX_L2ERR,
+	RX_DRP_BCAST,
+	RX_DRP_MCAST,
+	RX_DRP_L3BCAST,
+	RX_DRP_L3MCAST,
+};
+
+enum nic_stat_vnic_tx_e {
+	TX_OCTS,
+	TX_UCAST,
+	TX_BCAST,
+	TX_MCAST,
+	TX_DROP,
+};
+
+#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
+
+typedef uint64_t nicvf_phys_addr_t;
+
+#ifndef __BYTE_ORDER__
+#error __BYTE_ORDER__ not defined
+#endif
+
+/* vNIC HW Structures */
+
+#define NICVF_CQE_RBPTR_WORD         6
+#define NICVF_CQE_RX2_RBPTR_WORD     7
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t cqe_type:4;
+		uint64_t stdn_fault:1;
+		uint64_t rsvd0:1;
+		uint64_t rq_qs:7;
+		uint64_t rq_idx:3;
+		uint64_t rsvd1:12;
+		uint64_t rss_alg:4;
+		uint64_t rsvd2:4;
+		uint64_t rb_cnt:4;
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t l4_type:4;
+		uint64_t l3_type:4;
+		uint64_t l2_present:1;
+		uint64_t err_level:3;
+		uint64_t err_opcode:8;
+#else
+		uint64_t err_opcode:8;
+		uint64_t err_level:3;
+		uint64_t l2_present:1;
+		uint64_t l3_type:4;
+		uint64_t l4_type:4;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+		uint64_t rb_cnt:4;
+		uint64_t rsvd2:4;
+		uint64_t rss_alg:4;
+		uint64_t rsvd1:12;
+		uint64_t rq_idx:3;
+		uint64_t rq_qs:7;
+		uint64_t rsvd0:1;
+		uint64_t stdn_fault:1;
+		uint64_t cqe_type:4;
+#endif
+	};
+} cqe_rx_word0_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t pkt_len:16;
+		uint64_t l2_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l4_ptr:8;
+		uint64_t cq_pkt_len:8;
+		uint64_t align_pad:3;
+		uint64_t rsvd3:1;
+		uint64_t chan:12;
+#else
+		uint64_t chan:12;
+		uint64_t rsvd3:1;
+		uint64_t align_pad:3;
+		uint64_t cq_pkt_len:8;
+		uint64_t l4_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l2_ptr:8;
+		uint64_t pkt_len:16;
+#endif
+	};
+} cqe_rx_word1_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t rss_tag:32;
+		uint64_t vlan_tci:16;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+#else
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan_tci:16;
+		uint64_t rss_tag:32;
+#endif
+	};
+} cqe_rx_word2_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb3_sz;
+		uint16_t rb2_sz;
+		uint16_t rb1_sz;
+		uint16_t rb0_sz;
+#else
+		uint16_t rb0_sz;
+		uint16_t rb1_sz;
+		uint16_t rb2_sz;
+		uint16_t rb3_sz;
+#endif
+	};
+} cqe_rx_word3_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb7_sz;
+		uint16_t rb6_sz;
+		uint16_t rb5_sz;
+		uint16_t rb4_sz;
+#else
+		uint16_t rb4_sz;
+		uint16_t rb5_sz;
+		uint16_t rb6_sz;
+		uint16_t rb7_sz;
+#endif
+	};
+} cqe_rx_word4_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb11_sz;
+		uint16_t rb10_sz;
+		uint16_t rb9_sz;
+		uint16_t rb8_sz;
+#else
+		uint16_t rb8_sz;
+		uint16_t rb9_sz;
+		uint16_t rb10_sz;
+		uint16_t rb11_sz;
+#endif
+	};
+} cqe_rx_word5_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t rsvd2:3;
+		uint64_t inner_l2:1;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l3type:4;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t rsvd1:8;
+		uint64_t rsvd0:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t inner_l4ptr:8;
+#else
+		uint64_t inner_l4ptr:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t rsvd0:8;
+		uint64_t rsvd1:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t inner_l3type:4;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l2:1;
+		uint64_t rsvd2:3;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+#endif
+	};
+} cqe_rx2_word6_t;
+
+struct cqe_rx_t {
+	cqe_rx_word0_t word0;
+	cqe_rx_word1_t word1;
+	cqe_rx_word2_t word2;
+	cqe_rx_word3_t word3;
+	cqe_rx_word4_t word4;
+	cqe_rx_word5_t word5;
+	cqe_rx2_word6_t word6; /* if NIC_PF_RX_CFG[CQE_RX2_ENA] set */
+};
+
+struct cqe_rx_tcp_err_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:60;
+
+	uint64_t   rsvd1:4; /* W1 */
+	uint64_t   partial_first:1;
+	uint64_t   rsvd2:27;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd3:24;
+#else
+	uint64_t   rsvd0:60;
+	uint64_t   cqe_type:4;
+
+	uint64_t   rsvd3:24;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd2:27;
+	uint64_t   partial_first:1;
+	uint64_t   rsvd1:4;
+#endif
+};
+
+struct cqe_rx_tcp_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:52;
+	uint64_t   cq_tcp_status:8;
+
+	uint64_t   rsvd1:32; /* W1 */
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_err_bytes:16;
+#else
+	uint64_t   cq_tcp_status:8;
+	uint64_t   rsvd0:52;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   tcp_err_bytes:16;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd1:32; /* W1 */
+#endif
+};
+
+struct cqe_send_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd1:4;
+	uint64_t   rsvd2:10;
+	uint64_t   sq_qs:7;
+	uint64_t   sq_idx:3;
+	uint64_t   rsvd3:8;
+	uint64_t   send_status:8;
+
+	uint64_t   ptp_timestamp:64; /* W1 */
+#else
+	uint64_t   send_status:8;
+	uint64_t   rsvd3:8;
+	uint64_t   sq_idx:3;
+	uint64_t   sq_qs:7;
+	uint64_t   rsvd2:10;
+	uint64_t   rsvd1:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd0:4;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   ptp_timestamp:64;
+#endif
+};
+
+struct cq_entry_type_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t cqe_type:4;
+	uint64_t __pad:60;
+#else
+	uint64_t __pad:60;
+	uint64_t cqe_type:4;
+#endif
+};
+
+union cq_entry_t {
+	uint64_t u[64];
+	struct cq_entry_type_t type;
+	struct cqe_rx_t rx_hdr;
+	struct cqe_rx_tcp_t rx_tcp_hdr;
+	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
+	struct cqe_send_t cqe_send;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
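+
+/*
+ * Illustrative RX completion parsing sketch; field use is inferred from
+ * the layouts above, and the real handling lives in the rx path added
+ * by later patches in this series:
+ *
+ *	union cq_entry_t *cqe = ...;               // one 512B entry
+ *	struct cqe_rx_t *rx = &cqe->rx_hdr;
+ *	if (rx->word0.cqe_type == CQE_TYPE_RX) {
+ *		uint16_t nbufs = rx->word0.rb_cnt; // up to 12 buffers
+ *		uint16_t sz0 = rx->word3.rb0_sz;   // first buffer size
+ *		// buffer pointers start at 64-bit word NICVF_CQE_RBPTR_WORD
+ *		// (NICVF_CQE_RX2_RBPTR_WORD when CQE_RX2 is enabled)
+ *		uint64_t *rbptr = &cqe->u[NICVF_CQE_RBPTR_WORD];
+ *	}
+ */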
+
+struct rbdr_entry_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	union {
+		struct {
+			uint64_t   rsvd0:15;
+			uint64_t   buf_addr:42;
+			uint64_t   cache_align:7;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#else
+	union {
+		struct {
+			uint64_t   cache_align:7;
+			uint64_t   buf_addr:42;
+			uint64_t   rsvd0:15;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#endif
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
+
+/* TCP reassembly context */
+struct rbe_tcp_cnxt_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   tcp_pkt_cnt:12;
+	uint64_t   rsvd1:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   ptr_bytes:16;
+	uint64_t   rsvd2:24;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd0:54;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   tcp_status:4;
+#else
+	uint64_t   tcp_status:4;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   rsvd0:54;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd2:24;
+	uint64_t   ptr_bytes:16;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   rsvd1:4;
+	uint64_t   tcp_pkt_cnt:12;
+#endif
+};
+
+/* Always Big endian */
+struct rx_hdr_t {
+	uint64_t   opaque:32;
+	uint64_t   rss_flow:8;
+	uint64_t   skip_length:6;
+	uint64_t   disable_rss:1;
+	uint64_t   disable_tcp_reassembly:1;
+	uint64_t   nodrop:1;
+	uint64_t   dest_alg:2;
+	uint64_t   rsvd0:2;
+	uint64_t   dest_rq:11;
+};
+
+struct sq_crc_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    rsvd1:32;
+	uint64_t    crc_ival:32;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_alg:2;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_len:16;
+#else
+	uint64_t    crc_len:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_alg:2;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_ival:32;
+	uint64_t    rsvd1:32;
+#endif
+};
+
+struct sq_gather_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    ld_type:2;
+	uint64_t    rsvd0:42;
+	uint64_t    size:16;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    size:16;
+	uint64_t    rsvd0:42;
+	uint64_t    ld_type:2;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+/* SQ immediate subdescriptor */
+struct sq_imm_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    rsvd0:46;
+	uint64_t    len:14;
+
+	uint64_t    data:64; /* W1 */
+#else
+	uint64_t    len:14;
+	uint64_t    rsvd0:46;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    data:64; /* W1 */
+#endif
+};
+
+struct sq_mem_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    mem_alg:4;
+	uint64_t    mem_dsz:2;
+	uint64_t    wmem:1;
+	uint64_t    rsvd0:21;
+	uint64_t    offset:32;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    offset:32;
+	uint64_t    rsvd0:21;
+	uint64_t    wmem:1;
+	uint64_t    mem_dsz:2;
+	uint64_t    mem_alg:4;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+struct sq_hdr_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4;
+	uint64_t    tso:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    dont_send:1;
+	uint64_t    tstmp:1;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    csum_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    rsvd0:2;
+	uint64_t    l4_offset:8;
+	uint64_t    l3_offset:8;
+	uint64_t    rsvd1:4;
+	uint64_t    tot_len:20; /* W0 */
+
+	uint64_t    rsvd2:24;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    tso_start:8;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_max_paysize:14; /* W1 */
+#else
+	uint64_t    tot_len:20;
+	uint64_t    rsvd1:4;
+	uint64_t    l3_offset:8;
+	uint64_t    l4_offset:8;
+	uint64_t    rsvd0:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_l4:2;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    tstmp:1;
+	uint64_t    dont_send:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    tso:1;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    tso_max_paysize:14;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_start:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    rsvd2:24; /* W1 */
+#endif
+};
+
+/* Each sq entry is 128 bits wide */
+union sq_entry_t {
+	uint64_t buff[2];
+	struct sq_hdr_subdesc hdr;
+	struct sq_imm_subdesc imm;
+	struct sq_gather_subdesc gather;
+	struct sq_crc_subdesc crc;
+	struct sq_mem_subdesc mem;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
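+
+/*
+ * Illustrative sketch of how a single-segment packet send is composed
+ * from the subdescriptors above: a header subdescriptor followed by one
+ * gather subdescriptor. Field use is inferred from the layouts; the
+ * actual fill logic is added by the tx-path patches later in the series,
+ * and 'sq_ring', 'tail' and 'pkt_iova' are hypothetical names here.
+ *
+ *	union sq_entry_t *e = &sq_ring[tail];
+ *	e[0].hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+ *	e[0].hdr.subdesc_cnt = 1;                  // one subdesc follows
+ *	e[0].hdr.tot_len = pkt_len;
+ *	e[1].gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+ *	e[1].gather.ld_type = SEND_LD_TYPE_LDD;
+ *	e[1].gather.size = pkt_len;
+ *	e[1].gather.addr = pkt_iova;
+ */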
+
+/* Queue config register formats */
+struct rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_2_63:62;
+	uint64_t ena:1;
+	uint64_t reserved_0:1;
+#else
+	uint64_t reserved_0:1;
+	uint64_t ena:1;
+	uint64_t reserved_2_63:62;
+#endif
+	};
+	uint64_t value;
+}; };
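+
+/*
+ * All of the config register structs here follow one pattern: set the
+ * named bitfields, then use the aliased 64-bit 'value' as the raw
+ * register or mailbox payload. A minimal sketch for enabling an RQ;
+ * nicvf_queue_reg_write() and NIC_QSET_RQ_0_7_CFG are assumed names
+ * from the hardware layer and are not defined in this hunk:
+ *
+ *	struct rq_cfg cfg = { .value = 0 };
+ *	cfg.ena = 1;
+ *	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, cfg.value);
+ */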
+
+struct cq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_43_63:21;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t caching:1;
+	uint64_t reserved_35_39:5;
+	uint64_t qsize:3;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_0_15:16;
+#else
+	uint64_t reserved_0_15:16;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:3;
+	uint64_t reserved_35_39:5;
+	uint64_t caching:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_43_63:21;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct sq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_20_63:44;
+	uint64_t ena:1;
+	uint64_t reserved_18_18:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_11_15:5;
+	uint64_t qsize:3;
+	uint64_t reserved_3_7:5;
+	uint64_t tstmp_bgx_intf:3;
+#else
+	uint64_t tstmp_bgx_intf:3;
+	uint64_t reserved_3_7:5;
+	uint64_t qsize:3;
+	uint64_t reserved_11_15:5;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t reserved_18_18:1;
+	uint64_t ena:1;
+	uint64_t reserved_20_63:44;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct rbdr_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_45_63:19;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_36_41:6;
+	uint64_t qsize:4;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_12_15:4;
+	uint64_t lines:12;
+#else
+	uint64_t lines:12;
+	uint64_t reserved_12_15:4;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:4;
+	uint64_t reserved_36_41:6;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_45_63:19;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_qs_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_32_63:32;
+	uint64_t ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t sq_ins_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t lock_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t be:1;
+	uint64_t reserved_7_15:9;
+	uint64_t vnic:7;
+#else
+	uint64_t vnic:7;
+	uint64_t reserved_7_15:9;
+	uint64_t be:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t lock_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t sq_ins_ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t ena:1;
+	uint64_t reserved_32_63:32;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved1:1;
+	uint64_t reserved0:34;
+	uint64_t strip_pre_l2:1;
+	uint64_t caching:2;
+	uint64_t cq_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_strt_idx:1;
+#else
+	uint64_t rbdr_strt_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t cq_qs:7;
+	uint64_t caching:2;
+	uint64_t strip_pre_l2:1;
+	uint64_t reserved0:34;
+	uint64_t reserved1:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_drop_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t rbdr_red:1;
+	uint64_t cq_red:1;
+	uint64_t reserved3:14;
+	uint64_t rbdr_pass:8;
+	uint64_t rbdr_drop:8;
+	uint64_t reserved2:8;
+	uint64_t cq_pass:8;
+	uint64_t cq_drop:8;
+	uint64_t reserved1:8;
+#else
+	uint64_t reserved1:8;
+	uint64_t cq_drop:8;
+	uint64_t cq_pass:8;
+	uint64_t reserved2:8;
+	uint64_t rbdr_drop:8;
+	uint64_t rbdr_pass:8;
+	uint64_t reserved3:14;
+	uint64_t cq_red:1;
+	uint64_t rbdr_red:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+#endif /* _THUNDERX_NICVF_HW_DEFS_H */
diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
new file mode 100644
index 0000000..3067331
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.c
@@ -0,0 +1,418 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "nicvf_plat.h"
+
+#define NICVF_MBOX_PF_RESPONSE_DELAY_US   (1000)
+
+static const char *mbox_message[NIC_MBOX_MSG_MAX] = {
+	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
+	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
+	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
+	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
+	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
+	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
+	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
+	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
+	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
+	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
+	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
+	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
+	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
+	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
+	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
+	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
+	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
+	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
+	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
+	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
+	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
+	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
+};
+
+static inline const char *
+nicvf_mbox_msg_str(int msg)
+{
+	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
+	/* undefined messages */
+	if (mbox_message[msg] == NULL)
+		msg = 0;
+	return mbox_message[msg];
+}
+
+static inline void
+nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	uint64_t *mbx_data;
+	uint64_t mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (uint64_t *)mbx;
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		nicvf_reg_write(nic, mbx_addr, *mbx_data);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+	nicvf_mbox_log("msg sent %s (VF%d)",
+			nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
+}
+
+static inline void
+nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+	/* Messages without an ack are racy! */
+	nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+}
+
+static inline int
+nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	long timeout;
+	long sleep = 10;
+	int i, retry = 5;
+
+	for (i = 0; i < retry; i++) {
+		nic->pf_acked = false;
+		nic->pf_nacked = false;
+		nicvf_smp_wmb();
+
+		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+		/* Give some time to get PF response */
+		nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+		while (timeout > 0) {
+			/* Periodic poll happens from nicvf_interrupt() */
+			nicvf_smp_rmb();
+
+			if (nic->pf_nacked)
+				return -EINVAL;
+			if (nic->pf_acked)
+				return 0;
+
+			nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+			timeout -= sleep;
+		}
+		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
+				mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
+				nic->vf_id, i, retry);
+	}
+	return -EBUSY;
+}
+
+
+int
+nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	struct nic_mbx mbx;
+	uint64_t *mbx_data = (uint64_t *)&mbx;
+	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	size_t i;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+
+	/* Overwrite the message so we won't receive it again */
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
+
+	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
+			nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
+
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
+				(struct ether_addr *)nic->mac_addr);
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_NACK:
+		nic->pf_nacked = true;
+		break;
+	case NIC_MBOX_MSG_RSS_SIZE:
+		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
+		nic->pf_acked = true;
+		break;
+	default:
+		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
+				mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
+		break;
+	}
+	nicvf_smp_wmb();
+
+	return mbx.msg.msg;
+}
+
+/*
+ * Checks whether the VF can communicate with the PF and gets the
+ * VNIC number this VF is associated with.
+ */
+int
+nicvf_mbox_check_pf_ready(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
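+
+/*
+ * Note: this is typically the first mailbox call at probe time. The PF
+ * answers with NIC_MBOX_MSG_READY, and nicvf_handle_mbx_intr() above
+ * copies vf_id, tns_mode, node, sqs_mode, loopback_supported and the
+ * MAC address into struct nicvf before pf_acked is set.
+ */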
+
+int
+nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
+{
+	struct nic_mbx mbx = { .msg = {0} };
+	int i;
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
+	mbx.mac.vf_id = nic->vf_id;
+	for (i = 0; i < 6; i++)
+		mbx.mac.mac_addr[i] = mac[i];
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
+	mbx.cpi_cfg.vf_id = nic->vf_id;
+	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
+	mbx.cpi_cfg.rq_cnt = qcnt;
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_get_rss_size(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
+	mbx.rss_size.vf_id = nic->vf_id;
+
+	/* Result will be stored in nic->rss_info.rss_size */
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_rss(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+	size_t tot_len = rss->rss_size;
+	size_t cur_len;
+	size_t cur_idx = 0;
+	size_t i;
+
+	mbx.rss_cfg.vf_id = nic->vf_id;
+	mbx.rss_cfg.hash_bits = rss->hash_bits;
+	mbx.rss_cfg.tbl_len = 0;
+	mbx.rss_cfg.tbl_offset = 0;
+
+	while (cur_idx < tot_len) {
+		cur_len = nicvf_min(tot_len - cur_idx,
+				(size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
+		mbx.msg.msg = (cur_idx > 0) ?
+			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
+		mbx.rss_cfg.tbl_offset = cur_idx;
+		mbx.rss_cfg.tbl_len = cur_len;
+		for (i = 0; i < cur_len; i++)
+			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
+
+		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
+			return NICVF_ERR_RSS_TBL_UPDATE;
+	}
+
+	return 0;
+}
+
+int
+nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+		     struct pf_rq_cfg *pf_rq_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = pf_rq_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
+	mbx.sq.qs_num = nic->vf_id;
+	mbx.sq.sq_num = qidx;
+	mbx.sq.sqs_mode = nic->sqs_mode;
+	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	qs_cfg->be = 1;
+#endif
+	/* Send a mailbox msg to PF to config Qset */
+	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
+	mbx.qs.num = nic->vf_id;
+	mbx.qs.cfg = qs_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct pf_rq_drop_cfg *drop_cfg;
+
+	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
+	drop_cfg->value = 0;
+	if (enable) {
+		drop_cfg->cq_red = 1;
+		drop_cfg->cq_drop = 2;
+	}
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
+	mbx.frs.max_frs = mtu;
+	mbx.frs.vf_id = nic->vf_id;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_sync(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	/* Make sure all packets in the pipeline are written back into mem */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
+	mbx.rq.cfg = 0;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = 0;
+	if (enable)
+		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
+	mbx.lbk.vf_id = nic->vf_id;
+	mbx.lbk.enable = enable;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
+			       uint16_t sq_stat_mask)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
+	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
+	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
+	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
+	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_shutdown(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+	nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_cfg_done(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
new file mode 100644
index 0000000..7c0c6a9
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.h
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_MBOX__
+#define __THUNDERX_NICVF_MBOX__
+
+#include <stdint.h>
+
+#include "nicvf_plat.h"
+
+/* PF <--> VF Mailbox communication
+ * Two 64-bit registers are shared between the PF and each VF.
+ * Writing into the second register signals the end of a message.
+ */
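+
+/*
+ * Concretely (see nicvf_mbox_send_msg_to_pf_raw() in nicvf_mbox.c), the
+ * 16-byte message is emitted as two 64-bit register writes; the second
+ * write is what marks the message as complete:
+ *
+ *	uint64_t *w = (uint64_t *)&mbx;
+ *	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, w[0]);
+ *	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1 + 8, w[1]); // end of msg
+ */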
+
+/* PF <--> VF mailbox communication */
+#define	NIC_PF_VF_MAILBOX_SIZE		2
+#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
+
+/* Mailbox message types */
+#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
+#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
+#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
+#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
+#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
+#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
+#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure RQ drop levels */
+#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
+#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
+#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
+#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
+#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
+#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
+#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
+#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
+#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
+#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shut down */
+#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
+
+/* Get vNIC VF configuration */
+struct nic_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    node_id;
+	bool	   tns_mode:1;
+	bool	   sqs_mode:1;
+	bool	   loopback_supported:1;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	uint8_t    msg;
+	uint8_t    num;
+	uint8_t    sqs_count;
+	uint64_t   cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    rq_num;
+	uint64_t   cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    sq_num;
+	bool       sqs_mode;
+	uint64_t   cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    rq_cnt;
+	uint8_t    cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    hash_bits;
+	uint8_t    tbl_len;
+	uint8_t    tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	uint8_t    msg;
+	uint8_t    link_up;
+	uint8_t    duplex;
+	uint32_t   speed;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	bool	   enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	uint8_t    msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	uint16_t   rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	uint8_t    tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	uint16_t   rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	uint16_t   sq_stat_mask;
+};
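+
+/*
+ * Worked example, derived from the bit layouts above: clearing every
+ * counter for a VF means rx_stat_mask = 0x3FFF (RX_STAT 0..13),
+ * tx_stat_mask = 0x1F (TX_STAT 0..4) and rq/sq_stat_mask = 0xFFFF
+ * (STAT 0..1 of queues 0..7); clearing only RQ0 and RQ1 would be
+ * rq_stat_mask = 0xF (bits 0..3).
+ */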
+
+struct nic_mbx {
+/* 128 bit shared memory between PF and each VF */
+union {
+	struct { uint8_t msg; }	msg;
+	struct nic_cfg_msg	nic_cfg;
+	struct qs_cfg_msg	qs;
+	struct rq_cfg_msg	rq;
+	struct sq_cfg_msg	sq;
+	struct set_mac_msg	mac;
+	struct set_frs_msg	frs;
+	struct cpi_cfg_msg	cpi_cfg;
+	struct rss_sz_msg	rss_size;
+	struct rss_cfg_msg	rss_cfg;
+	struct bgx_link_status  link_status;
+	struct set_loopback	lbk;
+	struct reset_stat_cfg	reset_stat;
+};
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
+
+int nicvf_handle_mbx_intr(struct nicvf *nic);
+int nicvf_mbox_check_pf_ready(struct nicvf *nic);
+int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
+int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct pf_rq_cfg *pf_rq_cfg);
+int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
+int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
+int nicvf_mbox_get_rss_size(struct nicvf *nic);
+int nicvf_mbox_config_rss(struct nicvf *nic);
+int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
+int nicvf_mbox_rq_sync(struct nicvf *nic);
+int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
+int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
+void nicvf_mbox_shutdown(struct nicvf *nic);
+void nicvf_mbox_cfg_done(struct nicvf *nic);
+
+#endif /* __THUNDERX_NICVF_MBOX__ */
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
new file mode 100644
index 0000000..83c1844
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -0,0 +1,132 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_H
+#define _THUNDERX_NICVF_H
+
+/* Platform/OS/arch specific abstractions */
+
+/* log */
+#include <rte_log.h>
+#include "../nicvf_logs.h"
+
+#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
+
+#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
+
+/* delay */
+#include <rte_cycles.h>
+#define nicvf_delay_us(x) rte_delay_us(x)
+
+/* barrier */
+#include <rte_atomic.h>
+#define nicvf_smp_wmb() rte_smp_wmb()
+#define nicvf_smp_rmb() rte_smp_rmb()
+
+/* utils */
+#include <rte_common.h>
+#define nicvf_min(x, y) RTE_MIN(x, y)
+
+/* byte order */
+#include <rte_byteorder.h>
+#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
+#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
+
+/* Constants */
+#include <rte_ether.h>
+#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
+#define nicvf_prefetch_store_keep(_ptr) ({\
+	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	asm volatile(
+		    "str %x[val], [%x[addr]]"
+		    :
+		    : [val] "r" (val), [addr] "r" (addr));
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	uint64_t val;
+
+	asm volatile(
+		    "ldr %x[val], [%x[addr]]"
+		    : [val] "=r" (val)
+		    : [addr] "r" (addr));
+	return val;
+}
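+
+/*
+ * These primitives back the register accessors used throughout the base
+ * code. A plausible wrapper, shown for illustration only (the real
+ * definition lives in nicvf_hw.h, outside this hunk):
+ *
+ *	static inline void
+ *	nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+ *	{
+ *		nicvf_addr_write(nic->reg_base + offset, val);
+ *	}
+ */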
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
+			asm volatile(			\
+			"ldp %x[x1], %x[x0], [%x[p1]]"	\
+			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
+			: [p1]"r"(addr)			\
+			); })
+
+#else /* non optimized functions for building on non arm64 arch */
+
+#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	*(volatile uint64_t *)addr = val;
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	return	*(volatile uint64_t *)addr;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
+do {							\
+	reg1 = nicvf_addr_read((uintptr_t)addr);	\
+	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
+} while (0)
+
+#endif
+
+#include "nicvf_hw.h"
+#include "nicvf_mbox.h"
+
+#endif /* _THUNDERX_NICVF_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v4 02/19] net/thunderx: add pmd skeleton
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 03/19] net/thunderx: add link status and link update support Jerin Jacob
                           ` (18 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Introduce driver initialization and enable the build infrastructure for
the nicvf PMD driver.

By default, it is enabled only for the defconfig_arm64-thunderx-*
config, as it is an inbuilt NIC device.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 config/common_base                                 |  10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
 drivers/net/Makefile                               |   1 +
 drivers/net/thunderx/Makefile                      |  63 ++++++
 drivers/net/thunderx/nicvf_ethdev.c                | 251 +++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
 drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
 drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
 mk/rte.app.mk                                      |   2 +
 10 files changed, 596 insertions(+)
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

diff --git a/config/common_base b/config/common_base
index 47c26f6..ad5686b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -259,6 +259,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
 CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
 
 #
+# Compile burst-oriented Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index fe5e987..7940bbd 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,3 +34,13 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+
+#
+# Compile Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 6ba7658..0e29a33 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -50,6 +50,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
+DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
new file mode 100644
index 0000000..eb9f100
--- /dev/null
+++ b/drivers/net/thunderx/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_thunderx_nicvf.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
+
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
new file mode 100644
index 0000000..3ca5a2b
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -0,0 +1,251 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <sys/queue.h>
+#include <sys/timerfd.h>
+
+#include <rte_alarm.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_pci.h>
+#include <rte_tailq.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+
+#include "nicvf_logs.h"
+
+static void
+nicvf_interrupt(void *arg)
+{
+	struct nicvf *nic = arg;
+
+	nicvf_reg_poll_interrupts(nic);
+
+	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+				nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_start(struct nicvf *nic)
+{
+	return rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+					nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_stop(struct nicvf *nic)
+{
+	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
+}
+
+/* Initialize and register the driver with the DPDK application */
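+/* The ops table is intentionally empty in this skeleton patch; later
+ * patches in the series fill in link update, queue setup, start/stop
+ * and stats callbacks. */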
+static const struct eth_dev_ops nicvf_eth_dev_ops = {
+};
+
+static int
+nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int ret;
+	struct rte_pci_device *pci_dev;
+	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &nicvf_eth_dev_ops;
+
+	pci_dev = eth_dev->pci_dev;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	nic->device_id = pci_dev->id.device_id;
+	nic->vendor_id = pci_dev->id.vendor_id;
+	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	nic->eth_dev = eth_dev;
+
+	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!nic->reg_base) {
+		PMD_INIT_LOG(ERR, "Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	nicvf_disable_all_interrupts(nic);
+
+	ret = nicvf_periodic_alarm_start(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start period alarm");
+		goto fail;
+	}
+
+	ret = nicvf_mbox_check_pf_ready(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
+		goto alarm_fail;
+	} else {
+		PMD_INIT_LOG(INFO,
+			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
+			nic->node, nic->vf_id,
+			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
+			nic->sqs_mode ? "true" : "false",
+			nic->loopback_supported ? "true" : "false"
+			);
+	}
+
+	if (nic->sqs_mode) {
+		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, Detaching...");
+		/* Detach port by returning Positive error number */
+		ret = ENOTSUP;
+		goto alarm_fail;
+	}
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
+		ret = -ENOMEM;
+		goto alarm_fail;
+	}
+	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
+		eth_random_addr(&nic->mac_addr[0]);
+
+	ether_addr_copy((struct ether_addr *)nic->mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set mac addr");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_base_init(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_mbox_get_rss_size(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get rss table size");
+		goto malloc_fail;
+	}
+
+	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
+		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
+		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
+		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
+
+	return 0;
+
+malloc_fail:
+	rte_free(eth_dev->data->mac_addrs);
+alarm_fail:
+	nicvf_periodic_alarm_stop(nic);
+fail:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_nicvf_map[] = {
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
+	},
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct eth_driver rte_nicvf_pmd = {
+	.pci_drv = {
+		.name = "rte_nicvf_pmd",
+		.id_table = pci_id_nicvf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = nicvf_eth_dev_init,
+	.dev_private_size = sizeof(struct nicvf),
+};
+
+static int
+rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
+			THUNDERX_NICVF_PMD_VERSION);
+
+	rte_eth_driver_register(&rte_nicvf_pmd);
+	return 0;
+}
+
+static struct rte_driver rte_nicvf_driver = {
+	.name = "nicvf_driver",
+	.type = PMD_PDEV,
+	.init = rte_nicvf_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_nicvf_driver);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
new file mode 100644
index 0000000..d4d2071
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -0,0 +1,48 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_ETHDEV_H__
+#define __THUNDERX_NICVF_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+#define THUNDERX_NICVF_PMD_VERSION      "1.0"
+
+#define NICVF_INTR_POLL_INTERVAL_MS	50
+
+static inline struct nicvf *
+nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
new file mode 100644
index 0000000..0667d46
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_logs.h
@@ -0,0 +1,83 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_LOGS__
+#define __THUNDERX_NICVF_LOGS__
+
+#include <assert.h>
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_RX_ASSERT(x) assert(x)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_RX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_TX_ASSERT(x) assert(x)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_TX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
+#define PMD_MBOX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_MBOX_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
+#define PMD_MBOX_FUNC_TRACE() do { } while (0)
+#endif
+
+#endif /* __THUNDERX_NICVF_LOGS__ */
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
new file mode 100644
index 0000000..c52545d
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -0,0 +1,124 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_STRUCT_H
+#define _THUNDERX_NICVF_STRUCT_H
+
+#include <stdint.h>
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_ethdev.h>
+#include <rte_memory.h>
+
+struct nicvf_rbdr {
+	uint64_t rbdr_status;
+	uint64_t rbdr_door;
+	struct rbdr_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	uint32_t buffsz;
+	uint32_t tail;
+	uint32_t next_tail;
+	uint32_t head;
+	uint32_t qlen_mask;
+} __rte_cache_aligned;
+
+struct nicvf_txq {
+	union sq_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	struct rte_mbuf **txbuffs;
+	uint64_t sq_head;
+	uint64_t sq_door;
+	struct rte_mempool *pool;
+	struct nicvf *nic;
+	void (*pool_free)(struct nicvf_txq *sq);
+	uint32_t head;
+	uint32_t tail;
+	int32_t xmit_bufs;
+	uint32_t qlen_mask;
+	uint32_t txq_flags;
+	uint16_t queue_id;
+	uint16_t tx_free_thresh;
+} __rte_cache_aligned;
+
+struct nicvf_rxq {
+	uint64_t mbuf_phys_off;
+	uint64_t cq_status;
+	uint64_t cq_door;
+	nicvf_phys_addr_t phys;
+	union cq_entry_t *desc;
+	struct nicvf_rbdr *shared_rbdr;
+	struct nicvf *nic;
+	struct rte_mempool *pool;
+	uint32_t head;
+	uint32_t qlen_mask;
+	int32_t available_space;
+	int32_t recv_buffers;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t precharge_cnt;
+	uint8_t rx_drop_en;
+	uint8_t  port_id;
+	uint8_t  rbptr_offset;
+} __rte_cache_aligned;
+
+struct nicvf {
+	uint8_t vf_id;
+	uint8_t node;
+	uintptr_t reg_base;
+	bool tns_mode;
+	bool sqs_mode;
+	bool loopback_supported;
+	bool pf_acked:1;
+	bool pf_nacked:1;
+	uint64_t hwcap;
+	uint8_t link_up;
+	uint8_t	duplex;
+	uint32_t speed;
+	uint32_t msg_enable;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+	struct nicvf_rbdr *rbdr;
+	struct nicvf_rss_reta_info rss_info;
+	struct rte_eth_dev *eth_dev;
+	struct rte_intr_handle intr_handle;
+	uint8_t cpi_alg;
+	uint16_t mtu;
+	bool vlan_filter_en;
+	uint8_t mac_addr[ETHER_ADDR_LEN];
+} __rte_cache_aligned;
+
+#endif /* _THUNDERX_NICVF_STRUCT_H */
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
new file mode 100644
index 0000000..1901bcb
--- /dev/null
+++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
@@ -0,0 +1,4 @@
+DPDK_16.07 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index b84b56d..1d8d8cd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -102,6 +102,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD)        += -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lz
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lm
 # QAT / AESNI GCM PMDs are dependent on libcrypto (from openssl)
 # for calculating HMAC precomputes
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
@@ -150,6 +151,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
 
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
-- 
2.5.5


* [PATCH v4 03/19] net/thunderx: add link status and link update support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 02/19] net/thunderx: add pmd skeleton Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 04/19] net/thunderx: add get_reg and get_reg_length support Jerin Jacob
                           ` (17 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Extend the nicvf_interrupt function to respond to the
NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from the PF and update
struct rte_eth_link accordingly.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
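As a usage note: link state change (LSC) events surfaced by this patch reach
the application through the standard ethdev callback mechanism. A minimal
sketch against the 16.07-era API follows; the port_id variable and the
callback body are illustrative, not part of this patch:

static void
lsc_event_cb(uint8_t port_id, enum rte_eth_event_type event __rte_unused,
	     void *arg __rte_unused)
{
	struct rte_eth_link link;

	/* Non-blocking read of the link state cached by the PMD */
	rte_eth_link_get_nowait(port_id, &link);
	printf("Port %u link is %s\n", port_id,
	       link.link_status ? "up" : "down");
}

/* Register before rte_eth_dev_start(); requires intr_conf.lsc = 1 */
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
			      lsc_event_cb, NULL);
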
 drivers/net/thunderx/nicvf_ethdev.c | 53 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_ethdev.h |  4 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3ca5a2b..6fa486a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,12 +69,45 @@
 
 #include "nicvf_logs.h"
 
+static inline int
+nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
+			       struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+		*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline void
+nicvf_set_eth_link_status(struct nicvf *nic, struct rte_eth_link *link)
+{
+	link->link_status = nic->link_up;
+	link->link_duplex = ETH_LINK_AUTONEG;
+	if (nic->duplex == NICVF_HALF_DUPLEX)
+		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+	else if (nic->duplex == NICVF_FULL_DUPLEX)
+		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = nic->speed;
+	link->link_autoneg = ETH_LINK_SPEED_AUTONEG;
+}
+
 static void
 nicvf_interrupt(void *arg)
 {
 	struct nicvf *nic = arg;
 
-	nicvf_reg_poll_interrupts(nic);
+	if (nicvf_reg_poll_interrupts(nic) == NIC_MBOX_MSG_BGX_LINK_CHANGE) {
+		if (nic->eth_dev->data->dev_conf.intr_conf.lsc)
+			nicvf_set_eth_link_status(nic,
+					&nic->eth_dev->data->dev_link);
+		_rte_eth_dev_callback_process(nic->eth_dev,
+				RTE_ETH_EVENT_INTR_LSC);
+	}
 
 	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
 				nicvf_interrupt, nic);
@@ -93,8 +126,26 @@ nicvf_periodic_alarm_stop(struct nicvf *nic)
 	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
 }
 
+/*
+ * Return 0 if the link status changed, -1 if it did not
+ */
+static int
+nicvf_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	struct rte_eth_link link;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&link, 0, sizeof(link));
+	nicvf_set_eth_link_status(nic, &link);
+	return nicvf_atomic_write_link_status(dev, &link);
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.link_update              = nicvf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index d4d2071..8189856 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -38,6 +38,10 @@
 #define THUNDERX_NICVF_PMD_VERSION      "1.0"
 
 #define NICVF_INTR_POLL_INTERVAL_MS	50
+#define NICVF_HALF_DUPLEX		0x00
+#define NICVF_FULL_DUPLEX		0x01
+#define NICVF_UNKNOWN_DUPLEX		0xff
+
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5


* [PATCH v4 04/19] net/thunderx: add get_reg and get_reg_length support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (2 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 03/19] net/thunderx: add link status and link update support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 05/19] net/thunderx: add dev_configure support Jerin Jacob
                           ` (16 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
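As a usage note, the register dump is reached through the generic ethdev
API; a small sketch against the 16.07 interface (port_id illustrative,
error handling trimmed):

struct rte_dev_reg_info info;
int count = rte_eth_dev_get_reg_length(port_id);
uint64_t *data = calloc(count, sizeof(uint64_t));

memset(&info, 0, sizeof(info));
info.data = data;
info.length = 0;	/* 0 (or the full count) requests a complete dump */
if (rte_eth_dev_get_reg_info(port_id, &info) == 0)
	printf("%d registers, version 0x%08x\n", count, info.version);
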
 drivers/net/thunderx/nicvf_ethdev.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 6fa486a..5c066e2 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -143,9 +143,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 	return nicvf_atomic_write_link_status(dev, &link);
 }
 
+static int
+nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
+{
+	return nicvf_reg_get_count();
+}
+
+static int
+nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
+{
+	uint64_t *data = regs->data;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (data == NULL)
+		return -EINVAL;
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+		(regs->length == (uint32_t)nicvf_reg_get_count())) {
+		regs->version = nic->vendor_id << 16 | nic->device_id;
+		nicvf_reg_dump(nic, data);
+		return 0;
+	}
+	return -ENOTSUP;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.link_update              = nicvf_dev_link_update,
+	.get_reg_length           = nicvf_dev_get_reg_length,
+	.get_reg                  = nicvf_dev_get_regs,
 };
 
 static int
-- 
2.5.5


* [PATCH v4 05/19] net/thunderx: add dev_configure support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (3 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 04/19] net/thunderx: add get_reg and get_reg_length support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 06/19] net/thunderx: add dev_infos_get support Jerin Jacob
                           ` (15 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
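The checks below define what an application may request from this PMD; a
configuration that passes validation looks roughly like the sketch below
(port_id and the queue counts are illustrative):

static const struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,	/* RSS or ETH_MQ_RX_NONE only */
		.hw_strip_crc = 1,		/* CRC strip cannot be disabled */
	},
	.rx_adv_conf.rss_conf = {
		.rss_hf = ETH_RSS_IP,
	},
};

ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
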
 drivers/net/thunderx/nicvf_ethdev.c | 78 +++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5c066e2..1814341 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,8 +168,86 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &conf->rxmode;
+	struct rte_eth_txmode *txmode = &conf->txmode;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!rte_eal_has_hugepages()) {
+		PMD_INIT_LOG(INFO, "Huge page is not configured");
+		return -EINVAL;
+	}
+
+	if (txmode->mq_mode) {
+		PMD_INIT_LOG(INFO, "Tx mq_mode DCB or VMDq not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc) {
+		PMD_INIT_LOG(NOTICE, "Can't disable hw crc strip");
+		rxmode->hw_strip_crc = 1;
+	}
+
+	if (rxmode->hw_ip_checksum) {
+		PMD_INIT_LOG(NOTICE, "Rxcksum not supported");
+		rxmode->hw_ip_checksum = 0;
+	}
+
+	if (rxmode->split_hdr_size) {
+		PMD_INIT_LOG(INFO, "Rxmode does not support split header");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_filter) {
+		PMD_INIT_LOG(INFO, "VLAN filter not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_extend) {
+		PMD_INIT_LOG(INFO, "VLAN extended not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro) {
+		PMD_INIT_LOG(INFO, "LRO not supported");
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	if (conf->dcb_capability_en) {
+		PMD_INIT_LOG(INFO, "DCB enable not supported");
+		return -EINVAL;
+	}
+
+	if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+		PMD_INIT_LOG(INFO, "Flow director not supported");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
+		dev->data->port_id, nicvf_hw_cap(nic));
+
+	return 0;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
-- 
2.5.5


* [PATCH v4 06/19] net/thunderx: add dev_infos_get support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (4 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 05/19] net/thunderx: add dev_configure support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 07/19] net/thunderx: add rx_queue_setup/release support Jerin Jacob
                           ` (14 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
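With this callback in place an application can size its queues and pick up
the fast-path friendly Tx defaults from the advertised capabilities; a
brief sketch (port_id and nb_rxq are illustrative):

struct rte_eth_dev_info dev_info;
struct rte_eth_txconf txconf;

rte_eth_dev_info_get(port_id, &dev_info);
nb_rxq = RTE_MIN(nb_rxq, dev_info.max_rx_queues);
txconf = dev_info.default_txconf;	/* single-pool, no-refcount txq flags */
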
 drivers/net/thunderx/nicvf_ethdev.c | 45 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h | 17 ++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1814341..109c6cb 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,50 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->min_rx_bufsize = ETHER_MIN_MTU;
+	dev_info->max_rx_pktlen = NIC_HW_MAX_FRS;
+	dev_info->max_rx_queues = (uint16_t)MAX_RCV_QUEUES_PER_QS;
+	dev_info->max_tx_queues = (uint16_t)MAX_SND_QUEUES_PER_QS;
+	dev_info->max_mac_addrs = 1;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	dev_info->reta_size = nic->rss_info.rss_size;
+	dev_info->hash_key_size = RSS_HASH_KEY_BYTE_SIZE;
+	dev_info->flow_type_rss_offloads = NICVF_RSS_OFFLOAD_PASS1;
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
+		.txq_flags =
+			ETH_TXQ_FLAGS_NOMULTSEGS  |
+			ETH_TXQ_FLAGS_NOREFCOUNT  |
+			ETH_TXQ_FLAGS_NOMULTMEMP  |
+			ETH_TXQ_FLAGS_NOVLANOFFL  |
+			ETH_TXQ_FLAGS_NOXSUMSCTP,
+	};
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -249,6 +293,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.dev_infos_get            = nicvf_dev_info_get,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 8189856..e31657d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -42,6 +42,23 @@
 #define NICVF_FULL_DUPLEX		0x01
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
+#define NICVF_RSS_OFFLOAD_PASS1 ( \
+	ETH_RSS_PORT | \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NICVF_RSS_OFFLOAD_TUNNEL ( \
+	ETH_RSS_VXLAN | \
+	ETH_RSS_GENEVE | \
+	ETH_RSS_NVGRE)
+
+#define NICVF_DEFAULT_RX_FREE_THRESH    224
+#define NICVF_DEFAULT_TX_FREE_THRESH    224
+#define NICVF_TX_FREE_MPOOL_THRESH      16
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5


* [PATCH v4 07/19] net/thunderx: add rx_queue_setup/release support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (5 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 06/19] net/thunderx: add dev_infos_get support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 08/19] net/thunderx: add tx_queue_setup/release support Jerin Jacob
                           ` (13 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
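Note that the driver insists on a physically contiguous mempool (a single
memory chunk) and rounds nb_desc up to a valid CQ size. A setup sketch with
illustrative names and sizes:

struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
		RTE_MBUF_DEFAULT_BUF_SIZE, nic_socket_id);

/* A NULL rx_conf selects the defaults advertised via dev_infos_get() */
ret = rte_eth_rx_queue_setup(port_id, 0, 1024, nic_socket_id, NULL, mp);
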
 drivers/net/thunderx/nicvf_ethdev.c | 136 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |   2 +
 2 files changed, 138 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 109c6cb..4652438 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,140 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
+					NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rxq->phys = rz->phys_addr;
+	rxq->desc = rz->addr;
+	rxq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static void
+nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
+{
+	rxq->head = 0;
+	rxq->available_space = 0;
+	rxq->recv_buffers = 0;
+}
+
+static void
+nicvf_dev_rx_queue_release(void *rx_queue)
+{
+	struct nicvf_rxq *rxq = rx_queue;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq)
+		rte_free(rxq);
+}
+
+static int
+nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	uint16_t rx_free_thresh;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		socket_id, nic->node);
+
+	/* Mempool memory should be contiguous */
+	if (mp->nb_mem_chunks != 1) {
+		PMD_INIT_LOG(ERR, "Non contiguous mempool, check huge page sz");
+		return -EINVAL;
+	}
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Rx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_cq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value nb_desc beyond available hw cq qsize");
+		return -EINVAL;
+	}
+
+	/* Check rx_free_thresh upper bound */
+	rx_free_thresh = (uint16_t)((rx_conf->rx_free_thresh) ?
+				rx_conf->rx_free_thresh :
+				NICVF_DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh > NICVF_MAX_RX_FREE_THRESH ||
+		rx_free_thresh >= nb_desc * .75) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh greater than expected %d",
+				rx_free_thresh);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->rx_queues[qidx] != NULL) {
+		PMD_RX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_rx_queue_release(dev->data->rx_queues[qidx]);
+		dev->data->rx_queues[qidx] = NULL;
+	}
+
+	/* Allocate rxq memory */
+	rxq = rte_zmalloc_socket("ethdev rx queue", sizeof(struct nicvf_rxq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate rxq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	rxq->nic = nic;
+	rxq->pool = mp;
+	rxq->queue_id = qidx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rx_drop_en = rx_conf->rx_drop_en;
+	rxq->cq_status = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_STATUS;
+	rxq->cq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_DOOR;
+	rxq->precharge_cnt = 0;
+	rxq->rbptr_offset = NICVF_CQE_RBPTR_WORD;
+
+	/* Alloc completion queue */
+	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
+		nicvf_dev_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	nicvf_rx_queue_reset(rxq);
+
+	PMD_RX_LOG(DEBUG, "[%d] rxq=%p pool=%s nb_desc=(%d/%d) phy=%" PRIx64,
+			qidx, rxq, mp->name, nb_desc,
+			rte_mempool_count(mp), rxq->phys);
+
+	dev->data->rx_queues[qidx] = rxq;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -294,6 +428,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
+	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index e31657d..afb875a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -59,6 +59,8 @@
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
 #define NICVF_TX_FREE_MPOOL_THRESH      16
+#define NICVF_MAX_RX_FREE_THRESH        1024
+#define NICVF_MAX_TX_FREE_THRESH        1024
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5


* [PATCH v4 08/19] net/thunderx: add tx_queue_setup/release support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (6 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 07/19] net/thunderx: add rx_queue_setup/release support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 09/19] net/thunderx: add rss and reta query and update support Jerin Jacob
                           ` (12 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
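Keeping the default txq_flags (NOREFCOUNT | NOMULTMEMP and friends) selects
the cheaper single-pool mbuf release path added later in this series. A
setup sketch, assuming dev_info was filled by rte_eth_dev_info_get() and
port_id is illustrative:

struct rte_eth_txconf txconf = dev_info.default_txconf;

ret = rte_eth_tx_queue_setup(port_id, 0, 1024, nic_socket_id, &txconf);
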
 drivers/net/thunderx/nicvf_ethdev.c | 175 ++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 4652438..167149e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -191,6 +191,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		socket_id, nic->node);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				tx_conf->tx_free_thresh :
+				NICVF_DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) ||
+		tx_free_thresh > NICVF_MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			"tx_free_thresh must be less than the number of TX "
+			"descriptors. (tx_free_thresh=%u port=%d "
+			"queue=%d)", (unsigned int)tx_free_thresh,
+			(int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+				txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!is_single_pool) {
+		txq->tx_free_thresh = (uint16_t)
+		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
+				NICVF_TX_FREE_MPOOL_THRESH :
+				tx_conf->tx_free_thresh);
+	}
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+				nb_desc * sizeof(struct rte_mbuf *),
+				RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+			qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -430,6 +603,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get            = nicvf_dev_info_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
+	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
-- 
2.5.5


* [PATCH v4 09/19] net/thunderx: add rss and reta query and update support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (7 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 08/19] net/thunderx: add tx_queue_setup/release support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 10/19] net/thunderx: add mtu_set and promiscuous_enable support Jerin Jacob
                           ` (11 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
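The driver accepts only a full-size RETA update, so reta_size must be the
value reported by dev_infos_get(). A sketch spreading flows over nb_rxq
queues, assuming a 128-entry table (two 64-entry groups) and that dev_info
and nb_rxq are already populated:

struct rte_eth_rss_reta_entry64 reta_conf[2];
uint16_t i;

memset(reta_conf, 0, sizeof(reta_conf));
for (i = 0; i < dev_info.reta_size; i++) {
	reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
		1ULL << (i % RTE_RETA_GROUP_SIZE);
	reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
		i % nb_rxq;
}
ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, dev_info.reta_size);
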
 drivers/net/thunderx/nicvf_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++
 1 file changed, 172 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 167149e..1d5bea7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,174 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static inline uint64_t
+nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
+{
+	uint64_t nic_rss = 0;
+
+	if (ethdev_rss & ETH_RSS_IPV4)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_IPV6)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_PORT)
+		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (ethdev_rss & ETH_RSS_VXLAN)
+			nic_rss |= RSS_TUN_VXLAN_ENA;
+
+		if (ethdev_rss & ETH_RSS_GENEVE)
+			nic_rss |= RSS_TUN_GENEVE_ENA;
+
+		if (ethdev_rss & ETH_RSS_NVGRE)
+			nic_rss |= RSS_TUN_NVGRE_ENA;
+	}
+
+	return nic_rss;
+}
+
+static inline uint64_t
+nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
+{
+	uint64_t ethdev_rss = 0;
+
+	if (nic_rss & RSS_IP_ENA)
+		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
+				ETH_RSS_NONFRAG_IPV6_TCP);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
+				ETH_RSS_NONFRAG_IPV6_UDP);
+
+	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
+		ethdev_rss |= ETH_RSS_PORT;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (nic_rss & RSS_TUN_VXLAN_ENA)
+			ethdev_rss |= ETH_RSS_VXLAN;
+
+		if (nic_rss & RSS_TUN_GENEVE_ENA)
+			ethdev_rss |= ETH_RSS_GENEVE;
+
+		if (nic_rss & RSS_TUN_NVGRE_ENA)
+			ethdev_rss |= ETH_RSS_NVGRE;
+	}
+	return ethdev_rss;
+}
+
+static int
+nicvf_dev_reta_query(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can supported "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] = tbl[j];
+	}
+
+	return 0;
+}
+
+static int
+nicvf_dev_reta_update(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can supported "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				tbl[j] = reta_conf[i].reta[j];
+	}
+
+	return nicvf_rss_reta_update(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+static int
+nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (rss_conf->rss_key)
+		nicvf_rss_get_key(nic, rss_conf->rss_key);
+
+	rss_conf->rss_key_len =  RSS_HASH_KEY_BYTE_SIZE;
+	rss_conf->rss_hf = nicvf_rss_nic_to_ethdev(nic, nicvf_rss_get_cfg(nic));
+	return 0;
+}
+
+static int
+nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t nic_rss;
+
+	if (rss_conf->rss_key &&
+		rss_conf->rss_key_len != RSS_HASH_KEY_BYTE_SIZE) {
+		RTE_LOG(ERR, PMD, "Hash key size mismatch %d",
+				rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key)
+		nicvf_rss_set_key(nic, rss_conf->rss_key);
+
+	nic_rss = nicvf_rss_ethdev_to_nic(nic, rss_conf->rss_hf);
+	nicvf_rss_set_cfg(nic, nic_rss);
+	return 0;
+}
+
 static int
 nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 		    uint32_t desc_cnt)
@@ -601,6 +769,10 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.reta_update              = nicvf_dev_reta_update,
+	.reta_query               = nicvf_dev_reta_query,
+	.rss_hash_update          = nicvf_dev_rss_hash_update,
+	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
-- 
2.5.5


* [PATCH v4 10/19] net/thunderx: add mtu_set and promiscuous_enable support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (8 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 09/19] net/thunderx: add rss and reta query and update support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 11/19] net/thunderx: add stats support Jerin Jacob
                           ` (10 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
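The accepted MTU is bounded by NIC_HW_MIN_FRS/NIC_HW_MAX_FRS including the
Ethernet header and CRC, and promiscuous enable is a no-op because the VF
already sees all traffic in the 1:1 LMAC mapping. A usage sketch (port_id
illustrative):

if (rte_eth_dev_set_mtu(port_id, 9000) != 0)
	printf("MTU rejected; jumbo frames may need scattered Rx\n");
rte_eth_promiscuous_enable(port_id);	/* no-op by design on this VF */
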
 drivers/net/thunderx/nicvf_ethdev.c | 51 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |  2 ++
 2 files changed, 53 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1d5bea7..f0e3371 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -144,6 +144,49 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint32_t buffsz, frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (frame_size > NIC_HW_MAX_FRS)
+		return -EINVAL;
+
+	if (frame_size < NIC_HW_MIN_FRS)
+		return -EINVAL;
+
+	buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+	/*
+	 * Refuse an mtu that requires scattered-packet support when
+	 * that feature has not been enabled beforehand.
+	 */
+	if (!dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz))
+		return -EINVAL;
+
+	/* check <seg size> * <max_seg>  >= max_frame */
+	if (dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
+		return -EINVAL;
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	if (nicvf_mbox_update_hw_max_frs(nic, frame_size))
+		return -EINVAL;
+
+	/* Update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)frame_size;
+	nic->mtu = mtu;
+	return 0;
+}
+
+static int
 nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
 {
 	return nicvf_reg_get_count();
@@ -168,6 +211,12 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+/* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
+static void
+nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
 static inline uint64_t
 nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
@@ -768,7 +817,9 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index afb875a..b1af468 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -62,6 +62,8 @@
 #define NICVF_MAX_RX_FREE_THRESH        1024
 #define NICVF_MAX_TX_FREE_THRESH        1024
 
+#define VLAN_TAG_SIZE                   4	/* 802.3ac tag */
+
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.5.5


* [PATCH v4 11/19] net/thunderx: add stats support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (9 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 10/19] net/thunderx: add mtu_set and promiscuous_enable support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 12/19] net/thunderx: add single and multi segment tx functions Jerin Jacob
                           ` (9 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
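Per-queue counters are filled only for the first RTE_ETHDEV_QUEUE_STAT_CNTRS
queues; port-level counters come from the PF-maintained statistics. A
readout sketch (port_id illustrative):

struct rte_eth_stats stats;

rte_eth_stats_get(port_id, &stats);
printf("rx %" PRIu64 " pkts (%" PRIu64 " missed), tx %" PRIu64 " pkts\n",
	stats.ipackets, stats.imissed, stats.opackets);
rte_eth_stats_reset(port_id);	/* also clears the per-queue counters */
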
 drivers/net/thunderx/nicvf_ethdev.c | 66 +++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index f0e3371..19ad85a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -211,6 +211,70 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t qidx;
+	struct nicvf_hw_rx_qstats rx_qstats;
+	struct nicvf_hw_tx_qstats tx_qstats;
+	struct nicvf_hw_stats port_stats;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	/* Reading per RX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_rx_qstats(nic, &rx_qstats, qidx);
+		stats->q_ibytes[qidx] = rx_qstats.q_rx_bytes;
+		stats->q_ipackets[qidx] = rx_qstats.q_rx_packets;
+	}
+
+	/* Reading per TX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_tx_qstats(nic, &tx_qstats, qidx);
+		stats->q_obytes[qidx] = tx_qstats.q_tx_bytes;
+		stats->q_opackets[qidx] = tx_qstats.q_tx_packets;
+	}
+
+	nicvf_hw_get_stats(nic, &port_stats);
+	stats->ibytes = port_stats.rx_bytes;
+	stats->ipackets = port_stats.rx_ucast_frames;
+	stats->ipackets += port_stats.rx_bcast_frames;
+	stats->ipackets += port_stats.rx_mcast_frames;
+	stats->ierrors = port_stats.rx_l2_errors;
+	stats->imissed = port_stats.rx_drop_red;
+	stats->imissed += port_stats.rx_drop_overrun;
+	stats->imissed += port_stats.rx_drop_bcast;
+	stats->imissed += port_stats.rx_drop_mcast;
+	stats->imissed += port_stats.rx_drop_l3_bcast;
+	stats->imissed += port_stats.rx_drop_l3_mcast;
+
+	stats->obytes = port_stats.tx_bytes_ok;
+	stats->opackets = port_stats.tx_ucast_frames_ok;
+	stats->opackets += port_stats.tx_bcast_frames_ok;
+	stats->opackets += port_stats.tx_mcast_frames_ok;
+	stats->oerrors = port_stats.tx_drops;
+}
+
+static void
+nicvf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	int i;
+	uint16_t rxqs = 0, txqs = 0;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rxqs |= (0x3 << (i * 2));
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		txqs |= (0x3 << (i * 2));
+
+	nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, rxqs, txqs);
+}
+
 /* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
 static void
 nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
@@ -817,6 +881,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.stats_get                = nicvf_dev_stats_get,
+	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
-- 
2.5.5


* [PATCH v4 12/19] net/thunderx: add single and multi segment tx functions
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (10 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 11/19] net/thunderx: add stats support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 13/19] net/thunderx: add single and multi segment rx functions Jerin Jacob
                           ` (8 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
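The burst functions added here are invoked through the usual ethdev entry
point once a later patch in the series wires them up at device start. A
transmit sketch; the caller keeps ownership of any mbufs the send queue
could not accept (port_id, pkts and nb_pkts are illustrative):

uint16_t sent = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);

while (unlikely(sent < nb_pkts))
	rte_pktmbuf_free(pkts[sent++]);
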
 drivers/net/thunderx/Makefile       |   2 +
 drivers/net/thunderx/nicvf_ethdev.c |   5 +-
 drivers/net/thunderx/nicvf_rxtx.c   | 255 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
 4 files changed, 354 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h

diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
index eb9f100..9079b5b 100644
--- a/drivers/net/thunderx/Makefile
+++ b/drivers/net/thunderx/Makefile
@@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
 
+CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 19ad85a..15f5cfc 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -66,7 +66,7 @@
 #include "base/nicvf_plat.h"
 
 #include "nicvf_ethdev.h"
-
+#include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
 static inline int
@@ -617,6 +617,9 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
 				NICVF_TX_FREE_MPOOL_THRESH :
 				tx_conf->tx_free_thresh);
+		txq->pool_free = nicvf_multi_pool_free_xmited_buffers;
+	} else {
+		txq->pool_free = nicvf_single_pool_free_xmited_buffers;
 	}
 
 	/* Allocate software ring */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
new file mode 100644
index 0000000..88a5152
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -0,0 +1,255 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_log.h>
+#include <rte_mbuf.h>
+#include <rte_prefetch.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+#include "nicvf_rxtx.h"
+#include "nicvf_logs.h"
+
+static inline void __hot
+fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Build in local variable sqe to avoid reads from sq desc memory */
+	union sq_entry_t sqe;
+	uint64_t ol_flags;
+
+	/* Fill SQ header descriptor */
+	sqe.buff[0] = 0;
+	sqe.hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+	/* Number of sub-descriptors following this one */
+	sqe.hdr.subdesc_cnt = pkt->nb_segs;
+	sqe.hdr.tot_len = pkt->pkt_len;
+
+	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
+	if (unlikely(ol_flags)) {
+		/* L4 cksum */
+		if (ol_flags & PKT_TX_TCP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
+		else if (ol_flags & PKT_TX_UDP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
+		else
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
+		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
+
+		/* L3 cksum */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			sqe.hdr.csum_l3 = 1;
+			sqe.hdr.l3_offset = pkt->l2_len;
+		}
+	}
+
+	entry->buff[0] = sqe.buff[0];
+}
+
+void __hot
+nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	int j = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	void *obj_p[NICVF_MAX_TX_FREE_THRESH] __rte_cache_aligned;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head])
+			obj_p[j++] = txbuffs[head];
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	rte_mempool_put_bulk(sq->pool, obj_p, j);
+	sq->head = curr_head;
+	sq->xmit_bufs -= j;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+void __hot
+nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	uint32_t n = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head]) {
+			rte_pktmbuf_free_seg(txbuffs[head]);
+			n++;
+		}
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	sq->head = curr_head;
+	sq->xmit_bufs -= n;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline uint32_t __hot
+nicvf_free_tx_desc(struct nicvf_txq *sq)
+{
+	return ((sq->head - sq->tail - 1) & sq->qlen_mask);
+}
+
+/* Send Header + Packet */
+#define TX_DESC_PER_PKT 2
+
+static inline uint32_t __hot
+nicvf_free_xmitted_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint32_t free_desc = nicvf_free_tx_desc(sq);
+
+	if (free_desc < nb_pkts * TX_DESC_PER_PKT ||
+			sq->xmit_bufs > sq->tx_free_thresh) {
+		if (unlikely(sq->pool == NULL))
+			sq->pool = tx_pkts[0]->pool;
+
+		sq->pool_free(sq);
+		/* Freed now, let's see the number of free descs again */
+		free_desc = nicvf_free_tx_desc(sq);
+	}
+	return free_desc;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+	uint32_t free_desc;
+	uint32_t tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt;
+	uint32_t qlen_mask = sq->qlen_mask;
+
+	tail = sq->tail;
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts && (int)free_desc >= TX_DESC_PER_PKT; i++) {
+		pkt = tx_pkts[i];
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+		free_desc -= TX_DESC_PER_PKT;
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += i;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, i * TX_DESC_PER_PKT);
+	return i;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	int i, k;
+	uint32_t used_desc, next_used_desc, used_bufs, free_desc, tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt, *seg;
+	uint32_t qlen_mask = sq->qlen_mask;
+	uint16_t nb_segs;
+
+	tail = sq->tail;
+	used_desc = 0;
+	used_bufs = 0;
+
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		pkt = tx_pkts[i];
+
+		nb_segs = pkt->nb_segs;
+
+		next_used_desc = used_desc + nb_segs + 1;
+		if (next_used_desc > free_desc)
+			break;
+		used_desc = next_used_desc;
+		used_bufs += nb_segs;
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		seg = pkt->next;
+		for (k = 1; k < nb_segs; k++) {
+			txbuffs[tail] = seg;
+			fill_sq_desc_gather(desc_ptr + tail, seg);
+			tail = (tail + 1) & qlen_mask;
+			seg = seg->next;
+		}
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += used_bufs;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, used_desc);
+	return nb_pkts;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
new file mode 100644
index 0000000..b1fdc69
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -0,0 +1,93 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_RXTX_H__
+#define __THUNDERX_NICVF_RXTX_H__
+
+#include <rte_ethdev.h>
+
+#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+
+#ifndef __hot
+#define __hot	__attribute__((hot))
+#endif
+
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return (i & ~3) + 3 - (i & 3);
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Build in local variable sqe to avoid reads from sq desc memory */
+	union sq_entry_t sqe;
+
+	/* Fill the SQ gather entry */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+	sqe.gather.ld_type = NIC_SEND_LD_TYPE_E_LDT;
+	sqe.gather.size = pkt->data_len;
+	sqe.gather.addr = rte_mbuf_data_dma_addr(pkt);
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+#else
+
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return i;
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |
+			 (uint64_t)NIC_SEND_LD_TYPE_E_LDT << 58 |
+			 pkt->data_len;
+	entry->buff[1] = rte_mbuf_data_dma_addr(pkt);
+}
+#endif
+
+uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
+uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
+				  uint16_t pkts);
+
+void nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq);
+void nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq);
+
+#endif /* __THUNDERX_NICVF_RXTX_H__  */
-- 
2.5.5

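The transmit path above spends a fixed two descriptors per packet in the single-segment case (one header plus one gather sub-descriptor) and reclaims completed mbufs lazily, only once xmit_bufs crosses tx_free_thresh or descriptors run short. The free-slot accounting works because the SQ is a power-of-two ring with one slot kept unused; a standalone sketch of that arithmetic (hypothetical names, not driver code):

#include <assert.h>
#include <stdint.h>

struct ring {
	uint32_t head;		/* consumer index (reclaimed up to here) */
	uint32_t tail;		/* producer index (next slot to fill) */
	uint32_t qlen_mask;	/* ring size - 1; size is a power of two */
};

/* Same formula as nicvf_free_tx_desc(): keeping one slot unused makes
 * head == tail unambiguously mean "empty". */
static uint32_t
ring_free_slots(const struct ring *r)
{
	return (r->head - r->tail - 1) & r->qlen_mask;
}

int
main(void)
{
	struct ring r = { .head = 0, .tail = 0, .qlen_mask = 7 };

	assert(ring_free_slots(&r) == 7);	/* 8 entries, 7 usable */
	r.tail = (r.tail + 2) & r.qlen_mask;	/* one packet = 2 descs */
	assert(ring_free_slots(&r) == 5);
	return 0;
}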

* [PATCH v4 13/19] net/thunderx: add single and multi segment rx functions
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (11 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 12/19] net/thunderx: add single and multi segment tx functions Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 14/19] net/thunderx: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
                           ` (7 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.h |  33 ++++
 drivers/net/thunderx/nicvf_rxtx.c   | 317 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |   5 +
 3 files changed, 355 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b1af468..59fa19c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -70,4 +70,37 @@ nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+static inline uint64_t
+nicvf_mempool_phy_offset(struct rte_mempool *mp)
+{
+	struct rte_mempool_memhdr *hdr;
+
+	hdr = STAILQ_FIRST(&mp->mem_list);
+	assert(hdr != NULL);
+	return (uint64_t)((uintptr_t)hdr->addr - hdr->phys_addr);
+}
+
+static inline uint16_t
+nicvf_mbuff_meta_length(struct rte_mbuf *mbuf)
+{
+	return (uint16_t)((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+}
+
+/*
+ * Simple phy2virt functions, assuming all mbufs are in a single huge page:
+ * V = P + offset
+ * P = V - offset
+ */
+static inline uintptr_t
+nicvf_mbuff_phy2virt(phys_addr_t phy, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(phy + mbuf_phys_off);
+}
+
+static inline uintptr_t
+nicvf_mbuff_virt2phy(uintptr_t virt, uint64_t mbuf_phys_off)
+{
+	return (phys_addr_t)(virt - mbuf_phys_off);
+}
+
 #endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 88a5152..fed0859 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -253,3 +253,320 @@ nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nicvf_addr_write(sq->sq_door, used_desc);
 	return nb_pkts;
 }
+
+static const uint32_t ptype_table[16][16] __rte_cache_aligned = {
+	[L3_NONE][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_NONE][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_NONE][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_NONE][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_NONE][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_NONE][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4][L4_NONE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV4][L4_IPFRAG] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4][L4_IPCOMP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_TCP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[L3_IPV4][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_GRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4][L4_NVGRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4_OPT][L4_NONE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_L3_IPV4,
+	[L3_IPV4_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_TCP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV4_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_GRE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6][L4_NONE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV6][L4_IPFRAG] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6][L4_IPCOMP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_TCP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[L3_IPV6][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_GRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6][L4_NVGRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6_OPT][L4_NONE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_L3_IPV4,
+	[L3_IPV6_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_TCP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV6_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_GRE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_ET_STOP][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_ET_STOP][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_ET_STOP][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_ET_STOP][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_ET_STOP][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_ET_STOP][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_OTHER][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_OTHER][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_OTHER][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_OTHER][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_OTHER][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_OTHER][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+};
+
+static inline uint32_t __hot
+nicvf_rx_classify_pkt(cqe_rx_word0_t cqe_rx_w0)
+{
+	return ptype_table[cqe_rx_w0.l3_type][cqe_rx_w0.l4_type];
+}
+
+static inline int __hot
+nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
+{
+	int i;
+	uint32_t ltail, next_tail;
+	struct nicvf_rbdr *rbdr = rxq->shared_rbdr;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	struct rbdr_entry_t *desc = rbdr->desc;
+	uint32_t qlen_mask = rbdr->qlen_mask;
+	uintptr_t door = rbdr->rbdr_door;
+	void *obj_p[NICVF_MAX_RX_FREE_THRESH] __rte_cache_aligned;
+
+	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
+		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		return 0;
+	}
+
+	NICVF_RX_ASSERT((unsigned int)to_fill <= (qlen_mask -
+		(nicvf_addr_read(rbdr->rbdr_status) & NICVF_RBDR_COUNT_MASK)));
+
+	next_tail = __atomic_fetch_add(&rbdr->next_tail, to_fill,
+					__ATOMIC_ACQUIRE);
+	ltail = next_tail;
+	for (i = 0; i < to_fill; i++) {
+		struct rbdr_entry_t *entry = desc + (ltail & qlen_mask);
+
+		entry->full_addr = nicvf_mbuff_virt2phy((uintptr_t)obj_p[i],
+							mbuf_phys_off);
+		ltail++;
+	}
+
+	while (__atomic_load_n(&rbdr->tail, __ATOMIC_RELAXED) != next_tail)
+		rte_pause();
+
+	__atomic_store_n(&rbdr->tail, ltail, __ATOMIC_RELEASE);
+	nicvf_addr_write(door, to_fill);
+	return to_fill;
+}
+
+static inline int32_t __hot
+nicvf_rx_pkts_to_process(struct nicvf_rxq *rxq, uint16_t nb_pkts,
+			 int32_t available_space)
+{
+	if (unlikely(available_space < nb_pkts))
+		rxq->available_space = nicvf_addr_read(rxq->cq_status)
+						& NICVF_CQ_CQE_COUNT_MASK;
+
+	return RTE_MIN(nb_pkts, available_space);
+}
+
+static inline void __hot
+nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
+		 struct rte_mbuf *pkt)
+{
+	if (likely(cqe_rx_w0.rss_alg)) {
+		pkt->hash.rss = cqe_rx_w2.rss_tag;
+		pkt->ol_flags |= PKT_RX_RSS_HASH;
+	}
+}
+
+uint16_t __hot
+nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint32_t i, to_process;
+	struct cqe_rx_t *cqe_rx;
+	struct rte_mbuf *pkt;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	cqe_rx_word3_t cqe_rx_w3;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t rb0_ptr, mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t cqe_head = rxq->head & cqe_mask;
+	int32_t available_space = rxq->available_space;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cqe_rx = (struct cqe_rx_t *)&desc[cqe_head];
+		NICVF_RX_ASSERT(((struct cq_entry_type_t *)cqe_rx)->cqe_type
+						 == CQE_TYPE_RX);
+
+		NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+		NICVF_LOAD_PAIR(cqe_rx_w2.u64, cqe_rx_w3.u64, &cqe_rx->word2);
+		rb0_ptr = *((uint64_t *)cqe_rx + rbptr_offset);
+		pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+				(rb0_ptr - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+		pkt->ol_flags = 0;
+		pkt->port = port_id;
+		pkt->data_len = cqe_rx_w3.rb0_sz;
+		pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+		pkt->nb_segs = 1;
+		pkt->pkt_len = cqe_rx_w3.rb0_sz;
+		pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+
+		nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+		rte_mbuf_refcnt_set(pkt, 1);
+		rx_pkts[i] = pkt;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(pkt);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += to_process;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
+						rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
+
+static inline uint16_t __hot
+nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
+			uint64_t mbuf_phys_off, uint8_t port_id,
+			struct rte_mbuf **rx_pkt, uint8_t rbptr_offset)
+{
+	struct rte_mbuf *pkt, *seg, *prev;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	uint16_t *rb_sz, nb_segs, seg_idx;
+	uint64_t *rb_ptr;
+
+	NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+	NICVF_RX_ASSERT(cqe_rx_w0.cqe_type == CQE_TYPE_RX);
+	cqe_rx_w2 = cqe_rx->word2;
+	rb_sz = &cqe_rx->word3.rb0_sz;
+	rb_ptr = (uint64_t *)cqe_rx + rbptr_offset;
+	nb_segs = cqe_rx_w0.rb_cnt;
+	pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[0] - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+	pkt->ol_flags = 0;
+	pkt->port = port_id;
+	pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = cqe_rx_w1.pkt_len;
+	pkt->data_len = rb_sz[nicvf_frag_num(0)];
+	rte_mbuf_refcnt_set(pkt, 1);
+	pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+	nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+
+	*rx_pkt = pkt;
+	prev = pkt;
+	for (seg_idx = 1; seg_idx < nb_segs; seg_idx++) {
+		seg = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[seg_idx], mbuf_phys_off);
+
+		prev->next = seg;
+		seg->data_len = rb_sz[nicvf_frag_num(seg_idx)];
+		seg->port = port_id;
+		seg->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(seg, 1);
+
+		prev = seg;
+	}
+	prev->next = NULL;
+	return nb_segs;
+}
+
+uint16_t __hot
+nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t i, to_process, cqe_head, buffers_consumed = 0;
+	int32_t available_space = rxq->available_space;
+	uint16_t nb_segs;
+	const uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	cqe_head = rxq->head & cqe_mask;
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		nb_segs = nicvf_process_cq_mseg_entry(cqe_rx, mbuf_phys_off,
+				port_id, rx_pkts + i, rbptr_offset);
+		buffers_consumed += nb_segs;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(rx_pkts[i]);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += buffers_consumed;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -=
+				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index b1fdc69..d2ca2c9 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -33,6 +33,7 @@
 #ifndef __THUNDERX_NICVF_RXTX_H__
 #define __THUNDERX_NICVF_RXTX_H__
 
+#include <rte_byteorder.h>
 #include <rte_ethdev.h>
 
 #define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
@@ -83,6 +84,10 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
+uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
+
 uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
 uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
 				  uint16_t pkts);
-- 
2.5.5

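The receive path converts the buffer addresses the hardware writes into the CQE back to mbuf pointers without any lookup table: all mbufs of a pool are assumed to live in one physically contiguous hugepage, so a single precomputed offset works in both directions. A standalone sketch of that mapping (the base addresses are made up for illustration; the driver additionally folds the mbuf metadata length and headroom into the offset):

#include <assert.h>
#include <stdint.h>

int
main(void)
{
	uintptr_t virt_base = 0x7f0000000000UL;	/* hypothetical mapping */
	uint64_t phys_base = 0x0000001000000000UL;
	uint64_t mbuf_phys_off = (uint64_t)virt_base - phys_base;

	uint64_t rb0_ptr = phys_base + 0x2000;	/* as read from a CQE */
	uintptr_t mbuf = (uintptr_t)(rb0_ptr + mbuf_phys_off); /* phy2virt */

	assert(mbuf - mbuf_phys_off == rb0_ptr);	/* virt2phy */
	return 0;
}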

* [PATCH v4 14/19] net/thunderx: add dev_supported_ptypes_get and rx_queue_count support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (12 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 13/19] net/thunderx: add single and multi segment rx functions Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 15/19] net/thunderx: add rx queue start and stop support Jerin Jacob
                           ` (6 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  9 ++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  2 ++
 3 files changed, 52 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 15f5cfc..8b8d9d9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -260,6 +260,45 @@ nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->oerrors = port_stats.tx_drops;
 }
 
+static const uint32_t *
+nicvf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	size_t copied;
+	static uint32_t ptypes[32];
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	static const uint32_t ptypes_pass1[] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+	};
+	static const uint32_t ptypes_pass2[] = {
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+	};
+	static const uint32_t ptypes_end = RTE_PTYPE_UNKNOWN;
+
+	copied = sizeof(ptypes_pass1);
+	memcpy(ptypes, ptypes_pass1, copied);
+	if (nicvf_hw_version(nic) == NICVF_PASS2) {
+		memcpy((char *)ptypes + copied, ptypes_pass2,
+			sizeof(ptypes_pass2));
+		copied += sizeof(ptypes_pass2);
+	}
+
+	memcpy((char *)ptypes + copied, &ptypes_end, sizeof(ptypes_end));
+	if (dev->rx_pkt_burst == nicvf_recv_pkts ||
+		dev->rx_pkt_burst == nicvf_recv_pkts_multiseg)
+		return ptypes;
+
+	return NULL;
+}
+
 static void
 nicvf_dev_stats_reset(struct rte_eth_dev *dev)
 {
@@ -888,6 +927,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.dev_supported_ptypes_get = nicvf_dev_supported_ptypes_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
@@ -895,6 +935,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.rx_queue_count           = nicvf_dev_rx_queue_count,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
 	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index fed0859..1c6d6a8 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -570,3 +570,12 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	return to_process;
 }
+
+uint32_t
+nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+
+	rxq = dev->data->rx_queues[queue_idx];
+	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d2ca2c9..ded87f3 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -84,6 +84,8 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
-- 
2.5.5

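A minimal usage sketch for the two new callbacks, assuming application code against a configured nicvf port (port and queue ids are illustrative):

#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
probe_port(uint8_t port_id)
{
	uint32_t ptypes[32];
	int i, n;

	/* Lands in nicvf_dev_supported_ptypes_get() */
	n = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
					     ptypes, 32);
	for (i = 0; i < n && i < 32; i++)
		printf("ptype[%d] = 0x%08x\n", i, (unsigned int)ptypes[i]);

	/* Completed, not yet received CQ entries of RX queue 0 */
	printf("rxq0 used descs: %d\n", rte_eth_rx_queue_count(port_id, 0));
}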

* [PATCH v4 15/19] net/thunderx: add rx queue start and stop support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (13 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 14/19] net/thunderx: add dev_supported_ptypes_get and rx_queue_count support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 16/19] net/thunderx: add tx " Jerin Jacob
                           ` (5 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 167 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
 drivers/net/thunderx/nicvf_rxtx.h   |   1 +
 3 files changed, 186 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8b8d9d9..7a58cb3 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -562,6 +562,54 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+
+static inline int
+nicvf_configure_cpi(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint16_t qidx, qcnt;
+	int ret;
+
+	/* Count started rx queues */
+	for (qidx = qcnt = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		if (dev->data->rx_queue_state[qidx] ==
+		    RTE_ETH_QUEUE_STATE_STARTED)
+			qcnt++;
+
+	nic->cpi_alg = CPI_ALG_NONE;
+	ret = nicvf_mbox_config_cpi(nic, qcnt);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure CPI %d", ret);
+
+	return ret;
+}
+
+static int
+nicvf_configure_rss_reta(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	unsigned int idx, qmap_size;
+	uint8_t qmap[RTE_MAX_QUEUES_PER_PORT];
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	/* Prepare queue map */
+	for (idx = 0, qmap_size = 0; idx < dev->data->nb_rx_queues; idx++) {
+		if (dev->data->rx_queue_state[idx] ==
+				RTE_ETH_QUEUE_STATE_STARTED)
+			qmap[qmap_size++] = idx;
+	}
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = qmap[idx % qmap_size];
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 static void
 nicvf_dev_tx_queue_release(void *sq)
 {
@@ -687,6 +735,33 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	return 0;
 }
 
+static inline void
+nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+{
+	uint32_t rxq_cnt;
+	uint32_t nb_pkts, released_pkts = 0;
+	uint32_t refill_cnt = 0;
+	struct rte_eth_dev *dev = rxq->nic->eth_dev;
+	struct rte_mbuf *rx_pkts[NICVF_MAX_RX_FREE_THRESH];
+
+	if (dev->rx_pkt_burst == NULL)
+		return;
+
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, rxq->queue_id))) {
+		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
+					NICVF_MAX_RX_FREE_THRESH);
+		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
+		while (nb_pkts) {
+			rte_pktmbuf_free_seg(rx_pkts[--nb_pkts]);
+			released_pkts++;
+		}
+	}
+
+	refill_cnt += nicvf_dev_rbdr_refill(dev, rxq->queue_id);
+	PMD_DRV_LOG(INFO, "free_cnt=%d  refill_cnt=%d",
+		    released_pkts, refill_cnt);
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -695,6 +770,69 @@ nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 	rxq->recv_buffers = 0;
 }
 
+static inline int
+nicvf_start_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	/* Update rbdr pointer to all rxq */
+	rxq = dev->data->rx_queues[qidx];
+	rxq->shared_rbdr = nic->rbdr;
+
+	ret = nicvf_qset_rq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rq %d %d", qidx, ret);
+		goto config_rq_error;
+	}
+	ret = nicvf_qset_cq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure cq %d %d", qidx, ret);
+		goto config_cq_error;
+	}
+
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+
+config_cq_error:
+	nicvf_qset_cq_reclaim(nic, qidx);
+config_rq_error:
+	nicvf_qset_rq_reclaim(nic, qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret, other_error;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_rq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim rq %d %d", qidx, ret);
+
+	other_error = ret;
+	rxq = dev->data->rx_queues[qidx];
+	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_reset(rxq);
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim cq %d %d", qidx, ret);
+
+	other_error |= ret;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return other_error;
+}
+
 static void
 nicvf_dev_rx_queue_release(void *rx_queue)
 {
@@ -707,6 +845,33 @@ nicvf_dev_rx_queue_release(void *rx_queue)
 }
 
 static int
+nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	ret = nicvf_start_rx_queue(dev, qidx);
+	if (ret)
+		return ret;
+
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		return ret;
+
+	return nicvf_configure_rss_reta(dev);
+}
+
+static int
+nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	ret = nicvf_stop_rx_queue(dev, qidx);
+	ret |= nicvf_configure_cpi(dev);
+	ret |= nicvf_configure_rss_reta(dev);
+	return ret;
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -933,6 +1098,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
+	.rx_queue_start           = nicvf_dev_rx_queue_start,
+	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 1c6d6a8..eb51a72 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -579,3 +579,21 @@ nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
 	rxq = dev->data->rx_queues[queue_idx];
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
+
+uint32_t
+nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+	uint32_t to_process;
+	uint32_t rx_free;
+
+	rxq = dev->data->rx_queues[queue_idx];
+	to_process = rxq->recv_buffers;
+	while (rxq->recv_buffers > 0) {
+		rx_free = RTE_MIN(rxq->recv_buffers, NICVF_MAX_RX_FREE_THRESH);
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rx_free);
+	}
+
+	assert(rxq->recv_buffers == 0);
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index ded87f3..9dad8a5 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -85,6 +85,7 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 #endif
 
 uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
-- 
2.5.5

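Stopping an RX queue also takes it out of packet steering: each start/stop above re-counts the queues still in the started state and rewrites both the CPI configuration and the default RSS RETA so traffic lands only on active queues. A minimal usage sketch (assumed application code):

#include <rte_ethdev.h>

static int
toggle_rxq(uint8_t port_id, uint16_t qidx)
{
	int ret;

	/* Reclaims the RQ/CQ and drains packets still buffered */
	ret = rte_eth_dev_rx_queue_stop(port_id, qidx);
	if (ret)
		return ret;

	return rte_eth_dev_rx_queue_start(port_id, qidx);
}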

* [PATCH v4 16/19] net/thunderx: add tx queue start and stop support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (14 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 15/19] net/thunderx: add rx queue start and stop support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 17/19] net/thunderx: add device start, stop and close support Jerin Jacob
                           ` (4 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 59 +++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 7a58cb3..3c88290 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -562,6 +562,51 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+static inline int
+nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	txq = dev->data->tx_queues[qidx];
+	txq->pool = NULL;
+	ret = nicvf_qset_sq_config(nicvf_pmd_priv(dev), qidx, txq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure sq %d %d", qidx, ret);
+		goto config_sq_error;
+	}
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return ret;
+
+config_sq_error:
+	nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim sq %d %d", qidx, ret);
+
+	txq = dev->data->tx_queues[qidx];
+	nicvf_tx_queue_release_mbufs(txq);
+	nicvf_tx_queue_reset(txq);
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return ret;
+}
 
 static inline int
 nicvf_configure_cpi(struct rte_eth_dev *dev)
@@ -872,6 +917,18 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
 }
 
 static int
+nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	return nicvf_start_tx_queue(dev, qidx);
+}
+
+static int
+nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	return nicvf_stop_tx_queue(dev, qidx);
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -1100,6 +1157,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_start           = nicvf_dev_rx_queue_start,
 	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
+	.tx_queue_start           = nicvf_dev_tx_queue_start,
+	.tx_queue_stop            = nicvf_dev_tx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
-- 
2.5.5

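TX queue start/stop is simpler since no steering state depends on it; note that stopping reclaims the SQ and frees any mbufs still queued, so in-flight packets are dropped rather than transmitted. A minimal usage sketch (assumed application code):

#include <rte_ethdev.h>

static int
restart_txq(uint8_t port_id, uint16_t qidx)
{
	int ret;

	ret = rte_eth_dev_tx_queue_stop(port_id, qidx);
	if (ret)
		return ret;

	return rte_eth_dev_tx_queue_start(port_id, qidx);
}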

* [PATCH v4 17/19] net/thunderx: add device start, stop and close support
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (15 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 16/19] net/thunderx: add tx " Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 18/19] net/thunderx: updated driver documentation and release notes Jerin Jacob
                           ` (3 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 467 ++++++++++++++++++++++++++++++++++++
 1 file changed, 467 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3c88290..7d545f9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,8 @@
 #include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
+static void nicvf_dev_stop(struct rte_eth_dev *dev);
+
 static inline int
 nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
 			       struct rte_eth_link *link)
@@ -534,6 +536,82 @@ nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+{
+	struct nicvf_rbdr *rbdr;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+
+	assert(nic->rbdr == NULL);
+	rbdr = rte_zmalloc_socket("rbdr", sizeof(struct nicvf_rbdr),
+				  RTE_CACHE_LINE_SIZE, nic->node);
+	if (rbdr == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr");
+		return -ENOMEM;
+	}
+
+	ring_size = sizeof(struct rbdr_entry_t) * desc_cnt;
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+				   NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rbdr->phys = rz->phys_addr;
+	rbdr->tail = 0;
+	rbdr->next_tail = 0;
+	rbdr->desc = rz->addr;
+	rbdr->buffsz = buffsz;
+	rbdr->qlen_mask = desc_cnt - 1;
+	rbdr->rbdr_status =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_STATUS0;
+	rbdr->rbdr_door =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_DOOR;
+
+	nic->rbdr = rbdr;
+	return 0;
+}
+
+static void
+nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+{
+	uint16_t qidx;
+	void *obj;
+	struct nicvf_rxq *rxq;
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		if (rxq->precharge_cnt) {
+			obj = (void *)nicvf_mbuff_phy2virt(phy,
+							   rxq->mbuf_phys_off);
+			rte_mempool_put(rxq->pool, obj);
+			rxq->precharge_cnt--;
+			break;
+		}
+	}
+}
+
+static inline void
+nicvf_rbdr_release_mbufs(struct nicvf *nic)
+{
+	uint32_t qlen_mask, head;
+	struct rbdr_entry_t *entry;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	qlen_mask = rbdr->qlen_mask;
+	head = rbdr->head;
+	while (head != rbdr->tail) {
+		entry = rbdr->desc + head;
+		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		head++;
+		head = head & qlen_mask;
+	}
+}
+
 static inline void
 nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
 {
@@ -629,6 +707,31 @@ nicvf_configure_cpi(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static inline int
+nicvf_configure_rss(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t rsshf;
+	int ret = -EINVAL;
+
+	rsshf = nicvf_rss_ethdev_to_nic(nic,
+			dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
+	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
+		    dev->data->dev_conf.rxmode.mq_mode,
+		    nic->eth_dev->data->nb_rx_queues,
+		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+
+	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+		ret = nicvf_rss_term(nic);
+	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		ret = nicvf_rss_config(nic,
+				       nic->eth_dev->data->nb_rx_queues, rsshf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
+
+	return ret;
+}
+
 static int
 nicvf_configure_rss_reta(struct rte_eth_dev *dev)
 {
@@ -673,6 +776,48 @@ nicvf_dev_tx_queue_release(void *sq)
 	}
 }
 
+static void
+nicvf_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct nicvf_txq *txq;
+	size_t i;
+	bool multiseg = false;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if ((txq->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
+			multiseg = true;
+			break;
+		}
+	}
+
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (multiseg) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts;
+	}
+
+	if (txq->pool_free == nicvf_single_pool_free_xmited_buffers)
+		PMD_DRV_LOG(DEBUG, "Using single-mempool tx free method");
+	else
+		PMD_DRV_LOG(DEBUG, "Using multi-mempool tx free method");
+}
+
+static void
+nicvf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts;
+	}
+}
+
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
@@ -1064,6 +1209,317 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static nicvf_phys_addr_t
+rbdr_rte_mempool_get(void *opaque)
+{
+	uint16_t qidx;
+	uintptr_t mbuf;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv((struct rte_eth_dev *)opaque);
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		/* Maintain equal buffer count across all pools */
+		if (rxq->precharge_cnt >= rxq->qlen_mask)
+			continue;
+		rxq->precharge_cnt++;
+		mbuf = (uintptr_t)rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf)
+			return nicvf_mbuff_virt2phy(mbuf, rxq->mbuf_phys_off);
+	}
+	return 0;
+}
+
+static int
+nicvf_dev_start(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	uint32_t buffsz = 0, rbdrsz = 0;
+	uint32_t total_rxq_desc, nb_rbdr_desc, exp_buffs;
+	uint64_t mbuf_phys_off = 0;
+	struct nicvf_rxq *rxq;
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct rte_mbuf *mbuf;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+	uint16_t mtu;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Userspace process exited without proper shutdown in last run */
+	if (nicvf_qset_rbdr_active(nic, 0))
+		nicvf_dev_stop(dev);
+
+	/*
+	 * The ThunderX nicvf PMD can support more than one pool per port
+	 * only when:
+	 * 1) the data payload size is the same across all pools in the port,
+	 * AND
+	 * 2) all mbufs in the pools come from the same hugepage,
+	 * AND
+	 * 3) the mbuf metadata size is the same across all pools in the port.
+	 *
+	 * This is to support existing applications that use multiple pools
+	 * per port; the QoS use case for multiple pools is not addressed.
+	 */
+
+	/* Validate RBDR buff size */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		mbp_priv = rte_mempool_get_priv(rxq->pool);
+		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+		if (buffsz % 128) {
+			PMD_INIT_LOG(ERR, "rxbuf size must be a multiple of 128");
+			return -EINVAL;
+		}
+		if (rbdrsz == 0)
+			rbdrsz = buffsz;
+		if (rbdrsz != buffsz) {
+			PMD_INIT_LOG(ERR, "buffsz not same, qid=%d (%d/%d)",
+				     qidx, rbdrsz, buffsz);
+			return -EINVAL;
+		}
+	}
+
+	/* Validate mempool attributes */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
+		mbuf = rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "Failed allocate mbuf qid=%d pool=%s",
+				     qidx, rxq->pool->name);
+			return -ENOMEM;
+		}
+		rxq->mbuf_phys_off -= nicvf_mbuff_meta_length(mbuf);
+		rxq->mbuf_phys_off -= RTE_PKTMBUF_HEADROOM;
+		rte_pktmbuf_free(mbuf);
+
+		if (mbuf_phys_off == 0)
+			mbuf_phys_off = rxq->mbuf_phys_off;
+		if (mbuf_phys_off != rxq->mbuf_phys_off) {
+			PMD_INIT_LOG(ERR, "pool params not same,%s %" PRIx64,
+				     rxq->pool->name, mbuf_phys_off);
+			return -EINVAL;
+		}
+	}
+
+	/* Check the level of buffers in the pool */
+	total_rxq_desc = 0;
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		/* Count the total number of rxq descs */
+		total_rxq_desc += rxq->qlen_mask + 1;
+		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
+		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		if (rte_mempool_count(rxq->pool) < exp_buffs) {
+			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
+				     rxq->pool->name,
+				     rte_mempool_count(rxq->pool),
+				     exp_buffs);
+			return -ENOENT;
+		}
+	}
+
+	/* Check RBDR desc overflow */
+	ret = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	if (ret == 0) {
+		PMD_INIT_LOG(ERR, "Reached RBDR desc limit, reduce nr desc");
+		return -ENOMEM;
+	}
+
+	/* Enable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable qset %d", ret);
+		return ret;
+	}
+
+	/* Allocate RBDR and RBDR ring desc */
+	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
+		goto qset_reclaim;
+	}
+
+	/* Enable and configure RBDR registers */
+	ret = nicvf_qset_rbdr_config(nic, 0);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rbdr %d", ret);
+		goto qset_rbdr_free;
+	}
+
+	/* Fill rte_mempool buffers in RBDR pool and precharge it */
+	ret = nicvf_qset_rbdr_precharge(nic, 0, rbdr_rte_mempool_get,
+					dev, total_rxq_desc);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fill rbdr %d", ret);
+		goto qset_rbdr_reclaim;
+	}
+
+	PMD_DRV_LOG(INFO, "Filled %d out of %d entries in RBDR",
+		     nic->rbdr->tail, nb_rbdr_desc);
+
+	/* Configure RX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		ret = nicvf_start_rx_queue(dev, qidx);
+		if (ret)
+			goto start_rxq_error;
+	}
+
+	/* Configure VLAN Strip */
+	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
+
+	/* Configure TX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+		ret = nicvf_start_tx_queue(dev, qidx);
+		if (ret)
+			goto start_txq_error;
+	}
+
+	/* Configure CPI algorithm */
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		goto start_txq_error;
+
+	/* Configure RSS */
+	ret = nicvf_configure_rss(dev);
+	if (ret)
+		goto qset_rss_error;
+
+	/* Configure loopback */
+	ret = nicvf_loopback_config(nic, dev->data->dev_conf.lpbk_mode);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure loopback %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Reset all statistics counters attached to this port */
+	ret = nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, 0xFFFF, 0xFFFF);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to reset stat counters %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Set up scatter mode if needed for jumbo frames */
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * VLAN_TAG_SIZE > buffsz)
+		dev->data->scattered_rx = 1;
+	if (rx_conf->enable_scatter)
+		dev->data->scattered_rx = 1;
+
+	/* Set up MTU based on max_rx_pkt_len or default */
+	mtu = dev->data->dev_conf.rxmode.jumbo_frame ?
+		dev->data->dev_conf.rxmode.max_rx_pkt_len
+			-  ETHER_HDR_LEN - ETHER_CRC_LEN
+		: ETHER_MTU;
+
+	if (nicvf_dev_set_mtu(dev, mtu)) {
+		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
+		return -EBUSY;
+	}
+
+	/* Configure callbacks based on scatter mode */
+	nicvf_set_tx_function(dev);
+	nicvf_set_rx_function(dev);
+
+	/* Done; let the PF turn the BGX's RX and TX switches ON */
+	nicvf_mbox_cfg_done(nic);
+	return 0;
+
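+	/* Error unwind: release resources in reverse order of acquisition */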
+qset_rss_error:
+	nicvf_rss_term(nic);
+start_txq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+start_rxq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+qset_rbdr_reclaim:
+	nicvf_qset_rbdr_reclaim(nic, 0);
+	nicvf_rbdr_release_mbufs(nic);
+qset_rbdr_free:
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+qset_reclaim:
+	nicvf_qset_reclaim(nic);
+	return ret;
+}
+
+static void
+nicvf_dev_stop(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Let the PF turn the BGX's RX and TX switches OFF */
+	nicvf_mbox_shutdown(nic);
+
+	/* Disable loopback */
+	ret = nicvf_loopback_config(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable loopback %d", ret);
+
+	/* Disable VLAN Strip */
+	nicvf_vlan_hw_strip(nic, 0);
+
+	/* Reclaim sq */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+
+	/* Reclaim rq */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+
+	/* Reclaim RBDR */
+	ret = nicvf_qset_rbdr_reclaim(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim RBDR %d", ret);
+
+	/* Move all charged buffers in RBDR back to pool */
+	if (nic->rbdr != NULL)
+		nicvf_rbdr_release_mbufs(nic);
+
+	/* Reclaim CPI configuration */
+	if (!nic->sqs_mode) {
+		ret = nicvf_mbox_config_cpi(nic, 0);
+		if (ret)
+			PMD_INIT_LOG(ERR, "Failed to reclaim CPI config");
+	}
+
+	/* Disable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable qset %d", ret);
+
+	/* Disable all interrupts */
+	nicvf_disable_all_interrupts(nic);
+
+	/* Free RBDR SW structure */
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+}
+
+static void
+nicvf_dev_close(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	nicvf_dev_stop(dev);
+	nicvf_periodic_alarm_stop(nic);
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -1144,7 +1600,10 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
+	.dev_start                = nicvf_dev_start,
+	.dev_stop                 = nicvf_dev_stop,
 	.link_update              = nicvf_dev_link_update,
+	.dev_close                = nicvf_dev_close,
 	.stats_get                = nicvf_dev_stats_get,
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
@@ -1179,6 +1638,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &nicvf_eth_dev_ops;
 
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		nicvf_set_tx_function(eth_dev);
+		nicvf_set_rx_function(eth_dev);
+		return 0;
+	}
+
 	pci_dev = eth_dev->pci_dev;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v4 18/19] net/thunderx: updated driver documentation and release notes
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (16 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 17/19] net/thunderx: add device start, stop and close support Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 13:55         ` [PATCH v4 19/19] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
                           ` (2 subsequent siblings)
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Slawomir Rosek

Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and release notes

Changed "*" to "P" in overview.rst to capture the partially supported
feature as "*" creating alignment issues with Sphinx table

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/nics/index.rst              |   1 +
 doc/guides/nics/overview.rst           |  96 ++++-----
 doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_16_07.rst |   1 +
 4 files changed, 404 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 0b13698..ddf75f4 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
     nfp
     qede
     szedata2
+    thunderx
     virtio
     vhost
     vmxnet3
diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
index 0bd8fae..df28510 100644
--- a/doc/guides/nics/overview.rst
+++ b/doc/guides/nics/overview.rst
@@ -74,40 +74,40 @@ Most of these differences are summarized below.
 
 .. table:: Features availability in networking drivers
 
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
-   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s v v v v x
-                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h i i m e
-                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e o r r x n
-                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d s t t n v
-                        c x x i e 0       . v v   f e e e e k k k k     e         v   a t i i e i
-                        k   v n           . f f       . v v   . v v               f   t   o o t r
-                        e   f g           .   .       . f f   . f f                   a     . 3 t
-                        t                 v   v       v   v   v   v                   2     v
-                                          e   e       e   e   e   e                         e
-                                          c   c       c   c   c   c                         c
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s t v v v v x
+                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h h i i m e
+                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e u o r r x n
+                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d n s t t n v
+                        c x x i e 0       . v v   f e e e e k k k k     e         v   a d t i i e i
+                        k   v n           . f f       . v v   . v v               f   t e   o o t r
+                        e   f g           .   .       . f f   . f f                   a r     . 3 t
+                        t                 v   v       v   v   v   v                   2 x     v
+                                          e   e       e   e   e   e                           e
+                                          c   c       c   c   c   c                           c
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    Speed capabilities
-   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y
-   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y
-   Queue status event                                                                   Y
+   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y Y
+   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y Y
+   Queue status event                                                                     Y
    Rx interrupt                   Y     Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y   Y Y
-   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y
-   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y   Y
+   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y Y   Y Y
+   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y                 Y
+   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y Y   Y
    LRO                                              Y Y Y Y
    TSO                          Y   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y   Y Y
-   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y   Y Y
-   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
-   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y       Y Y
-   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y
-   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
-   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
+   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y Y   Y Y
+   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y Y   Y Y
+   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
+   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y         Y Y
+   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y     Y
+   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
+   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
    VMDq                                 Y Y     Y   Y Y     Y Y
-   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y
+   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y     Y
    DCB                                  Y Y     Y   Y Y
-   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
+   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
    Ethertype filter                     Y Y     Y   Y Y
    N-tuple filter                               Y   Y Y
    SYN filter                                   Y   Y Y
@@ -118,37 +118,37 @@ Most of these differences are summarized below.
    Flow control                 Y Y     Y Y     Y   Y Y                         Y Y
    Rate limitation                                  Y Y
    Traffic mirroring                    Y Y         Y Y
-   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
-   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
+   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     Y
+   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     P
    QinQ offload                   Y     Y   Y   Y Y Y   Y
-   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
-   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
+   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
    Inner L3 checksum                Y   Y   Y       Y   Y           Y
    Inner L4 checksum                Y   Y   Y       Y   Y           Y
-   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y
+   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y     Y
    Timesync                             Y Y     Y   Y Y
-   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y   Y Y
+   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y Y
+   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y   Y
+   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y Y   Y Y
    EEPROM dump                                  Y   Y Y
-   Registers dump                               Y Y Y Y Y Y
-   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
-   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y       Y Y
-   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
+   Registers dump                               Y Y Y Y Y Y                             Y
+   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                         Y Y
+   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y         Y Y
+   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y   Y Y
    Other kdrv                                                       Y Y               Y
-   ARMv7                                                                      Y           Y Y
-   ARMv8                                                                      Y           Y Y
+   ARMv7                                                                      Y             Y Y
+   ARMv8                                                                      Y         Y   Y Y
    Power8                                                           Y Y       Y
    TILE-Gx                                                                    Y
-   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y         Y Y Y
-   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y   Y
+   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y           Y Y Y
+   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y   Y Y Y
+   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y Y   Y
    Design doc
    Perf doc
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
 
 .. Note::
 
-   Features marked with "*" are partially supported. Refer to the appropriate
+   Features marked with "P" are partially supported. Refer to the appropriate
    NIC guide in the following sections for details.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
new file mode 100644
index 0000000..e38f260
--- /dev/null
+++ b/doc/guides/nics/thunderx.rst
@@ -0,0 +1,354 @@
+..  BSD LICENSE
+    Copyright (C) Cavium networks Ltd. 2016.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Cavium networks nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ThunderX NICVF Poll Mode Driver
+===============================
+
+The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
+support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
+as well as their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Cavium Networks Official Website
+<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the ThunderX PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Scatter and gather support for TX and RX
+- VLAN stripping
+- SR-IOV VF
+- NUMA support
+
+Supported ThunderX SoCs
+-----------------------
+- CN88xx
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.
+  By default it is enabled only for the defconfig_arm64-thunderx-* config.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX`` (default ``n``)
+
+  Toggle display of PF mailbox related run-time check messages.
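+
+For example, a minimal sketch of enabling the PMD together with the mailbox
+debug messages in the ``config`` file (using the option names listed above):
+
+.. code-block:: console
+
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=y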
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the ThunderX NICVF PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-thunderx-linuxapp-gcc install
+
+Linux
+-----
+
+.. _thunderx_testpmd_example:
+
+Running testpmd
+~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with a ThunderX NIC VF
+device managed by ``librte_pmd_thunderx_nicvf`` in the Linux operating system.
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+   .. _thunderx_vfio_noiommu:
+
+#. Enable **VFIO-NOIOMMU** mode (optional):
+
+   .. code-block:: console
+
+      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+
+   .. note::
+
+      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
+      See also :ref:`SR-IOV: Prerequisites and sample Application Notes <thunderx_sriov_example>`.
+
+#. Bind the ThunderX NIC VF device to ``vfio-pci`` loaded in the previous step:
+
+   Set up VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
+        -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx \
+        --port-topology=loop
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+
+      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
+
+      ...
+      EAL:   probe driver: 177d:11 rte_nicvf_pmd
+      EAL:   using IOMMU type 1 (Type 1)
+      EAL:   PCI memory mapped at 0x3ffade50000
+      EAL: Trying to map BAR 4 that contains the MSI-X table.
+           Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
+      EAL:   PCI memory mapped at 0x3ffadc60000
+      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
+      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
+           loopback_supported=true
+      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
+      Interactive-mode selected
+      Configuring Port 0 (socket 0)
+      ...
+
+      PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
+      Port 0: A6:C6:D9:17:78:01
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+.. _thunderx_sriov_example:
+
+SR-IOV: Prerequisites and sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
+automatically to a virtual function (VF) and present them as PCIe-like SR-IOV
+devices. This section provides instructions to configure SR-IOV with the
+Linux OS.
+
+#. Verify PF devices capabilities using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
+              ...
+              Kernel driver in use: thunder-nic
+              ...
+
+   .. note::
+
+      Unless the ``thunder-nic`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
+
+#. Verify VF devices capabilities and drivers using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+   .. note::
+
+      Unless the ``thunder-nicvf`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
+
+#. Verify PF/VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
+      0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
+      ...
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind VF devices to ``vfio-pci`` using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.1
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Verify VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.1 'Device 0011' drv=vfio-pci unused=
+      0002:01:00.2 'Device 0011' drv=vfio-pci unused=
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      ...
+
+#. Pass VF device to VM context (PCIe Passthrough):
+
+   The VF devices may be passed through to the guest VM using qemu,
+   virt-manager, virsh, etc.
+   ``librte_pmd_thunderx_nicvf`` or ``thunder-nicvf`` should be used to bind
+   the VF devices in the guest VM in :ref:`VFIO-NOIOMMU <thunderx_vfio_noiommu>` mode.
+
+   Example qemu guest launch command:
+
+   .. code-block:: console
+
+      sudo qemu-system-aarch64 -name vm1 \
+      -machine virt,gic_version=3,accel=kvm,usb=off \
+      -cpu host -m 4096 \
+      -smp 4,sockets=1,cores=8,threads=1 \
+      -nographic -nodefaults \
+      -kernel <kernel image> \
+      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
+      -device vfio-pci,host=0002:01:00.1 \
+      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw  \
+      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
+      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
+      -device virtio-net-device,netdev=net0 \
+      -serial stdio \
+      -mem-path /dev/huge
+
+#. Refer to the section :ref:`Running testpmd <thunderx_testpmd_example>` for
+   instructions on how to launch the ``testpmd`` application.
+
+Limitations
+-----------
+
+CRC striping
+~~~~~~~~~~~~
+
+The ThunderX SoC family NICs strip the CRC of every packet coming into the
+host interface. So, the CRC will be stripped even when the
+``rxmode.hw_strip_crc`` member is set to 0 in ``struct rte_eth_conf``.
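+
+The snippet below is a minimal sketch of this behavior; ``port_id`` is a
+placeholder for an already probed port:
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = {
+       .rxmode = {
+           .hw_strip_crc = 0, /* request to keep the CRC */
+       },
+   };
+   /* the CRC is stripped by the hardware regardless of this setting */
+   rte_eth_dev_configure(port_id, 1, 1, &port_conf);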
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support a maximum frame size of 9K. This value
+is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
+of up to 9200 bytes can still reach the host interface.
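+
+For example, assuming the ``port_conf`` sketch from the previous section,
+jumbo frames up to the fixed hardware limit can be requested as follows:
+
+.. code-block:: c
+
+   port_conf.rxmode.jumbo_frame = 1;
+   port_conf.rxmode.max_rx_pkt_len = 9200; /* fixed hardware maximum */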
+
+Maximum packet segments
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support up to 12 segments per packet when working
+in scatter/gather mode. So, setting the MTU will result in ``EINVAL`` when the
+frame size does not fit in the maximum number of segments.
+
+Limited VFs
+~~~~~~~~~~~
+
+The ThunderX SoC family NICs have 128 VFs, and each VF has 8 RX and 8 TX
+queues. The current driver implementation has a one-to-one mapping between
+a physical port and a VF, hence only a limited number of VFs can be used.
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 30e78d4..29b8b52 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -47,6 +47,7 @@ New Features
   * Dropped specific Xen Dom0 code.
   * Dropped specific anonymous mempool code in testpmd.
 
+* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**
 
 Resolved Issues
 ---------------
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v4 19/19] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (17 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 18/19] net/thunderx: updated driver documentation and release notes Jerin Jacob
@ 2016-06-13 13:55         ` Jerin Jacob
  2016-06-13 15:46         ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Bruce Richardson
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
  20 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-13 13:55 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e8558f..625423f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -336,6 +336,12 @@ M: Sony Chacko <sony.chacko@qlogic.com>
 F: drivers/net/qede/
 F: doc/guides/nics/qede.rst
 
+Cavium ThunderX nicvf
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
+F: drivers/net/thunderx/
+F: doc/guides/nics/thunderx.rst
+
 RedHat virtio
 M: Huawei Xie <huawei.xie@intel.com>
 M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* Re: [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-13 13:55         ` [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
@ 2016-06-13 15:09           ` Bruce Richardson
  2016-06-14 13:52             ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Bruce Richardson @ 2016-06-13 15:09 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, ferruh.yigit, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Mon, Jun 13, 2016 at 07:25:25PM +0530, Jerin Jacob wrote:
> Adds hardware specific API for ThunderX nicvf inbuilt NIC device under
> drivers/net/thunderx/nicvf/base directory.
> 

Hi Jerin,

we are trying to move away from huge drops of shared code in a single patchfile,
so as to make the commits smaller and then easier to review. Can you split this
patch into e.g. 3+ smaller commits based around logical functionality. For
example, the base code mailbox functionality in the mbox.[ch] files could be
its own commit. Obviously, the finer-grained the breakdown the better :-), but
I'd rather not see patches >1 kloc looking to be merged in.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v4 00/19] DPDK PMD for ThunderX NIC device
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (18 preceding siblings ...)
  2016-06-13 13:55         ` [PATCH v4 19/19] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-06-13 15:46         ` Bruce Richardson
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
  20 siblings, 0 replies; 204+ messages in thread
From: Bruce Richardson @ 2016-06-13 15:46 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, ferruh.yigit

On Mon, Jun 13, 2016 at 07:25:24PM +0530, Jerin Jacob wrote:
> This patch set provides the initial version of DPDK PMD for the
> built-in NIC device in Cavium ThunderX SoC family.
> 
> Implemented features and ThunderX nicvf PMD documentation added
> in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> respectively in this patch set.
> 
> These patches are checked using checkpatch.sh with following
> additional ignore option:
>     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> CAMELCASE - To accommodate PRIx64
> BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> 
Hi Jerin,

other than the fact that patch 1 is very big, this set looks pretty ok to me.
However, as a general comment on the series: the commit titles are overly
low-level, as they refer too much to function/structure names e.g. patches 4
through 10. If you run the script "check-git-log.sh" on your patchset this will
be flagged. What is expected in commit titles is that the change introduced by
the patch is explained without directly using the function names.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v4 01/19] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-13 15:09           ` Bruce Richardson
@ 2016-06-14 13:52             ` Jerin Jacob
  0 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 13:52 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, ferruh.yigit, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On Mon, Jun 13, 2016 at 04:09:24PM +0100, Bruce Richardson wrote:
> On Mon, Jun 13, 2016 at 07:25:25PM +0530, Jerin Jacob wrote:
> > Adds hardware specific API for ThunderX nicvf inbuilt NIC device under
> > drivers/net/thunderx/nicvf/base directory.
> > 
> 
> Hi Jerin,
> 
> we are trying to move away from huge drops of shared code in a single patchfile,
> so as to make the commits smaller and then easier to review. Can you split this
> patch into e.g. 3+ smaller commits based around logical functionality. For
> example, the base code mailbox functionality in the mbox.[ch] files could be
> its own commit. Obviously, the finer-grained the breakdown the better :-), but
> I'd rather not see patches >1 kloc looking to be merged in.

Hi Bruce,

I will send the next revision with splitting drivers/net/thunderx/nicvf/base

Jerin

> 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v5 00/25] DPDK PMD for ThunderX NIC device
  2016-06-13 13:55       ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Jerin Jacob
                           ` (19 preceding siblings ...)
  2016-06-13 15:46         ` [PATCH v4 00/19] DPDK PMD for ThunderX NIC device Bruce Richardson
@ 2016-06-14 19:06         ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 01/25] net/thunderx/base: add HW constants for ThunderX inbuilt NIC Jerin Jacob
                             ` (26 more replies)
  20 siblings, 27 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob

This patch set provides the initial version of DPDK PMD for the
built-in NIC device in Cavium ThunderX SoC family.

Implemented features and ThunderX nicvf PMD documentation added
in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
respectively in this patch set.

These patches are checked using checkpatch.sh with following
additional ignore option:
    options="$options --ignore=CAMELCASE,BRACKET_SPACE"
CAMELCASE - To accommodate PRIx64
BRACKET_SPACE - To accommodate AT&T inline line assembly in two places

This patch set is based on DPDK 16.07-RC1
and tested with git HEAD change-set
ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
following depended patch

http://dpdk.org/dev/patchwork/patch/11826/
ethdev: add tunnel and port RSS offload types

V1->V2

http://dpdk.org/dev/patchwork/patch/12609/
-- added const for the const struct tables
-- remove multiple blank lines
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12610/
-- removed DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
-- add const for table structs
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12614/
-- s/DEFAULT_*/NICVF_DEFAULT_*/gc
http://dpdk.org/dev/patchwork/patch/12615/
-- Fix typos
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12616/
-- removed redundant txq->tail = 0 and txq->head = 0
http://dpdk.org/dev/patchwork/patch/12627/
-- fixed the documentation changes

-- fixed TAB+space occurrences in functions
-- rebased to c8c33ad7f94c59d1c0676af0cfd61207b3e808db

V2->V3

http://dpdk.org/dev/patchwork/patch/13060/
-- Changed polling infrastructure to use rte_eal_alarm* instead of timerfd_create API
-- rebased to ca173a909538a2f1082cd0dcb4d778a97dab69c3

V3->V4

addressed review comments of Ferruh's review

http://dpdk.org/dev/patchwork/patch/13314/
-- s/avilable/available
http://dpdk.org/dev/patchwork/patch/13323/
-- s/witout/without

http://dpdk.org/dev/patchwork/patch/13318/
-- s/nicvf_free_xmittted_buffers/nicvf_free_xmitted_buffers
-- fix checkpatch errors
http://dpdk.org/dev/patchwork/patch/13307/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13308/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13320/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13321/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13322/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13324/
-- addressed review comments and created separated patch for
platform specific config change

-- update change log to net/thunderx: ........

V4->V5
-- split up the drivers/net/thunderx/nicvf/base files into the following
patches as suggested by Bruce

net/thunderx/base: add HW constants for ThunderX inbuilt NIC
net/thunderx/base: add register definition for ThunderX inbuilt NIC
net/thunderx/base: implement DPDK based platform abstraction for base code
net/thunderx/base: add mbox API for ThunderX PF/VF driver communication
net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
net/thunderx/base: add RSS and reta configuration HW APIs
net/thunderx/base: add statistics get HW APIs

-- Corrected wrong git commit log messages flagged by check-git-log.sh

Jerin Jacob (25):
  net/thunderx/base: add HW constants for ThunderX inbuilt NIC
  net/thunderx/base: add register definition for ThunderX inbuilt NIC
  net/thunderx/base: implement DPDK based platform abstraction for base
    code
  net/thunderx/base: add mbox API for ThunderX PF/VF driver
    communication
  net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
  net/thunderx/base: add RSS and reta configuration HW APIs
  net/thunderx/base: add statistics get HW APIs
  net/thunderx: add pmd skeleton
  net/thunderx: add link status and link update support
  net/thunderx: add registers dump support
  net/thunderx: add ethdev configure support
  net/thunderx: add get device info support
  net/thunderx: add Rx queue setup and release support
  net/thunderx: add Tx queue setup and release support
  net/thunderx: add RSS and reta query and update support
  net/thunderx: add MTU set and promiscuous enable support
  net/thunderx: add stats support
  net/thunderx: add single and multi segment Tx functions
  net/thunderx: add single and multi segment Rx functions
  net/thunderx: implement supported ptype get and Rx queue count
  net/thunderx: add Rx queue start and stop support
  net/thunderx: add Tx queue start and stop support
  net/thunderx: add device start,stop and close support
  net/thunderx: updated driver documentation and release notes
  maintainers: claim responsibility for the ThunderX nicvf PMD

 MAINTAINERS                                        |    6 +
 config/common_base                                 |   10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |   10 +
 doc/guides/nics/index.rst                          |    1 +
 doc/guides/nics/overview.rst                       |   96 +-
 doc/guides/nics/thunderx.rst                       |  354 ++++
 doc/guides/rel_notes/release_16_07.rst             |    1 +
 drivers/net/Makefile                               |    1 +
 drivers/net/thunderx/Makefile                      |   65 +
 drivers/net/thunderx/base/nicvf_hw.c               |  905 ++++++++++
 drivers/net/thunderx/base/nicvf_hw.h               |  240 +++
 drivers/net/thunderx/base/nicvf_hw_defs.h          | 1219 +++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c             |  418 +++++
 drivers/net/thunderx/base/nicvf_mbox.h             |  232 +++
 drivers/net/thunderx/base/nicvf_plat.h             |  132 ++
 drivers/net/thunderx/nicvf_ethdev.c                | 1789 ++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  106 ++
 drivers/net/thunderx/nicvf_logs.h                  |   83 +
 drivers/net/thunderx/nicvf_rxtx.c                  |  599 +++++++
 drivers/net/thunderx/nicvf_rxtx.h                  |  101 ++
 drivers/net/thunderx/nicvf_struct.h                |  124 ++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |    4 +
 mk/rte.app.mk                                      |    2 +
 23 files changed, 6450 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

-- 
2.5.5

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v5 01/25] net/thunderx/base: add HW constants for ThunderX inbuilt NIC
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 02/25] net/thunderx/base: add register definition " Jerin Jacob
                             ` (25 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw_defs.h | 551 ++++++++++++++++++++++++++++++
 1 file changed, 551 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h

diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
new file mode 100644
index 0000000..8a58f03
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -0,0 +1,551 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_DEFS_H
+#define _THUNDERX_NICVF_HW_DEFS_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+/* Virtual function register offsets */
+
+#define NIC_VF_CFG                      (0x000020)
+#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
+#define NIC_VF_INT                      (0x000200)
+#define NIC_VF_INT_W1S                  (0x000220)
+#define NIC_VF_ENA_W1C                  (0x000240)
+#define NIC_VF_ENA_W1S                  (0x000260)
+
+#define NIC_VNIC_RSS_CFG                (0x0020E0)
+#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
+#define NIC_VNIC_TX_STAT_0_4            (0x004000)
+#define NIC_VNIC_RX_STAT_0_13           (0x004100)
+#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
+
+#define NIC_QSET_CQ_0_7_CFG             (0x010400)
+#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
+#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
+#define NIC_QSET_CQ_0_7_BASE            (0x010420)
+#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
+#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
+#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
+#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
+#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
+#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
+
+#define NIC_QSET_RQ_0_7_CFG             (0x010600)
+#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
+#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
+
+#define NIC_QSET_SQ_0_7_CFG             (0x010800)
+#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
+#define NIC_QSET_SQ_0_7_BASE            (0x010820)
+#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
+#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
+#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
+#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
+#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
+#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
+#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
+
+#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
+#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
+#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
+#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
+#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
+#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
+#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
+#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
+#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
+
+/* vNIC HW Constants */
+
+#define NIC_Q_NUM_SHIFT                 18
+
+#define MAX_QUEUE_SET                   128
+#define MAX_RCV_QUEUES_PER_QS           8
+#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
+#define MAX_SND_QUEUES_PER_QS           8
+#define MAX_CMP_QUEUES_PER_QS           8
+
+#define NICVF_INTR_CQ_SHIFT             0
+#define NICVF_INTR_SQ_SHIFT             8
+#define NICVF_INTR_RBDR_SHIFT           16
+#define NICVF_INTR_PKT_DROP_SHIFT       20
+#define NICVF_INTR_TCP_TIMER_SHIFT      21
+#define NICVF_INTR_MBOX_SHIFT           22
+#define NICVF_INTR_QS_ERR_SHIFT         23
+
+#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
+#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
+
+#define NICVF_CQ_WR_FULL                (1ULL << 26)
+#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
+#define NICVF_CQ_WR_FAULT               (1ULL << 24)
+#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
+					 NICVF_CQ_WR_DISABLE |\
+					 NICVF_CQ_WR_FAULT)
+#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
+
+#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
+#define NICVF_SQ_ERR_SEND               (1ULL << 20)
+#define NICVF_SQ_ERR_DPE                (1ULL << 19)
+#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
+					 NICVF_SQ_ERR_SEND |\
+					 NICVF_SQ_ERR_DPE)
+#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
+
+#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
+#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
+#define NICVF_RBDR_COUNT_MASK           (0x7FFFF)
+
+/* Queue reset */
+#define NICVF_CQ_RESET                  (1ULL << 41)
+#define NICVF_SQ_RESET                  (1ULL << 17)
+#define NICVF_RBDR_RESET                (1ULL << 43)
+
+/* RSS constants */
+#define NIC_MAX_RSS_HASH_BITS           (8)
+#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key */
+#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key */
+
+#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
+#define RSS_IP_ENA                      (1 << 1)
+#define RSS_TCP_ENA                     (1 << 2)
+#define RSS_TCP_SYN_ENA                 (1 << 3)
+#define RSS_UDP_ENA                     (1 << 4)
+#define RSS_L4_EXTENDED_ENA             (1 << 5)
+#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
+#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
+#define RSS_TUN_VXLAN_ENA               (1 << 9)
+#define RSS_TUN_GENEVE_ENA              (1 << 10)
+#define RSS_TUN_NVGRE_ENA               (1 << 11)
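+
+/*
+ * Illustration only: a typical RSS configuration hashing on IP addresses
+ * and TCP/UDP ports would OR these flags together, e.g.
+ * (RSS_IP_ENA | RSS_TCP_ENA | RSS_UDP_ENA).
+ */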
+
+#define RBDR_QUEUE_SZ_8K                (8 * 1024)
+#define RBDR_QUEUE_SZ_16K               (16 * 1024)
+#define RBDR_QUEUE_SZ_32K               (32 * 1024)
+#define RBDR_QUEUE_SZ_64K               (64 * 1024)
+#define RBDR_QUEUE_SZ_128K              (128 * 1024)
+#define RBDR_QUEUE_SZ_256K              (256 * 1024)
+#define RBDR_QUEUE_SZ_512K              (512 * 1024)
+
+#define RBDR_SIZE_SHIFT                 (13) /* 8k */
+
+#define SND_QUEUE_SZ_1K                 (1 * 1024)
+#define SND_QUEUE_SZ_2K                 (2 * 1024)
+#define SND_QUEUE_SZ_4K                 (4 * 1024)
+#define SND_QUEUE_SZ_8K                 (8 * 1024)
+#define SND_QUEUE_SZ_16K                (16 * 1024)
+#define SND_QUEUE_SZ_32K                (32 * 1024)
+#define SND_QUEUE_SZ_64K                (64 * 1024)
+
+#define SND_QSIZE_SHIFT                 (10) /* 1k */
+
+#define CMP_QUEUE_SZ_1K                 (1 * 1024)
+#define CMP_QUEUE_SZ_2K                 (2 * 1024)
+#define CMP_QUEUE_SZ_4K                 (4 * 1024)
+#define CMP_QUEUE_SZ_8K                 (8 * 1024)
+#define CMP_QUEUE_SZ_16K                (16 * 1024)
+#define CMP_QUEUE_SZ_32K                (32 * 1024)
+#define CMP_QUEUE_SZ_64K                (64 * 1024)
+
+#define CMP_QSIZE_SHIFT                 (10) /* 1k */
+
+#define NICVF_QSIZE_MIN_VAL             (0)
+#define NICVF_QSIZE_MAX_VAL             (6)
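+
+/*
+ * Illustration: a queue size register value n in [MIN_VAL, MAX_VAL] encodes
+ * (1 << (QSIZE_SHIFT + n)) entries, e.g. an SND queue size value of 3 gives
+ * 1024 << 3 = SND_QUEUE_SZ_8K.
+ */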
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS                  (64)
+#define NIC_HW_MAX_FRS                  (9200) /* 9216 max pkt including FCS */
+#define NIC_HW_MAX_SEGS                 (12)
+
+/* Descriptor alignments */
+#define NICVF_RBDR_BASE_ALIGN_BYTES     (128) /* 7 bits */
+#define NICVF_CQ_BASE_ALIGN_BYTES       (512) /* 9 bits */
+#define NICVF_SQ_BASE_ALIGN_BYTES       (128) /* 7 bits */
+
+#define NICVF_CQE_RBPTR_WORD            (6)
+#define NICVF_CQE_RX2_RBPTR_WORD        (7)
+
+#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
+
+typedef uint64_t nicvf_phys_addr_t;
+
+#ifndef __BYTE_ORDER__
+#error __BYTE_ORDER__ not defined
+#endif
+
+/* vNIC HW Enumerations */
+
+enum nic_send_ld_type_e {
+	NIC_SEND_LD_TYPE_E_LDD,
+	NIC_SEND_LD_TYPE_E_LDT,
+	NIC_SEND_LD_TYPE_E_LDWB,
+	NIC_SEND_LD_TYPE_E_ENUM_LAST,
+};
+
+enum ether_type_algorithm {
+	ETYPE_ALG_NONE,
+	ETYPE_ALG_SKIP,
+	ETYPE_ALG_ENDPARSE,
+	ETYPE_ALG_VLAN,
+	ETYPE_ALG_VLAN_STRIP,
+};
+
+enum layer3_type {
+	L3TYPE_NONE,
+	L3TYPE_GRH,
+	L3TYPE_IPV4 = 0x4,
+	L3TYPE_IPV4_OPTIONS = 0x5,
+	L3TYPE_IPV6 = 0x6,
+	L3TYPE_IPV6_OPTIONS = 0x7,
+	L3TYPE_ET_STOP = 0xD,
+	L3TYPE_OTHER = 0xE,
+};
+
+#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
+#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
+
+enum layer4_type {
+	L4TYPE_NONE,
+	L4TYPE_IPSEC_ESP,
+	L4TYPE_IPFRAG,
+	L4TYPE_IPCOMP,
+	L4TYPE_TCP,
+	L4TYPE_UDP,
+	L4TYPE_SCTP,
+	L4TYPE_GRE,
+	L4TYPE_ROCE_BTH,
+	L4TYPE_OTHER = 0xE,
+};
+
+/* CPI and RSSI configuration */
+enum cpi_algorithm_type {
+	CPI_ALG_NONE,
+	CPI_ALG_VLAN,
+	CPI_ALG_VLAN16,
+	CPI_ALG_DIFF,
+};
+
+enum rss_algorithm_type {
+	RSS_ALG_NONE,
+	RSS_ALG_PORT,
+	RSS_ALG_IP,
+	RSS_ALG_TCP_IP,
+	RSS_ALG_UDP_IP,
+	RSS_ALG_SCTP_IP,
+	RSS_ALG_GRE_IP,
+	RSS_ALG_ROCE,
+};
+
+enum rss_hash_cfg {
+	RSS_HASH_L2ETC,
+	RSS_HASH_IP,
+	RSS_HASH_TCP,
+	RSS_HASH_TCP_SYN_DIS,
+	RSS_HASH_UDP,
+	RSS_HASH_L4ETC,
+	RSS_HASH_ROCE,
+	RSS_L3_BIDI,
+	RSS_L4_BIDI,
+};
+
+/* Completion queue entry types */
+enum cqe_type {
+	CQE_TYPE_INVALID,
+	CQE_TYPE_RX = 0x2,
+	CQE_TYPE_RX_SPLIT = 0x3,
+	CQE_TYPE_RX_TCP = 0x4,
+	CQE_TYPE_SEND = 0x8,
+	CQE_TYPE_SEND_PTP = 0x9,
+};
+
+enum cqe_rx_tcp_status {
+	CQE_RX_STATUS_VALID_TCP_CNXT,
+	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
+};
+
+enum cqe_send_status {
+	CQE_SEND_STATUS_GOOD,
+	CQE_SEND_STATUS_DESC_FAULT = 0x01,
+	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
+	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
+	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
+	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
+	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
+	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
+	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
+	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
+	CQE_SEND_STATUS_DATA_FAULT = 0x86,
+	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
+	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
+	CQE_SEND_STATUS_MEM_FAULT = 0x89,
+	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
+	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
+};
+
+enum cqe_rx_tcp_end_reason {
+	CQE_RX_TCP_END_FIN_FLAG_DET,
+	CQE_RX_TCP_END_INVALID_FLAG,
+	CQE_RX_TCP_END_TIMEOUT,
+	CQE_RX_TCP_END_OUT_OF_SEQ,
+	CQE_RX_TCP_END_PKT_ERR,
+	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
+};
+
+/* Packet protocol level error enumeration */
+enum cqe_rx_err_level {
+	CQE_RX_ERRLVL_RE,
+	CQE_RX_ERRLVL_L2,
+	CQE_RX_ERRLVL_L3,
+	CQE_RX_ERRLVL_L4,
+};
+
+/* Packet protocol level error type enumeration */
+enum cqe_rx_err_opcode {
+	CQE_RX_ERR_RE_NONE,
+	CQE_RX_ERR_RE_PARTIAL,
+	CQE_RX_ERR_RE_JABBER,
+	CQE_RX_ERR_RE_FCS = 0x7,
+	CQE_RX_ERR_RE_TERMINATE = 0x9,
+	CQE_RX_ERR_RE_RX_CTL = 0xb,
+	CQE_RX_ERR_PREL2_ERR = 0x1f,
+	CQE_RX_ERR_L2_FRAGMENT = 0x20,
+	CQE_RX_ERR_L2_OVERRUN = 0x21,
+	CQE_RX_ERR_L2_PFCS = 0x22,
+	CQE_RX_ERR_L2_PUNY = 0x23,
+	CQE_RX_ERR_L2_MAL = 0x24,
+	CQE_RX_ERR_L2_OVERSIZE = 0x25,
+	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
+	CQE_RX_ERR_L2_LENMISM = 0x27,
+	CQE_RX_ERR_L2_PCLP = 0x28,
+	CQE_RX_ERR_IP_NOT = 0x41,
+	CQE_RX_ERR_IP_CHK = 0x42,
+	CQE_RX_ERR_IP_MAL = 0x43,
+	CQE_RX_ERR_IP_MALD = 0x44,
+	CQE_RX_ERR_IP_HOP = 0x45,
+	CQE_RX_ERR_L3_ICRC = 0x46,
+	CQE_RX_ERR_L3_PCLP = 0x47,
+	CQE_RX_ERR_L4_MAL = 0x61,
+	CQE_RX_ERR_L4_CHK = 0x62,
+	CQE_RX_ERR_UDP_LEN = 0x63,
+	CQE_RX_ERR_L4_PORT = 0x64,
+	CQE_RX_ERR_TCP_FLAG = 0x65,
+	CQE_RX_ERR_TCP_OFFSET = 0x66,
+	CQE_RX_ERR_L4_PCLP = 0x67,
+	CQE_RX_ERR_RBDR_TRUNC = 0x70,
+};
+
+enum send_l4_csum_type {
+	SEND_L4_CSUM_DISABLE,
+	SEND_L4_CSUM_UDP,
+	SEND_L4_CSUM_TCP,
+};
+
+enum send_crc_alg {
+	SEND_CRCALG_CRC32,
+	SEND_CRCALG_CRC32C,
+	SEND_CRCALG_ICRC,
+};
+
+enum send_load_type {
+	SEND_LD_TYPE_LDD,
+	SEND_LD_TYPE_LDT,
+	SEND_LD_TYPE_LDWB,
+};
+
+enum send_mem_alg_type {
+	SEND_MEMALG_SET,
+	SEND_MEMALG_ADD = 0x08,
+	SEND_MEMALG_SUB = 0x09,
+	SEND_MEMALG_ADDLEN = 0x0A,
+	SEND_MEMALG_SUBLEN = 0x0B,
+};
+
+enum send_mem_dsz_type {
+	SEND_MEMDSZ_B64,
+	SEND_MEMDSZ_B32,
+	SEND_MEMDSZ_B8 = 0x03,
+};
+
+enum sq_subdesc_type {
+	SQ_DESC_TYPE_INVALID,
+	SQ_DESC_TYPE_HEADER,
+	SQ_DESC_TYPE_CRC,
+	SQ_DESC_TYPE_IMMEDIATE,
+	SQ_DESC_TYPE_GATHER,
+	SQ_DESC_TYPE_MEMORY,
+};
+
+enum l3_type_t {
+	L3_NONE,
+	L3_IPV4		= 0x04,
+	L3_IPV4_OPT	= 0x05,
+	L3_IPV6		= 0x06,
+	L3_IPV6_OPT	= 0x07,
+	L3_ET_STOP	= 0x0D,
+	L3_OTHER	= 0x0E
+};
+
+enum l4_type_t {
+	L4_NONE,
+	L4_IPSEC_ESP	= 0x01,
+	L4_IPFRAG	= 0x02,
+	L4_IPCOMP	= 0x03,
+	L4_TCP		= 0x04,
+	L4_UDP_PASS1	= 0x05,
+	L4_GRE		= 0x07,
+	L4_UDP_PASS2	= 0x08,
+	L4_UDP_GENEVE	= 0x09,
+	L4_UDP_VXLAN	= 0x0A,
+	L4_NVGRE	= 0x0C,
+	L4_OTHER	= 0x0E
+};
+
+enum vlan_strip {
+	NO_STRIP,
+	STRIP_FIRST_VLAN,
+	STRIP_SECOND_VLAN,
+	STRIP_RESERV,
+};
+
+enum rbdr_state {
+	RBDR_FIFO_STATE_INACTIVE,
+	RBDR_FIFO_STATE_ACTIVE,
+	RBDR_FIFO_STATE_RESET,
+	RBDR_FIFO_STATE_FAIL,
+};
+
+enum rq_cache_allocation {
+	RQ_CACHE_ALLOC_OFF,
+	RQ_CACHE_ALLOC_ALL,
+	RQ_CACHE_ALLOC_FIRST,
+	RQ_CACHE_ALLOC_TWO,
+};
+
+enum cq_rx_errlvl_e {
+	CQ_ERRLVL_MAC,
+	CQ_ERRLVL_L2,
+	CQ_ERRLVL_L3,
+	CQ_ERRLVL_L4,
+};
+
+enum cq_rx_errop_e {
+	CQ_RX_ERROP_RE_NONE,
+	CQ_RX_ERROP_RE_PARTIAL = 0x1,
+	CQ_RX_ERROP_RE_JABBER = 0x2,
+	CQ_RX_ERROP_RE_FCS = 0x7,
+	CQ_RX_ERROP_RE_TERMINATE = 0x9,
+	CQ_RX_ERROP_RE_RX_CTL = 0xb,
+	CQ_RX_ERROP_PREL2_ERR = 0x1f,
+	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
+	CQ_RX_ERROP_L2_OVERRUN = 0x21,
+	CQ_RX_ERROP_L2_PFCS = 0x22,
+	CQ_RX_ERROP_L2_PUNY = 0x23,
+	CQ_RX_ERROP_L2_MAL = 0x24,
+	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
+	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
+	CQ_RX_ERROP_L2_LENMISM = 0x27,
+	CQ_RX_ERROP_L2_PCLP = 0x28,
+	CQ_RX_ERROP_IP_NOT = 0x41,
+	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
+	CQ_RX_ERROP_IP_MAL = 0x43,
+	CQ_RX_ERROP_IP_MALD = 0x44,
+	CQ_RX_ERROP_IP_HOP = 0x45,
+	CQ_RX_ERROP_L3_ICRC = 0x46,
+	CQ_RX_ERROP_L3_PCLP = 0x47,
+	CQ_RX_ERROP_L4_MAL = 0x61,
+	CQ_RX_ERROP_L4_CHK = 0x62,
+	CQ_RX_ERROP_UDP_LEN = 0x63,
+	CQ_RX_ERROP_L4_PORT = 0x64,
+	CQ_RX_ERROP_TCP_FLAG = 0x65,
+	CQ_RX_ERROP_TCP_OFFSET = 0x66,
+	CQ_RX_ERROP_L4_PCLP = 0x67,
+	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
+};
+
+enum cq_tx_errop_e {
+	CQ_TX_ERROP_GOOD,
+	CQ_TX_ERROP_DESC_FAULT = 0x10,
+	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
+	CQ_TX_ERROP_SUBDC_ERR = 0x12,
+	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
+	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
+	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
+	CQ_TX_ERROP_LOCK_VIOL = 0x83,
+	CQ_TX_ERROP_DATA_FAULT = 0x84,
+	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
+	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
+	CQ_TX_ERROP_MEM_FAULT = 0x87,
+	CQ_TX_ERROP_CK_OVERLAP = 0x88,
+	CQ_TX_ERROP_CK_OFLOW = 0x89,
+	CQ_TX_ERROP_ENUM_LAST = 0x8a,
+};
+
+enum rq_sq_stats_reg_offset {
+	RQ_SQ_STATS_OCTS,
+	RQ_SQ_STATS_PKTS,
+};
+
+enum nic_stat_vnic_rx_e {
+	RX_OCTS,
+	RX_UCAST,
+	RX_BCAST,
+	RX_MCAST,
+	RX_RED,
+	RX_RED_OCTS,
+	RX_ORUN,
+	RX_ORUN_OCTS,
+	RX_FCS,
+	RX_L2ERR,
+	RX_DRP_BCAST,
+	RX_DRP_MCAST,
+	RX_DRP_L3BCAST,
+	RX_DRP_L3MCAST,
+};
+
+enum nic_stat_vnic_tx_e {
+	TX_OCTS,
+	TX_UCAST,
+	TX_BCAST,
+	TX_MCAST,
+	TX_DROP,
+};
+
+#endif /* _THUNDERX_NICVF_HW_DEFS_H */
-- 
2.5.5

* [PATCH v5 02/25] net/thunderx/base: add register definition for ThunderX inbuilt NIC
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 01/25] net/thunderx/base: add HW constants for ThunderX inbuilt NIC Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 03/25] net/thunderx/base: implement DPDK based platform abstraction for base code Jerin Jacob
                             ` (24 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw_defs.h | 668 ++++++++++++++++++++++++++++++
 1 file changed, 668 insertions(+)

diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
index 8a58f03..88ecd17 100644
--- a/drivers/net/thunderx/base/nicvf_hw_defs.h
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -548,4 +548,672 @@ enum nic_stat_vnic_tx_e {
 	TX_DROP,
 };
 
+/* vNIC HW Register structures */
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t cqe_type:4;
+		uint64_t stdn_fault:1;
+		uint64_t rsvd0:1;
+		uint64_t rq_qs:7;
+		uint64_t rq_idx:3;
+		uint64_t rsvd1:12;
+		uint64_t rss_alg:4;
+		uint64_t rsvd2:4;
+		uint64_t rb_cnt:4;
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t l4_type:4;
+		uint64_t l3_type:4;
+		uint64_t l2_present:1;
+		uint64_t err_level:3;
+		uint64_t err_opcode:8;
+#else
+		uint64_t err_opcode:8;
+		uint64_t err_level:3;
+		uint64_t l2_present:1;
+		uint64_t l3_type:4;
+		uint64_t l4_type:4;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+		uint64_t rb_cnt:4;
+		uint64_t rsvd2:4;
+		uint64_t rss_alg:4;
+		uint64_t rsvd1:12;
+		uint64_t rq_idx:3;
+		uint64_t rq_qs:7;
+		uint64_t rsvd0:1;
+		uint64_t stdn_fault:1;
+		uint64_t cqe_type:4;
+#endif
+	};
+} cqe_rx_word0_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t pkt_len:16;
+		uint64_t l2_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l4_ptr:8;
+		uint64_t cq_pkt_len:8;
+		uint64_t align_pad:3;
+		uint64_t rsvd3:1;
+		uint64_t chan:12;
+#else
+		uint64_t chan:12;
+		uint64_t rsvd3:1;
+		uint64_t align_pad:3;
+		uint64_t cq_pkt_len:8;
+		uint64_t l4_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l2_ptr:8;
+		uint64_t pkt_len:16;
+#endif
+	};
+} cqe_rx_word1_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t rss_tag:32;
+		uint64_t vlan_tci:16;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+#else
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan_tci:16;
+		uint64_t rss_tag:32;
+#endif
+	};
+} cqe_rx_word2_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb3_sz;
+		uint16_t rb2_sz;
+		uint16_t rb1_sz;
+		uint16_t rb0_sz;
+#else
+		uint16_t rb0_sz;
+		uint16_t rb1_sz;
+		uint16_t rb2_sz;
+		uint16_t rb3_sz;
+#endif
+	};
+} cqe_rx_word3_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb7_sz;
+		uint16_t rb6_sz;
+		uint16_t rb5_sz;
+		uint16_t rb4_sz;
+#else
+		uint16_t rb4_sz;
+		uint16_t rb5_sz;
+		uint16_t rb6_sz;
+		uint16_t rb7_sz;
+#endif
+	};
+} cqe_rx_word4_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb11_sz;
+		uint16_t rb10_sz;
+		uint16_t rb9_sz;
+		uint16_t rb8_sz;
+#else
+		uint16_t rb8_sz;
+		uint16_t rb9_sz;
+		uint16_t rb10_sz;
+		uint16_t rb11_sz;
+#endif
+	};
+} cqe_rx_word5_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t rsvd2:3;
+		uint64_t inner_l2:1;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l3type:4;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t rsvd1:8;
+		uint64_t rsvd0:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t inner_l4ptr:8;
+#else
+		uint64_t inner_l4ptr:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t rsvd0:8;
+		uint64_t rsvd1:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t inner_l3type:4;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l2:1;
+		uint64_t rsvd2:3;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+#endif
+	};
+} cqe_rx2_word6_t;
+
+struct cqe_rx_t {
+	cqe_rx_word0_t word0;
+	cqe_rx_word1_t word1;
+	cqe_rx_word2_t word2;
+	cqe_rx_word3_t word3;
+	cqe_rx_word4_t word4;
+	cqe_rx_word5_t word5;
+	cqe_rx2_word6_t word6; /* if NIC_PF_RX_CFG[CQE_RX2_ENA] set */
+};
+
+struct cqe_rx_tcp_err_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:60;
+
+	uint64_t   rsvd1:4; /* W1 */
+	uint64_t   partial_first:1;
+	uint64_t   rsvd2:27;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd3:24;
+#else
+	uint64_t   rsvd0:60;
+	uint64_t   cqe_type:4;
+
+	uint64_t   rsvd3:24;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd2:27;
+	uint64_t   partial_first:1;
+	uint64_t   rsvd1:4;
+#endif
+};
+
+struct cqe_rx_tcp_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:52;
+	uint64_t   cq_tcp_status:8;
+
+	uint64_t   rsvd1:32; /* W1 */
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_err_bytes:16;
+#else
+	uint64_t   cq_tcp_status:8;
+	uint64_t   rsvd0:52;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   tcp_err_bytes:16;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd1:32; /* W1 */
+#endif
+};
+
+struct cqe_send_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd1:4;
+	uint64_t   rsvd2:10;
+	uint64_t   sq_qs:7;
+	uint64_t   sq_idx:3;
+	uint64_t   rsvd3:8;
+	uint64_t   send_status:8;
+
+	uint64_t   ptp_timestamp:64; /* W1 */
+#else
+	uint64_t   send_status:8;
+	uint64_t   rsvd3:8;
+	uint64_t   sq_idx:3;
+	uint64_t   sq_qs:7;
+	uint64_t   rsvd2:10;
+	uint64_t   rsvd1:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd0:4;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   ptp_timestamp:64;
+#endif
+};
+
+struct cq_entry_type_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t cqe_type:4;
+	uint64_t __pad:60;
+#else
+	uint64_t __pad:60;
+	uint64_t cqe_type:4;
+#endif
+};
+
+union cq_entry_t {
+	uint64_t u[64];
+	struct cq_entry_type_t type;
+	struct cqe_rx_t rx_hdr;
+	struct cqe_rx_tcp_t rx_tcp_hdr;
+	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
+	struct cqe_send_t cqe_send;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
+
+struct rbdr_entry_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	union {
+		struct {
+			uint64_t   rsvd0:15;
+			uint64_t   buf_addr:42;
+			uint64_t   cache_align:7;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#else
+	union {
+		struct {
+			uint64_t   cache_align:7;
+			uint64_t   buf_addr:42;
+			uint64_t   rsvd0:15;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#endif
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
+
+/* TCP reassembly context */
+struct rbe_tcp_cnxt_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   tcp_pkt_cnt:12;
+	uint64_t   rsvd1:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   ptr_bytes:16;
+	uint64_t   rsvd2:24;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd0:54;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   tcp_status:4;
+#else
+	uint64_t   tcp_status:4;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   rsvd0:54;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd2:24;
+	uint64_t   ptr_bytes:16;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   rsvd1:4;
+	uint64_t   tcp_pkt_cnt:12;
+#endif
+};
+
+/* Always Big endian */
+struct rx_hdr_t {
+	uint64_t   opaque:32;
+	uint64_t   rss_flow:8;
+	uint64_t   skip_length:6;
+	uint64_t   disable_rss:1;
+	uint64_t   disable_tcp_reassembly:1;
+	uint64_t   nodrop:1;
+	uint64_t   dest_alg:2;
+	uint64_t   rsvd0:2;
+	uint64_t   dest_rq:11;
+};
+
+struct sq_crc_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    rsvd1:32;
+	uint64_t    crc_ival:32;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_alg:2;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_len:16;
+#else
+	uint64_t    crc_len:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_alg:2;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_ival:32;
+	uint64_t    rsvd1:32;
+#endif
+};
+
+struct sq_gather_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    ld_type:2;
+	uint64_t    rsvd0:42;
+	uint64_t    size:16;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    size:16;
+	uint64_t    rsvd0:42;
+	uint64_t    ld_type:2;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+/* SQ immediate subdescriptor */
+struct sq_imm_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    rsvd0:46;
+	uint64_t    len:14;
+
+	uint64_t    data:64; /* W1 */
+#else
+	uint64_t    len:14;
+	uint64_t    rsvd0:46;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    data:64; /* W1 */
+#endif
+};
+
+struct sq_mem_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    mem_alg:4;
+	uint64_t    mem_dsz:2;
+	uint64_t    wmem:1;
+	uint64_t    rsvd0:21;
+	uint64_t    offset:32;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    offset:32;
+	uint64_t    rsvd0:21;
+	uint64_t    wmem:1;
+	uint64_t    mem_dsz:2;
+	uint64_t    mem_alg:4;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+struct sq_hdr_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4;
+	uint64_t    tso:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    dont_send:1;
+	uint64_t    tstmp:1;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    csum_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    rsvd0:2;
+	uint64_t    l4_offset:8;
+	uint64_t    l3_offset:8;
+	uint64_t    rsvd1:4;
+	uint64_t    tot_len:20; /* W0 */
+
+	uint64_t    rsvd2:24;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    tso_start:8;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_max_paysize:14; /* W1 */
+#else
+	uint64_t    tot_len:20;
+	uint64_t    rsvd1:4;
+	uint64_t    l3_offset:8;
+	uint64_t    l4_offset:8;
+	uint64_t    rsvd0:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_l4:2;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    tstmp:1;
+	uint64_t    dont_send:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    tso:1;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    tso_max_paysize:14;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_start:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    rsvd2:24; /* W1 */
+#endif
+};
+
+/* Each sq entry is 128 bits wide */
+union sq_entry_t {
+	uint64_t buff[2];
+	struct sq_hdr_subdesc hdr;
+	struct sq_imm_subdesc imm;
+	struct sq_gather_subdesc gather;
+	struct sq_crc_subdesc crc;
+	struct sq_mem_subdesc mem;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
+
+/* Queue config register formats */
+struct rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_2_63:62;
+	uint64_t ena:1;
+	uint64_t reserved_0:1;
+#else
+	uint64_t reserved_0:1;
+	uint64_t ena:1;
+	uint64_t reserved_2_63:62;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct cq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_43_63:21;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t caching:1;
+	uint64_t reserved_35_39:5;
+	uint64_t qsize:3;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_0_15:16;
+#else
+	uint64_t reserved_0_15:16;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:3;
+	uint64_t reserved_35_39:5;
+	uint64_t caching:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_43_63:21;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct sq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_20_63:44;
+	uint64_t ena:1;
+	uint64_t reserved_18_18:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_11_15:5;
+	uint64_t qsize:3;
+	uint64_t reserved_3_7:5;
+	uint64_t tstmp_bgx_intf:3;
+#else
+	uint64_t tstmp_bgx_intf:3;
+	uint64_t reserved_3_7:5;
+	uint64_t qsize:3;
+	uint64_t reserved_11_15:5;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t reserved_18_18:1;
+	uint64_t ena:1;
+	uint64_t reserved_20_63:44;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct rbdr_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_45_63:19;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_36_41:6;
+	uint64_t qsize:4;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_12_15:4;
+	uint64_t lines:12;
+#else
+	uint64_t lines:12;
+	uint64_t reserved_12_15:4;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:4;
+	uint64_t reserved_36_41:6;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_45_63:19;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_qs_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_32_63:32;
+	uint64_t ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t sq_ins_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t lock_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t be:1;
+	uint64_t reserved_7_15:9;
+	uint64_t vnic:7;
+#else
+	uint64_t vnic:7;
+	uint64_t reserved_7_15:9;
+	uint64_t be:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t lock_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t sq_ins_ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t ena:1;
+	uint64_t reserved_32_63:32;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved1:1;
+	uint64_t reserved0:34;
+	uint64_t strip_pre_l2:1;
+	uint64_t caching:2;
+	uint64_t cq_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_strt_idx:1;
+#else
+	uint64_t rbdr_strt_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t cq_qs:7;
+	uint64_t caching:2;
+	uint64_t strip_pre_l2:1;
+	uint64_t reserved0:34;
+	uint64_t reserved1:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_drop_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t rbdr_red:1;
+	uint64_t cq_red:1;
+	uint64_t reserved3:14;
+	uint64_t rbdr_pass:8;
+	uint64_t rbdr_drop:8;
+	uint64_t reserved2:8;
+	uint64_t cq_pass:8;
+	uint64_t cq_drop:8;
+	uint64_t reserved1:8;
+#else
+	uint64_t reserved1:8;
+	uint64_t cq_drop:8;
+	uint64_t cq_pass:8;
+	uint64_t reserved2:8;
+	uint64_t rbdr_drop:8;
+	uint64_t rbdr_pass:8;
+	uint64_t reserved3:14;
+	uint64_t cq_red:1;
+	uint64_t rbdr_red:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
 #endif /* _THUNDERX_NICVF_HW_DEFS_H */
-- 
2.5.5

* [PATCH v5 03/25] net/thunderx/base: implement DPDK based platform abstraction for base code
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 01/25] net/thunderx/base: add HW constants for ThunderX inbuilt NIC Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 02/25] net/thunderx/base: add register definition " Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 04/25] net/thunderx/base: add mbox API for ThunderX PF/VF driver communication Jerin Jacob
                             ` (23 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_plat.h | 129 +++++++++++++++++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h

diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
new file mode 100644
index 0000000..33fef08
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -0,0 +1,129 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_H
+#define _THUNDERX_NICVF_H
+
+/* Platform/OS/arch specific abstractions */
+
+/* log */
+#include <rte_log.h>
+#include "../nicvf_logs.h"
+
+#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
+
+#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
+
+/* delay */
+#include <rte_cycles.h>
+#define nicvf_delay_us(x) rte_delay_us(x)
+
+/* barrier */
+#include <rte_atomic.h>
+#define nicvf_smp_wmb() rte_smp_wmb()
+#define nicvf_smp_rmb() rte_smp_rmb()
+
+/* utils */
+#include <rte_common.h>
+#define nicvf_min(x, y) RTE_MIN(x, y)
+
+/* byte order */
+#include <rte_byteorder.h>
+#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
+#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
+
+/* Constants */
+#include <rte_ether.h>
+#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
+#define nicvf_prefetch_store_keep(_ptr) ({\
+	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	asm volatile(
+		    "str %x[val], [%x[addr]]"
+		    :
+		    : [val] "r" (val), [addr] "r" (addr));
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	uint64_t val;
+
+	asm volatile(
+		    "ldr %x[val], [%x[addr]]"
+		    : [val] "=r" (val)
+		    : [addr] "r" (addr));
+	return val;
+}
+
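+/* Load two consecutive 64-bit words with a single ldp instruction */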
+#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
+			asm volatile(			\
+			"ldp %x[x1], %x[x0], [%x[p1]]"	\
+			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
+			: [p1]"r"(addr)			\
+			); })
+
+#else /* non-optimized functions for building on non-arm64 architectures */
+
+#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	*(volatile uint64_t *)addr = val;
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	return	*(volatile uint64_t *)addr;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
+do {							\
+	reg1 = nicvf_addr_read((uintptr_t)addr);	\
+	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
+} while (0)
+
+#endif
+
+#endif /* _THUNDERX_NICVF_H */
-- 
2.5.5

* [PATCH v5 04/25] net/thunderx/base: add mbox API for ThunderX PF/VF driver communication
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (2 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 03/25] net/thunderx/base: implement DPDK based platform abstraction for base code Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 05/25] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC Jerin Jacob
                             ` (22 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski

The DPDK nicvf driver doesn't have access to the NIC's PF address
space. Introduce a mailbox mechanism to communicate with the PF driver
through a shared 128-bit register interface.
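
As an illustrative sketch only (mbox_send_example below is not part of
this patch; the real send path is nicvf_mbox_send_msg_to_pf_raw() in
this change), a 16-byte message is framed as two 64-bit stores, and
the store to the second word doubles as the end-of-message signal to
the PF:

    static void
    mbox_send_example(struct nicvf *nic, const struct nic_mbx *mbx)
    {
            const uint64_t *data = (const uint64_t *)mbx;

            /* First 64-bit word of the shared 128-bit mailbox */
            nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, data[0]);
            /* The write to the second word marks end of message */
            nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1 + 8, data[1]);
    }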

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
---
 drivers/net/thunderx/base/nicvf_mbox.c | 418 +++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.h | 232 ++++++++++++++++++
 drivers/net/thunderx/base/nicvf_plat.h |   2 +
 3 files changed, 652 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h

diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
new file mode 100644
index 0000000..3067331
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.c
@@ -0,0 +1,418 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "nicvf_plat.h"
+
+#define NICVF_MBOX_PF_RESPONSE_DELAY_US   (1000)
+
+static const char *mbox_message[NIC_MBOX_MSG_MAX] =  {
+	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
+	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
+	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
+	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
+	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
+	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
+	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
+	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
+	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
+	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
+	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
+	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
+	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
+	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
+	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
+	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
+	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
+	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
+	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
+	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
+	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
+	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
+};
+
+static inline const char *
+nicvf_mbox_msg_str(int msg)
+{
+	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
+	/* undefined messages */
+	if (mbox_message[msg] == NULL)
+		msg = 0;
+	return mbox_message[msg];
+}
+
+static inline void
+nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	uint64_t *mbx_data;
+	uint64_t mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (uint64_t *)mbx;
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		nicvf_reg_write(nic, mbx_addr, *mbx_data);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+	nicvf_mbox_log("msg sent %s (VF%d)",
+			nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
+}
+
+static inline void
+nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+	/* Messages without an ACK are racy! */
+	nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+}
+
+static inline int
+nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	long timeout;
+	long sleep = 10;
+	int i, retry = 5;
+
+	for (i = 0; i < retry; i++) {
+		nic->pf_acked = false;
+		nic->pf_nacked = false;
+		nicvf_smp_wmb();
+
+		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+		/* Give some time to get PF response */
+		nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+		while (timeout > 0) {
+			/* Periodic poll happens from nicvf_interrupt() */
+			nicvf_smp_rmb();
+
+			if (nic->pf_nacked)
+				return -EINVAL;
+			if (nic->pf_acked)
+				return 0;
+
+			nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+			timeout -= sleep;
+		}
+		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
+				mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
+				nic->vf_id, i, retry);
+	}
+	return -EBUSY;
+}
+
+
+int
+nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	struct nic_mbx mbx;
+	uint64_t *mbx_data = (uint64_t *)&mbx;
+	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	size_t i;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+
+	/* Overwrite the message so we won't receive it again */
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
+
+	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
+			nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
+
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
+				(struct ether_addr *)nic->mac_addr);
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_NACK:
+		nic->pf_nacked = true;
+		break;
+	case NIC_MBOX_MSG_RSS_SIZE:
+		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
+		nic->pf_acked = true;
+		break;
+	default:
+		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
+				mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
+		break;
+	}
+	nicvf_smp_wmb();
+
+	return mbx.msg.msg;
+}
+
+/*
+ * Checks whether the VF is able to communicate with the PF
+ * and also gets the VNIC number this VF is associated with.
+ */
+int
+nicvf_mbox_check_pf_ready(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
+{
+	struct nic_mbx mbx = { .msg = {0} };
+	int i;
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
+	mbx.mac.vf_id = nic->vf_id;
+	for (i = 0; i < NICVF_MAC_ADDR_SIZE; i++)
+		mbx.mac.mac_addr[i] = mac[i];
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
+	mbx.cpi_cfg.vf_id = nic->vf_id;
+	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
+	mbx.cpi_cfg.rq_cnt = qcnt;
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_get_rss_size(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
+	mbx.rss_size.vf_id = nic->vf_id;
+
+	/* Result will be stored in nic->rss_info.rss_size */
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_rss(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+	size_t tot_len = rss->rss_size;
+	size_t cur_len;
+	size_t cur_idx = 0;
+	size_t i;
+
+	mbx.rss_cfg.vf_id = nic->vf_id;
+	mbx.rss_cfg.hash_bits = rss->hash_bits;
+	mbx.rss_cfg.tbl_len = 0;
+	mbx.rss_cfg.tbl_offset = 0;
+
+	while (cur_idx < tot_len) {
+		cur_len = nicvf_min(tot_len - cur_idx,
+				(size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
+		mbx.msg.msg = (cur_idx > 0) ?
+			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
+		mbx.rss_cfg.tbl_offset = cur_idx;
+		mbx.rss_cfg.tbl_len = cur_len;
+		for (i = 0; i < cur_len; i++)
+			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
+
+		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
+			return NICVF_ERR_RSS_TBL_UPDATE;
+	}
+
+	return 0;
+}
+
+int
+nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+		     struct pf_rq_cfg *pf_rq_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = pf_rq_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
+	mbx.sq.qs_num = nic->vf_id;
+	mbx.sq.sq_num = qidx;
+	mbx.sq.sqs_mode = nic->sqs_mode;
+	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	qs_cfg->be = 1;
+#endif
+	/* Send a mailbox msg to PF to config Qset */
+	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
+	mbx.qs.num = nic->vf_id;
+	mbx.qs.cfg = qs_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct pf_rq_drop_cfg *drop_cfg;
+
+	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
+	drop_cfg->value = 0;
+	if (enable) {
+		drop_cfg->cq_red = 1;
+		drop_cfg->cq_drop = 2;
+	}
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
+	mbx.frs.max_frs = mtu;
+	mbx.frs.vf_id = nic->vf_id;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_sync(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	/* Make sure all packets in the pipeline are written back into mem */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
+	mbx.rq.cfg = 0;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = 0;
+	if (enable)
+		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
+	mbx.lbk.vf_id = nic->vf_id;
+	mbx.lbk.enable = enable;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
+			       uint16_t sq_stat_mask)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
+	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
+	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
+	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
+	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_shutdown(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+	nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_cfg_done(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
new file mode 100644
index 0000000..7c0c6a9
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.h
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_MBOX__
+#define __THUNDERX_NICVF_MBOX__
+
+#include <stdint.h>
+
+#include "nicvf_plat.h"
+
+/* PF <--> VF mailbox communication
+ * Two 64-bit registers are shared between the PF and each VF.
+ * A write to the second register marks the end of the message.
+ */
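+
+/* Illustrative handshake (see nicvf_handle_mbx_intr()): the VF sends
+ * NIC_MBOX_MSG_READY and the PF replies with a nic_cfg_msg carrying
+ * the vf_id, node id and MAC address, which the VF also treats as an
+ * ACK.
+ */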
+#define	NIC_PF_VF_MAILBOX_SIZE		2
+#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
+
+/* Mailbox message types */
+#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
+#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
+#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
+#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
+#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
+#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
+#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
+#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
+#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
+#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
+#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
+#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
+#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
+#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
+#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
+#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
+#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
+
+/* Get vNIC VF configuration */
+struct nic_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    node_id;
+	bool	   tns_mode:1;
+	bool	   sqs_mode:1;
+	bool	   loopback_supported:1;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	uint8_t    msg;
+	uint8_t    num;
+	uint8_t    sqs_count;
+	uint64_t   cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    rq_num;
+	uint64_t   cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    sq_num;
+	bool       sqs_mode;
+	uint64_t   cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    rq_cnt;
+	uint8_t    cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    hash_bits;
+	uint8_t    tbl_len;
+	uint8_t    tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	uint8_t    msg;
+	uint8_t    link_up;
+	uint8_t    duplex;
+	uint32_t   speed;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	bool	   enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	uint8_t    msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	uint16_t   rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	uint8_t    tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	uint16_t   rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	uint16_t   sq_stat_mask;
+};
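+
+/* Example (illustrative, assuming bit n selects statistic n):
+ * rx_stat_mask = (1 << RX_OCTS) | (1 << RX_UCAST) would reset only
+ * the RX octet and unicast counters, and rq_stat_mask = 0x3 would
+ * reset RQ0's octet and packet counters (bit0/bit1, see above).
+ */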
+
+struct nic_mbx {
+/* 128 bit shared memory between PF and each VF */
+union {
+	struct { uint8_t msg; }	msg;
+	struct nic_cfg_msg	nic_cfg;
+	struct qs_cfg_msg	qs;
+	struct rq_cfg_msg	rq;
+	struct sq_cfg_msg	sq;
+	struct set_mac_msg	mac;
+	struct set_frs_msg	frs;
+	struct cpi_cfg_msg	cpi_cfg;
+	struct rss_sz_msg	rss_size;
+	struct rss_cfg_msg	rss_cfg;
+	struct bgx_link_status  link_status;
+	struct set_loopback	lbk;
+	struct reset_stat_cfg	reset_stat;
+};
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
+
+int nicvf_handle_mbx_intr(struct nicvf *nic);
+int nicvf_mbox_check_pf_ready(struct nicvf *nic);
+int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
+int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct pf_rq_cfg *pf_rq_cfg);
+int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
+int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
+int nicvf_mbox_get_rss_size(struct nicvf *nic);
+int nicvf_mbox_config_rss(struct nicvf *nic);
+int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
+int nicvf_mbox_rq_sync(struct nicvf *nic);
+int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
+int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
+void nicvf_mbox_shutdown(struct nicvf *nic);
+void nicvf_mbox_cfg_done(struct nicvf *nic);
+
+#endif /* __THUNDERX_NICVF_MBOX__ */
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
index 33fef08..fbf28ce 100644
--- a/drivers/net/thunderx/base/nicvf_plat.h
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -126,4 +126,6 @@ do {							\
 
 #endif
 
+#include "nicvf_mbox.h"
+
 #endif /* _THUNDERX_NICVF_H */
-- 
2.5.5

* [PATCH v5 05/25] net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (3 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 04/25] net/thunderx/base: add mbox API for ThunderX PF/VF driver communication Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 06/25] net/thunderx/base: add RSS and reta configuration HW APIs Jerin Jacob
                             ` (21 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Add nicvf hardware-specific APIs for initialization and configuration.
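
As an illustrative sketch of the intended call order only
(nicvf_bringup_example is not part of this patch and assumes the
device has already been probed, i.e. subsystem_device_id is set):

    static int
    nicvf_bringup_example(struct nicvf *nic)
    {
            /* Ping the PF over the mailbox; the READY reply carries
             * the VF's id, node and MAC address.
             */
            if (nicvf_mbox_check_pf_ready(nic))
                    return -1;
            /* Record HW capabilities (e.g. tunnel parsing on pass2) */
            if (nicvf_base_init(nic) != NICVF_OK)
                    return -1;
            /* Ask the PF to assign and enable this VF's Qset */
            return nicvf_qset_config(nic);
    }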

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c   | 731 +++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h   | 176 ++++++++
 drivers/net/thunderx/base/nicvf_plat.h |   1 +
 3 files changed, 908 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
new file mode 100644
index 0000000..ec24f9c
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -0,0 +1,731 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <math.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#include "nicvf_plat.h"
+
+struct nicvf_reg_info {
+	uint32_t offset;
+	const char *name;
+};
+
+#define NICVF_REG_POLL_ITER_NR   (10)
+#define NICVF_REG_POLL_DELAY_US  (2000)
+#define NICVF_REG_INFO(reg) {reg, #reg}
+
+static const struct nicvf_reg_info nicvf_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_VF_CFG),
+	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
+	NICVF_REG_INFO(NIC_VF_INT),
+	NICVF_REG_INFO(NIC_VF_INT_W1S),
+	NICVF_REG_INFO(NIC_VF_ENA_W1C),
+	NICVF_REG_INFO(NIC_VF_ENA_W1S),
+	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
+	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
+};
+
+static const struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
+	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
+	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
+	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
+	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
+	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
+	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
+	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 16,  "NIC_VNIC_STAT_TX_BCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 24,  "NIC_VNIC_STAT_TX_MCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 32,  "NIC_VNIC_STAT_TX_DROP"},
+	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
+	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
+	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
+	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
+	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
+};
+
+static const struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
+};
+
+int
+nicvf_base_init(struct nicvf *nic)
+{
+	nic->hwcap = 0;
+	if (nic->subsystem_device_id == 0)
+		return NICVF_ERR_BASE_INIT;
+
+	if (nicvf_hw_version(nic) == NICVF_PASS2)
+		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
+
+	return NICVF_OK;
+}
+
+/* Dump registers to stdout if data is NULL, else copy them to data */
+int
+nicvf_reg_dump(struct nicvf *nic,  uint64_t *data)
+{
+	uint32_t i, q;
+	bool dump_stdout;
+
+	dump_stdout = data ? 0 : 1;
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_reg_tbl[i].name,
+				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_multi_reg_tbl[i].name,
+				nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset);
+
+	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_cq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_sq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rbdr_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q);
+	return 0;
+}
+
+int
+nicvf_reg_get_count(void)
+{
+	int nr_regs;
+
+	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
+			MAX_CMP_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
+			MAX_RCV_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
+			MAX_SND_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
+			MAX_RCV_BUF_DESC_RINGS_PER_QS;
+
+	return nr_regs;
+}
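+
+/*
+ * Usage sketch (illustrative, not prescribed by this patch): callers
+ * typically size a snapshot buffer with nicvf_reg_get_count() and then
+ * fill it with nicvf_reg_dump():
+ *
+ *	int count = nicvf_reg_get_count();
+ *	uint64_t *buf = calloc(count, sizeof(uint64_t));
+ *
+ *	if (buf != NULL) {
+ *		nicvf_reg_dump(nic, buf);
+ *		free(buf);
+ *	}
+ */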
+
+static int
+nicvf_qset_config_internal(struct nicvf *nic, bool enable)
+{
+	int ret;
+	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
+
+	pf_qs_cfg.ena = enable ? 1 : 0;
+	pf_qs_cfg.vnic = nic->vf_id;
+	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
+	return ret ? NICVF_ERR_SET_QS : 0;
+}
+
+/* Requests PF to assign and enable Qset */
+int
+nicvf_qset_config(struct nicvf *nic)
+{
+	/* Enable Qset */
+	return nicvf_qset_config_internal(nic, true);
+}
+
+int
+nicvf_qset_reclaim(struct nicvf *nic)
+{
+	/* Disable Qset */
+	return nicvf_qset_config_internal(nic, false);
+}
+
+static int
+cmpfunc(const void *a, const void *b)
+{
+	const uint32_t x = *(const uint32_t *)a;
+	const uint32_t y = *(const uint32_t *)b;
+
+	/* Compare without relying on unsigned subtraction wrap-around */
+	return (x > y) - (x < y);
+}
+
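+/*
+ * Round val up to the nearest entry of the sorted list, e.g. 5000 over
+ * the CQ size list {1K, 2K, 4K, 8K, ...} yields 8192; returns 0 when
+ * val exceeds the largest entry.
+ */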
+static uint32_t
+nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
+{
+	uint32_t i;
+
+	qsort(list, entries, sizeof(uint32_t), cmpfunc);
+	for (i = 0; i < entries; i++)
+		if (val <= list[i])
+			break;
+	/* Not in the list */
+	if (i >= entries)
+		return 0;
+	else
+		return list[i];
+}
+
+static void
+nicvf_handle_qset_err_intr(struct nicvf *nic)
+{
+	uint16_t qidx;
+	uint64_t status;
+
+	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
+	nicvf_reg_dump(nic, NULL);
+
+	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_CQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_CQ_WR_FULL)
+			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
+		if (status & NICVF_CQ_WR_DISABLE)
+			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
+		if (status & NICVF_CQ_WR_FAULT)
+			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_SQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_SQ_ERR_STOPPED)
+			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
+		if (status & NICVF_SQ_ERR_SEND)
+			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
+		if (status & NICVF_SQ_ERR_DPE)
+			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_STATUS0, qidx);
+		status &= NICVF_RBDR_FIFO_STATE_MASK;
+		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
+
+		if (status == RBDR_FIFO_STATE_FAIL)
+			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
+	}
+
+	nicvf_disable_all_interrupts(nic);
+	abort();
+}
+
+/*
+ * Handle the "mbox" and "queue-set error" interrupts that the poll mode
+ * driver is interested in.
+ * This function is not re-entrant; the caller must provide proper
+ * serialization.
+ */
+int
+nicvf_reg_poll_interrupts(struct nicvf *nic)
+{
+	int msg = 0;
+	uint64_t intr;
+
+	intr = nicvf_reg_read(nic, NIC_VF_INT);
+	if (intr & NICVF_INTR_MBOX_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
+		msg = nicvf_handle_mbx_intr(nic);
+	}
+	if (intr & NICVF_INTR_QS_ERR_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
+		nicvf_handle_qset_err_intr(nic);
+	}
+	return msg;
+}
+
+static int
+nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
+		    uint32_t bit_pos, uint32_t bits, uint64_t val)
+{
+	uint64_t bit_mask;
+	uint64_t reg_val;
+	int timeout = NICVF_REG_POLL_ITER_NR;
+
+	bit_mask = (1ULL << bits) - 1;
+	bit_mask = (bit_mask << bit_pos);
+
+	while (timeout) {
+		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
+		if (((reg_val & bit_mask) >> bit_pos) == val)
+			return NICVF_OK;
+		nicvf_delay_us(NICVF_REG_POLL_DELAY_US);
+		timeout--;
+	}
+	return NICVF_ERR_REG_POLL;
+}
+
+int
+nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t status;
+	int timeout = NICVF_REG_POLL_ITER_NR;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	/* Save head and tail pointers for freeing up buffers */
+	if (rbdr) {
+		rbdr->head = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_HEAD, qidx) >> 3;
+		rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL,	qidx) >> 3;
+		rbdr->next_tail = rbdr->tail;
+	}
+
+	/* Reset RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+				NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+				62, 2, 0x00))
+		return NICVF_ERR_RBDR_DISABLE;
+
+	while (1) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_PRFCH_STATUS,	qidx);
+		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
+			break;
+		nicvf_delay_us(NICVF_REG_POLL_DELAY_US);
+		timeout--;
+		if (!timeout)
+			return NICVF_ERR_RBDR_PREFETCH;
+	}
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			NICVF_RBDR_RESET);
+	if (nicvf_qset_poll_reg(nic, qidx,
+			NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return NICVF_ERR_RBDR_RESET1;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_qset_poll_reg(nic, qidx,
+			NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return NICVF_ERR_RBDR_RESET2;
+
+	return NICVF_OK;
+}
+
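+/*
+ * Encode a queue length into the qsize field of a CFG register.
+ * Illustrative example (shift values assumed from nicvf_hw_defs.h):
+ * a 1024-entry CQ with CMP_QSIZE_SHIFT = 10 encodes as
+ * log2(1024) - 10 = 0.
+ */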
+static int
+nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
+{
+	int val;
+
+	val = ((uint32_t)log2(len) - len_shift);
+	assert(val >= NICVF_QSIZE_MIN_VAL);
+	assert(val <= NICVF_QSIZE_MAX_VAL);
+	return val;
+}
+
+int
+nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
+{
+	int ret;
+	uint64_t head, tail;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	struct rbdr_cfg rbdr_cfg = {.value = 0};
+
+	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set descriptor base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
+
+	/* Enable RBDR & set queue size */
+	rbdr_cfg.ena = 1;
+	rbdr_cfg.reset = 0;
+	rbdr_cfg.ldwb = 0;
+	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
+						RBDR_SIZE_SHIFT);
+	rbdr_cfg.avg_con = 0;
+	rbdr_cfg.lines = rbdr->buffsz / 128;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
+
+	/* Verify proper RBDR reset */
+	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
+
+	if (head | tail)
+		return NICVF_ERR_RBDR_RESET;
+
+	return NICVF_OK;
+}
+
+uint32_t
+nicvf_qsize_rbdr_roundup(uint32_t val)
+{
+	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
+			RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
+			RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
+			RBDR_QUEUE_SZ_512K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			  rbdr_pool_get_handler handler,
+			  void *opaque, uint32_t max_buffs)
+{
+	struct rbdr_entry_t *desc, *desc0;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	uint32_t count;
+	nicvf_phys_addr_t phy;
+
+	assert(rbdr != NULL);
+	desc = rbdr->desc;
+	count = 0;
+	/* Don't fill beyond the max number of descriptors */
+	while (count < rbdr->qlen_mask) {
+		if (count >= max_buffs)
+			break;
+		desc0 = desc + count;
+		phy = handler(opaque);
+		if (phy) {
+			desc0->full_addr = phy;
+			count++;
+		} else {
+			break;
+		}
+	}
+	nicvf_smp_wmb();
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
+	rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
+	rbdr->next_tail = rbdr->tail;
+	nicvf_smp_rmb();
+	return 0;
+}
+
+int
+nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
+{
+	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+}
+
+int
+nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t head, tail;
+	struct sq_cfg sq_cfg;
+
+	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
+
+	/* Disable send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
+
+	/* Check if SQ is stopped */
+	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
+				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
+		return NICVF_ERR_SQ_DISABLE;
+
+	/* Reset send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
+	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
+	if (head | tail)
+		return NICVF_ERR_SQ_RESET;
+
+	return 0;
+}
+
+int
+nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
+{
+	int ret;
+	struct sq_cfg sq_cfg = {.value = 0};
+
+	ret = nicvf_qset_sq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Send a mailbox msg to PF to config SQ */
+	if (nicvf_mbox_sq_config(nic, qidx))
+		return NICVF_ERR_SQ_PF_CFG;
+
+	/* Set queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
+
+	/* Enable send queue & set queue size */
+	sq_cfg.ena = 1;
+	sq_cfg.reset = 0;
+	sq_cfg.ldwb = 0;
+	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1, SND_QSIZE_SHIFT);
+	sq_cfg.tstmp_bgx_intf = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
+
+	/* Ring doorbell so that H/W restarts processing SQEs */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
+
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_sq_roundup(uint32_t val)
+{
+	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
+			SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
+			SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
+			SND_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
+	return nicvf_mbox_rq_sync(nic);
+}
+
+int
+nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
+	struct rq_cfg rq_cfg = {.value = 0};
+
+	if (nicvf_qset_rq_reclaim(nic, qidx))
+		return NICVF_ERR_RQ_CLAIM;
+
+	pf_rq_cfg.strip_pre_l2 = 0;
+	/* First cache line of RBDR data will be allocated into L2C */
+	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
+	pf_rq_cfg.cq_qs = nic->vf_id;
+	pf_rq_cfg.cq_idx = qidx;
+	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_cont_idx = 0;
+	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_strt_idx = 0;
+
+	/* Send a mailbox msg to PF to config RQ */
+	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
+		return NICVF_ERR_RQ_PF_CFG;
+
+	/* Select Rx backpressure */
+	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_BP_CFG;
+
+	/* Send a mailbox msg to PF to config RQ drop */
+	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_DROP_CFG;
+
+	/* Enable Receive queue */
+	rq_cfg.ena = 1;
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
+
+	return 0;
+}
+
+int
+nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t tail, head;
+
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
+		return NICVF_ERR_CQ_DISABLE;
+
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
+	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
+	if (head | tail)
+		return NICVF_ERR_CQ_RESET;
+
+	/* Disable timer threshold (doesn't get reset upon CQ reset) */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+int
+nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	int ret;
+	struct cq_cfg cq_cfg = {.value = 0};
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set completion queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
+
+	cq_cfg.ena = 1;
+	cq_cfg.reset = 0;
+	/* Writes of CQE will be allocated into L2C */
+	cq_cfg.caching = 1;
+	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
+	cq_cfg.avg_con = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
+
+	/* Set threshold value for interrupt generation */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_cq_roundup(uint32_t val)
+{
+	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
+			CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
+			CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
+			CMP_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+void
+nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
+{
+	uint64_t val;
+
+	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
+	if (enable)
+		val |= (STRIP_FIRST_VLAN << 25);
+	else
+		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
+
+	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
+}
+
+int
+nicvf_loopback_config(struct nicvf *nic, bool enable)
+{
+	if (enable && nic->loopback_supported == 0)
+		return NICVF_ERR_LOOPBACK_CFG;
+
+	return nicvf_mbox_loopback_config(nic, enable);
+}
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
new file mode 100644
index 0000000..dc9f4f1
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -0,0 +1,176 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_H
+#define _THUNDERX_NICVF_HW_H
+
+#include <stdint.h>
+
+#include "nicvf_hw_defs.h"
+
+#define	PCI_VENDOR_ID_CAVIUM			0x177D
+#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
+#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
+
+#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
+#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
+
+#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
+
+enum nicvf_tns_mode {
+	NIC_TNS_BYPASS_MODE,
+	NIC_TNS_MODE,
+};
+
+enum nicvf_err_e {
+	NICVF_OK,
+	NICVF_ERR_SET_QS = -8191, /* -8191 */
+	NICVF_ERR_RESET_QS,      /* -8190 */
+	NICVF_ERR_REG_POLL,      /* -8189 */
+	NICVF_ERR_RBDR_RESET,    /* -8188 */
+	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
+	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
+	NICVF_ERR_RBDR_RESET1,   /* -8185 */
+	NICVF_ERR_RBDR_RESET2,   /* -8184 */
+	NICVF_ERR_RQ_CLAIM,      /* -8183 */
+	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
+	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
+	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
+	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
+	NICVF_ERR_CQ_RESET,	 /* -8178 */
+	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
+	NICVF_ERR_SQ_RESET,	 /* -8176 */
+	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
+	NICVF_ERR_LOOPBACK_CFG,  /* -8174 */
+	NICVF_ERR_BASE_INIT,     /* -8173 */
+};
+
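+/*
+ * Illustrative handler contract: return the physical address of a fresh
+ * receive buffer taken from the caller's pool via 'opaque', or 0 when
+ * the pool is exhausted (nicvf_qset_rbdr_precharge() stops on 0).
+ */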
+typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
+
+/* Common structs used in DPDK and base layer are defined in DPDK layer */
+#include "../nicvf_struct.h"
+
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
+
+static inline void
+nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+{
+	nicvf_addr_write(nic->reg_base + offset, val);
+}
+
+static inline uint64_t
+nicvf_reg_read(struct nicvf *nic, uint32_t offset)
+{
+	return nicvf_addr_read(nic->reg_base + offset);
+}
+
+static inline uintptr_t
+nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
+{
+	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
+}
+
+static inline void
+nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
+		      uint64_t val)
+{
+	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
+}
+
+static inline uint64_t
+nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
+{
+	return nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
+}
+
+static inline void
+nicvf_disable_all_interrupts(struct nicvf *nic)
+{
+	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
+	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
+}
+
+static inline uint32_t
+nicvf_hw_version(struct nicvf *nic)
+{
+	return nic->subsystem_device_id;
+}
+
+static inline uint64_t
+nicvf_hw_cap(struct nicvf *nic)
+{
+	return nic->hwcap;
+}
+
+int nicvf_base_init(struct nicvf *nic);
+
+int nicvf_reg_get_count(void);
+int nicvf_reg_poll_interrupts(struct nicvf *nic);
+int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
+
+int nicvf_qset_config(struct nicvf *nic);
+int nicvf_qset_reclaim(struct nicvf *nic);
+
+int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			      rbdr_pool_get_handler handler, void *opaque,
+			      uint32_t max_buffs);
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_txq *txq);
+int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
+uint32_t nicvf_qsize_cq_roundup(uint32_t val);
+uint32_t nicvf_qsize_sq_roundup(uint32_t val);
+
+void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
+
+int nicvf_loopback_config(struct nicvf *nic, bool enable);
+
+#endif /* _THUNDERX_NICVF_HW_H */
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
index fbf28ce..83c1844 100644
--- a/drivers/net/thunderx/base/nicvf_plat.h
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -126,6 +126,7 @@ do {							\
 
 #endif
 
+#include "nicvf_hw.h"
 #include "nicvf_mbox.h"
 
 #endif /* _THUNDERX_NICVF_H */
-- 
2.5.5

* [PATCH v5 06/25] net/thunderx/base: add RSS and reta configuration HW APIs
@ 2016-06-14 19:06           ` Jerin Jacob
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c | 129 +++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h |  20 ++++++
 2 files changed, 149 insertions(+)

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
index ec24f9c..3366aa5 100644
--- a/drivers/net/thunderx/base/nicvf_hw.c
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -721,6 +721,135 @@ nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
 	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
 }
 
+void
+nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
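+	/* RSS_HASH_KEY_SIZE counts 64-bit words (5 words = 40-byte key) */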
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_cpu_to_be_64(*keyptr);
+		nicvf_reg_write(nic, addr, val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_reg_read(nic, addr);
+		*keyptr = nicvf_be_to_cpu_64(val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
+{
+	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
+}
+
+uint64_t
+nicvf_rss_get_cfg(struct nicvf *nic)
+{
+	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
+}
+
+int
+nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		rss->ind_tbl[idx] = tbl[idx];
+
+	if (nicvf_mbox_config_rss(nic))
+		return NICVF_ERR_RSS_TBL_UPDATE;
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		tbl[idx] = rss->ind_tbl[idx];
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg)
+{
+	uint32_t idx;
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+	};
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	if (cfg == 0)
+		return -EINVAL;
+
+	/* Update default RSS key and cfg */
+	nicvf_rss_set_key(nic, default_key);
+	nicvf_rss_set_cfg(nic, cfg);
+
+	/* Update default RSS RETA */
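+	/* e.g. with qcnt = 4 the table becomes 0,1,2,3,0,1,2,3,... */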
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = idx % qcnt;
+
+	return nicvf_rss_reta_update(nic, default_reta,
+			NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_rss_term(struct nicvf *nic)
+{
+	uint32_t idx;
+	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	nicvf_rss_set_cfg(nic, 0);
+	/* Redirect the output to the 0th queue */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		disable_rss[idx] = 0;
+
+	return nicvf_rss_reta_update(nic, disable_rss,
+			NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 int
 nicvf_loopback_config(struct nicvf *nic, bool enable)
 {
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
index dc9f4f1..a7ae531 100644
--- a/drivers/net/thunderx/base/nicvf_hw.h
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -76,10 +76,18 @@ enum nicvf_err_e {
 	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
 	NICVF_ERR_LOOPBACK_CFG,  /* -8174 */
 	NICVF_ERR_BASE_INIT,     /* -8173 */
+	NICVF_ERR_RSS_TBL_UPDATE,/* -8172 */
+	NICVF_ERR_RSS_GET_SZ,    /* -8171 */
 };
 
 typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
 
+struct nicvf_rss_reta_info {
+	uint8_t hash_bits;
+	uint16_t rss_size;
+	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+};
+
 /* Common structs used in DPDK and base layer are defined in DPDK layer */
 #include "../nicvf_struct.h"
 
@@ -171,6 +179,18 @@ uint32_t nicvf_qsize_sq_roundup(uint32_t val);
 
 void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
 
+int nicvf_rss_config(struct nicvf *nic, uint32_t qcnt, uint64_t cfg);
+int nicvf_rss_term(struct nicvf *nic);
+
+int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+
+void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
+void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
+
+void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
+uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
+
 int nicvf_loopback_config(struct nicvf *nic, bool enable);
 
 #endif /* _THUNDERX_NICVF_HW_H */
-- 
2.5.5

* [PATCH v5 07/25] net/thunderx/base: add statistics get HW APIs
@ 2016-06-14 19:06           ` Jerin Jacob
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c | 45 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h | 44 +++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
index 3366aa5..001b0ed 100644
--- a/drivers/net/thunderx/base/nicvf_hw.c
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -858,3 +858,48 @@ nicvf_loopback_config(struct nicvf *nic, bool enable)
 
 	return nicvf_mbox_loopback_config(nic, enable);
 }
+
+void
+nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
+{
+	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
+	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
+	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
+	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
+	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
+	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
+	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
+	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
+	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
+	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
+	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
+	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
+	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
+	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
+
+	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
+	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
+	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
+	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
+	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
+}
+
+void
+nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_rx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
+	qstats->q_rx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
+}
+
+void
+nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_tx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
+	qstats->q_tx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
index a7ae531..9db1d30 100644
--- a/drivers/net/thunderx/base/nicvf_hw.h
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -45,6 +45,11 @@
 
 #define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
 
+#define NICVF_GET_RX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | ((reg) << 3))
+#define NICVF_GET_TX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | ((reg) << 3))
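+/* The stat registers are laid out 8 bytes apart; 'reg' is the stat index */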
+
 #define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
 #define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
 
@@ -82,6 +87,39 @@ enum nicvf_err_e {
 
 typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
 
+struct nicvf_hw_rx_qstats {
+	uint64_t q_rx_bytes;
+	uint64_t q_rx_packets;
+};
+
+struct nicvf_hw_tx_qstats {
+	uint64_t q_tx_bytes;
+	uint64_t q_tx_packets;
+};
+
+struct nicvf_hw_stats {
+	uint64_t rx_bytes;
+	uint64_t rx_ucast_frames;
+	uint64_t rx_bcast_frames;
+	uint64_t rx_mcast_frames;
+	uint64_t rx_fcs_errors;
+	uint64_t rx_l2_errors;
+	uint64_t rx_drop_red;
+	uint64_t rx_drop_red_bytes;
+	uint64_t rx_drop_overrun;
+	uint64_t rx_drop_overrun_bytes;
+	uint64_t rx_drop_bcast;
+	uint64_t rx_drop_mcast;
+	uint64_t rx_drop_l3_bcast;
+	uint64_t rx_drop_l3_mcast;
+
+	uint64_t tx_bytes_ok;
+	uint64_t tx_ucast_frames_ok;
+	uint64_t tx_bcast_frames_ok;
+	uint64_t tx_mcast_frames_ok;
+	uint64_t tx_drops;
+};
+
 struct nicvf_rss_reta_info {
 	uint8_t hash_bits;
 	uint16_t rss_size;
@@ -193,4 +231,10 @@ uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
 
 int nicvf_loopback_config(struct nicvf *nic, bool enable);
 
+void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
+void nicvf_hw_get_rx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
+void nicvf_hw_get_tx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
+
 #endif /* _THUNDERX_NICVF_HW_H */
-- 
2.5.5

* [PATCH v5 08/25] net/thunderx: add pmd skeleton
@ 2016-06-14 19:06           ` Jerin Jacob
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Introduce driver initialization and enable build infrastructure for
nicvf pmd driver.

By default, it is enabled only for the defconfig_arm64-thunderx-*
configs, as the NIC is built into the ThunderX SoC.
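
For reference, a minimal sketch of how an application sees the driver
once it is registered (assumes the standard DPDK 16.07 EAL/ethdev API;
the output wording is illustrative):

    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        int ret;

        /* EAL probes the PCI bus; each nicvf VF that matches the
         * PMD's id table is initialized and exposed as a port.
         */
        ret = rte_eal_init(argc, argv);
        if (ret < 0)
            return -1;
        printf("%u port(s) probed\n", (unsigned)rte_eth_dev_count());
        return 0;
    }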

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 config/common_base                                 |  10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
 drivers/net/Makefile                               |   1 +
 drivers/net/thunderx/Makefile                      |  63 ++++++
 drivers/net/thunderx/nicvf_ethdev.c                | 251 +++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
 drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
 drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
 mk/rte.app.mk                                      |   2 +
 10 files changed, 596 insertions(+)
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

diff --git a/config/common_base b/config/common_base
index 47c26f6..ad5686b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -259,6 +259,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
 CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
 
 #
+# Compile burst-oriented Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index fe5e987..7940bbd 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -34,3 +34,13 @@
 CONFIG_RTE_MACHINE="thunderx"
 
 CONFIG_RTE_CACHE_LINE_SIZE=128
+
+#
+# Compile Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 6ba7658..0e29a33 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -50,6 +50,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
+DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
new file mode 100644
index 0000000..eb9f100
--- /dev/null
+++ b/drivers/net/thunderx/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_thunderx_nicvf.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
+
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
new file mode 100644
index 0000000..3ca5a2b
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -0,0 +1,251 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <sys/queue.h>
+#include <sys/timerfd.h>
+
+#include <rte_alarm.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_pci.h>
+#include <rte_tailq.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+
+#include "nicvf_logs.h"
+
+static void
+nicvf_interrupt(void *arg)
+{
+	struct nicvf *nic = arg;
+
+	nicvf_reg_poll_interrupts(nic);
+
+	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+				nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_start(struct nicvf *nic)
+{
+	return rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+					nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_stop(struct nicvf *nic)
+{
+	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
+}
+
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops nicvf_eth_dev_ops = {
+};
+
+static int
+nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int ret;
+	struct rte_pci_device *pci_dev;
+	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &nicvf_eth_dev_ops;
+
+	pci_dev = eth_dev->pci_dev;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	nic->device_id = pci_dev->id.device_id;
+	nic->vendor_id = pci_dev->id.vendor_id;
+	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	nic->eth_dev = eth_dev;
+
+	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!nic->reg_base) {
+		PMD_INIT_LOG(ERR, "Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	nicvf_disable_all_interrupts(nic);
+
+	ret = nicvf_periodic_alarm_start(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start periodic alarm");
+		goto fail;
+	}
+
+	ret = nicvf_mbox_check_pf_ready(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
+		goto alarm_fail;
+	} else {
+		PMD_INIT_LOG(INFO,
+			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
+			nic->node, nic->vf_id,
+			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
+			nic->sqs_mode ? "true" : "false",
+			nic->loopback_supported ? "true" : "false"
+			);
+	}
+
+	if (nic->sqs_mode) {
+		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, detaching...");
+		/* Detach port by returning positive error number */
+		ret = ENOTSUP;
+		goto alarm_fail;
+	}
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
+		ret = -ENOMEM;
+		goto alarm_fail;
+	}
+	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
+		eth_random_addr(&nic->mac_addr[0]);
+
+	ether_addr_copy((struct ether_addr *)nic->mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set mac addr");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_base_init(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_mbox_get_rss_size(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get rss table size");
+		goto malloc_fail;
+	}
+
+	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
+		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
+		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
+		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
+
+	return 0;
+
+malloc_fail:
+	rte_free(eth_dev->data->mac_addrs);
+alarm_fail:
+	nicvf_periodic_alarm_stop(nic);
+fail:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_nicvf_map[] = {
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
+	},
+	{
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct eth_driver rte_nicvf_pmd = {
+	.pci_drv = {
+		.name = "rte_nicvf_pmd",
+		.id_table = pci_id_nicvf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = nicvf_eth_dev_init,
+	.dev_private_size = sizeof(struct nicvf),
+};
+
+static int
+rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
+			THUNDERX_NICVF_PMD_VERSION);
+
+	rte_eth_driver_register(&rte_nicvf_pmd);
+	return 0;
+}
+
+static struct rte_driver rte_nicvf_driver = {
+	.name = "nicvf_driver",
+	.type = PMD_PDEV,
+	.init = rte_nicvf_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_nicvf_driver);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
new file mode 100644
index 0000000..d4d2071
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -0,0 +1,48 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_ETHDEV_H__
+#define __THUNDERX_NICVF_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+#define THUNDERX_NICVF_PMD_VERSION      "1.0"
+
+#define NICVF_INTR_POLL_INTERVAL_MS	50
+
+static inline struct nicvf *
+nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
new file mode 100644
index 0000000..0667d46
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_logs.h
@@ -0,0 +1,83 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_LOGS__
+#define __THUNDERX_NICVF_LOGS__
+
+#include <assert.h>
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_RX_ASSERT(x) assert(x)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_RX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_TX_ASSERT(x) assert(x)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_TX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
+#define PMD_MBOX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_MBOX_FUNC_TRACE() PMD_MBOX_LOG(DEBUG, ">>")
+#else
+#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
+#define PMD_MBOX_FUNC_TRACE() do { } while (0)
+#endif
+
+#endif /* __THUNDERX_NICVF_LOGS__ */
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
new file mode 100644
index 0000000..c52545d
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -0,0 +1,124 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_STRUCT_H
+#define _THUNDERX_NICVF_STRUCT_H
+
+#include <stdint.h>
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_ethdev.h>
+#include <rte_memory.h>
+
+struct nicvf_rbdr {
+	uint64_t rbdr_status;
+	uint64_t rbdr_door;
+	struct rbdr_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	uint32_t buffsz;
+	uint32_t tail;
+	uint32_t next_tail;
+	uint32_t head;
+	uint32_t qlen_mask;
+} __rte_cache_aligned;
+
+struct nicvf_txq {
+	union sq_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	struct rte_mbuf **txbuffs;
+	uint64_t sq_head;
+	uint64_t sq_door;
+	struct rte_mempool *pool;
+	struct nicvf *nic;
+	void (*pool_free)(struct nicvf_txq *sq);
+	uint32_t head;
+	uint32_t tail;
+	int32_t xmit_bufs;
+	uint32_t qlen_mask;
+	uint32_t txq_flags;
+	uint16_t queue_id;
+	uint16_t tx_free_thresh;
+} __rte_cache_aligned;
+
+struct nicvf_rxq {
+	uint64_t mbuf_phys_off;
+	uint64_t cq_status;
+	uint64_t cq_door;
+	nicvf_phys_addr_t phys;
+	union cq_entry_t *desc;
+	struct nicvf_rbdr *shared_rbdr;
+	struct nicvf *nic;
+	struct rte_mempool *pool;
+	uint32_t head;
+	uint32_t qlen_mask;
+	int32_t available_space;
+	int32_t recv_buffers;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t precharge_cnt;
+	uint8_t rx_drop_en;
+	uint8_t  port_id;
+	uint8_t  rbptr_offset;
+} __rte_cache_aligned;
+
+struct nicvf {
+	uint8_t vf_id;
+	uint8_t node;
+	uintptr_t reg_base;
+	bool tns_mode;
+	bool sqs_mode;
+	bool loopback_supported;
+	bool pf_acked:1;
+	bool pf_nacked:1;
+	uint64_t hwcap;
+	uint8_t link_up;
+	uint8_t	duplex;
+	uint32_t speed;
+	uint32_t msg_enable;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+	struct nicvf_rbdr *rbdr;
+	struct nicvf_rss_reta_info rss_info;
+	struct rte_eth_dev *eth_dev;
+	struct rte_intr_handle intr_handle;
+	uint8_t cpi_alg;
+	uint16_t mtu;
+	bool vlan_filter_en;
+	uint8_t mac_addr[ETHER_ADDR_LEN];
+} __rte_cache_aligned;
+
+#endif /* _THUNDERX_NICVF_STRUCT_H */
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
new file mode 100644
index 0000000..1901bcb
--- /dev/null
+++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
@@ -0,0 +1,4 @@
+DPDK_16.07 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index b84b56d..1d8d8cd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -102,6 +102,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lxenstore
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD)      += -lgxio
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD)        += -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lz
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lm
 # QAT / AESNI GCM PMDs are dependent on libcrypto (from openssl)
 # for calculating HMAC precomputes
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_QAT),y)
@@ -150,6 +151,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL)       += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
 
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat
-- 
2.5.5
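
The head/tail/qlen_mask triples in the queue structures above implement
power-of-two rings: qlen_mask holds (ring size - 1), so an index wraps
with a single AND instead of a modulo. A minimal sketch of that idiom
(illustrative only, not part of the patch):

    /* Advance a ring index when the ring size is a power of two */
    static inline uint32_t
    ring_next(uint32_t idx, uint32_t qlen_mask)
    {
        return (idx + 1) & qlen_mask; /* wraps to 0 past the last slot */
    }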

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 09/25] net/thunderx: add link status and link update support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (7 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 08/25] net/thunderx: add pmd skeleton Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 10/25] net/thunderx: add registers dump support Jerin Jacob
                             ` (17 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Extend the nicvf_interrupt function to respond to the
NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from the PF and to update
struct rte_eth_link accordingly.
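
A minimal sketch of how an application consumes this, assuming the
standard DPDK 16.07 ethdev API (port_id and the callback body are
illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Invoked from the PMD alarm context on RTE_ETH_EVENT_INTR_LSC */
    static void
    lsc_event_cb(uint8_t port_id, enum rte_eth_event_type type,
                 void *param __rte_unused)
    {
        struct rte_eth_link link;

        if (type != RTE_ETH_EVENT_INTR_LSC)
            return;
        /* Non-blocking read of the link status the PMD just wrote */
        rte_eth_link_get_nowait(port_id, &link);
        printf("Port %d link %s\n", port_id,
               link.link_status ? "up" : "down");
    }

    /* dev_conf.intr_conf.lsc must be set for dev_link to be updated
     * from the mbox event; register e.g. after rte_eth_dev_configure() */
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                  lsc_event_cb, NULL);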

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 53 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_ethdev.h |  4 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3ca5a2b..6fa486a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,12 +69,45 @@
 
 #include "nicvf_logs.h"
 
+static inline int
+nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
+			       struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+		*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline void
+nicvf_set_eth_link_status(struct nicvf *nic, struct rte_eth_link *link)
+{
+	link->link_status = nic->link_up;
+	link->link_duplex = ETH_LINK_AUTONEG;
+	if (nic->duplex == NICVF_HALF_DUPLEX)
+		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+	else if (nic->duplex == NICVF_FULL_DUPLEX)
+		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = nic->speed;
+	link->link_autoneg = ETH_LINK_SPEED_AUTONEG;
+}
+
 static void
 nicvf_interrupt(void *arg)
 {
 	struct nicvf *nic = arg;
 
-	nicvf_reg_poll_interrupts(nic);
+	if (nicvf_reg_poll_interrupts(nic) == NIC_MBOX_MSG_BGX_LINK_CHANGE) {
+		if (nic->eth_dev->data->dev_conf.intr_conf.lsc)
+			nicvf_set_eth_link_status(nic,
+					&nic->eth_dev->data->dev_link);
+		_rte_eth_dev_callback_process(nic->eth_dev,
+				RTE_ETH_EVENT_INTR_LSC);
+	}
 
 	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
 				nicvf_interrupt, nic);
@@ -93,8 +126,26 @@ nicvf_periodic_alarm_stop(struct nicvf *nic)
 	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
 }
 
+/*
+ * Return 0 if the link status changed, -1 if it has not changed
+ */
+static int
+nicvf_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	struct rte_eth_link link;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&link, 0, sizeof(link));
+	nicvf_set_eth_link_status(nic, &link);
+	return nicvf_atomic_write_link_status(dev, &link);
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.link_update              = nicvf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index d4d2071..8189856 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -38,6 +38,10 @@
 #define THUNDERX_NICVF_PMD_VERSION      "1.0"
 
 #define NICVF_INTR_POLL_INTERVAL_MS	50
+#define NICVF_HALF_DUPLEX		0x00
+#define NICVF_FULL_DUPLEX		0x01
+#define NICVF_UNKNOWN_DUPLEX		0xff
+
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 10/25] net/thunderx: add registers dump support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (8 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 09/25] net/thunderx: add link status and link update support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 11/25] net/thunderx: add ethdev configure support Jerin Jacob
                             ` (16 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 6fa486a..5c066e2 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -143,9 +143,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 	return nicvf_atomic_write_link_status(dev, &link);
 }
 
+static int
+nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
+{
+	return nicvf_reg_get_count();
+}
+
+static int
+nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
+{
+	uint64_t *data = regs->data;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (data == NULL)
+		return -EINVAL;
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+		(regs->length == (uint32_t)nicvf_reg_get_count())) {
+		regs->version = nic->vendor_id << 16 | nic->device_id;
+		nicvf_reg_dump(nic, data);
+		return 0;
+	}
+	return -ENOTSUP;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.link_update              = nicvf_dev_link_update,
+	.get_reg_length           = nicvf_dev_get_reg_length,
+	.get_reg                  = nicvf_dev_get_regs,
 };
 
 static int
-- 
2.5.5
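
For reference, a sketch of driving these ops through the generic API
(error handling trimmed; the 64-bit register width is an assumption
based on the uint64_t dump buffer above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rte_ethdev.h>

    struct rte_dev_reg_info reg_info;
    int count = rte_eth_dev_get_reg_length(port_id);

    if (count > 0) {
        memset(&reg_info, 0, sizeof(reg_info));
        reg_info.data = calloc(count, sizeof(uint64_t));
        reg_info.length = 0; /* 0 requests the full dump, as accepted above */
        if (reg_info.data != NULL &&
            rte_eth_dev_get_reg_info(port_id, &reg_info) == 0)
            printf("%d regs dumped, version 0x%x\n",
                   count, reg_info.version);
        free(reg_info.data);
    }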

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 11/25] net/thunderx: add ethdev configure support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (9 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 10/25] net/thunderx: add registers dump support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 12/25] net/thunderx: add get device info support Jerin Jacob
                             ` (15 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 78 +++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5c066e2..1814341 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,8 +168,86 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &conf->rxmode;
+	struct rte_eth_txmode *txmode = &conf->txmode;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!rte_eal_has_hugepages()) {
+		PMD_INIT_LOG(INFO, "Huge pages are not configured");
+		return -EINVAL;
+	}
+
+	if (txmode->mq_mode) {
+		PMD_INIT_LOG(INFO, "Tx mq_mode DCB or VMDq not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc) {
+		PMD_INIT_LOG(NOTICE, "Can't disable hw crc strip");
+		rxmode->hw_strip_crc = 1;
+	}
+
+	if (rxmode->hw_ip_checksum) {
+		PMD_INIT_LOG(NOTICE, "Rxcksum not supported");
+		rxmode->hw_ip_checksum = 0;
+	}
+
+	if (rxmode->split_hdr_size) {
+		PMD_INIT_LOG(INFO, "Rxmode does not support split header");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_filter) {
+		PMD_INIT_LOG(INFO, "VLAN filter not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_extend) {
+		PMD_INIT_LOG(INFO, "VLAN extended not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro) {
+		PMD_INIT_LOG(INFO, "LRO not supported");
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	if (conf->dcb_capability_en) {
+		PMD_INIT_LOG(INFO, "DCB enable not supported");
+		return -EINVAL;
+	}
+
+	if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+		PMD_INIT_LOG(INFO, "Flow director not supported");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
+		dev->data->port_id, nicvf_hw_cap(nic));
+
+	return 0;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
-- 
2.5.5
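
A port configuration that passes every check above might look as
follows (a sketch; only the fields this PMD inspects are shown, and
hugepage-backed memory is assumed):

    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = ETH_MQ_RX_RSS,  /* or ETH_MQ_RX_NONE */
            .hw_strip_crc = 1,         /* CRC strip cannot be disabled */
            .hw_ip_checksum = 0,       /* Rx checksum offload unsupported */
            .split_hdr_size = 0,
            .hw_vlan_filter = 0,
            .hw_vlan_extend = 0,
            .enable_lro = 0,
        },
        .txmode = {
            .mq_mode = ETH_MQ_TX_NONE, /* DCB/VMDq Tx modes are rejected */
        },
    };

    int ret = rte_eth_dev_configure(port_id, 1 /* nb_rxq */,
                                    1 /* nb_txq */, &port_conf);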

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 12/25] net/thunderx: add get device info support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (10 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 11/25] net/thunderx: add ethdev configure support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 13/25] net/thunderx: add Rx queue setup and release support Jerin Jacob
                             ` (14 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 45 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h | 17 ++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1814341..109c6cb 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,50 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->min_rx_bufsize = ETHER_MIN_MTU;
+	dev_info->max_rx_pktlen = NIC_HW_MAX_FRS;
+	dev_info->max_rx_queues = (uint16_t)MAX_RCV_QUEUES_PER_QS;
+	dev_info->max_tx_queues = (uint16_t)MAX_SND_QUEUES_PER_QS;
+	dev_info->max_mac_addrs = 1;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	dev_info->reta_size = nic->rss_info.rss_size;
+	dev_info->hash_key_size = RSS_HASH_KEY_BYTE_SIZE;
+	dev_info->flow_type_rss_offloads = NICVF_RSS_OFFLOAD_PASS1;
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
+		.txq_flags =
+			ETH_TXQ_FLAGS_NOMULTSEGS  |
+			ETH_TXQ_FLAGS_NOREFCOUNT  |
+			ETH_TXQ_FLAGS_NOMULTMEMP  |
+			ETH_TXQ_FLAGS_NOVLANOFFL  |
+			ETH_TXQ_FLAGS_NOXSUMSCTP,
+	};
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -249,6 +293,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.dev_infos_get            = nicvf_dev_info_get,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 8189856..e31657d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -42,6 +42,23 @@
 #define NICVF_FULL_DUPLEX		0x01
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
+#define NICVF_RSS_OFFLOAD_PASS1 ( \
+	ETH_RSS_PORT | \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NICVF_RSS_OFFLOAD_TUNNEL ( \
+	ETH_RSS_VXLAN | \
+	ETH_RSS_GENEVE | \
+	ETH_RSS_NVGRE)
+
+#define NICVF_DEFAULT_RX_FREE_THRESH    224
+#define NICVF_DEFAULT_TX_FREE_THRESH    224
+#define NICVF_TX_FREE_MPOOL_THRESH      16
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5
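
The limits advertised above are what an application sees through
rte_eth_dev_info_get(), e.g. when sizing its queues (a sketch):

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    printf("rxq max %u, txq max %u, reta %u, rss key %u bytes\n",
           dev_info.max_rx_queues, dev_info.max_tx_queues,
           dev_info.reta_size, dev_info.hash_key_size);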

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 13/25] net/thunderx: add Rx queue setup and release support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (11 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 12/25] net/thunderx: add get device info support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 14/25] net/thunderx: add Tx " Jerin Jacob
                             ` (13 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 136 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |   2 +
 2 files changed, 138 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 109c6cb..4652438 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,140 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
+					NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rxq->phys = rz->phys_addr;
+	rxq->desc = rz->addr;
+	rxq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static void
+nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
+{
+	rxq->head = 0;
+	rxq->available_space = 0;
+	rxq->recv_buffers = 0;
+}
+
+static void
+nicvf_dev_rx_queue_release(void *rx_queue)
+{
+	struct nicvf_rxq *rxq = rx_queue;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq)
+		rte_free(rxq);
+}
+
+static int
+nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	uint16_t rx_free_thresh;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		socket_id, nic->node);
+
+	/* Mempool memory should be contiguous */
+	if (mp->nb_mem_chunks != 1) {
+		PMD_INIT_LOG(ERR, "Non contiguous mempool, check huge page sz");
+		return -EINVAL;
+	}
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Rx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_cq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available hw cq qsize");
+		return -EINVAL;
+	}
+
+	/* Check rx_free_thresh upper bound */
+	rx_free_thresh = (uint16_t)((rx_conf->rx_free_thresh) ?
+				rx_conf->rx_free_thresh :
+				NICVF_DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh > NICVF_MAX_RX_FREE_THRESH ||
+		rx_free_thresh >= nb_desc * .75) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh greater than expected %d",
+				rx_free_thresh);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->rx_queues[qidx] != NULL) {
+		PMD_RX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_rx_queue_release(dev->data->rx_queues[qidx]);
+		dev->data->rx_queues[qidx] = NULL;
+	}
+
+	/* Allocate rxq memory */
+	rxq = rte_zmalloc_socket("ethdev rx queue", sizeof(struct nicvf_rxq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate rxq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	rxq->nic = nic;
+	rxq->pool = mp;
+	rxq->queue_id = qidx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rx_drop_en = rx_conf->rx_drop_en;
+	rxq->cq_status = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_STATUS;
+	rxq->cq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_DOOR;
+	rxq->precharge_cnt = 0;
+	rxq->rbptr_offset = NICVF_CQE_RBPTR_WORD;
+
+	/* Alloc completion queue */
+	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
+		nicvf_dev_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	nicvf_rx_queue_reset(rxq);
+
+	PMD_RX_LOG(DEBUG, "[%d] rxq=%p pool=%s nb_desc=(%d/%d) phy=%" PRIx64,
+			qidx, rxq, mp->name, nb_desc,
+			rte_mempool_count(mp), rxq->phys);
+
+	dev->data->rx_queues[qidx] = rxq;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -294,6 +428,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
+	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index e31657d..afb875a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -59,6 +59,8 @@
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
 #define NICVF_TX_FREE_MPOOL_THRESH      16
+#define NICVF_MAX_RX_FREE_THRESH        1024
+#define NICVF_MAX_TX_FREE_THRESH        1024
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5
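
Usage sketch for the op above; note the single-memory-chunk check
means the mbuf pool must come from hugepage memory, and nb_desc is
rounded up to a supported queue size (names and sizes illustrative):

    /* Pool must be one physically contiguous chunk (hugepages) */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_pool",
            8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    int ret = rte_eth_rx_queue_setup(port_id, 0 /* queue */,
            1024 /* nb_desc */, rte_eth_dev_socket_id(port_id),
            NULL /* default rx_conf from dev_info */, mp);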

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 14/25] net/thunderx: add Tx queue setup and release support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (12 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 13/25] net/thunderx: add Rx queue setup and release support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 15/25] net/thunderx: add RSS and reta query and update support Jerin Jacob
                             ` (12 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 175 ++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 4652438..167149e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -191,6 +191,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		socket_id, nic->node);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				tx_conf->tx_free_thresh :
+				NICVF_DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) ||
+		tx_free_thresh > NICVF_MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			"tx_free_thresh must be less than the number of TX "
+			"descriptors. (tx_free_thresh=%u port=%d "
+			"queue=%d)", (unsigned int)tx_free_thresh,
+			(int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+				txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!is_single_pool) {
+		txq->tx_free_thresh = (uint16_t)
+		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
+				NICVF_TX_FREE_MPOOL_THRESH :
+				tx_conf->tx_free_thresh);
+	}
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+				nb_desc * sizeof(struct rte_mbuf *),
+				RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+			qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -430,6 +603,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get            = nicvf_dev_info_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
+	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
-- 
2.5.5
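
The multi-pool branch above drops to a smaller free threshold; an
application that guarantees a single mempool and no mbuf refcounting
can keep the default threshold by setting both flags (a sketch):

    struct rte_eth_txconf txconf = {
        .tx_free_thresh = 0, /* 0 selects NICVF_DEFAULT_TX_FREE_THRESH */
        .txq_flags = ETH_TXQ_FLAGS_NOREFCOUNT |
                     ETH_TXQ_FLAGS_NOMULTMEMP,
    };

    int ret = rte_eth_tx_queue_setup(port_id, 0 /* queue */,
            1024 /* nb_desc */, rte_eth_dev_socket_id(port_id), &txconf);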

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 15/25] net/thunderx: add RSS and reta query and update support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (13 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 14/25] net/thunderx: add Tx " Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 16/25] net/thunderx: add MTU set and promiscuous enable support Jerin Jacob
                             ` (11 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++
 1 file changed, 172 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 167149e..1d5bea7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,174 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static inline uint64_t
+nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
+{
+	uint64_t nic_rss = 0;
+
+	if (ethdev_rss & ETH_RSS_IPV4)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_IPV6)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_PORT)
+		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (ethdev_rss & ETH_RSS_VXLAN)
+			nic_rss |= RSS_TUN_VXLAN_ENA;
+
+		if (ethdev_rss & ETH_RSS_GENEVE)
+			nic_rss |= RSS_TUN_GENEVE_ENA;
+
+		if (ethdev_rss & ETH_RSS_NVGRE)
+			nic_rss |= RSS_TUN_NVGRE_ENA;
+	}
+
+	return nic_rss;
+}
+
+static inline uint64_t
+nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
+{
+	uint64_t ethdev_rss = 0;
+
+	if (nic_rss & RSS_IP_ENA)
+		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
+				ETH_RSS_NONFRAG_IPV6_TCP);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
+				ETH_RSS_NONFRAG_IPV6_UDP);
+
+	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
+		ethdev_rss |= ETH_RSS_PORT;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (nic_rss & RSS_TUN_VXLAN_ENA)
+			ethdev_rss |= ETH_RSS_VXLAN;
+
+		if (nic_rss & RSS_TUN_GENEVE_ENA)
+			ethdev_rss |= ETH_RSS_GENEVE;
+
+		if (nic_rss & RSS_TUN_NVGRE_ENA)
+			ethdev_rss |= ETH_RSS_NVGRE;
+	}
+	return ethdev_rss;
+}
+
+static int
+nicvf_dev_reta_query(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of the configured hash lookup table "
+			"(%d) doesn't match the number supported by hardware "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] = tbl[j];
+	}
+
+	return 0;
+}
+
+static int
+nicvf_dev_reta_update(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The size of the configured hash lookup table "
+			"(%d) doesn't match the number supported by hardware "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				tbl[j] = reta_conf[i].reta[j];
+	}
+
+	return nicvf_rss_reta_update(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+static int
+nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (rss_conf->rss_key)
+		nicvf_rss_get_key(nic, rss_conf->rss_key);
+
+	rss_conf->rss_key_len = RSS_HASH_KEY_BYTE_SIZE;
+	rss_conf->rss_hf = nicvf_rss_nic_to_ethdev(nic, nicvf_rss_get_cfg(nic));
+	return 0;
+}
+
+static int
+nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t nic_rss;
+
+	if (rss_conf->rss_key &&
+		rss_conf->rss_key_len != RSS_HASH_KEY_BYTE_SIZE) {
+		RTE_LOG(ERR, PMD, "Hash key size mismatch %d",
+				rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key)
+		nicvf_rss_set_key(nic, rss_conf->rss_key);
+
+	nic_rss = nicvf_rss_ethdev_to_nic(nic, rss_conf->rss_hf);
+	nicvf_rss_set_cfg(nic, nic_rss);
+	return 0;
+}
+
 static int
 nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 		    uint32_t desc_cnt)
@@ -601,6 +769,10 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.reta_update              = nicvf_dev_reta_update,
+	.reta_query               = nicvf_dev_reta_query,
+	.rss_hash_update          = nicvf_dev_rss_hash_update,
+	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
-- 
2.5.5
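
Sketch of a full RETA rewrite through the generic API, spreading
entries round-robin across the Rx queues. The 128-entry table size
and queue count here are assumptions for illustration; real code
should take reta_size from rte_eth_dev_info_get():

    struct rte_eth_rss_reta_entry64 reta_conf[2]; /* 128 / 64 groups */
    uint16_t i, reta_size = 128, nb_rxq = 4;      /* illustrative */

    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < reta_size; i++) {
        reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
        reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                i % nb_rxq;
    }
    int ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);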

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 16/25] net/thunderx: add MTU set and promiscuous enable support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (14 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 15/25] net/thunderx: add RSS and reta query and update support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 17/25] net/thunderx: add stats support Jerin Jacob
                             ` (10 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 51 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |  2 ++
 2 files changed, 53 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1d5bea7..f0e3371 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -144,6 +144,49 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint32_t buffsz, frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (frame_size > NIC_HW_MAX_FRS)
+		return -EINVAL;
+
+	if (frame_size < NIC_HW_MIN_FRS)
+		return -EINVAL;
+
+	buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+	/*
+	 * Refuse mtu that requires the support of scattered packets
+	 * when this feature has not been enabled before.
+	 */
+	if (!dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz))
+		return -EINVAL;
+
+	/* check <seg size> * <max_seg>  >= max_frame */
+	if (dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
+		return -EINVAL;
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	if (nicvf_mbox_update_hw_max_frs(nic, frame_size))
+		return -EINVAL;
+
+	/* Update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)frame_size;
+	nic->mtu = mtu;
+	return 0;
+}
+
+static int
 nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
 {
 	return nicvf_reg_get_count();
@@ -168,6 +211,12 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+/* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
+static void
+nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
 static inline uint64_t
 nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
@@ -768,7 +817,9 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index afb875a..b1af468 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -62,6 +62,8 @@
 #define NICVF_MAX_RX_FREE_THRESH        1024
 #define NICVF_MAX_TX_FREE_THRESH        1024
 
+#define VLAN_TAG_SIZE                   4	/* 802.3ac tag */
+
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.5.5
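
The checks above bound mtu + ETHER_HDR_LEN + ETHER_CRC_LEN between
NIC_HW_MIN_FRS and NIC_HW_MAX_FRS and refuse sizes that would need
scattered Rx before it is enabled. From the application side (a
sketch):

    /* Jumbo MTU; -EINVAL if out of range or scattered Rx is needed */
    if (rte_eth_dev_set_mtu(port_id, 9000) != 0)
        printf("port %d rejected MTU 9000\n", port_id);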

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 17/25] net/thunderx: add stats support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (15 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 16/25] net/thunderx: add MTU set and promiscuous enable support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 18/25] net/thunderx: add single and multi segment Tx functions Jerin Jacob
                             ` (9 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 66 +++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index f0e3371..19ad85a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -211,6 +211,70 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t qidx;
+	struct nicvf_hw_rx_qstats rx_qstats;
+	struct nicvf_hw_tx_qstats tx_qstats;
+	struct nicvf_hw_stats port_stats;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	/* Reading per RX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_rx_qstats(nic, &rx_qstats, qidx);
+		stats->q_ibytes[qidx] = rx_qstats.q_rx_bytes;
+		stats->q_ipackets[qidx] = rx_qstats.q_rx_packets;
+	}
+
+	/* Reading per TX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_tx_qstats(nic, &tx_qstats, qidx);
+		stats->q_obytes[qidx] = tx_qstats.q_tx_bytes;
+		stats->q_opackets[qidx] = tx_qstats.q_tx_packets;
+	}
+
+	nicvf_hw_get_stats(nic, &port_stats);
+	stats->ibytes = port_stats.rx_bytes;
+	stats->ipackets = port_stats.rx_ucast_frames;
+	stats->ipackets += port_stats.rx_bcast_frames;
+	stats->ipackets += port_stats.rx_mcast_frames;
+	stats->ierrors = port_stats.rx_l2_errors;
+	stats->imissed = port_stats.rx_drop_red;
+	stats->imissed += port_stats.rx_drop_overrun;
+	stats->imissed += port_stats.rx_drop_bcast;
+	stats->imissed += port_stats.rx_drop_mcast;
+	stats->imissed += port_stats.rx_drop_l3_bcast;
+	stats->imissed += port_stats.rx_drop_l3_mcast;
+
+	stats->obytes = port_stats.tx_bytes_ok;
+	stats->opackets = port_stats.tx_ucast_frames_ok;
+	stats->opackets += port_stats.tx_bcast_frames_ok;
+	stats->opackets += port_stats.tx_mcast_frames_ok;
+	stats->oerrors = port_stats.tx_drops;
+}
+
+static void
+nicvf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	int i;
+	uint16_t rxqs = 0, txqs = 0;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rxqs |= (0x3 << (i * 2));
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		txqs |= (0x3 << (i * 2));
+
+	nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, rxqs, txqs);
+}
+
 /* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
 static void
 nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
@@ -817,6 +881,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.stats_get                = nicvf_dev_stats_get,
+	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
-- 
2.5.5
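
These counters surface through the usual calls (a sketch; note the
per-queue counters above are capped at RTE_ETHDEV_QUEUE_STAT_CNTRS):

    #include <inttypes.h>

    struct rte_eth_stats stats;

    rte_eth_stats_get(port_id, &stats);
    printf("ipkts=%" PRIu64 " opkts=%" PRIu64
           " imissed=%" PRIu64 " oerrors=%" PRIu64 "\n",
           stats.ipackets, stats.opackets,
           stats.imissed, stats.oerrors);
    rte_eth_stats_reset(port_id);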

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 18/25] net/thunderx: add single and multi segment Tx functions
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (16 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 17/25] net/thunderx: add stats support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 19/25] net/thunderx: add single and multi segment Rx functions Jerin Jacob
                             ` (8 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/Makefile       |   2 +
 drivers/net/thunderx/nicvf_ethdev.c |   5 +-
 drivers/net/thunderx/nicvf_rxtx.c   | 255 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
 4 files changed, 354 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h

diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
index eb9f100..9079b5b 100644
--- a/drivers/net/thunderx/Makefile
+++ b/drivers/net/thunderx/Makefile
@@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
 
+CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 19ad85a..15f5cfc 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -66,7 +66,7 @@
 #include "base/nicvf_plat.h"
 
 #include "nicvf_ethdev.h"
-
+#include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
 static inline int
@@ -617,6 +617,9 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
 				NICVF_TX_FREE_MPOOL_THRESH :
 				tx_conf->tx_free_thresh);
+		txq->pool_free = nicvf_multi_pool_free_xmited_buffers;
+	} else {
+		txq->pool_free = nicvf_single_pool_free_xmited_buffers;
 	}
 
 	/* Allocate software ring */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
new file mode 100644
index 0000000..88a5152
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -0,0 +1,255 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_log.h>
+#include <rte_mbuf.h>
+#include <rte_prefetch.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+#include "nicvf_rxtx.h"
+#include "nicvf_logs.h"
+
+static inline void __hot
+fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Local variable sqe to avoid read from sq desc memory */
+	union sq_entry_t sqe;
+	uint64_t ol_flags;
+
+	/* Fill SQ header descriptor */
+	sqe.buff[0] = 0;
+	sqe.hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+	/* Number of sub-descriptors following this one */
+	sqe.hdr.subdesc_cnt = pkt->nb_segs;
+	sqe.hdr.tot_len = pkt->pkt_len;
+
+	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
+	if (unlikely(ol_flags)) {
+		/* L4 cksum */
+		if (ol_flags & PKT_TX_TCP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
+		else if (ol_flags & PKT_TX_UDP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
+		else
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
+		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
+
+		/* L3 cksum */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			sqe.hdr.csum_l3 = 1;
+			sqe.hdr.l3_offset = pkt->l2_len;
+		}
+	}
+
+	entry->buff[0] = sqe.buff[0];
+}
+
+void __hot
+nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	int j = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	void *obj_p[NICVF_MAX_TX_FREE_THRESH] __rte_cache_aligned;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head])
+			obj_p[j++] = txbuffs[head];
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	rte_mempool_put_bulk(sq->pool, obj_p, j);
+	sq->head = curr_head;
+	sq->xmit_bufs -= j;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+void __hot
+nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	uint32_t n = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head]) {
+			rte_pktmbuf_free_seg(txbuffs[head]);
+			n++;
+		}
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	sq->head = curr_head;
+	sq->xmit_bufs -= n;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline uint32_t __hot
+nicvf_free_tx_desc(struct nicvf_txq *sq)
+{
+	return ((sq->head - sq->tail - 1) & sq->qlen_mask);
+}
+
+/* Send Header + Packet */
+#define TX_DESC_PER_PKT 2
+
+static inline uint32_t __hot
+nicvf_free_xmitted_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint32_t free_desc = nicvf_free_tx_desc(sq);
+
+	if (free_desc < nb_pkts * TX_DESC_PER_PKT ||
+			sq->xmit_bufs > sq->tx_free_thresh) {
+		if (unlikely(sq->pool == NULL))
+			sq->pool = tx_pkts[0]->pool;
+
+		sq->pool_free(sq);
+		/* Freed now, recheck the number of free descs */
+		free_desc = nicvf_free_tx_desc(sq);
+	}
+	return free_desc;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+	uint32_t free_desc;
+	uint32_t tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt;
+	uint32_t qlen_mask = sq->qlen_mask;
+
+	tail = sq->tail;
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts && (int)free_desc >= TX_DESC_PER_PKT; i++) {
+		pkt = tx_pkts[i];
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+		free_desc -= TX_DESC_PER_PKT;
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += i;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, i * TX_DESC_PER_PKT);
+	return i;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	int i, k;
+	uint32_t used_desc, next_used_desc, used_bufs, free_desc, tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt, *seg;
+	uint32_t qlen_mask = sq->qlen_mask;
+	uint16_t nb_segs;
+
+	tail = sq->tail;
+	used_desc = 0;
+	used_bufs = 0;
+
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		pkt = tx_pkts[i];
+
+		nb_segs = pkt->nb_segs;
+
+		next_used_desc = used_desc + nb_segs + 1;
+		if (next_used_desc > free_desc)
+			break;
+		used_desc = next_used_desc;
+		used_bufs += nb_segs;
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		seg = pkt->next;
+		for (k = 1; k < nb_segs; k++) {
+			txbuffs[tail] = seg;
+			fill_sq_desc_gather(desc_ptr + tail, seg);
+			tail = (tail + 1) & qlen_mask;
+			seg = seg->next;
+		}
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += used_bufs;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, used_desc);
+	return nb_pkts;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
new file mode 100644
index 0000000..b1fdc69
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -0,0 +1,93 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_RXTX_H__
+#define __THUNDERX_NICVF_RXTX_H__
+
+#include <rte_ethdev.h>
+
+#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+
+#ifndef __hot
+#define __hot	__attribute__((hot))
+#endif
+
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return (i & ~3) + 3 - (i & 3);
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Use a local copy to avoid reading back from SQ descriptor memory */
+	union sq_entry_t sqe;
+
+	/* Fill the SQ gather entry */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+	sqe.gather.ld_type = NIC_SEND_LD_TYPE_E_LDT;
+	sqe.gather.size = pkt->data_len;
+	sqe.gather.addr = rte_mbuf_data_dma_addr(pkt);
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+#else
+
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return i;
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |
+			 (uint64_t)NIC_SEND_LD_TYPE_E_LDT << 58 |
+			 pkt->data_len;
+	entry->buff[1] = rte_mbuf_data_dma_addr(pkt);
+}
+#endif
+
+uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
+uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
+				  uint16_t pkts);
+
+void nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq);
+void nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq);
+
+#endif /* __THUNDERX_NICVF_RXTX_H__  */
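
The two nicvf_frag_num variants above account for byte ordering: on
big-endian CPUs the 16-bit receive-buffer size fields land in reversed
order within each 64-bit CQE word (four 16-bit fields per word), so the
index is mirrored within every group of four. A small self-contained check
of that mapping, using the same arithmetic as the big-endian variant:

#include <stdint.h>
#include <stdio.h>

/* Same expression as the big-endian nicvf_frag_num() above */
static uint16_t frag_num(uint16_t i)
{
	return (i & ~3) + 3 - (i & 3);
}

int main(void)
{
	uint16_t i;

	/* Prints: 3 2 1 0 7 6 5 4 -- reversed within groups of four */
	for (i = 0; i < 8; i++)
		printf("%u ", frag_num(i));
	printf("\n");
	return 0;
}
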
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 19/25] net/thunderx: add single and multi segment Rx functions
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (17 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 18/25] net/thunderx: add single and multi segment Tx functions Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 20/25] net/thunderx: implement supported ptype get and Rx queue count Jerin Jacob
                             ` (7 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.h |  33 ++++
 drivers/net/thunderx/nicvf_rxtx.c   | 317 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |   5 +
 3 files changed, 355 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b1af468..59fa19c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -70,4 +70,37 @@ nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+static inline uint64_t
+nicvf_mempool_phy_offset(struct rte_mempool *mp)
+{
+	struct rte_mempool_memhdr *hdr;
+
+	hdr = STAILQ_FIRST(&mp->mem_list);
+	assert(hdr != NULL);
+	return (uint64_t)((uintptr_t)hdr->addr - hdr->phys_addr);
+}
+
+static inline uint16_t
+nicvf_mbuff_meta_length(struct rte_mbuf *mbuf)
+{
+	return (uint16_t)((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+}
+
+/*
+ * Simple phy2virt functions assuming mbufs are in a single huge page
+ * V = P + offset
+ * P = V - offset
+ */
+static inline uintptr_t
+nicvf_mbuff_phy2virt(phys_addr_t phy, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(phy + mbuf_phys_off);
+}
+
+static inline uintptr_t
+nicvf_mbuff_virt2phy(uintptr_t virt, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(virt - mbuf_phys_off);
+}
+
 #endif /* __THUNDERX_NICVF_ETHDEV_H__  */
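
nicvf_mempool_phy_offset and the phy2virt/virt2phy helpers above reduce
address translation to a single constant per pool, which is valid under the
stated single-hugepage assumption: compute off = VA - PA once, then
translate in either direction with an add or a subtract. A standalone
sketch with made-up addresses (illustrative values only):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t va  = 0x7f0000001000ULL;  /* hypothetical mbuf VA */
	uint64_t pa  = 0x000180001000ULL;  /* its physical address */
	uint64_t off = va - pa;            /* V = P + off, P = V - off */

	uint64_t some_pa = 0x000180004a00ULL;	/* any PA in the same page */

	/* Round-trip: PA -> VA -> PA yields the original address */
	printf("virt=%" PRIx64 " phys=%" PRIx64 "\n",
	       some_pa + off, (some_pa + off) - off);
	return 0;
}
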
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 88a5152..fed0859 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -253,3 +253,320 @@ nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nicvf_addr_write(sq->sq_door, used_desc);
 	return nb_pkts;
 }
+
+static const uint32_t ptype_table[16][16] __rte_cache_aligned = {
+	[L3_NONE][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_NONE][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_NONE][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_NONE][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_NONE][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_NONE][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4][L4_NONE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV4][L4_IPFRAG] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4][L4_IPCOMP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_TCP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[L3_IPV4][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_GRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4][L4_NVGRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4_OPT][L4_NONE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_L3_IPV4,
+	[L3_IPV4_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_TCP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV4_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_GRE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6][L4_NONE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV6][L4_IPFRAG] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6][L4_IPCOMP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_TCP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[L3_IPV6][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_GRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6][L4_NVGRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6_OPT][L4_NONE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_L3_IPV4,
+	[L3_IPV6_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_TCP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV6_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_GRE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_ET_STOP][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_ET_STOP][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_ET_STOP][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_ET_STOP][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_ET_STOP][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_ET_STOP][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_OTHER][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_OTHER][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_OTHER][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_OTHER][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_OTHER][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_OTHER][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+};
+
+static inline uint32_t __hot
+nicvf_rx_classify_pkt(cqe_rx_word0_t cqe_rx_w0)
+{
+	return ptype_table[cqe_rx_w0.l3_type][cqe_rx_w0.l4_type];
+}
+
+static inline int __hot
+nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
+{
+	int i;
+	uint32_t ltail, next_tail;
+	struct nicvf_rbdr *rbdr = rxq->shared_rbdr;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	struct rbdr_entry_t *desc = rbdr->desc;
+	uint32_t qlen_mask = rbdr->qlen_mask;
+	uintptr_t door = rbdr->rbdr_door;
+	void *obj_p[NICVF_MAX_RX_FREE_THRESH] __rte_cache_aligned;
+
+	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
+		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		return 0;
+	}
+
+	NICVF_RX_ASSERT((unsigned int)to_fill <= (qlen_mask -
+		(nicvf_addr_read(rbdr->rbdr_status) & NICVF_RBDR_COUNT_MASK)));
+
+	next_tail = __atomic_fetch_add(&rbdr->next_tail, to_fill,
+					__ATOMIC_ACQUIRE);
+	ltail = next_tail;
+	for (i = 0; i < to_fill; i++) {
+		struct rbdr_entry_t *entry = desc + (ltail & qlen_mask);
+
+		entry->full_addr = nicvf_mbuff_virt2phy((uintptr_t)obj_p[i],
+							mbuf_phys_off);
+		ltail++;
+	}
+
+	while (__atomic_load_n(&rbdr->tail, __ATOMIC_RELAXED) != next_tail)
+		rte_pause();
+
+	__atomic_store_n(&rbdr->tail, ltail, __ATOMIC_RELEASE);
+	nicvf_addr_write(door, to_fill);
+	return to_fill;
+}
+
+static inline int32_t __hot
+nicvf_rx_pkts_to_process(struct nicvf_rxq *rxq, uint16_t nb_pkts,
+			 int32_t available_space)
+{
+	if (unlikely(available_space < nb_pkts))
+		rxq->available_space = nicvf_addr_read(rxq->cq_status)
+						& NICVF_CQ_CQE_COUNT_MASK;
+
+	return RTE_MIN(nb_pkts, available_space);
+}
+
+static inline void __hot
+nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
+		 struct rte_mbuf *pkt)
+{
+	if (likely(cqe_rx_w0.rss_alg)) {
+		pkt->hash.rss = cqe_rx_w2.rss_tag;
+		pkt->ol_flags |= PKT_RX_RSS_HASH;
+	}
+}
+
+uint16_t __hot
+nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint32_t i, to_process;
+	struct cqe_rx_t *cqe_rx;
+	struct rte_mbuf *pkt;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	cqe_rx_word3_t cqe_rx_w3;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t rb0_ptr, mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t cqe_head = rxq->head & cqe_mask;
+	int32_t available_space = rxq->available_space;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cqe_rx = (struct cqe_rx_t *)&desc[cqe_head];
+		NICVF_RX_ASSERT(((struct cq_entry_type_t *)cqe_rx)->cqe_type
+						 == CQE_TYPE_RX);
+
+		NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+		NICVF_LOAD_PAIR(cqe_rx_w2.u64, cqe_rx_w3.u64, &cqe_rx->word2);
+		rb0_ptr = *((uint64_t *)cqe_rx + rbptr_offset);
+		pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+				(rb0_ptr - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+		pkt->ol_flags = 0;
+		pkt->port = port_id;
+		pkt->data_len = cqe_rx_w3.rb0_sz;
+		pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+		pkt->nb_segs = 1;
+		pkt->pkt_len = cqe_rx_w3.rb0_sz;
+		pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+
+		nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+		rte_mbuf_refcnt_set(pkt, 1);
+		rx_pkts[i] = pkt;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(pkt);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += to_process;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
+						rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
+
+static inline uint16_t __hot
+nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
+			uint64_t mbuf_phys_off, uint8_t port_id,
+			struct rte_mbuf **rx_pkt, uint8_t rbptr_offset)
+{
+	struct rte_mbuf *pkt, *seg, *prev;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	uint16_t *rb_sz, nb_segs, seg_idx;
+	uint64_t *rb_ptr;
+
+	NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+	NICVF_RX_ASSERT(cqe_rx_w0.cqe_type == CQE_TYPE_RX);
+	cqe_rx_w2 = cqe_rx->word2;
+	rb_sz = &cqe_rx->word3.rb0_sz;
+	rb_ptr = (uint64_t *)cqe_rx + rbptr_offset;
+	nb_segs = cqe_rx_w0.rb_cnt;
+	pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[0] - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+	pkt->ol_flags = 0;
+	pkt->port = port_id;
+	pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = cqe_rx_w1.pkt_len;
+	pkt->data_len = rb_sz[nicvf_frag_num(0)];
+	rte_mbuf_refcnt_set(pkt, 1);
+	pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+	nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+
+	*rx_pkt = pkt;
+	prev = pkt;
+	for (seg_idx = 1; seg_idx < nb_segs; seg_idx++) {
+		seg = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[seg_idx], mbuf_phys_off);
+
+		prev->next = seg;
+		seg->data_len = rb_sz[nicvf_frag_num(seg_idx)];
+		seg->port = port_id;
+		seg->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(seg, 1);
+
+		prev = seg;
+	}
+	prev->next = NULL;
+	return nb_segs;
+}
+
+uint16_t __hot
+nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t i, to_process, cqe_head, buffers_consumed = 0;
+	int32_t available_space = rxq->available_space;
+	uint16_t nb_segs;
+	const uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	cqe_head = rxq->head & cqe_mask;
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		nb_segs = nicvf_process_cq_mseg_entry(cqe_rx, mbuf_phys_off,
+				port_id, rx_pkts + i, rbptr_offset);
+		buffers_consumed += nb_segs;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(rx_pkts[i]);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += buffers_consumed;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -=
+				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
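
nicvf_fill_rbdr above follows the usual multi-producer ring pattern:
reserve a range of slots with an atomic fetch-and-add on next_tail, fill
them, then wait for producers holding earlier reservations before
publishing the shared tail with a release store. A stripped-down sketch of
the same ordering in portable C11 atomics (ring size and payload here are
placeholders; the driver spins with rte_pause() rather than a bare loop):

#include <stdatomic.h>
#include <stdint.h>

struct ring {
	_Atomic uint32_t next_tail;	/* reservation counter */
	_Atomic uint32_t tail;		/* published position */
	uint32_t qlen_mask;		/* size - 1, power of two */
	uint64_t desc[256];
};

/* Reserve, fill, then publish n slots; safe for concurrent producers */
static void ring_fill(struct ring *r, const uint64_t *vals, uint32_t n)
{
	uint32_t start, i;

	start = atomic_fetch_add_explicit(&r->next_tail, n,
					  memory_order_acquire);
	for (i = 0; i < n; i++)
		r->desc[(start + i) & r->qlen_mask] = vals[i];

	/* Wait for earlier reservations to be published first */
	while (atomic_load_explicit(&r->tail, memory_order_relaxed) != start)
		;	/* spin */

	atomic_store_explicit(&r->tail, start + n, memory_order_release);
}

int main(void)
{
	static struct ring r = { .qlen_mask = 255 };
	uint64_t vals[4] = { 1, 2, 3, 4 };

	ring_fill(&r, vals, 4);
	return 0;
}
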
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index b1fdc69..d2ca2c9 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -33,6 +33,7 @@
 #ifndef __THUNDERX_NICVF_RXTX_H__
 #define __THUNDERX_NICVF_RXTX_H__
 
+#include <rte_byteorder.h>
 #include <rte_ethdev.h>
 
 #define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
@@ -83,6 +84,10 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
+uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
+
 uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
 uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
 				  uint16_t pkts);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 20/25] net/thunderx: implement supported ptype get and Rx queue count
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (18 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 19/25] net/thunderx: add single and multi segment Rx functions Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 21/25] net/thunderx: add Rx queue start and stop support Jerin Jacob
                             ` (6 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  9 ++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  2 ++
 3 files changed, 52 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 15f5cfc..8b8d9d9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -260,6 +260,45 @@ nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->oerrors = port_stats.tx_drops;
 }
 
+static const uint32_t *
+nicvf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	size_t copied;
+	static uint32_t ptypes[32];
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	static const uint32_t ptypes_pass1[] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+	};
+	static const uint32_t ptypes_pass2[] = {
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+	};
+	static const uint32_t ptypes_end = RTE_PTYPE_UNKNOWN;
+
+	copied = sizeof(ptypes_pass1);
+	memcpy(ptypes, ptypes_pass1, copied);
+	if (nicvf_hw_version(nic) == NICVF_PASS2) {
+		memcpy((char *)ptypes + copied, ptypes_pass2,
+			sizeof(ptypes_pass2));
+		copied += sizeof(ptypes_pass2);
+	}
+
+	memcpy((char *)ptypes + copied, &ptypes_end, sizeof(ptypes_end));
+	if (dev->rx_pkt_burst == nicvf_recv_pkts ||
+		dev->rx_pkt_burst == nicvf_recv_pkts_multiseg)
+		return ptypes;
+
+	return NULL;
+}
+
 static void
 nicvf_dev_stats_reset(struct rte_eth_dev *dev)
 {
@@ -888,6 +927,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.dev_supported_ptypes_get = nicvf_dev_supported_ptypes_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
@@ -895,6 +935,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.rx_queue_count           = nicvf_dev_rx_queue_count,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
 	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index fed0859..1c6d6a8 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -570,3 +570,12 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	return to_process;
 }
+
+uint32_t
+nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+
+	rxq = dev->data->rx_queues[queue_idx];
+	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d2ca2c9..ded87f3 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -84,6 +84,8 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
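
nicvf_dev_supported_ptypes_get in this patch assembles its answer by
appending the pass2-only tunnel types to a base list and terminating with
RTE_PTYPE_UNKNOWN. A reduced sketch of that concatenation, with small
integers standing in for the RTE_PTYPE_* constants and a hypothetical
capability flag in place of the hardware-version check:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PTYPE_END 0	/* stands in for RTE_PTYPE_UNKNOWN */

int main(void)
{
	static const uint32_t base[]  = { 1, 2, 3 };	/* always supported */
	static const uint32_t extra[] = { 4, 5 };	/* newer HW only */
	uint32_t ptypes[8];
	size_t copied, i;
	int hw_has_extra = 1;	/* hypothetical capability flag */

	copied = sizeof(base);
	memcpy(ptypes, base, copied);
	if (hw_has_extra) {
		memcpy((char *)ptypes + copied, extra, sizeof(extra));
		copied += sizeof(extra);
	}
	memcpy((char *)ptypes + copied, &(uint32_t){PTYPE_END},
	       sizeof(uint32_t));

	for (i = 0; ptypes[i] != PTYPE_END; i++)
		printf("%u ", ptypes[i]);
	printf("\n");	/* prints: 1 2 3 4 5 */
	return 0;
}
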
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 21/25] net/thunderx: add Rx queue start and stop support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (19 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 20/25] net/thunderx: implement supported ptype get and Rx queue count Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 22/25] net/thunderx: add Tx " Jerin Jacob
                             ` (5 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 167 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
 drivers/net/thunderx/nicvf_rxtx.h   |   1 +
 3 files changed, 186 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8b8d9d9..7a58cb3 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -562,6 +562,54 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+
+static inline int
+nicvf_configure_cpi(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint16_t qidx, qcnt;
+	int ret;
+
+	/* Count started rx queues */
+	for (qidx = qcnt = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		if (dev->data->rx_queue_state[qidx] ==
+		    RTE_ETH_QUEUE_STATE_STARTED)
+			qcnt++;
+
+	nic->cpi_alg = CPI_ALG_NONE;
+	ret = nicvf_mbox_config_cpi(nic, qcnt);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure CPI %d", ret);
+
+	return ret;
+}
+
+static int
+nicvf_configure_rss_reta(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	unsigned int idx, qmap_size;
+	uint8_t qmap[RTE_MAX_QUEUES_PER_PORT];
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	/* Prepare queue map */
+	for (idx = 0, qmap_size = 0; idx < dev->data->nb_rx_queues; idx++) {
+		if (dev->data->rx_queue_state[idx] ==
+				RTE_ETH_QUEUE_STATE_STARTED)
+			qmap[qmap_size++] = idx;
+	}
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = qmap[idx % qmap_size];
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 static void
 nicvf_dev_tx_queue_release(void *sq)
 {
@@ -687,6 +735,33 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	return 0;
 }
 
+static inline void
+nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+{
+	uint32_t rxq_cnt;
+	uint32_t nb_pkts, released_pkts = 0;
+	uint32_t refill_cnt = 0;
+	struct rte_eth_dev *dev = rxq->nic->eth_dev;
+	struct rte_mbuf *rx_pkts[NICVF_MAX_RX_FREE_THRESH];
+
+	if (dev->rx_pkt_burst == NULL)
+		return;
+
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, rxq->queue_id))) {
+		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
+					NICVF_MAX_RX_FREE_THRESH);
+		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
+		while (nb_pkts) {
+			rte_pktmbuf_free_seg(rx_pkts[--nb_pkts]);
+			released_pkts++;
+		}
+	}
+
+	refill_cnt += nicvf_dev_rbdr_refill(dev, rxq->queue_id);
+	PMD_DRV_LOG(INFO, "free_cnt=%d  refill_cnt=%d",
+		    released_pkts, refill_cnt);
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -695,6 +770,69 @@ nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 	rxq->recv_buffers = 0;
 }
 
+static inline int
+nicvf_start_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	/* Update rbdr pointer to all rxq */
+	rxq = dev->data->rx_queues[qidx];
+	rxq->shared_rbdr = nic->rbdr;
+
+	ret = nicvf_qset_rq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rq %d %d", qidx, ret);
+		goto config_rq_error;
+	}
+	ret = nicvf_qset_cq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure cq %d %d", qidx, ret);
+		goto config_cq_error;
+	}
+
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+
+config_cq_error:
+	nicvf_qset_cq_reclaim(nic, qidx);
+config_rq_error:
+	nicvf_qset_rq_reclaim(nic, qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret, other_error;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_rq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim rq %d %d", qidx, ret);
+
+	other_error = ret;
+	rxq = dev->data->rx_queues[qidx];
+	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_reset(rxq);
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim cq %d %d", qidx, ret);
+
+	other_error |= ret;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return other_error;
+}
+
 static void
 nicvf_dev_rx_queue_release(void *rx_queue)
 {
@@ -707,6 +845,33 @@ nicvf_dev_rx_queue_release(void *rx_queue)
 }
 
 static int
+nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	ret = nicvf_start_rx_queue(dev, qidx);
+	if (ret)
+		return ret;
+
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		return ret;
+
+	return nicvf_configure_rss_reta(dev);
+}
+
+static int
+nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	ret = nicvf_stop_rx_queue(dev, qidx);
+	ret |= nicvf_configure_cpi(dev);
+	ret |= nicvf_configure_rss_reta(dev);
+	return ret;
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -933,6 +1098,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
+	.rx_queue_start           = nicvf_dev_rx_queue_start,
+	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 1c6d6a8..eb51a72 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -579,3 +579,21 @@ nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
 	rxq = dev->data->rx_queues[queue_idx];
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
+
+uint32_t
+nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+	uint32_t to_process;
+	uint32_t rx_free;
+
+	rxq = dev->data->rx_queues[queue_idx];
+	to_process = rxq->recv_buffers;
+	while (rxq->recv_buffers > 0) {
+		rx_free = RTE_MIN(rxq->recv_buffers, NICVF_MAX_RX_FREE_THRESH);
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rx_free);
+	}
+
+	assert(rxq->recv_buffers == 0);
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index ded87f3..9dad8a5 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -85,6 +85,7 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 #endif
 
 uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
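
nicvf_configure_rss_reta in this patch rebuilds the default redirection
table from whichever Rx queues are currently started, walking them
round-robin with qmap[idx % qmap_size]; starting or stopping a queue
therefore rebalances RSS across the remainder. The resulting distribution,
sketched for a hypothetical three started queues:

#include <stdint.h>
#include <stdio.h>

#define RETA_SIZE 64	/* stands in for NIC_MAX_RSS_IDR_TBL_SIZE */

int main(void)
{
	uint8_t qmap[] = { 0, 2, 3 };	/* example started queue ids */
	unsigned int qmap_size = sizeof(qmap) / sizeof(qmap[0]);
	uint8_t reta[RETA_SIZE];
	unsigned int idx;

	/* Same round-robin fill as the driver: qmap[idx % qmap_size] */
	for (idx = 0; idx < RETA_SIZE; idx++)
		reta[idx] = qmap[idx % qmap_size];

	for (idx = 0; idx < 8; idx++)
		printf("%u ", reta[idx]);	/* 0 2 3 0 2 3 0 2 */
	printf("...\n");
	return 0;
}
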
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 22/25] net/thunderx: add Tx queue start and stop support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (20 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 21/25] net/thunderx: add Rx queue start and stop support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 23/25] net/thunderx: add device start, stop and close support Jerin Jacob
                             ` (4 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 59 +++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 7a58cb3..3c88290 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -562,6 +562,51 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+static inline int
+nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	txq = dev->data->tx_queues[qidx];
+	txq->pool = NULL;
+	ret = nicvf_qset_sq_config(nicvf_pmd_priv(dev), qidx, txq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure sq %d %d", qidx, ret);
+		goto config_sq_error;
+	}
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return ret;
+
+config_sq_error:
+	nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim sq %d %d", qidx, ret);
+
+	txq = dev->data->tx_queues[qidx];
+	nicvf_tx_queue_release_mbufs(txq);
+	nicvf_tx_queue_reset(txq);
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return ret;
+}
 
 static inline int
 nicvf_configure_cpi(struct rte_eth_dev *dev)
@@ -872,6 +917,18 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
 }
 
 static int
+nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	return nicvf_start_tx_queue(dev, qidx);
+}
+
+static int
+nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	return nicvf_stop_tx_queue(dev, qidx);
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -1100,6 +1157,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_start           = nicvf_dev_rx_queue_start,
 	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
+	.tx_queue_start           = nicvf_dev_tx_queue_start,
+	.tx_queue_stop            = nicvf_dev_tx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 23/25] net/thunderx: add device start, stop and close support
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (21 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 22/25] net/thunderx: add Tx " Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 24/25] net/thunderx: updated driver documentation and release notes Jerin Jacob
                             ` (3 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 467 ++++++++++++++++++++++++++++++++++++
 1 file changed, 467 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3c88290..7d545f9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,8 @@
 #include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
+static void nicvf_dev_stop(struct rte_eth_dev *dev);
+
 static inline int
 nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
 			       struct rte_eth_link *link)
@@ -534,6 +536,82 @@ nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+{
+	struct nicvf_rbdr *rbdr;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+
+	assert(nic->rbdr == NULL);
+	rbdr = rte_zmalloc_socket("rbdr", sizeof(struct nicvf_rbdr),
+				  RTE_CACHE_LINE_SIZE, nic->node);
+	if (rbdr == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr");
+		return -ENOMEM;
+	}
+
+	ring_size = sizeof(struct rbdr_entry_t) * desc_cnt;
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+				   NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rbdr->phys = rz->phys_addr;
+	rbdr->tail = 0;
+	rbdr->next_tail = 0;
+	rbdr->desc = rz->addr;
+	rbdr->buffsz = buffsz;
+	rbdr->qlen_mask = desc_cnt - 1;
+	rbdr->rbdr_status =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_STATUS0;
+	rbdr->rbdr_door =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_DOOR;
+
+	nic->rbdr = rbdr;
+	return 0;
+}
+
+static void
+nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+{
+	uint16_t qidx;
+	void *obj;
+	struct nicvf_rxq *rxq;
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		if (rxq->precharge_cnt) {
+			obj = (void *)nicvf_mbuff_phy2virt(phy,
+							   rxq->mbuf_phys_off);
+			rte_mempool_put(rxq->pool, obj);
+			rxq->precharge_cnt--;
+			break;
+		}
+	}
+}
+
+static inline void
+nicvf_rbdr_release_mbufs(struct nicvf *nic)
+{
+	uint32_t qlen_mask, head;
+	struct rbdr_entry_t *entry;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	qlen_mask = rbdr->qlen_mask;
+	head = rbdr->head;
+	while (head != rbdr->tail) {
+		entry = rbdr->desc + head;
+		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		head++;
+		head = head & qlen_mask;
+	}
+}
+
 static inline void
 nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
 {
@@ -629,6 +707,31 @@ nicvf_configure_cpi(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static inline int
+nicvf_configure_rss(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t rsshf;
+	int ret = -EINVAL;
+
+	rsshf = nicvf_rss_ethdev_to_nic(nic,
+			dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
+	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
+		    dev->data->dev_conf.rxmode.mq_mode,
+		    nic->eth_dev->data->nb_rx_queues,
+		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+
+	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+		ret = nicvf_rss_term(nic);
+	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		ret = nicvf_rss_config(nic,
+				       nic->eth_dev->data->nb_rx_queues, rsshf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
+
+	return ret;
+}
+
 static int
 nicvf_configure_rss_reta(struct rte_eth_dev *dev)
 {
@@ -673,6 +776,48 @@ nicvf_dev_tx_queue_release(void *sq)
 	}
 }
 
+static void
+nicvf_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct nicvf_txq *txq;
+	size_t i;
+	bool multiseg = false;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if ((txq->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
+			multiseg = true;
+			break;
+		}
+	}
+
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (multiseg) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts;
+	}
+
+	if (txq->pool_free == nicvf_single_pool_free_xmited_buffers)
+		PMD_DRV_LOG(DEBUG, "Using single-mempool tx free method");
+	else
+		PMD_DRV_LOG(DEBUG, "Using multi-mempool tx free method");
+}
+
+static void
+nicvf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts;
+	}
+}
+
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
@@ -1064,6 +1209,317 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static nicvf_phys_addr_t
+rbdr_rte_mempool_get(void *opaque)
+{
+	uint16_t qidx;
+	uintptr_t mbuf;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv((struct rte_eth_dev *)opaque);
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		/* Maintain equal buffer count across all pools */
+		if (rxq->precharge_cnt >= rxq->qlen_mask)
+			continue;
+		rxq->precharge_cnt++;
+		mbuf = (uintptr_t)rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf)
+			return nicvf_mbuff_virt2phy(mbuf, rxq->mbuf_phys_off);
+	}
+	return 0;
+}
+
+static int
+nicvf_dev_start(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	uint32_t buffsz = 0, rbdrsz = 0;
+	uint32_t total_rxq_desc, nb_rbdr_desc, exp_buffs;
+	uint64_t mbuf_phys_off = 0;
+	struct nicvf_rxq *rxq;
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct rte_mbuf *mbuf;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+	uint16_t mtu;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Userspace process exited without proper shutdown in last run */
+	if (nicvf_qset_rbdr_active(nic, 0))
+		nicvf_dev_stop(dev);
+
+	/*
+	 * Thunderx nicvf PMD can support more than one pool per port only when
+	 * 1) Data payload size is same across all the pools in given port
+	 * AND
+	 * 2) All mbuffs in the pools are from the same hugepage
+	 * AND
+	 * 3) Mbuff metadata size is same across all the pools in given port
+	 *
+	 * This is to support existing applications that use multiple pools per
+	 * port. The multipool-for-QoS use case, however, is not addressed.
+	 *
+	 */
+
+	/* Validate RBDR buff size */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		mbp_priv = rte_mempool_get_priv(rxq->pool);
+		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+		if (buffsz % 128) {
+			PMD_INIT_LOG(ERR, "rxbuf size must be multiple of 128");
+			return -EINVAL;
+		}
+		if (rbdrsz == 0)
+			rbdrsz = buffsz;
+		if (rbdrsz != buffsz) {
+			PMD_INIT_LOG(ERR, "buffsz not same, qid=%d (%d/%d)",
+				     qidx, rbdrsz, buffsz);
+			return -EINVAL;
+		}
+	}
+
+	/* Validate mempool attributes */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
+		mbuf = rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "Failed allocate mbuf qid=%d pool=%s",
+				     qidx, rxq->pool->name);
+			return -ENOMEM;
+		}
+		rxq->mbuf_phys_off -= nicvf_mbuff_meta_length(mbuf);
+		rxq->mbuf_phys_off -= RTE_PKTMBUF_HEADROOM;
+		rte_pktmbuf_free(mbuf);
+
+		if (mbuf_phys_off == 0)
+			mbuf_phys_off = rxq->mbuf_phys_off;
+		if (mbuf_phys_off != rxq->mbuf_phys_off) {
+			PMD_INIT_LOG(ERR, "pool params not same, %s %" PRIx64,
+				     rxq->pool->name, mbuf_phys_off);
+			return -EINVAL;
+		}
+	}
+
+	/* Check the level of buffers in the pool */
+	total_rxq_desc = 0;
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		/* Count total numbers of rxq descs */
+		total_rxq_desc += rxq->qlen_mask + 1;
+		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
+		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		if (rte_mempool_count(rxq->pool) < exp_buffs) {
+			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
+				     rxq->pool->name,
+				     rte_mempool_count(rxq->pool),
+				     exp_buffs);
+			return -ENOENT;
+		}
+	}
+
+	/* Check RBDR desc overflow */
+	ret = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	if (ret == 0) {
+		PMD_INIT_LOG(ERR, "Reached RBDR desc limit, reduce nr desc");
+		return -ENOMEM;
+	}
+
+	/* Enable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable qset %d", ret);
+		return ret;
+	}
+
+	/* Allocate RBDR and RBDR ring desc */
+	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
+		goto qset_reclaim;
+	}
+
+	/* Enable and configure RBDR registers */
+	ret = nicvf_qset_rbdr_config(nic, 0);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rbdr %d", ret);
+		goto qset_rbdr_free;
+	}
+
+	/* Fill rte_mempool buffers in RBDR pool and precharge it */
+	ret = nicvf_qset_rbdr_precharge(nic, 0, rbdr_rte_mempool_get,
+					dev, total_rxq_desc);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fill rbdr %d", ret);
+		goto qset_rbdr_reclaim;
+	}
+
+	PMD_DRV_LOG(INFO, "Filled %d out of %d entries in RBDR",
+		     nic->rbdr->tail, nb_rbdr_desc);
+
+	/* Configure RX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		ret = nicvf_start_rx_queue(dev, qidx);
+		if (ret)
+			goto start_rxq_error;
+	}
+
+	/* Configure VLAN Strip */
+	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
+
+	/* Configure TX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+		ret = nicvf_start_tx_queue(dev, qidx);
+		if (ret)
+			goto start_txq_error;
+	}
+
+	/* Configure CPI algorithm */
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		goto start_txq_error;
+
+	/* Configure RSS */
+	ret = nicvf_configure_rss(dev);
+	if (ret)
+		goto qset_rss_error;
+
+	/* Configure loopback */
+	ret = nicvf_loopback_config(nic, dev->data->dev_conf.lpbk_mode);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure loopback %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Reset all statistics counters attached to this port */
+	ret = nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, 0xFFFF, 0xFFFF);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to reset stat counters %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Setup scatter mode if needed by jumbo */
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * VLAN_TAG_SIZE > buffsz)
+		dev->data->scattered_rx = 1;
+	if (rx_conf->enable_scatter)
+		dev->data->scattered_rx = 1;
+
+	/* Setup MTU based on max_rx_pkt_len or default */
+	mtu = dev->data->dev_conf.rxmode.jumbo_frame ?
+		dev->data->dev_conf.rxmode.max_rx_pkt_len
+			-  ETHER_HDR_LEN - ETHER_CRC_LEN
+		: ETHER_MTU;
+
+	if (nicvf_dev_set_mtu(dev, mtu)) {
+		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
+		return -EBUSY;
+	}
+
+	/* Configure callbacks based on scatter mode */
+	nicvf_set_tx_function(dev);
+	nicvf_set_rx_function(dev);
+
+	/* Done; let the PF turn the BGX RX and TX switches to ON position */
+	nicvf_mbox_cfg_done(nic);
+	return 0;
+
+qset_rss_error:
+	nicvf_rss_term(nic);
+start_txq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+start_rxq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+qset_rbdr_reclaim:
+	nicvf_qset_rbdr_reclaim(nic, 0);
+	nicvf_rbdr_release_mbufs(nic);
+qset_rbdr_free:
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+qset_reclaim:
+	nicvf_qset_reclaim(nic);
+	return ret;
+}
+
+static void
+nicvf_dev_stop(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Let the PF turn the BGX RX and TX switches to OFF position */
+	nicvf_mbox_shutdown(nic);
+
+	/* Disable loopback */
+	ret = nicvf_loopback_config(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable loopback %d", ret);
+
+	/* Disable VLAN Strip */
+	nicvf_vlan_hw_strip(nic, 0);
+
+	/* Reclaim sq */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+
+	/* Reclaim rq */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+
+	/* Reclaim RBDR */
+	ret = nicvf_qset_rbdr_reclaim(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim RBDR %d", ret);
+
+	/* Move all charged buffers in RBDR back to pool */
+	if (nic->rbdr != NULL)
+		nicvf_rbdr_release_mbufs(nic);
+
+	/* Reclaim CPI configuration */
+	if (!nic->sqs_mode) {
+		ret = nicvf_mbox_config_cpi(nic, 0);
+		if (ret)
+			PMD_INIT_LOG(ERR, "Failed to reclaim CPI config");
+	}
+
+	/* Disable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable qset %d", ret);
+
+	/* Disable all interrupts */
+	nicvf_disable_all_interrupts(nic);
+
+	/* Free RBDR SW structure */
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+}
+
+static void
+nicvf_dev_close(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	nicvf_dev_stop(dev);
+	nicvf_periodic_alarm_stop(nic);
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -1144,7 +1600,10 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
+	.dev_start                = nicvf_dev_start,
+	.dev_stop                 = nicvf_dev_stop,
 	.link_update              = nicvf_dev_link_update,
+	.dev_close                = nicvf_dev_close,
 	.stats_get                = nicvf_dev_stats_get,
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
@@ -1179,6 +1638,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &nicvf_eth_dev_ops;
 
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		nicvf_set_tx_function(eth_dev);
+		nicvf_set_rx_function(eth_dev);
+		return 0;
+	}
+
 	pci_dev = eth_dev->pci_dev;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 24/25] net/thunderx: updated driver documentation and release notes
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (22 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 23/25] net/thunderx: add device start, stop and close support Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-14 19:06           ` [PATCH v5 25/25] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
                             ` (2 subsequent siblings)
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Slawomir Rosek

Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and the release notes.

Changed "*" to "P" in overview.rst to mark partially supported
features, as "*" was creating alignment issues with the Sphinx table.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/nics/index.rst              |   1 +
 doc/guides/nics/overview.rst           |  96 ++++-----
 doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_16_07.rst |   1 +
 4 files changed, 404 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 0b13698..ddf75f4 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
     nfp
     qede
     szedata2
+    thunderx
     virtio
     vhost
     vmxnet3
diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
index 0bd8fae..df28510 100644
--- a/doc/guides/nics/overview.rst
+++ b/doc/guides/nics/overview.rst
@@ -74,40 +74,40 @@ Most of these differences are summarized below.
 
 .. table:: Features availability in networking drivers
 
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
-   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s v v v v x
-                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h i i m e
-                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e o r r x n
-                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d s t t n v
-                        c x x i e 0       . v v   f e e e e k k k k     e         v   a t i i e i
-                        k   v n           . f f       . v v   . v v               f   t   o o t r
-                        e   f g           .   .       . f f   . f f                   a     . 3 t
-                        t                 v   v       v   v   v   v                   2     v
-                                          e   e       e   e   e   e                         e
-                                          c   c       c   c   c   c                         c
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   Feature              a b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s t v v v v x
+                        f n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h h i i m e
+                        p x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e u o r r x n
+                        a 2 2 d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d n s t t n v
+                        c x x i e 0       . v v   f e e e e k k k k     e         v   a d t i i e i
+                        k   v n           . f f       . v v   . v v               f   t e   o o t r
+                        e   f g           .   .       . f f   . f f                   a r     . 3 t
+                        t                 v   v       v   v   v   v                   2 x     v
+                                          e   e       e   e   e   e                           e
+                                          c   c       c   c   c   c                           c
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    Speed capabilities
-   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y
-   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y
-   Queue status event                                                                   Y
+   Link status            Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y Y
+   Link status event      Y Y     Y     Y Y     Y   Y Y             Y Y         Y Y     Y Y
+   Queue status event                                                                     Y
    Rx interrupt                   Y     Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y   Y Y
-   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y
-   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y   Y
+   Queue start/stop             Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y Y   Y Y
+   MTU update                   Y Y Y           Y   Y Y Y Y         Y Y                 Y
+   Jumbo frame                  Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   Scattered Rx                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y Y   Y
    LRO                                              Y Y Y Y
    TSO                          Y   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y   Y Y
-   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y   Y Y
-   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
-   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y       Y Y
-   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y
-   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
-   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
+   Promiscuous mode       Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y Y   Y Y
+   Allmulticast mode            Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y Y   Y Y
+   Unicast MAC filter     Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
+   Multicast MAC filter   Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y         Y Y
+   RSS hash                     Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y     Y
+   RSS key update                   Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
+   RSS reta update                  Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
    VMDq                                 Y Y     Y   Y Y     Y Y
-   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y
+   SR-IOV                   Y       Y   Y Y     Y   Y Y             Y Y           Y     Y
    DCB                                  Y Y     Y   Y Y
-   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
+   VLAN filter                    Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
    Ethertype filter                     Y Y     Y   Y Y
    N-tuple filter                               Y   Y Y
    SYN filter                                   Y   Y Y
@@ -118,37 +118,37 @@ Most of these differences are summarized below.
    Flow control                 Y Y     Y Y     Y   Y Y                         Y Y
    Rate limitation                                  Y Y
    Traffic mirroring                    Y Y         Y Y
-   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
-   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
+   CRC offload                  Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     Y
+   VLAN offload                 Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     P
    QinQ offload                   Y     Y   Y   Y Y Y   Y
-   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
-   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   L3 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
+   L4 checksum offload          Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
    Inner L3 checksum                Y   Y   Y       Y   Y           Y
    Inner L4 checksum                Y   Y   Y       Y   Y           Y
-   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y
+   Packet type parsing          Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y     Y
    Timesync                             Y Y     Y   Y Y
-   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y   Y Y
+   Basic stats            Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y Y
+   Extended stats                   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y   Y
+   Stats per queue              Y                   Y Y     Y Y Y Y Y Y         Y Y   Y Y   Y Y
    EEPROM dump                                  Y   Y Y
-   Registers dump                               Y Y Y Y Y Y
-   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
-   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y       Y Y
-   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
+   Registers dump                               Y Y Y Y Y Y                             Y
+   Multiprocess aware                   Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   BSD nic_uio                  Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                         Y Y
+   Linux UIO              Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y         Y Y
+   Linux VFIO                   Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y   Y Y
    Other kdrv                                                       Y Y               Y
-   ARMv7                                                                      Y           Y Y
-   ARMv8                                                                      Y           Y Y
+   ARMv7                                                                      Y             Y Y
+   ARMv8                                                                      Y         Y   Y Y
    Power8                                                           Y Y       Y
    TILE-Gx                                                                    Y
-   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y         Y Y Y
-   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y   Y
+   x86-32                       Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y           Y Y Y
+   x86-64                 Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y   Y Y Y
+   Usage doc              Y Y   Y     Y                             Y Y       Y Y Y   Y Y   Y
    Design doc
    Perf doc
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
 
 .. Note::
 
-   Features marked with "*" are partially supported. Refer to the appropriate
+   Features marked with "P" are partially supported. Refer to the appropriate
    NIC guide in the following sections for details.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
new file mode 100644
index 0000000..e38f260
--- /dev/null
+++ b/doc/guides/nics/thunderx.rst
@@ -0,0 +1,354 @@
+..  BSD LICENSE
+    Copyright (C) Cavium networks Ltd. 2016.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Cavium networks nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ThunderX NICVF Poll Mode Driver
+===============================
+
+The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
+support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
+as well as its virtual functions (VFs) in an SR-IOV context.
+
+More information can be found at `Cavium Networks Official Website
+<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the ThunderX PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Scatter and gather for TX and RX
+- VLAN stripping
+- SR-IOV VF
+- NUMA support
+
+Supported ThunderX SoCs
+-----------------------
+- CN88xx
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file; a short
+example of toggling one of them is shown after this list.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver. By default
+  it is enabled only in the defconfig_arm64-thunderx-* configs.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX`` (default ``n``)
+
+  Toggle display of PF mailbox related run-time check messages.
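+
+As a minimal illustrative sketch (the ``sed`` edit of ``config/common_base``
+is just one possible workflow, not a required step), a debug option can be
+enabled before compiling:
+
+.. code-block:: console
+
+   sed -i 's,\(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=\)n,\1y,' config/common_base
+   make config T=arm64-thunderx-linuxapp-gcc install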
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the ThunderX NICVF PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-thunderx-linuxapp-gcc install
+
+Linux
+-----
+
+.. _thunderx_testpmd_example:
+
+Running testpmd
+~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with a ThunderX NIC VF
+device managed by ``librte_pmd_thunderx_nicvf`` on a Linux operating system.
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+   .. _thunderx_vfio_noiommu:
+
+#. Enable **VFIO-NOIOMMU** mode (optional):
+
+   .. code-block:: console
+
+      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+
+   .. note::
+
+      **VFIO-NOIOMMU** is required only when running in a VM context and should not be enabled otherwise.
+      See also :ref:`SR-IOV: Prerequisites and sample Application Notes <thunderx_sriov_example>`.
+
+#. Bind the ThunderX NIC VF device to ``vfio-pci`` loaded in the previous step:
+
+   Setup VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
+        -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx \
+        --port-topology=loop
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+
+      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
+
+      ...
+      EAL:   probe driver: 177d:11 rte_nicvf_pmd
+      EAL:   using IOMMU type 1 (Type 1)
+      EAL:   PCI memory mapped at 0x3ffade50000
+      EAL: Trying to map BAR 4 that contains the MSI-X table.
+           Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
+      EAL:   PCI memory mapped at 0x3ffadc60000
+      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
+      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
+           loopback_supported=true
+      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
+      Interactive-mode selected
+      Configuring Port 0 (socket 0)
+      ...
+
+      PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
+      Port 0: A6:C6:D9:17:78:01
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+.. _thunderx_sriov_example:
+
+SR-IOV: Prerequisites and sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The current ThunderX NIC PF/VF kernel modules automatically map each physical
+Ethernet port to a virtual function (VF) and present it as a PCIe-like
+SR-IOV device. This section provides instructions on configuring SR-IOV
+under a Linux OS.
+
+#. Verify PF devices capabilities using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
+              ...
+              Kernel driver in use: thunder-nic
+              ...
+
+   .. note::
+
+      Unless the ``thunder-nic`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
+
+#. Verify VF devices capabilities and drivers using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+   .. note::
+
+      Unless the ``thunder-nicvf`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
+
+#. Verify PF/VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
+      0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
+      ...
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind VF devices to ``vfio-pci`` using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.1
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Verify VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.1 'Device 0011' drv=vfio-pci unused=
+      0002:01:00.2 'Device 0011' drv=vfio-pci unused=
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      ...
+
+#. Pass VF device to VM context (PCIe Passthrough):
+
+   The VF devices may be passed through to the guest VM using qemu,
+   virt-manager, virsh, etc.
+   ``librte_pmd_thunderx_nicvf`` or ``thunder-nicvf`` should be used to bind
+   the VF devices in the guest VM in :ref:`VFIO-NOIOMMU <thunderx_vfio_noiommu>` mode.
+
+   Example qemu guest launch command:
+
+   .. code-block:: console
+
+      sudo qemu-system-aarch64 -name vm1 \
+      -machine virt,gic_version=3,accel=kvm,usb=off \
+      -cpu host -m 4096 \
+      -smp 4,sockets=1,cores=8,threads=1 \
+      -nographic -nodefaults \
+      -kernel <kernel image> \
+      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
+      -device vfio-pci,host=0002:01:00.1 \
+      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw  \
+      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
+      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
+      -device virtio-net-device,netdev=net0 \
+      -serial stdio \
+      -mem-path /dev/huge
+
+#. Refer to the section :ref:`Running testpmd <thunderx_testpmd_example>` for
+   instructions on how to launch the ``testpmd`` application.
+
+Limitations
+-----------
+
+CRC striping
+~~~~~~~~~~~~
+
+The ThunderX SoC family NICs strip the CRC of every packet coming into the
+host interface. Consequently, the CRC will be stripped even when the
+``rxmode.hw_strip_crc`` member is set to 0 in ``struct rte_eth_conf``.
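+
+The following minimal sketch illustrates this (``port_id`` is an assumed,
+already probed port; error handling omitted):
+
+.. code-block:: c
+
+   #include <rte_ethdev.h>
+
+   struct rte_eth_conf conf = {
+           .rxmode = {
+                   /* Requesting to keep the CRC has no effect here:
+                    * the ThunderX hardware always strips it. */
+                   .hw_strip_crc = 0,
+           },
+   };
+
+   rte_eth_dev_configure(port_id, 1, 1, &conf);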
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support a maximum frame size of 9200 bytes
+(a 9K jumbo frame). This value is fixed and cannot be changed, so even when
+the ``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a
+value lower than 9200, frames up to 9200 bytes can still reach the host
+interface.
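+
+For example (an illustrative sketch; ``port_id`` and the chosen length are
+assumptions):
+
+.. code-block:: c
+
+   struct rte_eth_conf conf = {
+           .rxmode = {
+                   .jumbo_frame    = 1,
+                   /* Requested limit; on ThunderX, frames up to
+                    * 9200 bytes may still reach the host. */
+                   .max_rx_pkt_len = 8000,
+           },
+   };
+
+   rte_eth_dev_configure(port_id, 1, 1, &conf);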
+
+Maximum packet segments
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support up to 12 segments per packet when
+working in scatter/gather mode. Consequently, setting the MTU fails with
+``EINVAL`` when the resulting frame size does not fit in the maximum number
+of segments.
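+
+An illustrative sketch of detecting this case (``port_id`` and the MTU value
+are assumptions):
+
+.. code-block:: c
+
+   #include <stdio.h>
+   #include <rte_ethdev.h>
+
+   int ret = rte_eth_dev_set_mtu(port_id, 9000);
+   if (ret == -EINVAL)
+           /* The frame would need more than 12 segments. */
+           printf("MTU rejected by the PMD\n");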
+
+Limited VFs
+~~~~~~~~~~~
+
+The ThunderX SoC family NICs have 128 VFs, and each VF has 8 RX and 8 TX
+queues. The current driver implementation has a one-to-one mapping between a
+physical port and a VF, hence only a limited number of VFs can be used.
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 30e78d4..29b8b52 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -47,6 +47,7 @@ New Features
   * Dropped specific Xen Dom0 code.
   * Dropped specific anonymous mempool code in testpmd.
 
+* **Added new poll-mode driver for ThunderX nicvf inbuilt NIC device.**
 
 Resolved Issues
 ---------------
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v5 25/25] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (23 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 24/25] net/thunderx: updated driver documentation and release notes Jerin Jacob
@ 2016-06-14 19:06           ` Jerin Jacob
  2016-06-15 14:39           ` [PATCH v5 00/25] DPDK PMD for ThunderX NIC device Bruce Richardson
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
  26 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-14 19:06 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e8558f..625423f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -336,6 +336,12 @@ M: Sony Chacko <sony.chacko@qlogic.com>
 F: drivers/net/qede/
 F: doc/guides/nics/qede.rst
 
+Cavium ThunderX nicvf
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
+F: drivers/net/thunderx/
+F: doc/guides/nics/thunderx.rst
+
 RedHat virtio
 M: Huawei Xie <huawei.xie@intel.com>
 M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* Re: [PATCH v5 00/25] DPDK PMD for ThunderX NIC device
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (24 preceding siblings ...)
  2016-06-14 19:06           ` [PATCH v5 25/25] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-06-15 14:39           ` Bruce Richardson
  2016-06-16  9:31             ` Jerin Jacob
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
  26 siblings, 1 reply; 204+ messages in thread
From: Bruce Richardson @ 2016-06-15 14:39 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, ferruh.yigit

On Wed, Jun 15, 2016 at 12:36:15AM +0530, Jerin Jacob wrote:
> This patch set provides the initial version of DPDK PMD for the
> built-in NIC device in Cavium ThunderX SoC family.
> 
> Implemented features and ThunderX nicvf PMD documentation added
> in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> respectively in this patch set.
> 
> These patches are checked using checkpatch.sh with following
> additional ignore option:
>     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> CAMELCASE - To accommodate PRIx64
> BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> 
> This patch set is based on DPDK 16.07-RC1
> and tested with git HEAD change-set
> ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
> following depended patch
> 
> http://dpdk.org/dev/patchwork/patch/11826/
> ethdev: add tunnel and port RSS offload types
> 
Hi Jerin,

hopefully a final set of comments before merge on this set, as it's looking
very good now.

* Two patches look like they need to be split, as they are combining multiple
  functions into one patch. They are:
    [dpdk-dev,v5,16/25] net/thunderx: add MTU set and promiscuous enable support
    [dpdk-dev,v5,20/25] net/thunderx: implement supported ptype get and Rx queue count
  For the other patches which add multiple functions, the functions seem to be
  logically related so I don't think there is a problem

* check-git-logs.sh is warning about a few of the commit messages being too long.
  Splitting patch 20 should fix one of those, but there are a few remaining.
  A number of titles refer to ThunderX in the message, but this is probably
  unnecessary, as the prefix already contains "net/thunderx" in it.

Regards,
/Bruce

PS: Please also baseline patches on dpdk-next-net/rel_16_07 tree. They currently
apply fine to that tree so there is no problem, but just in case later commits
break things, that is the tree that net patches should be based on.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v5 00/25] DPDK PMD for ThunderX NIC device
  2016-06-15 14:39           ` [PATCH v5 00/25] DPDK PMD for ThunderX NIC device Bruce Richardson
@ 2016-06-16  9:31             ` Jerin Jacob
  2016-06-16 10:58               ` Bruce Richardson
  0 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-16  9:31 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, thomas.monjalon, ferruh.yigit

On Wed, Jun 15, 2016 at 03:39:25PM +0100, Bruce Richardson wrote:
> On Wed, Jun 15, 2016 at 12:36:15AM +0530, Jerin Jacob wrote:
> > This patch set provides the initial version of DPDK PMD for the
> > built-in NIC device in Cavium ThunderX SoC family.
> > 
> > Implemented features and ThunderX nicvf PMD documentation added
> > in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> > respectively in this patch set.
> > 
> > These patches are checked using checkpatch.sh with following
> > additional ignore option:
> >     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> > CAMELCASE - To accommodate PRIx64
> > BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> > 
> > This patch set is based on DPDK 16.07-RC1
> > and tested with git HEAD change-set
> > ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
> > following depended patch
> > 
> > http://dpdk.org/dev/patchwork/patch/11826/
> > ethdev: add tunnel and port RSS offload types
> > 
> Hi Jerin,
> 
> hopefully a final set of comments before merge on this set, as it's looking
> very good now.
> 
> * Two patches look like they need to be split, as they are combining multiple
>   functions into one patch. They are:
>     [dpdk-dev,v5,16/25] net/thunderx: add MTU set and promiscuous enable support
>     [dpdk-dev,v5,20/25] net/thunderx: implement supported ptype get and Rx queue count
>   For the other patches which add multiple functions, the functions seem to be
>   logically related so I don't think there is a problem
> 
> * check-git-logs.sh is warning about a few of the commit messages being too long.
>   Splitting patch 20 should fix one of those, but there are a few remaining.
>   A number of titles refer to ThunderX in the message, but this is probably
>   unnecessary, as the prefix already contains "net/thunderx" in it.

OK. I will send the next revision.

> 
> Regards,
> /Bruce
> 
> PS: Please also baseline patches on dpdk-next-net/rel_16_07 tree. They currently
> apply fine to that tree so there is no problem, but just in case later commits
> break things, that is the tree that net patches should be based on.

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v5 00/25] DPDK PMD for ThunderX NIC device
  2016-06-16  9:31             ` Jerin Jacob
@ 2016-06-16 10:58               ` Bruce Richardson
  2016-06-16 11:17                 ` Jerin Jacob
  0 siblings, 1 reply; 204+ messages in thread
From: Bruce Richardson @ 2016-06-16 10:58 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, ferruh.yigit

On Thu, Jun 16, 2016 at 03:01:02PM +0530, Jerin Jacob wrote:
> On Wed, Jun 15, 2016 at 03:39:25PM +0100, Bruce Richardson wrote:
> > On Wed, Jun 15, 2016 at 12:36:15AM +0530, Jerin Jacob wrote:
> > > This patch set provides the initial version of DPDK PMD for the
> > > built-in NIC device in Cavium ThunderX SoC family.
> > > 
> > > Implemented features and ThunderX nicvf PMD documentation added
> > > in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> > > respectively in this patch set.
> > > 
> > > These patches are checked using checkpatch.sh with following
> > > additional ignore option:
> > >     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> > > CAMELCASE - To accommodate PRIx64
> > > BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> > > 
> > > This patch set is based on DPDK 16.07-RC1
> > > and tested with git HEAD change-set
> > > ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
> > > following depended patch
> > > 
> > > http://dpdk.org/dev/patchwork/patch/11826/
> > > ethdev: add tunnel and port RSS offload types
> > > 
> > Hi Jerin,
> > 
> > hopefully a final set of comments before merge on this set, as it's looking
> > very good now.
> > 
> > * Two patches look like they need to be split, as they are combining multiple
> >   functions into one patch. They are:
> >     [dpdk-dev,v5,16/25] net/thunderx: add MTU set and promiscuous enable support
> >     [dpdk-dev,v5,20/25] net/thunderx: implement supported ptype get and Rx queue count
> >   For the other patches which add multiple functions, the functions seem to be
> >   logically related so I don't think there is a problem
> > 
> > * check-git-logs.sh is warning about a few of the commit messages being too long.
> >   Splitting patch 20 should fix one of those, but there are a few remaining.
> >   A number of titles refer to ThunderX in the message, but this is probably
> >   unnecessary, as the prefix already contains "net/thunderx" in it.
> 
> OK. I will send the next revision.
> 

Please hold off a few hours, as I'm hoping to merge in the bnxt driver this
afternoon. If all goes well, I would appreciate it if you could base your patchset
off the rel_16_07 tree with that set applied - save me having to resolve conflicts
in files like the nic overview doc, which is always a pain to try and edit. :-)

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v5 00/25] DPDK PMD for ThunderX NIC device
  2016-06-16 10:58               ` Bruce Richardson
@ 2016-06-16 11:17                 ` Jerin Jacob
  2016-06-16 14:33                   ` Bruce Richardson
  0 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-16 11:17 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, thomas.monjalon, ferruh.yigit

On Thu, Jun 16, 2016 at 11:58:27AM +0100, Bruce Richardson wrote:
> On Thu, Jun 16, 2016 at 03:01:02PM +0530, Jerin Jacob wrote:
> > On Wed, Jun 15, 2016 at 03:39:25PM +0100, Bruce Richardson wrote:
> > > On Wed, Jun 15, 2016 at 12:36:15AM +0530, Jerin Jacob wrote:
> > > > This patch set provides the initial version of DPDK PMD for the
> > > > built-in NIC device in Cavium ThunderX SoC family.
> > > > 
> > > > Implemented features and ThunderX nicvf PMD documentation added
> > > > in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> > > > respectively in this patch set.
> > > > 
> > > > These patches are checked using checkpatch.sh with following
> > > > additional ignore option:
> > > >     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> > > > CAMELCASE - To accommodate PRIx64
> > > > BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> > > > 
> > > > This patch set is based on DPDK 16.07-RC1
> > > > and tested with git HEAD change-set
> > > > ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
> > > > following depended patch
> > > > 
> > > > http://dpdk.org/dev/patchwork/patch/11826/
> > > > ethdev: add tunnel and port RSS offload types
> > > > 
> > > Hi Jerin,
> > > 
> > > hopefully a final set of comments before merge on this set, as it's looking
> > > very good now.
> > > 
> > > * Two patches look like they need to be split, as they are combining multiple
> > >   functions into one patch. They are:
> > >     [dpdk-dev,v5,16/25] net/thunderx: add MTU set and promiscuous enable support
> > >     [dpdk-dev,v5,20/25] net/thunderx: implement supported ptype get and Rx queue count
> > >   For the other patches which add multiple functions, the functions seem to be
> > >   logically related so I don't think there is a problem
> > > 
> > > * check-git-logs.sh is warning about a few of the commit messages being too long.
> > >   Splitting patch 20 should fix one of those, but there are a few remaining.
> > >   A number of titles refer to ThunderX in the message, but this is probably
> > >   unnecessary, as the prefix already contains "net/thunderx" in it.
> > 
> > OK. I will send the next revision.
> > 
> 
> Please hold off a few hours, as I'm hoping to merge in the bnxt driver this
> afternoon. If all goes well, I would appreciate it if you could base your patchset
> off the rel_16_07 tree with that set applied - save me having to resolve conflicts
> in files like the nic overview doc, which is always a pain to try and edit. :-)

OK. I will re-base the changes once you have done with bnxt merge.
Let me know once its done.

> 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* Re: [PATCH v5 00/25] DPDK PMD for ThunderX NIC device
  2016-06-16 11:17                 ` Jerin Jacob
@ 2016-06-16 14:33                   ` Bruce Richardson
  0 siblings, 0 replies; 204+ messages in thread
From: Bruce Richardson @ 2016-06-16 14:33 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, ferruh.yigit

On Thu, Jun 16, 2016 at 04:47:39PM +0530, Jerin Jacob wrote:
> On Thu, Jun 16, 2016 at 11:58:27AM +0100, Bruce Richardson wrote:
> > On Thu, Jun 16, 2016 at 03:01:02PM +0530, Jerin Jacob wrote:
> > > On Wed, Jun 15, 2016 at 03:39:25PM +0100, Bruce Richardson wrote:
> > > > On Wed, Jun 15, 2016 at 12:36:15AM +0530, Jerin Jacob wrote:
> > > > > This patch set provides the initial version of DPDK PMD for the
> > > > > built-in NIC device in Cavium ThunderX SoC family.
> > > > > 
> > > > > Implemented features and ThunderX nicvf PMD documentation added
> > > > > in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> > > > > respectively in this patch set.
> > > > > 
> > > > > These patches are checked using checkpatch.sh with following
> > > > > additional ignore option:
> > > > >     options="$options --ignore=CAMELCASE,BRACKET_SPACE"
> > > > > CAMELCASE - To accommodate PRIx64
> > > > > BRACKET_SPACE - To accommodate AT&T inline line assembly in two places
> > > > > 
> > > > > This patch set is based on DPDK 16.07-RC1
> > > > > and tested with git HEAD change-set
> > > > > ca173a909538a2f1082cd0dcb4d778a97dab69c3 along with
> > > > > following depended patch
> > > > > 
> > > > > http://dpdk.org/dev/patchwork/patch/11826/
> > > > > ethdev: add tunnel and port RSS offload types
> > > > > 
> > > > Hi Jerin,
> > > > 
> > > > hopefully a final set of comments before merge on this set, as it's looking
> > > > very good now.
> > > > 
> > > > * Two patches look like they need to be split, as they are combining multiple
> > > >   functions into one patch. They are:
> > > >     [dpdk-dev,v5,16/25] net/thunderx: add MTU set and promiscuous enable support
> > > >     [dpdk-dev,v5,20/25] net/thunderx: implement supported ptype get and Rx queue count
> > > >   For the other patches which add multiple functions, the functions seem to be
> > > >   logically related so I don't think there is a problem
> > > > 
> > > > * check-git-logs.sh is warning about a few of the commit messages being too long.
> > > >   Splitting patch 20 should fix one of those, but there are a few remaining.
> > > >   A number of titles refer to ThunderX in the message, but this is probably
> > > >   unnecessary, as the prefix already contains "net/thunderx" in it.
> > > 
> > > OK. I will send the next revision.
> > > 
> > 
> > Please hold off a few hours, as I'm hoping to merge in the bnxt driver this
> > afternoon. If all goes well, I would appreciate it if you could base your patchset
> > off the rel_16_07 tree with that set applied - save me having to resolve conflicts
> > in files like the nic overview doc, which is always a pain to try and edit. :-)
> 
> OK. I will re-base the changes once you have done with bnxt merge.
> Let me know once its done.
> 
Done now. Feel free to submit a new version based on rel_16_07 branch.

Thanks,
/Bruce

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v6 00/27] DPDK PMD for ThunderX NIC device
  2016-06-14 19:06         ` [PATCH v5 00/25] " Jerin Jacob
                             ` (25 preceding siblings ...)
  2016-06-15 14:39           ` [PATCH v5 00/25] DPDK PMD for ThunderX NIC device Bruce Richardson
@ 2016-06-17 13:29           ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 01/27] net/thunderx/base: add HW constants Jerin Jacob
                               ` (27 more replies)
  26 siblings, 28 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev; +Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob

This patch set provides the initial version of DPDK PMD for the
built-in NIC device in Cavium ThunderX SoC family.

Implemented features and ThunderX nicvf PMD documentation added
in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
respectively in this patch set.

These patches are checked using checkpatch.sh with following
additional ignore option:
    options="$options --ignore=CAMELCASE,BRACKET_SPACE"
CAMELCASE - To accommodate PRIx64
BRACKET_SPACE - To accommodate AT&T inline line assembly in two places

This patch set is based on DPDK 16.07-RC1
and tested with git HEAD change-set
ad00c7ec23e3b7723217bc29e03eb40409aaf617(in dpdk-next-net/rel_16_07)
along with following depended patch

http://dpdk.org/dev/patchwork/patch/11826/
ethdev: add tunnel and port RSS offload types

V1->V2

http://dpdk.org/dev/patchwork/patch/12609/
-- added const for the const struct tables
-- remove multiple blank lines
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12610/
-- removed DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_net lib/librte_malloc
-- add const for table structs
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12614/
-- s/DEFAULT_*/NICVF_DEFAULT_*/gc
http://dpdk.org/dev/patchwork/patch/12615/
-- Fix typos
-- addressed style comments
http://dpdk.org/dev/patchwork/patch/12616/
-- removed redundant txq->tail = 0 and txq->head = 0
http://dpdk.org/dev/patchwork/patch/12627/
-- fixed the documentation changes

-- fixed TAB+space occurrences in functions
-- rebased to c8c33ad7f94c59d1c0676af0cfd61207b3e808db

V2->V3

http://dpdk.org/dev/patchwork/patch/13060/
-- Changed polling infrastructure to use rte_eal_alarm* instead of timerfd_create API
-- rebased to ca173a909538a2f1082cd0dcb4d778a97dab69c3

V3->V4

addressed comments from Ferruh's review

http://dpdk.org/dev/patchwork/patch/13314/
-- s/avilable/available
http://dpdk.org/dev/patchwork/patch/13323/
-- s/witout/without

http://dpdk.org/dev/patchwork/patch/13318/
-- s/nicvf_free_xmittted_buffers/nicvf_free_xmitted_buffers
-- fix checkpatch errors
http://dpdk.org/dev/patchwork/patch/13307/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13308/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13320/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13321/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13322/
-- addressed review comments
http://dpdk.org/dev/patchwork/patch/13324/
-- addressed review comments and created separated patch for
platform specific config change

-- update change log to net/thunderx: ........

V4->V5
-- split up the drivers/net/thunderx/nicvf/base files into the following
patches, as suggested by Bruce

net/thunderx/base: add HW constants for ThunderX inbuilt NIC
net/thunderx/base: add register definition for ThunderX inbuilt NIC
net/thunderx/base: implement DPDK based platform abstraction for base code
net/thunderx/base: add mbox API for ThunderX PF/VF driver communication
net/thunderx/base: add hardware API for ThunderX nicvf inbuilt NIC
net/thunderx/base: add RSS and reta configuration HW APIs
net/thunderx/base: add statistics get HW APIs

-- Corrected wrong git commit log messages flagged by check-git-log.sh

V5->V6
-- Rebased to dpdk-next-net/rel_16_07(ad00c7ec23e3b7723217bc29e03eb40409aaf617)
-- Split the following v5 patches into two logical patches each
[dpdk-dev,v5,16/25] net/thunderx: add MTU set and promiscuous enable support
[dpdk-dev,v5,20/25] net/thunderx: implement supported ptype get and Rx queue count
-- Fixed check-git-logs.sh for "commit messages being too long"

Jerin Jacob (27):
  net/thunderx/base: add HW constants
  net/thunderx/base: add HW register definitions
  net/thunderx/base: implement DPDK based platform abstraction
  net/thunderx/base: add mbox APIs for PF/VF communication
  net/thunderx/base: add hardware API
  net/thunderx/base: add RSS and reta configuration HW APIs
  net/thunderx/base: add statistics get HW APIs
  net/thunderx: add pmd skeleton
  net/thunderx: add link status and link update support
  net/thunderx: add registers dump support
  net/thunderx: add ethdev configure support
  net/thunderx: add get device info support
  net/thunderx: add Rx queue setup and release support
  net/thunderx: add Tx queue setup and release support
  net/thunderx: add RSS and reta query and update support
  net/thunderx: add MTU set support
  net/thunderx: add promiscuous enable support
  net/thunderx: add stats support
  net/thunderx: add single and multi segment Tx functions
  net/thunderx: add single and multi segment Rx functions
  net/thunderx: add supported packet type get
  net/thunderx: add Rx queue count support
  net/thunderx: add Rx queue start and stop support
  net/thunderx: add Tx queue start and stop support
  net/thunderx: add device start,stop and close support
  net/thunderx: updated driver documentation and release notes
  maintainers: claim responsibility for the ThunderX nicvf PMD

 MAINTAINERS                                        |    6 +
 config/common_base                                 |   10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |   10 +
 doc/guides/nics/index.rst                          |    1 +
 doc/guides/nics/overview.rst                       |   96 +-
 doc/guides/nics/thunderx.rst                       |  354 ++++
 doc/guides/rel_notes/release_16_07.rst             |    1 +
 drivers/net/Makefile                               |    1 +
 drivers/net/thunderx/Makefile                      |   65 +
 drivers/net/thunderx/base/nicvf_hw.c               |  905 ++++++++++
 drivers/net/thunderx/base/nicvf_hw.h               |  240 +++
 drivers/net/thunderx/base/nicvf_hw_defs.h          | 1219 +++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.c             |  418 +++++
 drivers/net/thunderx/base/nicvf_mbox.h             |  232 +++
 drivers/net/thunderx/base/nicvf_plat.h             |  132 ++
 drivers/net/thunderx/nicvf_ethdev.c                | 1791 ++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  106 ++
 drivers/net/thunderx/nicvf_logs.h                  |   83 +
 drivers/net/thunderx/nicvf_rxtx.c                  |  599 +++++++
 drivers/net/thunderx/nicvf_rxtx.h                  |  101 ++
 drivers/net/thunderx/nicvf_struct.h                |  124 ++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |    4 +
 mk/rte.app.mk                                      |    1 +
 23 files changed, 6451 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

-- 
2.5.5

^ permalink raw reply	[flat|nested] 204+ messages in thread

* [PATCH v6 01/27] net/thunderx/base: add HW constants
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 02/27] net/thunderx/base: add HW register definitions Jerin Jacob
                               ` (26 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

add HW constants of ThunderX inbuilt NIC

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw_defs.h | 551 ++++++++++++++++++++++++++++++
 1 file changed, 551 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw_defs.h

diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
new file mode 100644
index 0000000..8a58f03
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -0,0 +1,551 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_DEFS_H
+#define _THUNDERX_NICVF_HW_DEFS_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+/* Virtual function register offsets */
+
+#define NIC_VF_CFG                      (0x000020)
+#define NIC_VF_PF_MAILBOX_0_1           (0x000130)
+#define NIC_VF_INT                      (0x000200)
+#define NIC_VF_INT_W1S                  (0x000220)
+#define NIC_VF_ENA_W1C                  (0x000240)
+#define NIC_VF_ENA_W1S                  (0x000260)
+
+#define NIC_VNIC_RSS_CFG                (0x0020E0)
+#define NIC_VNIC_RSS_KEY_0_4            (0x002200)
+#define NIC_VNIC_TX_STAT_0_4            (0x004000)
+#define NIC_VNIC_RX_STAT_0_13           (0x004100)
+#define NIC_VNIC_RQ_GEN_CFG             (0x010010)
+
+#define NIC_QSET_CQ_0_7_CFG             (0x010400)
+#define NIC_QSET_CQ_0_7_CFG2            (0x010408)
+#define NIC_QSET_CQ_0_7_THRESH          (0x010410)
+#define NIC_QSET_CQ_0_7_BASE            (0x010420)
+#define NIC_QSET_CQ_0_7_HEAD            (0x010428)
+#define NIC_QSET_CQ_0_7_TAIL            (0x010430)
+#define NIC_QSET_CQ_0_7_DOOR            (0x010438)
+#define NIC_QSET_CQ_0_7_STATUS          (0x010440)
+#define NIC_QSET_CQ_0_7_STATUS2         (0x010448)
+#define NIC_QSET_CQ_0_7_DEBUG           (0x010450)
+
+#define NIC_QSET_RQ_0_7_CFG             (0x010600)
+#define NIC_QSET_RQ_0_7_STATUS0         (0x010700)
+#define NIC_QSET_RQ_0_7_STATUS1         (0x010708)
+
+#define NIC_QSET_SQ_0_7_CFG             (0x010800)
+#define NIC_QSET_SQ_0_7_THRESH          (0x010810)
+#define NIC_QSET_SQ_0_7_BASE            (0x010820)
+#define NIC_QSET_SQ_0_7_HEAD            (0x010828)
+#define NIC_QSET_SQ_0_7_TAIL            (0x010830)
+#define NIC_QSET_SQ_0_7_DOOR            (0x010838)
+#define NIC_QSET_SQ_0_7_STATUS          (0x010840)
+#define NIC_QSET_SQ_0_7_DEBUG           (0x010848)
+#define NIC_QSET_SQ_0_7_STATUS0         (0x010900)
+#define NIC_QSET_SQ_0_7_STATUS1         (0x010908)
+
+#define NIC_QSET_RBDR_0_1_CFG           (0x010C00)
+#define NIC_QSET_RBDR_0_1_THRESH        (0x010C10)
+#define NIC_QSET_RBDR_0_1_BASE          (0x010C20)
+#define NIC_QSET_RBDR_0_1_HEAD          (0x010C28)
+#define NIC_QSET_RBDR_0_1_TAIL          (0x010C30)
+#define NIC_QSET_RBDR_0_1_DOOR          (0x010C38)
+#define NIC_QSET_RBDR_0_1_STATUS0       (0x010C40)
+#define NIC_QSET_RBDR_0_1_STATUS1       (0x010C48)
+#define NIC_QSET_RBDR_0_1_PRFCH_STATUS  (0x010C50)
+
+/* vNIC HW Constants */
+
+#define NIC_Q_NUM_SHIFT                 18
+
+#define MAX_QUEUE_SET                   128
+#define MAX_RCV_QUEUES_PER_QS           8
+#define MAX_RCV_BUF_DESC_RINGS_PER_QS   2
+#define MAX_SND_QUEUES_PER_QS           8
+#define MAX_CMP_QUEUES_PER_QS           8
+
+#define NICVF_INTR_CQ_SHIFT             0
+#define NICVF_INTR_SQ_SHIFT             8
+#define NICVF_INTR_RBDR_SHIFT           16
+#define NICVF_INTR_PKT_DROP_SHIFT       20
+#define NICVF_INTR_TCP_TIMER_SHIFT      21
+#define NICVF_INTR_MBOX_SHIFT           22
+#define NICVF_INTR_QS_ERR_SHIFT         23
+
+#define NICVF_INTR_CQ_MASK              (0xFF << NICVF_INTR_CQ_SHIFT)
+#define NICVF_INTR_SQ_MASK              (0xFF << NICVF_INTR_SQ_SHIFT)
+#define NICVF_INTR_RBDR_MASK            (0x03 << NICVF_INTR_RBDR_SHIFT)
+#define NICVF_INTR_PKT_DROP_MASK        (1 << NICVF_INTR_PKT_DROP_SHIFT)
+#define NICVF_INTR_TCP_TIMER_MASK       (1 << NICVF_INTR_TCP_TIMER_SHIFT)
+#define NICVF_INTR_MBOX_MASK            (1 << NICVF_INTR_MBOX_SHIFT)
+#define NICVF_INTR_QS_ERR_MASK          (1 << NICVF_INTR_QS_ERR_SHIFT)
+#define NICVF_INTR_ALL_MASK             (0x7FFFFF)
+
+#define NICVF_CQ_WR_FULL                (1ULL << 26)
+#define NICVF_CQ_WR_DISABLE             (1ULL << 25)
+#define NICVF_CQ_WR_FAULT               (1ULL << 24)
+#define NICVF_CQ_ERR_MASK               (NICVF_CQ_WR_FULL |\
+					 NICVF_CQ_WR_DISABLE |\
+					 NICVF_CQ_WR_FAULT)
+#define NICVF_CQ_CQE_COUNT_MASK         (0xFFFF)
+
+#define NICVF_SQ_ERR_STOPPED            (1ULL << 21)
+#define NICVF_SQ_ERR_SEND               (1ULL << 20)
+#define NICVF_SQ_ERR_DPE                (1ULL << 19)
+#define NICVF_SQ_ERR_MASK               (NICVF_SQ_ERR_STOPPED |\
+					 NICVF_SQ_ERR_SEND |\
+					 NICVF_SQ_ERR_DPE)
+#define NICVF_SQ_STATUS_STOPPED_BIT     (21)
+
+#define NICVF_RBDR_FIFO_STATE_SHIFT     (62)
+#define NICVF_RBDR_FIFO_STATE_MASK      (3ULL << NICVF_RBDR_FIFO_STATE_SHIFT)
+#define NICVF_RBDR_COUNT_MASK           (0x7FFFF)
+
+/* Queue reset */
+#define NICVF_CQ_RESET                  (1ULL << 41)
+#define NICVF_SQ_RESET                  (1ULL << 17)
+#define NICVF_RBDR_RESET                (1ULL << 43)
+
+/* RSS constants */
+#define NIC_MAX_RSS_HASH_BITS           (8)
+#define NIC_MAX_RSS_IDR_TBL_SIZE        (1 << NIC_MAX_RSS_HASH_BITS)
+#define RSS_HASH_KEY_SIZE               (5) /* 320 bit key */
+#define RSS_HASH_KEY_BYTE_SIZE          (40) /* 320 bit key */
+
+#define RSS_L2_EXTENDED_HASH_ENA        (1 << 0)
+#define RSS_IP_ENA                      (1 << 1)
+#define RSS_TCP_ENA                     (1 << 2)
+#define RSS_TCP_SYN_ENA                 (1 << 3)
+#define RSS_UDP_ENA                     (1 << 4)
+#define RSS_L4_EXTENDED_ENA             (1 << 5)
+#define RSS_L3_BI_DIRECTION_ENA         (1 << 7)
+#define RSS_L4_BI_DIRECTION_ENA         (1 << 8)
+#define RSS_TUN_VXLAN_ENA               (1 << 9)
+#define RSS_TUN_GENEVE_ENA              (1 << 10)
+#define RSS_TUN_NVGRE_ENA               (1 << 11)
+
+#define RBDR_QUEUE_SZ_8K                (8 * 1024)
+#define RBDR_QUEUE_SZ_16K               (16 * 1024)
+#define RBDR_QUEUE_SZ_32K               (32 * 1024)
+#define RBDR_QUEUE_SZ_64K               (64 * 1024)
+#define RBDR_QUEUE_SZ_128K              (128 * 1024)
+#define RBDR_QUEUE_SZ_256K              (256 * 1024)
+#define RBDR_QUEUE_SZ_512K              (512 * 1024)
+
+#define RBDR_SIZE_SHIFT                 (13) /* 8k */
+
+#define SND_QUEUE_SZ_1K                 (1 * 1024)
+#define SND_QUEUE_SZ_2K                 (2 * 1024)
+#define SND_QUEUE_SZ_4K                 (4 * 1024)
+#define SND_QUEUE_SZ_8K                 (8 * 1024)
+#define SND_QUEUE_SZ_16K                (16 * 1024)
+#define SND_QUEUE_SZ_32K                (32 * 1024)
+#define SND_QUEUE_SZ_64K                (64 * 1024)
+
+#define SND_QSIZE_SHIFT                 (10) /* 1k */
+
+#define CMP_QUEUE_SZ_1K                 (1 * 1024)
+#define CMP_QUEUE_SZ_2K                 (2 * 1024)
+#define CMP_QUEUE_SZ_4K                 (4 * 1024)
+#define CMP_QUEUE_SZ_8K                 (8 * 1024)
+#define CMP_QUEUE_SZ_16K                (16 * 1024)
+#define CMP_QUEUE_SZ_32K                (32 * 1024)
+#define CMP_QUEUE_SZ_64K                (64 * 1024)
+
+#define CMP_QSIZE_SHIFT                 (10) /* 1k */
+
+#define NICVF_QSIZE_MIN_VAL             (0)
+#define NICVF_QSIZE_MAX_VAL             (6)
+
+/* Min/Max packet size */
+#define NIC_HW_MIN_FRS                  (64)
+#define NIC_HW_MAX_FRS                  (9200) /* 9216 max pkt including FCS */
+#define NIC_HW_MAX_SEGS                 (12)
+
+/* Descriptor alignments */
+#define NICVF_RBDR_BASE_ALIGN_BYTES     (128) /* 7 bits */
+#define NICVF_CQ_BASE_ALIGN_BYTES       (512) /* 9 bits */
+#define NICVF_SQ_BASE_ALIGN_BYTES       (128) /* 7 bits */
+
+#define NICVF_CQE_RBPTR_WORD            (6)
+#define NICVF_CQE_RX2_RBPTR_WORD        (7)
+
+#define NICVF_STATIC_ASSERT(s) _Static_assert(s, #s)
+
+typedef uint64_t nicvf_phys_addr_t;
+
+#ifndef __BYTE_ORDER__
+#error __BYTE_ORDER__ not defined
+#endif
+
+/* vNIC HW Enumerations */
+
+enum nic_send_ld_type_e {
+	NIC_SEND_LD_TYPE_E_LDD,
+	NIC_SEND_LD_TYPE_E_LDT,
+	NIC_SEND_LD_TYPE_E_LDWB,
+	NIC_SEND_LD_TYPE_E_ENUM_LAST,
+};
+
+enum ether_type_algorithm {
+	ETYPE_ALG_NONE,
+	ETYPE_ALG_SKIP,
+	ETYPE_ALG_ENDPARSE,
+	ETYPE_ALG_VLAN,
+	ETYPE_ALG_VLAN_STRIP,
+};
+
+enum layer3_type {
+	L3TYPE_NONE,
+	L3TYPE_GRH,
+	L3TYPE_IPV4 = 0x4,
+	L3TYPE_IPV4_OPTIONS = 0x5,
+	L3TYPE_IPV6 = 0x6,
+	L3TYPE_IPV6_OPTIONS = 0x7,
+	L3TYPE_ET_STOP = 0xD,
+	L3TYPE_OTHER = 0xE,
+};
+
+#define NICVF_L3TYPE_OPTIONS_MASK	((uint8_t)1)
+#define NICVF_L3TYPE_IPVX_MASK		((uint8_t)0x06)
+
+enum layer4_type {
+	L4TYPE_NONE,
+	L4TYPE_IPSEC_ESP,
+	L4TYPE_IPFRAG,
+	L4TYPE_IPCOMP,
+	L4TYPE_TCP,
+	L4TYPE_UDP,
+	L4TYPE_SCTP,
+	L4TYPE_GRE,
+	L4TYPE_ROCE_BTH,
+	L4TYPE_OTHER = 0xE,
+};
+
+/* CPI and RSSI configuration */
+enum cpi_algorithm_type {
+	CPI_ALG_NONE,
+	CPI_ALG_VLAN,
+	CPI_ALG_VLAN16,
+	CPI_ALG_DIFF,
+};
+
+enum rss_algorithm_type {
+	RSS_ALG_NONE,
+	RSS_ALG_PORT,
+	RSS_ALG_IP,
+	RSS_ALG_TCP_IP,
+	RSS_ALG_UDP_IP,
+	RSS_ALG_SCTP_IP,
+	RSS_ALG_GRE_IP,
+	RSS_ALG_ROCE,
+};
+
+enum rss_hash_cfg {
+	RSS_HASH_L2ETC,
+	RSS_HASH_IP,
+	RSS_HASH_TCP,
+	RSS_HASH_TCP_SYN_DIS,
+	RSS_HASH_UDP,
+	RSS_HASH_L4ETC,
+	RSS_HASH_ROCE,
+	RSS_L3_BIDI,
+	RSS_L4_BIDI,
+};
+
+/* Completion queue entry types */
+enum cqe_type {
+	CQE_TYPE_INVALID,
+	CQE_TYPE_RX = 0x2,
+	CQE_TYPE_RX_SPLIT = 0x3,
+	CQE_TYPE_RX_TCP = 0x4,
+	CQE_TYPE_SEND = 0x8,
+	CQE_TYPE_SEND_PTP = 0x9,
+};
+
+enum cqe_rx_tcp_status {
+	CQE_RX_STATUS_VALID_TCP_CNXT,
+	CQE_RX_STATUS_INVALID_TCP_CNXT = 0x0F,
+};
+
+enum cqe_send_status {
+	CQE_SEND_STATUS_GOOD,
+	CQE_SEND_STATUS_DESC_FAULT = 0x01,
+	CQE_SEND_STATUS_HDR_CONS_ERR = 0x11,
+	CQE_SEND_STATUS_SUBDESC_ERR = 0x12,
+	CQE_SEND_STATUS_IMM_SIZE_OFLOW = 0x80,
+	CQE_SEND_STATUS_CRC_SEQ_ERR = 0x81,
+	CQE_SEND_STATUS_DATA_SEQ_ERR = 0x82,
+	CQE_SEND_STATUS_MEM_SEQ_ERR = 0x83,
+	CQE_SEND_STATUS_LOCK_VIOL = 0x84,
+	CQE_SEND_STATUS_LOCK_UFLOW = 0x85,
+	CQE_SEND_STATUS_DATA_FAULT = 0x86,
+	CQE_SEND_STATUS_TSTMP_CONFLICT = 0x87,
+	CQE_SEND_STATUS_TSTMP_TIMEOUT = 0x88,
+	CQE_SEND_STATUS_MEM_FAULT = 0x89,
+	CQE_SEND_STATUS_CSUM_OVERLAP = 0x8A,
+	CQE_SEND_STATUS_CSUM_OVERFLOW = 0x8B,
+};
+
+enum cqe_rx_tcp_end_reason {
+	CQE_RX_TCP_END_FIN_FLAG_DET,
+	CQE_RX_TCP_END_INVALID_FLAG,
+	CQE_RX_TCP_END_TIMEOUT,
+	CQE_RX_TCP_END_OUT_OF_SEQ,
+	CQE_RX_TCP_END_PKT_ERR,
+	CQE_RX_TCP_END_QS_DISABLED = 0x0F,
+};
+
+/* Packet protocol level error enumeration */
+enum cqe_rx_err_level {
+	CQE_RX_ERRLVL_RE,
+	CQE_RX_ERRLVL_L2,
+	CQE_RX_ERRLVL_L3,
+	CQE_RX_ERRLVL_L4,
+};
+
+/* Packet protocol level error type enumeration */
+enum cqe_rx_err_opcode {
+	CQE_RX_ERR_RE_NONE,
+	CQE_RX_ERR_RE_PARTIAL,
+	CQE_RX_ERR_RE_JABBER,
+	CQE_RX_ERR_RE_FCS = 0x7,
+	CQE_RX_ERR_RE_TERMINATE = 0x9,
+	CQE_RX_ERR_RE_RX_CTL = 0xb,
+	CQE_RX_ERR_PREL2_ERR = 0x1f,
+	CQE_RX_ERR_L2_FRAGMENT = 0x20,
+	CQE_RX_ERR_L2_OVERRUN = 0x21,
+	CQE_RX_ERR_L2_PFCS = 0x22,
+	CQE_RX_ERR_L2_PUNY = 0x23,
+	CQE_RX_ERR_L2_MAL = 0x24,
+	CQE_RX_ERR_L2_OVERSIZE = 0x25,
+	CQE_RX_ERR_L2_UNDERSIZE = 0x26,
+	CQE_RX_ERR_L2_LENMISM = 0x27,
+	CQE_RX_ERR_L2_PCLP = 0x28,
+	CQE_RX_ERR_IP_NOT = 0x41,
+	CQE_RX_ERR_IP_CHK = 0x42,
+	CQE_RX_ERR_IP_MAL = 0x43,
+	CQE_RX_ERR_IP_MALD = 0x44,
+	CQE_RX_ERR_IP_HOP = 0x45,
+	CQE_RX_ERR_L3_ICRC = 0x46,
+	CQE_RX_ERR_L3_PCLP = 0x47,
+	CQE_RX_ERR_L4_MAL = 0x61,
+	CQE_RX_ERR_L4_CHK = 0x62,
+	CQE_RX_ERR_UDP_LEN = 0x63,
+	CQE_RX_ERR_L4_PORT = 0x64,
+	CQE_RX_ERR_TCP_FLAG = 0x65,
+	CQE_RX_ERR_TCP_OFFSET = 0x66,
+	CQE_RX_ERR_L4_PCLP = 0x67,
+	CQE_RX_ERR_RBDR_TRUNC = 0x70,
+};
+
+enum send_l4_csum_type {
+	SEND_L4_CSUM_DISABLE,
+	SEND_L4_CSUM_UDP,
+	SEND_L4_CSUM_TCP,
+};
+
+enum send_crc_alg {
+	SEND_CRCALG_CRC32,
+	SEND_CRCALG_CRC32C,
+	SEND_CRCALG_ICRC,
+};
+
+enum send_load_type {
+	SEND_LD_TYPE_LDD,
+	SEND_LD_TYPE_LDT,
+	SEND_LD_TYPE_LDWB,
+};
+
+enum send_mem_alg_type {
+	SEND_MEMALG_SET,
+	SEND_MEMALG_ADD = 0x08,
+	SEND_MEMALG_SUB = 0x09,
+	SEND_MEMALG_ADDLEN = 0x0A,
+	SEND_MEMALG_SUBLEN = 0x0B,
+};
+
+enum send_mem_dsz_type {
+	SEND_MEMDSZ_B64,
+	SEND_MEMDSZ_B32,
+	SEND_MEMDSZ_B8 = 0x03,
+};
+
+enum sq_subdesc_type {
+	SQ_DESC_TYPE_INVALID,
+	SQ_DESC_TYPE_HEADER,
+	SQ_DESC_TYPE_CRC,
+	SQ_DESC_TYPE_IMMEDIATE,
+	SQ_DESC_TYPE_GATHER,
+	SQ_DESC_TYPE_MEMORY,
+};
+
+enum l3_type_t {
+	L3_NONE,
+	L3_IPV4		= 0x04,
+	L3_IPV4_OPT	= 0x05,
+	L3_IPV6		= 0x06,
+	L3_IPV6_OPT	= 0x07,
+	L3_ET_STOP	= 0x0D,
+	L3_OTHER	= 0x0E
+};
+
+enum l4_type_t {
+	L4_NONE,
+	L4_IPSEC_ESP	= 0x01,
+	L4_IPFRAG	= 0x02,
+	L4_IPCOMP	= 0x03,
+	L4_TCP		= 0x04,
+	L4_UDP_PASS1	= 0x05,
+	L4_GRE		= 0x07,
+	L4_UDP_PASS2	= 0x08,
+	L4_UDP_GENEVE	= 0x09,
+	L4_UDP_VXLAN	= 0x0A,
+	L4_NVGRE	= 0x0C,
+	L4_OTHER	= 0x0E
+};
+
+enum vlan_strip {
+	NO_STRIP,
+	STRIP_FIRST_VLAN,
+	STRIP_SECOND_VLAN,
+	STRIP_RESERV,
+};
+
+enum rbdr_state {
+	RBDR_FIFO_STATE_INACTIVE,
+	RBDR_FIFO_STATE_ACTIVE,
+	RBDR_FIFO_STATE_RESET,
+	RBDR_FIFO_STATE_FAIL,
+};
+
+enum rq_cache_allocation {
+	RQ_CACHE_ALLOC_OFF,
+	RQ_CACHE_ALLOC_ALL,
+	RQ_CACHE_ALLOC_FIRST,
+	RQ_CACHE_ALLOC_TWO,
+};
+
+enum cq_rx_errlvl_e {
+	CQ_ERRLVL_MAC,
+	CQ_ERRLVL_L2,
+	CQ_ERRLVL_L3,
+	CQ_ERRLVL_L4,
+};
+
+enum cq_rx_errop_e {
+	CQ_RX_ERROP_RE_NONE,
+	CQ_RX_ERROP_RE_PARTIAL = 0x1,
+	CQ_RX_ERROP_RE_JABBER = 0x2,
+	CQ_RX_ERROP_RE_FCS = 0x7,
+	CQ_RX_ERROP_RE_TERMINATE = 0x9,
+	CQ_RX_ERROP_RE_RX_CTL = 0xb,
+	CQ_RX_ERROP_PREL2_ERR = 0x1f,
+	CQ_RX_ERROP_L2_FRAGMENT = 0x20,
+	CQ_RX_ERROP_L2_OVERRUN = 0x21,
+	CQ_RX_ERROP_L2_PFCS = 0x22,
+	CQ_RX_ERROP_L2_PUNY = 0x23,
+	CQ_RX_ERROP_L2_MAL = 0x24,
+	CQ_RX_ERROP_L2_OVERSIZE = 0x25,
+	CQ_RX_ERROP_L2_UNDERSIZE = 0x26,
+	CQ_RX_ERROP_L2_LENMISM = 0x27,
+	CQ_RX_ERROP_L2_PCLP = 0x28,
+	CQ_RX_ERROP_IP_NOT = 0x41,
+	CQ_RX_ERROP_IP_CSUM_ERR = 0x42,
+	CQ_RX_ERROP_IP_MAL = 0x43,
+	CQ_RX_ERROP_IP_MALD = 0x44,
+	CQ_RX_ERROP_IP_HOP = 0x45,
+	CQ_RX_ERROP_L3_ICRC = 0x46,
+	CQ_RX_ERROP_L3_PCLP = 0x47,
+	CQ_RX_ERROP_L4_MAL = 0x61,
+	CQ_RX_ERROP_L4_CHK = 0x62,
+	CQ_RX_ERROP_UDP_LEN = 0x63,
+	CQ_RX_ERROP_L4_PORT = 0x64,
+	CQ_RX_ERROP_TCP_FLAG = 0x65,
+	CQ_RX_ERROP_TCP_OFFSET = 0x66,
+	CQ_RX_ERROP_L4_PCLP = 0x67,
+	CQ_RX_ERROP_RBDR_TRUNC = 0x70,
+};
+
+enum cq_tx_errop_e {
+	CQ_TX_ERROP_GOOD,
+	CQ_TX_ERROP_DESC_FAULT = 0x10,
+	CQ_TX_ERROP_HDR_CONS_ERR = 0x11,
+	CQ_TX_ERROP_SUBDC_ERR = 0x12,
+	CQ_TX_ERROP_IMM_SIZE_OFLOW = 0x80,
+	CQ_TX_ERROP_DATA_SEQUENCE_ERR = 0x81,
+	CQ_TX_ERROP_MEM_SEQUENCE_ERR = 0x82,
+	CQ_TX_ERROP_LOCK_VIOL = 0x83,
+	CQ_TX_ERROP_DATA_FAULT = 0x84,
+	CQ_TX_ERROP_TSTMP_CONFLICT = 0x85,
+	CQ_TX_ERROP_TSTMP_TIMEOUT = 0x86,
+	CQ_TX_ERROP_MEM_FAULT = 0x87,
+	CQ_TX_ERROP_CK_OVERLAP = 0x88,
+	CQ_TX_ERROP_CK_OFLOW = 0x89,
+	CQ_TX_ERROP_ENUM_LAST = 0x8a,
+};
+
+enum rq_sq_stats_reg_offset {
+	RQ_SQ_STATS_OCTS,
+	RQ_SQ_STATS_PKTS,
+};
+
+enum nic_stat_vnic_rx_e {
+	RX_OCTS,
+	RX_UCAST,
+	RX_BCAST,
+	RX_MCAST,
+	RX_RED,
+	RX_RED_OCTS,
+	RX_ORUN,
+	RX_ORUN_OCTS,
+	RX_FCS,
+	RX_L2ERR,
+	RX_DRP_BCAST,
+	RX_DRP_MCAST,
+	RX_DRP_L3BCAST,
+	RX_DRP_L3MCAST,
+};
+
+enum nic_stat_vnic_tx_e {
+	TX_OCTS,
+	TX_UCAST,
+	TX_BCAST,
+	TX_MCAST,
+	TX_DROP,
+};
+
+#endif /* _THUNDERX_NICVF_HW_DEFS_H */
-- 
2.5.5


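Before moving to the register structures, it may help to see how these error
enumerations are typically consumed. The sketch below maps a few
cqe_rx_err_opcode values to printable names using the same
designated-initializer pattern the series itself uses for mailbox message
names; the table and helper names are illustrative only, not part of the
patch set, and it assumes nicvf_hw_defs.h is included:

#include <stddef.h>
#include <stdint.h>

/* sparse table: unlisted opcodes stay NULL and fall back to "unknown" */
static const char *cqe_rx_err_str[CQE_RX_ERR_RBDR_TRUNC + 1] = {
	[CQE_RX_ERR_RE_JABBER]   = "jabber",
	[CQE_RX_ERR_L2_OVERSIZE] = "l2 oversize",
	[CQE_RX_ERR_IP_CHK]      = "ip header checksum",
	[CQE_RX_ERR_L4_CHK]      = "l4 checksum",
	[CQE_RX_ERR_RBDR_TRUNC]  = "rbdr truncation",
};

static inline const char *
nicvf_cqe_rx_err_name(uint8_t opcode)
{
	const char *name = NULL;

	if (opcode < CQE_RX_ERR_RBDR_TRUNC + 1)
		name = cqe_rx_err_str[opcode];
	return name ? name : "unknown";
}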
* [PATCH v6 02/27] net/thunderx/base: add HW register definitions
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 01/27] net/thunderx/base: add HW constants Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 03/27] net/thunderx/base: implement DPDK based platform abstraction Jerin Jacob
                               ` (25 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

add HW register definitions of the ThunderX inbuilt NIC

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw_defs.h | 668 ++++++++++++++++++++++++++++++
 1 file changed, 668 insertions(+)

diff --git a/drivers/net/thunderx/base/nicvf_hw_defs.h b/drivers/net/thunderx/base/nicvf_hw_defs.h
index 8a58f03..88ecd17 100644
--- a/drivers/net/thunderx/base/nicvf_hw_defs.h
+++ b/drivers/net/thunderx/base/nicvf_hw_defs.h
@@ -548,4 +548,672 @@ enum nic_stat_vnic_tx_e {
 	TX_DROP,
 };
 
+/* vNIC HW Register structures */
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t cqe_type:4;
+		uint64_t stdn_fault:1;
+		uint64_t rsvd0:1;
+		uint64_t rq_qs:7;
+		uint64_t rq_idx:3;
+		uint64_t rsvd1:12;
+		uint64_t rss_alg:4;
+		uint64_t rsvd2:4;
+		uint64_t rb_cnt:4;
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t l4_type:4;
+		uint64_t l3_type:4;
+		uint64_t l2_present:1;
+		uint64_t err_level:3;
+		uint64_t err_opcode:8;
+#else
+		uint64_t err_opcode:8;
+		uint64_t err_level:3;
+		uint64_t l2_present:1;
+		uint64_t l3_type:4;
+		uint64_t l4_type:4;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+		uint64_t rb_cnt:4;
+		uint64_t rsvd2:4;
+		uint64_t rss_alg:4;
+		uint64_t rsvd1:12;
+		uint64_t rq_idx:3;
+		uint64_t rq_qs:7;
+		uint64_t rsvd0:1;
+		uint64_t stdn_fault:1;
+		uint64_t cqe_type:4;
+#endif
+	};
+} cqe_rx_word0_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t pkt_len:16;
+		uint64_t l2_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l4_ptr:8;
+		uint64_t cq_pkt_len:8;
+		uint64_t align_pad:3;
+		uint64_t rsvd3:1;
+		uint64_t chan:12;
+#else
+		uint64_t chan:12;
+		uint64_t rsvd3:1;
+		uint64_t align_pad:3;
+		uint64_t cq_pkt_len:8;
+		uint64_t l4_ptr:8;
+		uint64_t l3_ptr:8;
+		uint64_t l2_ptr:8;
+		uint64_t pkt_len:16;
+#endif
+	};
+} cqe_rx_word1_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t rss_tag:32;
+		uint64_t vlan_tci:16;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+#else
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan_tci:16;
+		uint64_t rss_tag:32;
+#endif
+	};
+} cqe_rx_word2_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb3_sz;
+		uint16_t rb2_sz;
+		uint16_t rb1_sz;
+		uint16_t rb0_sz;
+#else
+		uint16_t rb0_sz;
+		uint16_t rb1_sz;
+		uint16_t rb2_sz;
+		uint16_t rb3_sz;
+#endif
+	};
+} cqe_rx_word3_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb7_sz;
+		uint16_t rb6_sz;
+		uint16_t rb5_sz;
+		uint16_t rb4_sz;
+#else
+		uint16_t rb4_sz;
+		uint16_t rb5_sz;
+		uint16_t rb6_sz;
+		uint16_t rb7_sz;
+#endif
+	};
+} cqe_rx_word4_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint16_t rb11_sz;
+		uint16_t rb10_sz;
+		uint16_t rb9_sz;
+		uint16_t rb8_sz;
+#else
+		uint16_t rb8_sz;
+		uint16_t rb9_sz;
+		uint16_t rb10_sz;
+		uint16_t rb11_sz;
+#endif
+	};
+} cqe_rx_word5_t;
+
+typedef union {
+	uint64_t u64;
+	struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		uint64_t vlan_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan2_stripped:1;
+		uint64_t rsvd2:3;
+		uint64_t inner_l2:1;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l3type:4;
+		uint64_t vlan_ptr:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t rsvd1:8;
+		uint64_t rsvd0:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t inner_l4ptr:8;
+#else
+		uint64_t inner_l4ptr:8;
+		uint64_t inner_l3ptr:8;
+		uint64_t rsvd0:8;
+		uint64_t rsvd1:8;
+		uint64_t vlan2_ptr:8;
+		uint64_t vlan_ptr:8;
+		uint64_t inner_l3type:4;
+		uint64_t inner_l4type:4;
+		uint64_t inner_l2:1;
+		uint64_t rsvd2:3;
+		uint64_t vlan2_stripped:1;
+		uint64_t vlan2_found:1;
+		uint64_t vlan_stripped:1;
+		uint64_t vlan_found:1;
+#endif
+	};
+} cqe_rx2_word6_t;
+
+struct cqe_rx_t {
+	cqe_rx_word0_t word0;
+	cqe_rx_word1_t word1;
+	cqe_rx_word2_t word2;
+	cqe_rx_word3_t word3;
+	cqe_rx_word4_t word4;
+	cqe_rx_word5_t word5;
+	cqe_rx2_word6_t word6; /* if NIC_PF_RX_CFG[CQE_RX2_ENA] set */
+};
+
+struct cqe_rx_tcp_err_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:60;
+
+	uint64_t   rsvd1:4; /* W1 */
+	uint64_t   partial_first:1;
+	uint64_t   rsvd2:27;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd3:24;
+#else
+	uint64_t   rsvd0:60;
+	uint64_t   cqe_type:4;
+
+	uint64_t   rsvd3:24;
+	uint64_t   rbdr_bytes:8;
+	uint64_t   rsvd2:27;
+	uint64_t   partial_first:1;
+	uint64_t   rsvd1:4;
+#endif
+};
+
+struct cqe_rx_tcp_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:52;
+	uint64_t   cq_tcp_status:8;
+
+	uint64_t   rsvd1:32; /* W1 */
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_err_bytes:16;
+#else
+	uint64_t   cq_tcp_status:8;
+	uint64_t   rsvd0:52;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   tcp_err_bytes:16;
+	uint64_t   rsvd2:8;
+	uint64_t   tcp_cntx_bytes:8;
+	uint64_t   rsvd1:32; /* W1 */
+#endif
+};
+
+struct cqe_send_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   cqe_type:4; /* W0 */
+	uint64_t   rsvd0:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd1:4;
+	uint64_t   rsvd2:10;
+	uint64_t   sq_qs:7;
+	uint64_t   sq_idx:3;
+	uint64_t   rsvd3:8;
+	uint64_t   send_status:8;
+
+	uint64_t   ptp_timestamp:64; /* W1 */
+#else
+	uint64_t   send_status:8;
+	uint64_t   rsvd3:8;
+	uint64_t   sq_idx:3;
+	uint64_t   sq_qs:7;
+	uint64_t   rsvd2:10;
+	uint64_t   rsvd1:4;
+	uint64_t   sqe_ptr:16;
+	uint64_t   rsvd0:4;
+	uint64_t   cqe_type:4; /* W0 */
+
+	uint64_t   ptp_timestamp:64;
+#endif
+};
+
+struct cq_entry_type_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t cqe_type:4;
+	uint64_t __pad:60;
+#else
+	uint64_t __pad:60;
+	uint64_t cqe_type:4;
+#endif
+};
+
+union cq_entry_t {
+	uint64_t u[64];
+	struct cq_entry_type_t type;
+	struct cqe_rx_t rx_hdr;
+	struct cqe_rx_tcp_t rx_tcp_hdr;
+	struct cqe_rx_tcp_err_t rx_tcp_err_hdr;
+	struct cqe_send_t cqe_send;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union cq_entry_t) == 512);
+
+struct rbdr_entry_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	union {
+		struct {
+			uint64_t   rsvd0:15;
+			uint64_t   buf_addr:42;
+			uint64_t   cache_align:7;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#else
+	union {
+		struct {
+			uint64_t   cache_align:7;
+			uint64_t   buf_addr:42;
+			uint64_t   rsvd0:15;
+		};
+		nicvf_phys_addr_t full_addr;
+	};
+#endif
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct rbdr_entry_t) == sizeof(uint64_t));
+
+/* TCP reassembly context */
+struct rbe_tcp_cnxt_t {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t   tcp_pkt_cnt:12;
+	uint64_t   rsvd1:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   ptr_bytes:16;
+	uint64_t   rsvd2:24;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd0:54;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   tcp_status:4;
+#else
+	uint64_t   tcp_status:4;
+	uint64_t   tcp_end_reason:2;
+	uint64_t   rsvd0:54;
+	uint64_t   cqe_type:4;
+	uint64_t   rsvd2:24;
+	uint64_t   ptr_bytes:16;
+	uint64_t   align_ptr_bytes:4;
+	uint64_t   align_hdr_bytes:4;
+	uint64_t   rsvd1:4;
+	uint64_t   tcp_pkt_cnt:12;
+#endif
+};
+
+/* Always Big endian */
+struct rx_hdr_t {
+	uint64_t   opaque:32;
+	uint64_t   rss_flow:8;
+	uint64_t   skip_length:6;
+	uint64_t   disable_rss:1;
+	uint64_t   disable_tcp_reassembly:1;
+	uint64_t   nodrop:1;
+	uint64_t   dest_alg:2;
+	uint64_t   rsvd0:2;
+	uint64_t   dest_rq:11;
+};
+
+struct sq_crc_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    rsvd1:32;
+	uint64_t    crc_ival:32;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_alg:2;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_len:16;
+#else
+	uint64_t    crc_len:16;
+	uint64_t    hdr_start:16;
+	uint64_t    crc_insert_pos:16;
+	uint64_t    rsvd0:10;
+	uint64_t    crc_alg:2;
+	uint64_t    subdesc_type:4;
+	uint64_t    crc_ival:32;
+	uint64_t    rsvd1:32;
+#endif
+};
+
+struct sq_gather_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    ld_type:2;
+	uint64_t    rsvd0:42;
+	uint64_t    size:16;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    size:16;
+	uint64_t    rsvd0:42;
+	uint64_t    ld_type:2;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+/* SQ immediate subdescriptor */
+struct sq_imm_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    rsvd0:46;
+	uint64_t    len:14;
+
+	uint64_t    data:64; /* W1 */
+#else
+	uint64_t    len:14;
+	uint64_t    rsvd0:46;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    data:64; /* W1 */
+#endif
+};
+
+struct sq_mem_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4; /* W0 */
+	uint64_t    mem_alg:4;
+	uint64_t    mem_dsz:2;
+	uint64_t    wmem:1;
+	uint64_t    rsvd0:21;
+	uint64_t    offset:32;
+
+	uint64_t    rsvd1:15; /* W1 */
+	uint64_t    addr:49;
+#else
+	uint64_t    offset:32;
+	uint64_t    rsvd0:21;
+	uint64_t    wmem:1;
+	uint64_t    mem_dsz:2;
+	uint64_t    mem_alg:4;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    addr:49;
+	uint64_t    rsvd1:15; /* W1 */
+#endif
+};
+
+struct sq_hdr_subdesc {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t    subdesc_type:4;
+	uint64_t    tso:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    dont_send:1;
+	uint64_t    tstmp:1;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    csum_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    rsvd0:2;
+	uint64_t    l4_offset:8;
+	uint64_t    l3_offset:8;
+	uint64_t    rsvd1:4;
+	uint64_t    tot_len:20; /* W0 */
+
+	uint64_t    rsvd2:24;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    tso_start:8;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_max_paysize:14; /* W1 */
+#else
+	uint64_t    tot_len:20;
+	uint64_t    rsvd1:4;
+	uint64_t    l3_offset:8;
+	uint64_t    l4_offset:8;
+	uint64_t    rsvd0:2;
+	uint64_t    csum_inner_l3:1;
+	uint64_t    csum_inner_l4:2;
+	uint64_t    csum_l3:1;
+	uint64_t    csum_l4:2;
+	uint64_t    subdesc_cnt:8;
+	uint64_t    tstmp:1;
+	uint64_t    dont_send:1;
+	uint64_t    post_cqe:1; /* Post CQE on no error also */
+	uint64_t    tso:1;
+	uint64_t    subdesc_type:4; /* W0 */
+
+	uint64_t    tso_max_paysize:14;
+	uint64_t    rsvd3:2;
+	uint64_t    tso_start:8;
+	uint64_t    inner_l3_offset:8;
+	uint64_t    inner_l4_offset:8;
+	uint64_t    rsvd2:24; /* W1 */
+#endif
+};
+
+/* Each sq entry is 128 bits wide */
+union sq_entry_t {
+	uint64_t buff[2];
+	struct sq_hdr_subdesc hdr;
+	struct sq_imm_subdesc imm;
+	struct sq_gather_subdesc gather;
+	struct sq_crc_subdesc crc;
+	struct sq_mem_subdesc mem;
+};
+
+NICVF_STATIC_ASSERT(sizeof(union sq_entry_t) == 16);
+
+/* Queue config register formats */
+struct rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_2_63:62;
+	uint64_t ena:1;
+	uint64_t reserved_0:1;
+#else
+	uint64_t reserved_0:1;
+	uint64_t ena:1;
+	uint64_t reserved_2_63:62;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct cq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_43_63:21;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t caching:1;
+	uint64_t reserved_35_39:5;
+	uint64_t qsize:3;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_0_15:16;
+#else
+	uint64_t reserved_0_15:16;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:3;
+	uint64_t reserved_35_39:5;
+	uint64_t caching:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_43_63:21;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct sq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_20_63:44;
+	uint64_t ena:1;
+	uint64_t reserved_18_18:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_11_15:5;
+	uint64_t qsize:3;
+	uint64_t reserved_3_7:5;
+	uint64_t tstmp_bgx_intf:3;
+#else
+	uint64_t tstmp_bgx_intf:3;
+	uint64_t reserved_3_7:5;
+	uint64_t qsize:3;
+	uint64_t reserved_11_15:5;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t reserved_18_18:1;
+	uint64_t ena:1;
+	uint64_t reserved_20_63:44;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct rbdr_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_45_63:19;
+	uint64_t ena:1;
+	uint64_t reset:1;
+	uint64_t ldwb:1;
+	uint64_t reserved_36_41:6;
+	uint64_t qsize:4;
+	uint64_t reserved_25_31:7;
+	uint64_t avg_con:9;
+	uint64_t reserved_12_15:4;
+	uint64_t lines:12;
+#else
+	uint64_t lines:12;
+	uint64_t reserved_12_15:4;
+	uint64_t avg_con:9;
+	uint64_t reserved_25_31:7;
+	uint64_t qsize:4;
+	uint64_t reserved_36_41:6;
+	uint64_t ldwb:1;
+	uint64_t reset:1;
+	uint64_t ena:1;
+	uint64_t reserved_45_63:19;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_qs_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved_32_63:32;
+	uint64_t ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t sq_ins_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t lock_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t be:1;
+	uint64_t reserved_7_15:9;
+	uint64_t vnic:7;
+#else
+	uint64_t vnic:7;
+	uint64_t reserved_7_15:9;
+	uint64_t be:1;
+	uint64_t send_tstmp_ena:1;
+	uint64_t lock_viol_cqe_ena:1;
+	uint64_t lock_ena:1;
+	uint64_t sq_ins_pos:6;
+	uint64_t sq_ins_ena:1;
+	uint64_t reserved_27_30:4;
+	uint64_t ena:1;
+	uint64_t reserved_32_63:32;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t reserved1:1;
+	uint64_t reserved0:34;
+	uint64_t strip_pre_l2:1;
+	uint64_t caching:2;
+	uint64_t cq_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_strt_idx:1;
+#else
+	uint64_t rbdr_strt_idx:1;
+	uint64_t rbdr_strt_qs:7;
+	uint64_t rbdr_cont_idx:1;
+	uint64_t rbdr_cont_qs:7;
+	uint64_t cq_idx:3;
+	uint64_t cq_qs:7;
+	uint64_t caching:2;
+	uint64_t strip_pre_l2:1;
+	uint64_t reserved0:34;
+	uint64_t reserved1:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
+struct pf_rq_drop_cfg { union { struct {
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	uint64_t rbdr_red:1;
+	uint64_t cq_red:1;
+	uint64_t reserved3:14;
+	uint64_t rbdr_pass:8;
+	uint64_t rbdr_drop:8;
+	uint64_t reserved2:8;
+	uint64_t cq_pass:8;
+	uint64_t cq_drop:8;
+	uint64_t reserved1:8;
+#else
+	uint64_t reserved1:8;
+	uint64_t cq_drop:8;
+	uint64_t cq_pass:8;
+	uint64_t reserved2:8;
+	uint64_t rbdr_drop:8;
+	uint64_t rbdr_pass:8;
+	uint64_t reserved3:14;
+	uint64_t cq_red:1;
+	uint64_t rbdr_red:1;
+#endif
+	};
+	uint64_t value;
+}; };
+
 #endif /* _THUNDERX_NICVF_HW_DEFS_H */
-- 
2.5.5


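These bitfield unions let the fast path read a whole descriptor word once and
then pick fields symbolically. A minimal sketch of the usual first-level
decode follows; the helper names are illustrative, not part of the patch set,
and it assumes nicvf_hw_defs.h is included:

#include <stdint.h>

/* an RX CQE is clean when it is of plain RX type and the error opcode is
 * CQE_RX_ERROP_RE_NONE (0); word1 then holds the total packet length */
static inline int
nicvf_cqe_rx_is_clean(const struct cqe_rx_t *cqe)
{
	return cqe->word0.cqe_type == CQE_TYPE_RX &&
	       cqe->word0.err_opcode == CQE_RX_ERROP_RE_NONE;
}

static inline uint16_t
nicvf_cqe_rx_pkt_len(const struct cqe_rx_t *cqe)
{
	return (uint16_t)cqe->word1.pkt_len;
}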
* [PATCH v6 03/27] net/thunderx/base: implement DPDK based platform abstraction
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 01/27] net/thunderx/base: add HW constants Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 02/27] net/thunderx/base: add HW register definitions Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 04/27] net/thunderx/base: add mbox APIs for PF/VF communication Jerin Jacob
                               ` (24 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

implement a DPDK-based platform abstraction for the base code

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_plat.h | 129 +++++++++++++++++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_plat.h

diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
new file mode 100644
index 0000000..33fef08
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -0,0 +1,129 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_H
+#define _THUNDERX_NICVF_H
+
+/* Platform/OS/arch specific abstractions */
+
+/* log */
+#include <rte_log.h>
+#include "../nicvf_logs.h"
+
+#define nicvf_log_error(s, ...) PMD_DRV_LOG(ERR, s, ##__VA_ARGS__)
+
+#define nicvf_log_debug(s, ...) PMD_DRV_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_mbox_log(s, ...) PMD_MBOX_LOG(DEBUG, s, ##__VA_ARGS__)
+
+#define nicvf_log(s, ...) fprintf(stderr, s, ##__VA_ARGS__)
+
+/* delay */
+#include <rte_cycles.h>
+#define nicvf_delay_us(x) rte_delay_us(x)
+
+/* barrier */
+#include <rte_atomic.h>
+#define nicvf_smp_wmb() rte_smp_wmb()
+#define nicvf_smp_rmb() rte_smp_rmb()
+
+/* utils */
+#include <rte_common.h>
+#define nicvf_min(x, y) RTE_MIN(x, y)
+
+/* byte order */
+#include <rte_byteorder.h>
+#define nicvf_cpu_to_be_64(x) rte_cpu_to_be_64(x)
+#define nicvf_be_to_cpu_64(x) rte_be_to_cpu_64(x)
+
+/* Constants */
+#include <rte_ether.h>
+#define NICVF_MAC_ADDR_SIZE ETHER_ADDR_LEN
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
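+/* PRFM PSTL1KEEP: prefetch for store into L1, temporal ("keep") variant */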
+#define nicvf_prefetch_store_keep(_ptr) ({\
+	asm volatile("prfm pstl1keep, %a0\n" : : "p" (_ptr)); })
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	asm volatile(
+		    "str %x[val], [%x[addr]]"
+		    :
+		    : [val] "r" (val), [addr] "r" (addr));
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	uint64_t val;
+
+	asm volatile(
+		    "ldr %x[val], [%x[addr]]"
+		    : [val] "=r" (val)
+		    : [addr] "r" (addr));
+	return val;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr) ({		\
+			asm volatile(			\
+			"ldp %x[x1], %x[x0], [%x[p1]]"	\
+			: [x1]"=r"(reg1), [x0]"=r"(reg2)\
+			: [p1]"r"(addr)			\
+			); })
+
+#else /* non-optimized functions for building on non-arm64 architectures */
+
+#define nicvf_prefetch_store_keep(_ptr) do {} while (0)
+
+static inline void __attribute__((always_inline))
+nicvf_addr_write(uintptr_t addr, uint64_t val)
+{
+	*(volatile uint64_t *)addr = val;
+}
+
+static inline uint64_t __attribute__((always_inline))
+nicvf_addr_read(uintptr_t addr)
+{
+	return	*(volatile uint64_t *)addr;
+}
+
+#define NICVF_LOAD_PAIR(reg1, reg2, addr)		\
+do {							\
+	reg1 = nicvf_addr_read((uintptr_t)addr);	\
+	reg2 = nicvf_addr_read((uintptr_t)addr + 8);	\
+} while (0)
+
+#endif
+
+#endif /* _THUNDERX_NICVF_H */
-- 
2.5.5


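The abstraction above is what lets the base code stay OS-neutral while the
arm64 build collapses MMIO accesses into single str/ldr/ldp instructions. A
minimal usage sketch follows; the function names, the `doorbell` and
`stats_base` addresses, and the pairing of those two counters are assumptions
for illustration only:

#include <stdint.h>

static inline void
sketch_ring_doorbell(uintptr_t doorbell, uint64_t nb_desc)
{
	nicvf_smp_wmb();                 /* publish descriptors before the kick */
	nicvf_addr_write(doorbell, nb_desc);
}

static inline uint64_t
sketch_read_stat_pair(uintptr_t stats_base)
{
	uint64_t octs, pkts;

	/* one LDP on arm64; two plain volatile loads elsewhere */
	NICVF_LOAD_PAIR(octs, pkts, stats_base);
	return octs + pkts;
}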
* [PATCH v6 04/27] net/thunderx/base: add mbox APIs for PF/VF communication
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (2 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 03/27] net/thunderx/base: implement DPDK based platform abstraction Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-21 13:41               ` Ferruh Yigit
  2016-06-17 13:29             ` [PATCH v6 05/27] net/thunderx/base: add hardware API Jerin Jacob
                               ` (23 subsequent siblings)
  27 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

The DPDK nicvf driver doesn't have access to the NIC's PF address space.
Introduce a mailbox mechanism to communicate with the PF driver through
a shared 128-bit register interface.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_mbox.c | 418 +++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_mbox.h | 232 ++++++++++++++++++
 drivers/net/thunderx/base/nicvf_plat.h |   2 +
 3 files changed, 652 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.c
 create mode 100644 drivers/net/thunderx/base/nicvf_mbox.h

diff --git a/drivers/net/thunderx/base/nicvf_mbox.c b/drivers/net/thunderx/base/nicvf_mbox.c
new file mode 100644
index 0000000..3067331
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.c
@@ -0,0 +1,418 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "nicvf_plat.h"
+
+#define NICVF_MBOX_PF_RESPONSE_DELAY_US   (1000)
+
+static const char *mbox_message[NIC_MBOX_MSG_MAX] =  {
+	[NIC_MBOX_MSG_INVALID]            = "NIC_MBOX_MSG_INVALID",
+	[NIC_MBOX_MSG_READY]              = "NIC_MBOX_MSG_READY",
+	[NIC_MBOX_MSG_ACK]                = "NIC_MBOX_MSG_ACK",
+	[NIC_MBOX_MSG_NACK]               = "NIC_MBOX_MSG_NACK",
+	[NIC_MBOX_MSG_QS_CFG]             = "NIC_MBOX_MSG_QS_CFG",
+	[NIC_MBOX_MSG_RQ_CFG]             = "NIC_MBOX_MSG_RQ_CFG",
+	[NIC_MBOX_MSG_SQ_CFG]             = "NIC_MBOX_MSG_SQ_CFG",
+	[NIC_MBOX_MSG_RQ_DROP_CFG]        = "NIC_MBOX_MSG_RQ_DROP_CFG",
+	[NIC_MBOX_MSG_SET_MAC]            = "NIC_MBOX_MSG_SET_MAC",
+	[NIC_MBOX_MSG_SET_MAX_FRS]        = "NIC_MBOX_MSG_SET_MAX_FRS",
+	[NIC_MBOX_MSG_CPI_CFG]            = "NIC_MBOX_MSG_CPI_CFG",
+	[NIC_MBOX_MSG_RSS_SIZE]           = "NIC_MBOX_MSG_RSS_SIZE",
+	[NIC_MBOX_MSG_RSS_CFG]            = "NIC_MBOX_MSG_RSS_CFG",
+	[NIC_MBOX_MSG_RSS_CFG_CONT]       = "NIC_MBOX_MSG_RSS_CFG_CONT",
+	[NIC_MBOX_MSG_RQ_BP_CFG]          = "NIC_MBOX_MSG_RQ_BP_CFG",
+	[NIC_MBOX_MSG_RQ_SW_SYNC]         = "NIC_MBOX_MSG_RQ_SW_SYNC",
+	[NIC_MBOX_MSG_BGX_LINK_CHANGE]    = "NIC_MBOX_MSG_BGX_LINK_CHANGE",
+	[NIC_MBOX_MSG_ALLOC_SQS]          = "NIC_MBOX_MSG_ALLOC_SQS",
+	[NIC_MBOX_MSG_LOOPBACK]           = "NIC_MBOX_MSG_LOOPBACK",
+	[NIC_MBOX_MSG_RESET_STAT_COUNTER] = "NIC_MBOX_MSG_RESET_STAT_COUNTER",
+	[NIC_MBOX_MSG_CFG_DONE]           = "NIC_MBOX_MSG_CFG_DONE",
+	[NIC_MBOX_MSG_SHUTDOWN]           = "NIC_MBOX_MSG_SHUTDOWN",
+};
+
+static inline const char *
+nicvf_mbox_msg_str(int msg)
+{
+	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
+	/* undefined messages */
+	if (mbox_message[msg] == NULL)
+		msg = 0;
+	return mbox_message[msg];
+}
+
+static inline void
+nicvf_mbox_send_msg_to_pf_raw(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	uint64_t *mbx_data;
+	uint64_t mbx_addr;
+	int i;
+
+	mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	mbx_data = (uint64_t *)mbx;
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		nicvf_reg_write(nic, mbx_addr, *mbx_data);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+	nicvf_mbox_log("msg sent %s (VF%d)",
+			nicvf_mbox_msg_str(mbx->msg.msg), nic->vf_id);
+}
+
+static inline void
+nicvf_mbox_send_async_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+	/* Messages without ack are racy! */
+	nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+}
+
+static inline int
+nicvf_mbox_send_msg_to_pf(struct nicvf *nic, struct nic_mbx *mbx)
+{
+	long timeout;
+	long sleep = 10;
+	int i, retry = 5;
+
+	for (i = 0; i < retry; i++) {
+		nic->pf_acked = false;
+		nic->pf_nacked = false;
+		nicvf_smp_wmb();
+
+		nicvf_mbox_send_msg_to_pf_raw(nic, mbx);
+		/* Give some time to get PF response */
+		nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+		while (timeout > 0) {
+			/* Periodic poll happens from nicvf_interrupt() */
+			nicvf_smp_rmb();
+
+			if (nic->pf_nacked)
+				return -EINVAL;
+			if (nic->pf_acked)
+				return 0;
+
+			nicvf_delay_us(NICVF_MBOX_PF_RESPONSE_DELAY_US);
+			timeout -= sleep;
+		}
+		nicvf_log_error("PF didn't ack to msg 0x%02x %s VF%d (%d/%d)",
+				mbx->msg.msg, nicvf_mbox_msg_str(mbx->msg.msg),
+				nic->vf_id, i, retry);
+	}
+	return -EBUSY;
+}
+
+
+int
+nicvf_handle_mbx_intr(struct nicvf *nic)
+{
+	struct nic_mbx mbx;
+	uint64_t *mbx_data = (uint64_t *)&mbx;
+	uint64_t mbx_addr = NIC_VF_PF_MAILBOX_0_1;
+	size_t i;
+
+	for (i = 0; i < NIC_PF_VF_MAILBOX_SIZE; i++) {
+		*mbx_data = nicvf_reg_read(nic, mbx_addr);
+		mbx_data++;
+		mbx_addr += sizeof(uint64_t);
+	}
+
+	/* Overwrite the message so we won't receive it again */
+	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1, 0x0);
+
+	nicvf_mbox_log("msg received id=0x%hhx %s (VF%d)", mbx.msg.msg,
+			nicvf_mbox_msg_str(mbx.msg.msg), nic->vf_id);
+
+	switch (mbx.msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
+		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
+		nic->node = mbx.nic_cfg.node_id;
+		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
+		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
+		ether_addr_copy((struct ether_addr *)mbx.nic_cfg.mac_addr,
+				(struct ether_addr *)nic->mac_addr);
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_ACK:
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_NACK:
+		nic->pf_nacked = true;
+		break;
+	case NIC_MBOX_MSG_RSS_SIZE:
+		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
+		nic->pf_acked = true;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
+		nic->pf_acked = true;
+		break;
+	default:
+		nicvf_log_error("Invalid message from PF, msg_id=0x%hhx %s",
+				mbx.msg.msg, nicvf_mbox_msg_str(mbx.msg.msg));
+		break;
+	}
+	nicvf_smp_wmb();
+
+	return mbx.msg.msg;
+}
+
+/*
+ * Check whether the VF is able to communicate with the PF and
+ * get the VNIC number this VF is associated with.
+ */
+int
+nicvf_mbox_check_pf_ready(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = {.msg = NIC_MBOX_MSG_READY} };
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			const uint8_t mac[NICVF_MAC_ADDR_SIZE])
+{
+	struct nic_mbx mbx = { .msg = {0} };
+	int i;
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAC;
+	mbx.mac.vf_id = nic->vf_id;
+	for (i = 0; i < NICVF_MAC_ADDR_SIZE; i++)
+		mbx.mac.mac_addr[i] = mac[i];
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CPI_CFG;
+	mbx.cpi_cfg.vf_id = nic->vf_id;
+	mbx.cpi_cfg.cpi_alg = nic->cpi_alg;
+	mbx.cpi_cfg.rq_cnt = qcnt;
+
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_get_rss_size(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RSS_SIZE;
+	mbx.rss_size.vf_id = nic->vf_id;
+
+	/* Result will be stored in nic->rss_info.rss_size */
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_config_rss(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+	size_t tot_len = rss->rss_size;
+	size_t cur_len;
+	size_t cur_idx = 0;
+	size_t i;
+
+	mbx.rss_cfg.vf_id = nic->vf_id;
+	mbx.rss_cfg.hash_bits = rss->hash_bits;
+	mbx.rss_cfg.tbl_len = 0;
+	mbx.rss_cfg.tbl_offset = 0;
+
+	while (cur_idx < tot_len) {
+		cur_len = nicvf_min(tot_len - cur_idx,
+				(size_t)RSS_IND_TBL_LEN_PER_MBX_MSG);
+		mbx.msg.msg = (cur_idx > 0) ?
+			NIC_MBOX_MSG_RSS_CFG_CONT : NIC_MBOX_MSG_RSS_CFG;
+		mbx.rss_cfg.tbl_offset = cur_idx;
+		mbx.rss_cfg.tbl_len = cur_len;
+		for (i = 0; i < cur_len; i++)
+			mbx.rss_cfg.ind_tbl[i] = rss->ind_tbl[cur_idx++];
+
+		if (nicvf_mbox_send_msg_to_pf(nic, &mbx))
+			return NICVF_ERR_RSS_TBL_UPDATE;
+	}
+
+	return 0;
+}
+
+int
+nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+		     struct pf_rq_cfg *pf_rq_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = pf_rq_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SQ_CFG;
+	mbx.sq.qs_num = nic->vf_id;
+	mbx.sq.sq_num = qidx;
+	mbx.sq.sqs_mode = nic->sqs_mode;
+	mbx.sq.cfg = (nic->vf_id << 3) | qidx;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+	qs_cfg->be = 1;
+#endif
+	/* Send a mailbox msg to PF to config Qset */
+	mbx.msg.msg = NIC_MBOX_MSG_QS_CFG;
+	mbx.qs.num = nic->vf_id;
+	mbx.qs.cfg = qs_cfg->value;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+	struct pf_rq_drop_cfg *drop_cfg;
+
+	/* Enable CQ drop to reserve sufficient CQEs for all tx packets */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_DROP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	drop_cfg = (struct pf_rq_drop_cfg *)&mbx.rq.cfg;
+	drop_cfg->value = 0;
+	if (enable) {
+		drop_cfg->cq_red = 1;
+		drop_cfg->cq_drop = 2;
+	}
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SET_MAX_FRS;
+	mbx.frs.max_frs = mtu;
+	mbx.frs.vf_id = nic->vf_id;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_sync(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	/* Make sure all packets in the pipeline are written back into mem */
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_SW_SYNC;
+	mbx.rq.cfg = 0;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_RQ_BP_CFG;
+	mbx.rq.qs_num = nic->vf_id;
+	mbx.rq.rq_num = qidx;
+	mbx.rq.cfg = 0;
+	if (enable)
+		mbx.rq.cfg = (1ULL << 63) | (1ULL << 62) | (nic->vf_id << 0);
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_loopback_config(struct nicvf *nic, bool enable)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.lbk.msg = NIC_MBOX_MSG_LOOPBACK;
+	mbx.lbk.vf_id = nic->vf_id;
+	mbx.lbk.enable = enable;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+int
+nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+			       uint8_t tx_stat_mask, uint16_t rq_stat_mask,
+			       uint16_t sq_stat_mask)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.reset_stat.msg = NIC_MBOX_MSG_RESET_STAT_COUNTER;
+	mbx.reset_stat.rx_stat_mask = rx_stat_mask;
+	mbx.reset_stat.tx_stat_mask = tx_stat_mask;
+	mbx.reset_stat.rq_stat_mask = rq_stat_mask;
+	mbx.reset_stat.sq_stat_mask = sq_stat_mask;
+	return nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_shutdown(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+	nicvf_mbox_send_msg_to_pf(nic, &mbx);
+}
+
+void
+nicvf_mbox_cfg_done(struct nicvf *nic)
+{
+	struct nic_mbx mbx = { .msg = { 0 } };
+
+	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
+	nicvf_mbox_send_async_msg_to_pf(nic, &mbx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_mbox.h b/drivers/net/thunderx/base/nicvf_mbox.h
new file mode 100644
index 0000000..7c0c6a9
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_mbox.h
@@ -0,0 +1,232 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_MBOX__
+#define __THUNDERX_NICVF_MBOX__
+
+#include <stdint.h>
+
+#include "nicvf_plat.h"
+
+/* PF <--> VF Mailbox communication
+ * Two 64-bit registers are shared between the PF and each VF.
+ * Writing into the second register signals the end of a message.
+ */
+
+/* PF <--> VF mailbox communication */
+#define	NIC_PF_VF_MAILBOX_SIZE		2
+#define	NIC_MBOX_MSG_TIMEOUT		2000	/* ms */
+
+/* Mailbox message types */
+#define	NIC_MBOX_MSG_INVALID		0x00	/* Invalid message */
+#define	NIC_MBOX_MSG_READY		0x01	/* Is PF ready to rcv msgs */
+#define	NIC_MBOX_MSG_ACK		0x02	/* ACK the message received */
+#define	NIC_MBOX_MSG_NACK		0x03	/* NACK the message received */
+#define	NIC_MBOX_MSG_QS_CFG		0x04	/* Configure Qset */
+#define	NIC_MBOX_MSG_RQ_CFG		0x05	/* Configure receive queue */
+#define	NIC_MBOX_MSG_SQ_CFG		0x06	/* Configure Send queue */
+#define	NIC_MBOX_MSG_RQ_DROP_CFG	0x07	/* Configure receive queue drop */
+#define	NIC_MBOX_MSG_SET_MAC		0x08	/* Add MAC ID to DMAC filter */
+#define	NIC_MBOX_MSG_SET_MAX_FRS	0x09	/* Set max frame size */
+#define	NIC_MBOX_MSG_CPI_CFG		0x0A	/* Config CPI, RSSI */
+#define	NIC_MBOX_MSG_RSS_SIZE		0x0B	/* Get RSS indir_tbl size */
+#define	NIC_MBOX_MSG_RSS_CFG		0x0C	/* Config RSS table */
+#define	NIC_MBOX_MSG_RSS_CFG_CONT	0x0D	/* RSS config continuation */
+#define	NIC_MBOX_MSG_RQ_BP_CFG		0x0E	/* RQ backpressure config */
+#define	NIC_MBOX_MSG_RQ_SW_SYNC		0x0F	/* Flush inflight pkts to RQ */
+#define	NIC_MBOX_MSG_BGX_LINK_CHANGE	0x11	/* BGX:LMAC link status */
+#define	NIC_MBOX_MSG_ALLOC_SQS		0x12	/* Allocate secondary Qset */
+#define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
+#define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+#define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
+#define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
+#define	NIC_MBOX_MSG_MAX		0x100	/* Maximum number of messages */
+
+/* Get vNIC VF configuration */
+struct nic_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    node_id;
+	bool	   tns_mode:1;
+	bool	   sqs_mode:1;
+	bool	   loopback_supported:1;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Qset configuration */
+struct qs_cfg_msg {
+	uint8_t    msg;
+	uint8_t    num;
+	uint8_t    sqs_count;
+	uint64_t   cfg;
+};
+
+/* Receive queue configuration */
+struct rq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    rq_num;
+	uint64_t   cfg;
+};
+
+/* Send queue configuration */
+struct sq_cfg_msg {
+	uint8_t    msg;
+	uint8_t    qs_num;
+	uint8_t    sq_num;
+	bool       sqs_mode;
+	uint64_t   cfg;
+};
+
+/* Set VF's MAC address */
+struct set_mac_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    mac_addr[NICVF_MAC_ADDR_SIZE];
+};
+
+/* Set Maximum frame size */
+struct set_frs_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   max_frs;
+};
+
+/* Set CPI algorithm type */
+struct cpi_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    rq_cnt;
+	uint8_t    cpi_alg;
+};
+
+/* Get RSS table size */
+struct rss_sz_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint16_t   ind_tbl_size;
+};
+
+/* Set RSS configuration */
+struct rss_cfg_msg {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	uint8_t    hash_bits;
+	uint8_t    tbl_len;
+	uint8_t    tbl_offset;
+#define RSS_IND_TBL_LEN_PER_MBX_MSG	8
+	uint8_t    ind_tbl[RSS_IND_TBL_LEN_PER_MBX_MSG];
+};
+
+/* Physical interface link status */
+struct bgx_link_status {
+	uint8_t    msg;
+	uint8_t    link_up;
+	uint8_t    duplex;
+	uint32_t   speed;
+};
+
+/* Set interface in loopback mode */
+struct set_loopback {
+	uint8_t    msg;
+	uint8_t    vf_id;
+	bool	   enable;
+};
+
+/* Reset statistics counters */
+struct reset_stat_cfg {
+	uint8_t    msg;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_RX_STAT(0..13) */
+	uint16_t   rx_stat_mask;
+	/* Bitmap to select NIC_PF_VNIC(vf_id)_TX_STAT(0..4) */
+	uint8_t    tx_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_RQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_RQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_RQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_RQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_RQ0_STAT(0..1)
+	 */
+	uint16_t   rq_stat_mask;
+	/* Bitmap to select NIC_PF_QS(0..127)_SQ(0..7)_STAT(0..1)
+	 * bit14, bit15 NIC_PF_QS(vf_id)_SQ7_STAT(0..1)
+	 * bit12, bit13 NIC_PF_QS(vf_id)_SQ6_STAT(0..1)
+	 * ..
+	 * bit2, bit3 NIC_PF_QS(vf_id)_SQ1_STAT(0..1)
+	 * bit0, bit1 NIC_PF_QS(vf_id)_SQ0_STAT(0..1)
+	 */
+	uint16_t   sq_stat_mask;
+};
+
+struct nic_mbx {
+/* 128 bit shared memory between PF and each VF */
+union {
+	struct { uint8_t msg; }	msg;
+	struct nic_cfg_msg	nic_cfg;
+	struct qs_cfg_msg	qs;
+	struct rq_cfg_msg	rq;
+	struct sq_cfg_msg	sq;
+	struct set_mac_msg	mac;
+	struct set_frs_msg	frs;
+	struct cpi_cfg_msg	cpi_cfg;
+	struct rss_sz_msg	rss_size;
+	struct rss_cfg_msg	rss_cfg;
+	struct bgx_link_status  link_status;
+	struct set_loopback	lbk;
+	struct reset_stat_cfg	reset_stat;
+};
+};
+
+NICVF_STATIC_ASSERT(sizeof(struct nic_mbx) <= 16);
+
+int nicvf_handle_mbx_intr(struct nicvf *nic);
+int nicvf_mbox_check_pf_ready(struct nicvf *nic);
+int nicvf_mbox_qset_config(struct nicvf *nic, struct pf_qs_cfg *qs_cfg);
+int nicvf_mbox_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct pf_rq_cfg *pf_rq_cfg);
+int nicvf_mbox_sq_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_mbox_rq_drop_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_rq_bp_config(struct nicvf *nic, uint16_t qidx, bool enable);
+int nicvf_mbox_set_mac_addr(struct nicvf *nic,
+			    const uint8_t mac[NICVF_MAC_ADDR_SIZE]);
+int nicvf_mbox_config_cpi(struct nicvf *nic, uint32_t qcnt);
+int nicvf_mbox_get_rss_size(struct nicvf *nic);
+int nicvf_mbox_config_rss(struct nicvf *nic);
+int nicvf_mbox_update_hw_max_frs(struct nicvf *nic, uint16_t mtu);
+int nicvf_mbox_rq_sync(struct nicvf *nic);
+int nicvf_mbox_loopback_config(struct nicvf *nic, bool enable);
+int nicvf_mbox_reset_stat_counters(struct nicvf *nic, uint16_t rx_stat_mask,
+	uint8_t tx_stat_mask, uint16_t rq_stat_mask, uint16_t sq_stat_mask);
+void nicvf_mbox_shutdown(struct nicvf *nic);
+void nicvf_mbox_cfg_done(struct nicvf *nic);
+
+#endif /* __THUNDERX_NICVF_MBOX__ */
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
index 33fef08..fbf28ce 100644
--- a/drivers/net/thunderx/base/nicvf_plat.h
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -126,4 +126,6 @@ do {							\
 
 #endif
 
+#include "nicvf_mbox.h"
+
 #endif /* _THUNDERX_NICVF_H */
-- 
2.5.5


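Read together, these entry points imply a VF bring-up sequence of roughly the
following shape. The function name, variables and ordering are an
illustration drawn from the API above, not a contract stated by the patch:

/* sketch: VF-side mailbox handshake during device start */
static int
sketch_vf_mbox_init(struct nicvf *nic, uint16_t mtu, uint32_t nb_rxq)
{
	if (nicvf_mbox_check_pf_ready(nic))         /* READY, waits for ACK/NACK */
		return -1;
	if (nicvf_mbox_update_hw_max_frs(nic, mtu)) /* SET_MAX_FRS */
		return -1;
	if (nicvf_mbox_config_cpi(nic, nb_rxq))     /* CPI_CFG */
		return -1;
	nicvf_mbox_cfg_done(nic);                   /* CFG_DONE is async: no ACK */
	return 0;
}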
* [PATCH v6 05/27] net/thunderx/base: add hardware API
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (3 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 04/27] net/thunderx/base: add mbox APIs for PF/VF communication Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 06/27] net/thunderx/base: add RSS and reta configuration HW APIs Jerin Jacob
                               ` (22 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

add nicvf hardware-specific APIs for initialization and configuration.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c   | 731 +++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h   | 176 ++++++++
 drivers/net/thunderx/base/nicvf_plat.h |   1 +
 3 files changed, 908 insertions(+)
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.c
 create mode 100644 drivers/net/thunderx/base/nicvf_hw.h

diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
new file mode 100644
index 0000000..ec24f9c
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -0,0 +1,731 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <math.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+#include "nicvf_plat.h"
+
+struct nicvf_reg_info {
+	uint32_t offset;
+	const char *name;
+};
+
+#define NICVF_REG_POLL_ITER_NR   (10)
+#define NICVF_REG_POLL_DELAY_US  (2000)
+#define NICVF_REG_INFO(reg) {reg, #reg}
+
+static const struct nicvf_reg_info nicvf_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_VF_CFG),
+	NICVF_REG_INFO(NIC_VF_PF_MAILBOX_0_1),
+	NICVF_REG_INFO(NIC_VF_INT),
+	NICVF_REG_INFO(NIC_VF_INT_W1S),
+	NICVF_REG_INFO(NIC_VF_ENA_W1C),
+	NICVF_REG_INFO(NIC_VF_ENA_W1S),
+	NICVF_REG_INFO(NIC_VNIC_RSS_CFG),
+	NICVF_REG_INFO(NIC_VNIC_RQ_GEN_CFG),
+};
+
+static const struct nicvf_reg_info nicvf_multi_reg_tbl[] = {
+	{NIC_VNIC_RSS_KEY_0_4 + 0,  "NIC_VNIC_RSS_KEY_0"},
+	{NIC_VNIC_RSS_KEY_0_4 + 8,  "NIC_VNIC_RSS_KEY_1"},
+	{NIC_VNIC_RSS_KEY_0_4 + 16, "NIC_VNIC_RSS_KEY_2"},
+	{NIC_VNIC_RSS_KEY_0_4 + 24, "NIC_VNIC_RSS_KEY_3"},
+	{NIC_VNIC_RSS_KEY_0_4 + 32, "NIC_VNIC_RSS_KEY_4"},
+	{NIC_VNIC_TX_STAT_0_4 + 0,  "NIC_VNIC_STAT_TX_OCTS"},
+	{NIC_VNIC_TX_STAT_0_4 + 8,  "NIC_VNIC_STAT_TX_UCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 16,  "NIC_VNIC_STAT_TX_BCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 24,  "NIC_VNIC_STAT_TX_MCAST"},
+	{NIC_VNIC_TX_STAT_0_4 + 32,  "NIC_VNIC_STAT_TX_DROP"},
+	{NIC_VNIC_RX_STAT_0_13 + 0,  "NIC_VNIC_STAT_RX_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 8,  "NIC_VNIC_STAT_RX_UCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 16, "NIC_VNIC_STAT_RX_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 24, "NIC_VNIC_STAT_RX_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 32, "NIC_VNIC_STAT_RX_RED"},
+	{NIC_VNIC_RX_STAT_0_13 + 40, "NIC_VNIC_STAT_RX_RED_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 48, "NIC_VNIC_STAT_RX_ORUN"},
+	{NIC_VNIC_RX_STAT_0_13 + 56, "NIC_VNIC_STAT_RX_ORUN_OCTS"},
+	{NIC_VNIC_RX_STAT_0_13 + 64, "NIC_VNIC_STAT_RX_FCS"},
+	{NIC_VNIC_RX_STAT_0_13 + 72, "NIC_VNIC_STAT_RX_L2ERR"},
+	{NIC_VNIC_RX_STAT_0_13 + 80, "NIC_VNIC_STAT_RX_DRP_BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 88, "NIC_VNIC_STAT_RX_DRP_MCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 96, "NIC_VNIC_STAT_RX_DRP_L3BCAST"},
+	{NIC_VNIC_RX_STAT_0_13 + 104, "NIC_VNIC_STAT_RX_DRP_L3MCAST"},
+};
+
+static const struct nicvf_reg_info nicvf_qset_cq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_CFG2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_STATUS2),
+	NICVF_REG_INFO(NIC_QSET_CQ_0_7_DEBUG),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_sq_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_CFG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_THRESH),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_BASE),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_HEAD),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_TAIL),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DOOR),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_DEBUG),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_SQ_0_7_STATUS1),
+};
+
+static const struct nicvf_reg_info nicvf_qset_rbdr_reg_tbl[] = {
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_CFG),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_THRESH),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_BASE),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_HEAD),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_TAIL),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_DOOR),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS0),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_STATUS1),
+	NICVF_REG_INFO(NIC_QSET_RBDR_0_1_PRFCH_STATUS),
+};
+
+int
+nicvf_base_init(struct nicvf *nic)
+{
+	nic->hwcap = 0;
+	if (nic->subsystem_device_id == 0)
+		return NICVF_ERR_BASE_INIT;
+
+	if (nicvf_hw_version(nic) == NICVF_PASS2)
+		nic->hwcap |= NICVF_CAP_TUNNEL_PARSING;
+
+	return NICVF_OK;
+}
+
+/* dump on stdout if data is NULL */
+int
+nicvf_reg_dump(struct nicvf *nic,  uint64_t *data)
+{
+	uint32_t i, q;
+	bool dump_stdout;
+
+	dump_stdout = data ? 0 : 1;
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_reg_tbl[i].name,
+				nicvf_reg_read(nic, nicvf_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic, nicvf_reg_tbl[i].offset);
+
+	for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl); i++)
+		if (dump_stdout)
+			nicvf_log("%24s  = 0x%" PRIx64 "\n",
+				nicvf_multi_reg_tbl[i].name,
+				nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset));
+		else
+			*data++ = nicvf_reg_read(nic,
+					nicvf_multi_reg_tbl[i].offset);
+
+	for (q = 0; q < MAX_CMP_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_cq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_cq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_SND_QUEUES_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_sq_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_sq_reg_tbl[i].offset, q);
+
+	for (q = 0; q < MAX_RCV_BUF_DESC_RINGS_PER_QS; q++)
+		for (i = 0; i < NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl); i++)
+			if (dump_stdout)
+				nicvf_log("%30s(%d)  = 0x%" PRIx64 "\n",
+					nicvf_qset_rbdr_reg_tbl[i].name, q,
+					nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q));
+			else
+				*data++ = nicvf_queue_reg_read(nic,
+					nicvf_qset_rbdr_reg_tbl[i].offset, q);
+	return 0;
+}
+
+int
+nicvf_reg_get_count(void)
+{
+	int nr_regs;
+
+	nr_regs = NICVF_ARRAY_SIZE(nicvf_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_multi_reg_tbl);
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_cq_reg_tbl) *
+			MAX_CMP_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rq_reg_tbl) *
+			MAX_RCV_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_sq_reg_tbl) *
+			MAX_SND_QUEUES_PER_QS;
+	nr_regs += NICVF_ARRAY_SIZE(nicvf_qset_rbdr_reg_tbl) *
+			MAX_RCV_BUF_DESC_RINGS_PER_QS;
+
+	return nr_regs;
+}
+
+static int
+nicvf_qset_config_internal(struct nicvf *nic, bool enable)
+{
+	int ret;
+	struct pf_qs_cfg pf_qs_cfg = {.value = 0};
+
+	pf_qs_cfg.ena = enable ? 1 : 0;
+	pf_qs_cfg.vnic = nic->vf_id;
+	ret = nicvf_mbox_qset_config(nic, &pf_qs_cfg);
+	return ret ? NICVF_ERR_SET_QS : 0;
+}
+
+/* Requests PF to assign and enable Qset */
+int
+nicvf_qset_config(struct nicvf *nic)
+{
+	/* Enable Qset */
+	return nicvf_qset_config_internal(nic, true);
+}
+
+int
+nicvf_qset_reclaim(struct nicvf *nic)
+{
+	/* Disable Qset */
+	return nicvf_qset_config_internal(nic, false);
+}
+
+static int
+cmpfunc(const void *a, const void *b)
+{
+	const uint32_t x = *(const uint32_t *)a;
+	const uint32_t y = *(const uint32_t *)b;
+
+	/* Avoid unsigned-subtraction wraparound for large values */
+	return (x > y) - (x < y);
+}
+
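+/* Round up val to the nearest entry in list (sorted in place); returns 0
+ * when val is larger than every entry.
+ */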
+static uint32_t
+nicvf_roundup_list(uint32_t val, uint32_t list[], uint32_t entries)
+{
+	uint32_t i;
+
+	qsort(list, entries, sizeof(uint32_t), cmpfunc);
+	for (i = 0; i < entries; i++)
+		if (val <= list[i])
+			break;
+	/* Not in the list */
+	if (i >= entries)
+		return 0;
+	else
+		return list[i];
+}
+
+static void
+nicvf_handle_qset_err_intr(struct nicvf *nic)
+{
+	uint16_t qidx;
+	uint64_t status;
+
+	nicvf_log("%s (VF%d)\n", __func__, nic->vf_id);
+	nicvf_reg_dump(nic, NULL);
+
+	for (qidx = 0; qidx < MAX_CMP_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_CQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_CQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_CQ_WR_FULL)
+			nicvf_log("[%d]NICVF_CQ_WR_FULL\n", qidx);
+		if (status & NICVF_CQ_WR_DISABLE)
+			nicvf_log("[%d]NICVF_CQ_WR_DISABLE\n", qidx);
+		if (status & NICVF_CQ_WR_FAULT)
+			nicvf_log("[%d]NICVF_CQ_WR_FAULT\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_SND_QUEUES_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(
+				nic, NIC_QSET_SQ_0_7_STATUS, qidx);
+		if (!(status & NICVF_SQ_ERR_MASK))
+			continue;
+
+		if (status & NICVF_SQ_ERR_STOPPED)
+			nicvf_log("[%d]NICVF_SQ_ERR_STOPPED\n", qidx);
+		if (status & NICVF_SQ_ERR_SEND)
+			nicvf_log("[%d]NICVF_SQ_ERR_SEND\n", qidx);
+		if (status & NICVF_SQ_ERR_DPE)
+			nicvf_log("[%d]NICVF_SQ_ERR_DPE\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_STATUS, qidx, 0);
+	}
+
+	for (qidx = 0; qidx < MAX_RCV_BUF_DESC_RINGS_PER_QS; qidx++) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_STATUS0, qidx);
+		status &= NICVF_RBDR_FIFO_STATE_MASK;
+		status >>= NICVF_RBDR_FIFO_STATE_SHIFT;
+
+		if (status == RBDR_FIFO_STATE_FAIL)
+			nicvf_log("[%d]RBDR_FIFO_STATE_FAIL\n", qidx);
+		nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx, 0);
+	}
+
+	nicvf_disable_all_interrupts(nic);
+	abort();
+}
+
+/*
+ * Handle the "mbox" and "queue-set error" interrupts that the poll mode
+ * driver is interested in. This function is not re-entrant;
+ * the caller must provide proper serialization.
+ */
+int
+nicvf_reg_poll_interrupts(struct nicvf *nic)
+{
+	int msg = 0;
+	uint64_t intr;
+
+	intr = nicvf_reg_read(nic, NIC_VF_INT);
+	if (intr & NICVF_INTR_MBOX_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_MBOX_MASK);
+		msg = nicvf_handle_mbx_intr(nic);
+	}
+	if (intr & NICVF_INTR_QS_ERR_MASK) {
+		nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_QS_ERR_MASK);
+		nicvf_handle_qset_err_intr(nic);
+	}
+	return msg;
+}
+
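+/* Poll the field [bit_pos, bit_pos + bits) of a queue register until it
+ * reads back val; gives up after NICVF_REG_POLL_ITER_NR iterations.
+ */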
+static int
+nicvf_qset_poll_reg(struct nicvf *nic, uint16_t qidx, uint32_t offset,
+		    uint32_t bit_pos, uint32_t bits, uint64_t val)
+{
+	uint64_t bit_mask;
+	uint64_t reg_val;
+	int timeout = NICVF_REG_POLL_ITER_NR;
+
+	bit_mask = (1ULL << bits) - 1;
+	bit_mask = (bit_mask << bit_pos);
+
+	while (timeout) {
+		reg_val = nicvf_queue_reg_read(nic, offset, qidx);
+		if (((reg_val & bit_mask) >> bit_pos) == val)
+			return NICVF_OK;
+		nicvf_delay_us(NICVF_REG_POLL_DELAY_US);
+		timeout--;
+	}
+	return NICVF_ERR_REG_POLL;
+}
+
+int
+nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t status;
+	int timeout = NICVF_REG_POLL_ITER_NR;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	/* Save head and tail pointers for freeing up buffers */
+	if (rbdr) {
+		rbdr->head = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_HEAD, qidx) >> 3;
+		rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL,	qidx) >> 3;
+		rbdr->next_tail = rbdr->tail;
+	}
+
+	/* Reset RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+				NICVF_RBDR_RESET);
+
+	/* Disable RBDR */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_RBDR_0_1_STATUS0,
+				62, 2, 0x00))
+		return NICVF_ERR_RBDR_DISABLE;
+
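+	/* Wait until the low and high 32-bit halves of PRFCH_STATUS match,
+	 * i.e. no descriptor prefetch remains outstanding.
+	 */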
+	while (1) {
+		status = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_PRFCH_STATUS,	qidx);
+		if ((status & 0xFFFFFFFF) == ((status >> 32) & 0xFFFFFFFF))
+			break;
+		nicvf_delay_us(NICVF_REG_POLL_DELAY_US);
+		timeout--;
+		if (!timeout)
+			return NICVF_ERR_RBDR_PREFETCH;
+	}
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx,
+			NICVF_RBDR_RESET);
+	if (nicvf_qset_poll_reg(nic, qidx,
+			NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x02))
+		return NICVF_ERR_RBDR_RESET1;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, 0x00);
+	if (nicvf_qset_poll_reg(nic, qidx,
+			NIC_QSET_RBDR_0_1_STATUS0, 62, 2, 0x00))
+		return NICVF_ERR_RBDR_RESET2;
+
+	return NICVF_OK;
+}
+
+static int
+nicvf_qsize_regbit(uint32_t len, uint32_t len_shift)
+{
+	int val;
+
+	val = ((uint32_t)log2(len) - len_shift);
+	assert(val >= NICVF_QSIZE_MIN_VAL);
+	assert(val <= NICVF_QSIZE_MAX_VAL);
+	return val;
+}
+
+int
+nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx)
+{
+	int ret;
+	uint64_t head, tail;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	struct rbdr_cfg rbdr_cfg = {.value = 0};
+
+	ret = nicvf_qset_rbdr_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set descriptor base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_BASE, qidx, rbdr->phys);
+
+	/* Enable RBDR and set queue size */
+	rbdr_cfg.ena = 1;
+	rbdr_cfg.reset = 0;
+	rbdr_cfg.ldwb = 0;
+	rbdr_cfg.qsize = nicvf_qsize_regbit(rbdr->qlen_mask + 1,
+						RBDR_SIZE_SHIFT);
+	rbdr_cfg.avg_con = 0;
+	rbdr_cfg.lines = rbdr->buffsz / 128;
+
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_CFG, qidx, rbdr_cfg.value);
+
+	/* Verify proper RBDR reset */
+	head = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_HEAD, qidx);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_TAIL, qidx);
+
+	if (head | tail)
+		return NICVF_ERR_RBDR_RESET;
+
+	return NICVF_OK;
+}
+
+uint32_t
+nicvf_qsize_rbdr_roundup(uint32_t val)
+{
+	uint32_t list[] = {RBDR_QUEUE_SZ_8K, RBDR_QUEUE_SZ_16K,
+			RBDR_QUEUE_SZ_32K, RBDR_QUEUE_SZ_64K,
+			RBDR_QUEUE_SZ_128K, RBDR_QUEUE_SZ_256K,
+			RBDR_QUEUE_SZ_512K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			  rbdr_pool_get_handler handler,
+			  void *opaque, uint32_t max_buffs)
+{
+	struct rbdr_entry_t *desc, *desc0;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+	uint32_t count;
+	nicvf_phys_addr_t phy;
+
+	assert(rbdr != NULL);
+	desc = rbdr->desc;
+	count = 0;
+	/* Don't fill beyond the maximum number of descriptors */
+	while (count < rbdr->qlen_mask) {
+		if (count >= max_buffs)
+			break;
+		desc0 = desc + count;
+		phy = handler(opaque);
+		if (phy) {
+			desc0->full_addr = phy;
+			count++;
+		} else {
+			break;
+		}
+	}
+	nicvf_smp_wmb();
+	nicvf_queue_reg_write(nic, NIC_QSET_RBDR_0_1_DOOR, ridx, count);
+	rbdr->tail = nicvf_queue_reg_read(nic,
+				NIC_QSET_RBDR_0_1_TAIL, ridx) >> 3;
+	rbdr->next_tail = rbdr->tail;
+	nicvf_smp_rmb();
+	return 0;
+}
+
+int
+nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx)
+{
+	return nicvf_queue_reg_read(nic, NIC_QSET_RBDR_0_1_STATUS0, qidx);
+}
+
+int
+nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t head, tail;
+	struct sq_cfg sq_cfg;
+
+	sq_cfg.value = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_CFG, qidx);
+
+	/* Disable send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, 0);
+
+	/* Check if SQ is stopped */
+	if (sq_cfg.ena && nicvf_qset_poll_reg(nic, qidx, NIC_QSET_SQ_0_7_STATUS,
+				NICVF_SQ_STATUS_STOPPED_BIT, 1, 0x01))
+		return NICVF_ERR_SQ_DISABLE;
+
+	/* Reset send queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, NICVF_SQ_RESET);
+	head = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_HEAD, qidx) >> 4;
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_TAIL, qidx) >> 4;
+	if (head | tail)
+		return  NICVF_ERR_SQ_RESET;
+
+	return 0;
+}
+
+int
+nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_txq *txq)
+{
+	int ret;
+	struct sq_cfg sq_cfg = {.value = 0};
+
+	ret = nicvf_qset_sq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Send a mailbox msg to PF to config SQ */
+	if (nicvf_mbox_sq_config(nic, qidx))
+		return  NICVF_ERR_SQ_PF_CFG;
+
+	/* Set queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_BASE, qidx, txq->phys);
+
+	/* Enable send queue and set queue size */
+	sq_cfg.ena = 1;
+	sq_cfg.reset = 0;
+	sq_cfg.ldwb = 0;
+	sq_cfg.qsize = nicvf_qsize_regbit(txq->qlen_mask + 1, SND_QSIZE_SHIFT);
+	sq_cfg.tstmp_bgx_intf = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_CFG, qidx, sq_cfg.value);
+
+	/* Ring doorbell so that H/W restarts processing SQEs */
+	nicvf_queue_reg_write(nic, NIC_QSET_SQ_0_7_DOOR, qidx, 0);
+
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_sq_roundup(uint32_t val)
+{
+	uint32_t list[] = {SND_QUEUE_SZ_1K, SND_QUEUE_SZ_2K,
+			SND_QUEUE_SZ_4K, SND_QUEUE_SZ_8K,
+			SND_QUEUE_SZ_16K, SND_QUEUE_SZ_32K,
+			SND_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+int
+nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	/* Disable receive queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, 0);
+	return nicvf_mbox_rq_sync(nic);
+}
+
+int
+nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	struct pf_rq_cfg pf_rq_cfg = {.value = 0};
+	struct rq_cfg rq_cfg = {.value = 0};
+
+	if (nicvf_qset_rq_reclaim(nic, qidx))
+		return NICVF_ERR_RQ_CLAIM;
+
+	pf_rq_cfg.strip_pre_l2 = 0;
+	/* First cache line of RBDR data will be allocated into L2C */
+	pf_rq_cfg.caching = RQ_CACHE_ALLOC_FIRST;
+	pf_rq_cfg.cq_qs = nic->vf_id;
+	pf_rq_cfg.cq_idx = qidx;
+	pf_rq_cfg.rbdr_cont_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_cont_idx = 0;
+	pf_rq_cfg.rbdr_strt_qs = nic->vf_id;
+	pf_rq_cfg.rbdr_strt_idx = 0;
+
+	/* Send a mailbox msg to PF to config RQ */
+	if (nicvf_mbox_rq_config(nic, qidx, &pf_rq_cfg))
+		return NICVF_ERR_RQ_PF_CFG;
+
+	/* Select Rx backpressure */
+	if (nicvf_mbox_rq_bp_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_BP_CFG;
+
+	/* Send a mailbox msg to PF to config RQ drop */
+	if (nicvf_mbox_rq_drop_config(nic, qidx, rxq->rx_drop_en))
+		return NICVF_ERR_RQ_DROP_CFG;
+
+	/* Enable Receive queue */
+	rq_cfg.ena = 1;
+	nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, rq_cfg.value);
+
+	return 0;
+}
+
+int
+nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx)
+{
+	uint64_t tail, head;
+
+	/* Disable completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, 0);
+	if (nicvf_qset_poll_reg(nic, qidx, NIC_QSET_CQ_0_7_CFG, 42, 1, 0))
+		return NICVF_ERR_CQ_DISABLE;
+
+	/* Reset completion queue */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, NICVF_CQ_RESET);
+	tail = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_TAIL, qidx) >> 9;
+	head = nicvf_queue_reg_read(nic, NIC_QSET_CQ_0_7_HEAD, qidx) >> 9;
+	if (head | tail)
+		return  NICVF_ERR_CQ_RESET;
+
+	/* Disable timer threshold (doesn't get reset upon CQ reset) */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+int
+nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx, struct nicvf_rxq *rxq)
+{
+	int ret;
+	struct cq_cfg cq_cfg = {.value = 0};
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		return ret;
+
+	/* Set completion queue base address */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_BASE, qidx, rxq->phys);
+
+	cq_cfg.ena = 1;
+	cq_cfg.reset = 0;
+	/* Writes of CQE will be allocated into L2C */
+	cq_cfg.caching = 1;
+	cq_cfg.qsize = nicvf_qsize_regbit(rxq->qlen_mask + 1, CMP_QSIZE_SHIFT);
+	cq_cfg.avg_con = 0;
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG, qidx, cq_cfg.value);
+
+	/* Set threshold value for interrupt generation */
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, 0);
+	nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, qidx, 0);
+	return 0;
+}
+
+uint32_t
+nicvf_qsize_cq_roundup(uint32_t val)
+{
+	uint32_t list[] = {CMP_QUEUE_SZ_1K, CMP_QUEUE_SZ_2K,
+			CMP_QUEUE_SZ_4K, CMP_QUEUE_SZ_8K,
+			CMP_QUEUE_SZ_16K, CMP_QUEUE_SZ_32K,
+			CMP_QUEUE_SZ_64K};
+	return nicvf_roundup_list(val, list, NICVF_ARRAY_SIZE(list));
+}
+
+void
+nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
+{
+	uint64_t val;
+
+	val = nicvf_reg_read(nic, NIC_VNIC_RQ_GEN_CFG);
+	if (enable)
+		val |= (STRIP_FIRST_VLAN << 25);
+	else
+		val &= ~((STRIP_SECOND_VLAN | STRIP_FIRST_VLAN) << 25);
+
+	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
+}
+
+int
+nicvf_loopback_config(struct nicvf *nic, bool enable)
+{
+	if (enable && nic->loopback_supported == 0)
+		return NICVF_ERR_LOOPBACK_CFG;
+
+	return nicvf_mbox_loopback_config(nic, enable);
+}
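
For reference, a minimal usage sketch of the register dump pair above; the
buffer handling is illustrative only (assumes <stdlib.h>) and is not part of
the base API:

	/* Snapshot all VF registers into a caller-owned buffer */
	int count = nicvf_reg_get_count();
	uint64_t *regs = calloc(count, sizeof(uint64_t));

	if (regs != NULL) {
		nicvf_reg_dump(nic, regs); /* with NULL it logs to stdout instead */
		free(regs);
	}
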
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
new file mode 100644
index 0000000..dc9f4f1
--- /dev/null
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -0,0 +1,176 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_HW_H
+#define _THUNDERX_NICVF_HW_H
+
+#include <stdint.h>
+
+#include "nicvf_hw_defs.h"
+
+#define	PCI_VENDOR_ID_CAVIUM			0x177D
+#define	PCI_DEVICE_ID_THUNDERX_PASS1_NICVF	0x0011
+#define	PCI_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA034
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF	0xA11E
+#define	PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF	0xA134
+
+#define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
+#define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
+
+#define NICVF_CAP_TUNNEL_PARSING          (1ULL << 0)
+
+enum nicvf_tns_mode {
+	NIC_TNS_BYPASS_MODE,
+	NIC_TNS_MODE,
+};
+
+enum nicvf_err_e {
+	NICVF_OK,
+	NICVF_ERR_SET_QS = -8191, /* -8191 */
+	NICVF_ERR_RESET_QS,      /* -8190 */
+	NICVF_ERR_REG_POLL,      /* -8189 */
+	NICVF_ERR_RBDR_RESET,    /* -8188 */
+	NICVF_ERR_RBDR_DISABLE,  /* -8187 */
+	NICVF_ERR_RBDR_PREFETCH, /* -8186 */
+	NICVF_ERR_RBDR_RESET1,   /* -8185 */
+	NICVF_ERR_RBDR_RESET2,   /* -8184 */
+	NICVF_ERR_RQ_CLAIM,      /* -8183 */
+	NICVF_ERR_RQ_PF_CFG,	 /* -8182 */
+	NICVF_ERR_RQ_BP_CFG,	 /* -8181 */
+	NICVF_ERR_RQ_DROP_CFG,	 /* -8180 */
+	NICVF_ERR_CQ_DISABLE,	 /* -8179 */
+	NICVF_ERR_CQ_RESET,	 /* -8178 */
+	NICVF_ERR_SQ_DISABLE,	 /* -8177 */
+	NICVF_ERR_SQ_RESET,	 /* -8176 */
+	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
+	NICVF_ERR_LOOPBACK_CFG,  /* -8174 */
+	NICVF_ERR_BASE_INIT,     /* -8173 */
+};
+
+typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
+
+/* Common structs used in DPDK and base layer are defined in DPDK layer */
+#include "../nicvf_struct.h"
+
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rbdr) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_txq) <= 128);
+NICVF_STATIC_ASSERT(sizeof(struct nicvf_rxq) <= 128);
+
+static inline void
+nicvf_reg_write(struct nicvf *nic, uint32_t offset, uint64_t val)
+{
+	nicvf_addr_write(nic->reg_base + offset, val);
+}
+
+static inline uint64_t
+nicvf_reg_read(struct nicvf *nic, uint32_t offset)
+{
+	return nicvf_addr_read(nic->reg_base + offset);
+}
+
+static inline uintptr_t
+nicvf_qset_base(struct nicvf *nic, uint32_t qidx)
+{
+	return nic->reg_base + (qidx << NIC_Q_NUM_SHIFT);
+}
+
+static inline void
+nicvf_queue_reg_write(struct nicvf *nic, uint32_t offset, uint32_t qidx,
+		      uint64_t val)
+{
+	nicvf_addr_write(nicvf_qset_base(nic, qidx) + offset, val);
+}
+
+static inline uint64_t
+nicvf_queue_reg_read(struct nicvf *nic, uint32_t offset, uint32_t qidx)
+{
+	return	nicvf_addr_read(nicvf_qset_base(nic, qidx) + offset);
+}
+
+static inline void
+nicvf_disable_all_interrupts(struct nicvf *nic)
+{
+	nicvf_reg_write(nic, NIC_VF_ENA_W1C, NICVF_INTR_ALL_MASK);
+	nicvf_reg_write(nic, NIC_VF_INT, NICVF_INTR_ALL_MASK);
+}
+
+static inline uint32_t
+nicvf_hw_version(struct nicvf *nic)
+{
+	return nic->subsystem_device_id;
+}
+
+static inline uint64_t
+nicvf_hw_cap(struct nicvf *nic)
+{
+	return nic->hwcap;
+}
+
+int nicvf_base_init(struct nicvf *nic);
+
+int nicvf_reg_get_count(void);
+int nicvf_reg_poll_interrupts(struct nicvf *nic);
+int nicvf_reg_dump(struct nicvf *nic, uint64_t *data);
+
+int nicvf_qset_config(struct nicvf *nic);
+int nicvf_qset_reclaim(struct nicvf *nic);
+
+int nicvf_qset_rbdr_config(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_reclaim(struct nicvf *nic, uint16_t qidx);
+int nicvf_qset_rbdr_precharge(struct nicvf *nic, uint16_t ridx,
+			      rbdr_pool_get_handler handler, void *opaque,
+			      uint32_t max_buffs);
+int nicvf_qset_rbdr_active(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_rq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_rq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_cq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_rxq *rxq);
+int nicvf_qset_cq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+int nicvf_qset_sq_config(struct nicvf *nic, uint16_t qidx,
+			 struct nicvf_txq *txq);
+int nicvf_qset_sq_reclaim(struct nicvf *nic, uint16_t qidx);
+
+uint32_t nicvf_qsize_rbdr_roundup(uint32_t val);
+uint32_t nicvf_qsize_cq_roundup(uint32_t val);
+uint32_t nicvf_qsize_sq_roundup(uint32_t val);
+
+void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
+
+int nicvf_loopback_config(struct nicvf *nic, bool enable);
+
+#endif /* _THUNDERX_NICVF_HW_H */
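
A short, hedged example of how the version and capability helpers combine;
setup_tunnel_offloads() is a hypothetical caller-side function, not part of
this patch:

	/* Tunnel parsing is only advertised on pass2 silicon,
	 * see nicvf_base_init() */
	if (nicvf_hw_version(nic) == NICVF_PASS2 &&
	    (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING))
		setup_tunnel_offloads(nic); /* hypothetical helper */
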
diff --git a/drivers/net/thunderx/base/nicvf_plat.h b/drivers/net/thunderx/base/nicvf_plat.h
index fbf28ce..83c1844 100644
--- a/drivers/net/thunderx/base/nicvf_plat.h
+++ b/drivers/net/thunderx/base/nicvf_plat.h
@@ -126,6 +126,7 @@ do {							\
 
 #endif
 
+#include "nicvf_hw.h"
 #include "nicvf_mbox.h"
 
 #endif /* _THUNDERX_NICVF_H */
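
Taken together, a rough bring-up order for one queue set using the APIs in
this patch could look as follows; pool_get_cb, pool, nb_bufs, rxq and txq
are caller-supplied assumptions and error handling is kept minimal:

	int ret;

	ret = nicvf_qset_config(nic);		/* ask the PF to enable the Qset */
	if (ret)
		return ret;
	ret = nicvf_qset_rbdr_config(nic, 0);	/* receive buffer ring */
	if (ret)
		return ret;
	/* Fill the RBDR before enabling CQ/RQ so RX has buffers to land in */
	nicvf_qset_rbdr_precharge(nic, 0, pool_get_cb, pool, nb_bufs);
	ret = nicvf_qset_cq_config(nic, 0, rxq);
	if (ret)
		return ret;
	ret = nicvf_qset_rq_config(nic, 0, rxq);
	if (ret)
		return ret;
	ret = nicvf_qset_sq_config(nic, 0, txq);
	if (ret)
		return ret;
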
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 06/27] net/thunderx/base: add RSS and reta configuration HW APIs
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (4 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 05/27] net/thunderx/base: add hardware API Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 07/27] net/thunderx/base: add statistics get " Jerin Jacob
                               ` (21 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c | 129 +++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h |  20 ++++++
 2 files changed, 149 insertions(+)

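As a hedged usage sketch (qcnt and rss_cfg_bits are caller-supplied
assumptions; the cfg word selects which packet fields feed the hash):

	/* Spread flows across qcnt RX queues using the driver's default
	 * key and a default RETA of idx % qcnt */
	if (nicvf_rss_config(nic, qcnt, rss_cfg_bits) != NICVF_OK)
		return -1;

	/* Later, to disable RSS and steer everything back to queue 0 */
	nicvf_rss_term(nic);
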
diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
index ec24f9c..3366aa5 100644
--- a/drivers/net/thunderx/base/nicvf_hw.c
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -721,6 +721,135 @@ nicvf_vlan_hw_strip(struct nicvf *nic, bool enable)
 	nicvf_reg_write(nic, NIC_VNIC_RQ_GEN_CFG, val);
 }
 
+void
+nicvf_rss_set_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_cpu_to_be_64(*keyptr);
+		nicvf_reg_write(nic, addr, val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_get_key(struct nicvf *nic, uint8_t *key)
+{
+	int idx;
+	uint64_t addr, val;
+	uint64_t *keyptr = (uint64_t *)key;
+
+	addr = NIC_VNIC_RSS_KEY_0_4;
+	for (idx = 0; idx < RSS_HASH_KEY_SIZE; idx++) {
+		val = nicvf_reg_read(nic, addr);
+		*keyptr = nicvf_be_to_cpu_64(val);
+		addr += sizeof(uint64_t);
+		keyptr++;
+	}
+}
+
+void
+nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val)
+{
+	nicvf_reg_write(nic, NIC_VNIC_RSS_CFG, val);
+}
+
+uint64_t
+nicvf_rss_get_cfg(struct nicvf *nic)
+{
+	return nicvf_reg_read(nic, NIC_VNIC_RSS_CFG);
+}
+
+int
+nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		rss->ind_tbl[idx] = tbl[idx];
+
+	if (nicvf_mbox_config_rss(nic))
+		return NICVF_ERR_RSS_TBL_UPDATE;
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count)
+{
+	uint32_t idx;
+	struct nicvf_rss_reta_info *rss = &nic->rss_info;
+
+	/* result will be stored in nic->rss_info.rss_size */
+	if (nicvf_mbox_get_rss_size(nic))
+		return NICVF_ERR_RSS_GET_SZ;
+
+	assert(rss->rss_size > 0);
+	rss->hash_bits = (uint8_t)log2(rss->rss_size);
+	for (idx = 0; idx < rss->rss_size && idx < max_count; idx++)
+		tbl[idx] = rss->ind_tbl[idx];
+
+	return NICVF_OK;
+}
+
+int
+nicvf_rss_config(struct nicvf *nic, uint32_t  qcnt, uint64_t cfg)
+{
+	uint32_t idx;
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+	uint8_t default_key[RSS_HASH_KEY_BYTE_SIZE] = {
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+		0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
+	};
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	if (cfg == 0)
+		return -EINVAL;
+
+	/* Update default RSS key and cfg */
+	nicvf_rss_set_key(nic, default_key);
+	nicvf_rss_set_cfg(nic, cfg);
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = idx % qcnt;
+
+	return nicvf_rss_reta_update(nic, default_reta,
+			NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+int
+nicvf_rss_term(struct nicvf *nic)
+{
+	uint32_t idx;
+	uint8_t disable_rss[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	nicvf_rss_set_cfg(nic, 0);
+	/* Redirect the output to 0th queue  */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		disable_rss[idx] = 0;
+
+	return nicvf_rss_reta_update(nic, disable_rss,
+			NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 int
 nicvf_loopback_config(struct nicvf *nic, bool enable)
 {
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
index dc9f4f1..a7ae531 100644
--- a/drivers/net/thunderx/base/nicvf_hw.h
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -76,10 +76,18 @@ enum nicvf_err_e {
 	NICVF_ERR_SQ_PF_CFG,	 /* -8175 */
 	NICVF_ERR_LOOPBACK_CFG,  /* -8174 */
 	NICVF_ERR_BASE_INIT,     /* -8173 */
+	NICVF_ERR_RSS_TBL_UPDATE,/* -8172 */
+	NICVF_ERR_RSS_GET_SZ,    /* -8171 */
 };
 
 typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
 
+struct nicvf_rss_reta_info {
+	uint8_t hash_bits;
+	uint16_t rss_size;
+	uint8_t ind_tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+};
+
 /* Common structs used in DPDK and base layer are defined in DPDK layer */
 #include "../nicvf_struct.h"
 
@@ -171,6 +179,18 @@ uint32_t nicvf_qsize_sq_roundup(uint32_t val);
 
 void nicvf_vlan_hw_strip(struct nicvf *nic, bool enable);
 
+int nicvf_rss_config(struct nicvf *nic, uint32_t  qcnt, uint64_t cfg);
+int nicvf_rss_term(struct nicvf *nic);
+
+int nicvf_rss_reta_update(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+int nicvf_rss_reta_query(struct nicvf *nic, uint8_t *tbl, uint32_t max_count);
+
+void nicvf_rss_set_key(struct nicvf *nic, uint8_t *key);
+void nicvf_rss_get_key(struct nicvf *nic, uint8_t *key);
+
+void nicvf_rss_set_cfg(struct nicvf *nic, uint64_t val);
+uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
+
 int nicvf_loopback_config(struct nicvf *nic, bool enable);
 
 #endif /* _THUNDERX_NICVF_HW_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 07/27] net/thunderx/base: add statistics get HW APIs
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (5 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 06/27] net/thunderx/base: add RSS and reta configuration HW APIs Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 08/27] net/thunderx: add pmd skeleton Jerin Jacob
                               ` (20 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/base/nicvf_hw.c | 45 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/base/nicvf_hw.h | 44 +++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

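For orientation, a sketch of folding these counters into per-port totals;
nb_rxq is an assumed caller-side queue count:

	struct nicvf_hw_stats hw;
	struct nicvf_hw_rx_qstats rxq_stats;
	uint64_t rx_pkts = 0;
	uint16_t q;

	nicvf_hw_get_stats(nic, &hw);	/* port-level HW counters */
	for (q = 0; q < nb_rxq; q++) {
		nicvf_hw_get_rx_qstats(nic, &rxq_stats, q);
		rx_pkts += rxq_stats.q_rx_packets;
	}
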
diff --git a/drivers/net/thunderx/base/nicvf_hw.c b/drivers/net/thunderx/base/nicvf_hw.c
index 3366aa5..001b0ed 100644
--- a/drivers/net/thunderx/base/nicvf_hw.c
+++ b/drivers/net/thunderx/base/nicvf_hw.c
@@ -858,3 +858,48 @@ nicvf_loopback_config(struct nicvf *nic, bool enable)
 
 	return nicvf_mbox_loopback_config(nic, enable);
 }
+
+void
+nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats)
+{
+	stats->rx_bytes = NICVF_GET_RX_STATS(RX_OCTS);
+	stats->rx_ucast_frames = NICVF_GET_RX_STATS(RX_UCAST);
+	stats->rx_bcast_frames = NICVF_GET_RX_STATS(RX_BCAST);
+	stats->rx_mcast_frames = NICVF_GET_RX_STATS(RX_MCAST);
+	stats->rx_fcs_errors = NICVF_GET_RX_STATS(RX_FCS);
+	stats->rx_l2_errors = NICVF_GET_RX_STATS(RX_L2ERR);
+	stats->rx_drop_red = NICVF_GET_RX_STATS(RX_RED);
+	stats->rx_drop_red_bytes = NICVF_GET_RX_STATS(RX_RED_OCTS);
+	stats->rx_drop_overrun = NICVF_GET_RX_STATS(RX_ORUN);
+	stats->rx_drop_overrun_bytes = NICVF_GET_RX_STATS(RX_ORUN_OCTS);
+	stats->rx_drop_bcast = NICVF_GET_RX_STATS(RX_DRP_BCAST);
+	stats->rx_drop_mcast = NICVF_GET_RX_STATS(RX_DRP_MCAST);
+	stats->rx_drop_l3_bcast = NICVF_GET_RX_STATS(RX_DRP_L3BCAST);
+	stats->rx_drop_l3_mcast = NICVF_GET_RX_STATS(RX_DRP_L3MCAST);
+
+	stats->tx_bytes_ok = NICVF_GET_TX_STATS(TX_OCTS);
+	stats->tx_ucast_frames_ok = NICVF_GET_TX_STATS(TX_UCAST);
+	stats->tx_bcast_frames_ok = NICVF_GET_TX_STATS(TX_BCAST);
+	stats->tx_mcast_frames_ok = NICVF_GET_TX_STATS(TX_MCAST);
+	stats->tx_drops = NICVF_GET_TX_STATS(TX_DROP);
+}
+
+void
+nicvf_hw_get_rx_qstats(struct nicvf *nic, struct nicvf_hw_rx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_rx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS0, qidx);
+	qstats->q_rx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_RQ_0_7_STATUS1, qidx);
+}
+
+void
+nicvf_hw_get_tx_qstats(struct nicvf *nic, struct nicvf_hw_tx_qstats *qstats,
+		       uint16_t qidx)
+{
+	qstats->q_tx_bytes =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS0, qidx);
+	qstats->q_tx_packets =
+		nicvf_queue_reg_read(nic, NIC_QSET_SQ_0_7_STATUS1, qidx);
+}
diff --git a/drivers/net/thunderx/base/nicvf_hw.h b/drivers/net/thunderx/base/nicvf_hw.h
index a7ae531..9db1d30 100644
--- a/drivers/net/thunderx/base/nicvf_hw.h
+++ b/drivers/net/thunderx/base/nicvf_hw.h
@@ -45,6 +45,11 @@
 
 #define NICVF_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
 
+#define NICVF_GET_RX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_RX_STAT_0_13 | (reg << 3))
+#define NICVF_GET_TX_STATS(reg) \
+	nicvf_reg_read(nic, NIC_VNIC_TX_STAT_0_4 | (reg << 3))
+
 #define NICVF_PASS1	(PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF)
 #define NICVF_PASS2	(PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF)
 
@@ -82,6 +87,39 @@ enum nicvf_err_e {
 
 typedef nicvf_phys_addr_t (*rbdr_pool_get_handler)(void *opaque);
 
+struct nicvf_hw_rx_qstats {
+	uint64_t q_rx_bytes;
+	uint64_t q_rx_packets;
+};
+
+struct nicvf_hw_tx_qstats {
+	uint64_t q_tx_bytes;
+	uint64_t q_tx_packets;
+};
+
+struct nicvf_hw_stats {
+	uint64_t rx_bytes;
+	uint64_t rx_ucast_frames;
+	uint64_t rx_bcast_frames;
+	uint64_t rx_mcast_frames;
+	uint64_t rx_fcs_errors;
+	uint64_t rx_l2_errors;
+	uint64_t rx_drop_red;
+	uint64_t rx_drop_red_bytes;
+	uint64_t rx_drop_overrun;
+	uint64_t rx_drop_overrun_bytes;
+	uint64_t rx_drop_bcast;
+	uint64_t rx_drop_mcast;
+	uint64_t rx_drop_l3_bcast;
+	uint64_t rx_drop_l3_mcast;
+
+	uint64_t tx_bytes_ok;
+	uint64_t tx_ucast_frames_ok;
+	uint64_t tx_bcast_frames_ok;
+	uint64_t tx_mcast_frames_ok;
+	uint64_t tx_drops;
+};
+
 struct nicvf_rss_reta_info {
 	uint8_t hash_bits;
 	uint16_t rss_size;
@@ -193,4 +231,10 @@ uint64_t nicvf_rss_get_cfg(struct nicvf *nic);
 
 int nicvf_loopback_config(struct nicvf *nic, bool enable);
 
+void nicvf_hw_get_stats(struct nicvf *nic, struct nicvf_hw_stats *stats);
+void nicvf_hw_get_rx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_rx_qstats *qstats, uint16_t qidx);
+void nicvf_hw_get_tx_qstats(struct nicvf *nic,
+			    struct nicvf_hw_tx_qstats *qstats, uint16_t qidx);
+
 #endif /* _THUNDERX_NICVF_HW_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 08/27] net/thunderx: add pmd skeleton
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (6 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 07/27] net/thunderx/base: add statistics get " Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 09/27] net/thunderx: add link status and link update support Jerin Jacob
                               ` (19 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Introduce driver initialization and enable the build infrastructure for
the nicvf PMD driver.

By default, it is enabled only for the defconfig_arm64-thunderx-*
configs, as it is a built-in NIC device.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 config/common_base                                 |  10 +
 config/defconfig_arm64-thunderx-linuxapp-gcc       |  10 +
 drivers/net/Makefile                               |   1 +
 drivers/net/thunderx/Makefile                      |  63 +++++
 drivers/net/thunderx/nicvf_ethdev.c                | 253 +++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h                |  48 ++++
 drivers/net/thunderx/nicvf_logs.h                  |  83 +++++++
 drivers/net/thunderx/nicvf_struct.h                | 124 ++++++++++
 .../thunderx/rte_pmd_thunderx_nicvf_version.map    |   4 +
 mk/rte.app.mk                                      |   1 +
 10 files changed, 597 insertions(+)
 create mode 100644 drivers/net/thunderx/Makefile
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.c
 create mode 100644 drivers/net/thunderx/nicvf_ethdev.h
 create mode 100644 drivers/net/thunderx/nicvf_logs.h
 create mode 100644 drivers/net/thunderx/nicvf_struct.h
 create mode 100644 drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map

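Once the PMD is enabled in the target config, the usual DPDK 16.07
application flow should pick the VFs up as ethdev ports; a minimal,
illustrative sketch:

	int ret = rte_eal_init(argc, argv);

	if (ret < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");
	/* Each bound ThunderX VF now shows up as an ethdev port */
	printf("%u port(s) available\n", rte_eth_dev_count());
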
diff --git a/config/common_base b/config/common_base
index d6e7a16..ead5984 100644
--- a/config/common_base
+++ b/config/common_base
@@ -264,6 +264,16 @@ CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
 CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS=0
 
 #
+# Compile burst-oriented Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index 9818a2e..a5b1e24 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -36,3 +36,13 @@ CONFIG_RTE_MACHINE="thunderx"
 CONFIG_RTE_CACHE_LINE_SIZE=128
 CONFIG_RTE_MAX_NUMA_NODES=2
 CONFIG_RTE_MAX_LCORE=96
+
+#
+# Compile Cavium Thunderx NICVF PMD driver
+#
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 3832706..bc93230 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -51,6 +51,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += szedata2
+DIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += thunderx
 DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
 DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
new file mode 100644
index 0000000..eb9f100
--- /dev/null
+++ b/drivers/net/thunderx/Makefile
@@ -0,0 +1,63 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_thunderx_nicvf.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_thunderx_nicvf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
+
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_mempool lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
new file mode 100644
index 0000000..ba78ff2
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -0,0 +1,253 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <sys/queue.h>
+#include <sys/timerfd.h>
+
+#include <rte_alarm.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_pci.h>
+#include <rte_tailq.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+
+#include "nicvf_logs.h"
+
+static void
+nicvf_interrupt(void *arg)
+{
+	struct nicvf *nic = arg;
+
+	nicvf_reg_poll_interrupts(nic);
+
+	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+				nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_start(struct nicvf *nic)
+{
+	return rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
+					nicvf_interrupt, nic);
+}
+
+static int
+nicvf_periodic_alarm_stop(struct nicvf *nic)
+{
+	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
+}
+
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops nicvf_eth_dev_ops = {
+};
+
+static int
+nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	int ret;
+	struct rte_pci_device *pci_dev;
+	struct nicvf *nic = nicvf_pmd_priv(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &nicvf_eth_dev_ops;
+
+	pci_dev = eth_dev->pci_dev;
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	nic->device_id = pci_dev->id.device_id;
+	nic->vendor_id = pci_dev->id.vendor_id;
+	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	nic->eth_dev = eth_dev;
+
+	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	nic->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!nic->reg_base) {
+		PMD_INIT_LOG(ERR, "Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	nicvf_disable_all_interrupts(nic);
+
+	ret = nicvf_periodic_alarm_start(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to start period alarm");
+		goto fail;
+	}
+
+	ret = nicvf_mbox_check_pf_ready(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get ready message from PF");
+		goto alarm_fail;
+	} else {
+		PMD_INIT_LOG(INFO,
+			"node=%d vf=%d mode=%s sqs=%s loopback_supported=%s",
+			nic->node, nic->vf_id,
+			nic->tns_mode == NIC_TNS_MODE ? "tns" : "tns-bypass",
+			nic->sqs_mode ? "true" : "false",
+			nic->loopback_supported ? "true" : "false"
+			);
+	}
+
+	if (nic->sqs_mode) {
+		PMD_INIT_LOG(INFO, "Unsupported SQS VF detected, Detaching...");
+		/* Detach port by returning Positive error number */
+		ret = ENOTSUP;
+		goto alarm_fail;
+	}
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for mac addr");
+		ret = -ENOMEM;
+		goto alarm_fail;
+	}
+	if (is_zero_ether_addr((struct ether_addr *)nic->mac_addr))
+		eth_random_addr(&nic->mac_addr[0]);
+
+	ether_addr_copy((struct ether_addr *)nic->mac_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	ret = nicvf_mbox_set_mac_addr(nic, nic->mac_addr);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to set mac addr");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_base_init(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to execute nicvf_base_init");
+		goto malloc_fail;
+	}
+
+	ret = nicvf_mbox_get_rss_size(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to get rss table size");
+		goto malloc_fail;
+	}
+
+	PMD_INIT_LOG(INFO, "Port %d (%x:%x) mac=%02x:%02x:%02x:%02x:%02x:%02x",
+		eth_dev->data->port_id, nic->vendor_id, nic->device_id,
+		nic->mac_addr[0], nic->mac_addr[1], nic->mac_addr[2],
+		nic->mac_addr[3], nic->mac_addr[4], nic->mac_addr[5]);
+
+	return 0;
+
+malloc_fail:
+	rte_free(eth_dev->data->mac_addrs);
+alarm_fail:
+	nicvf_periodic_alarm_stop(nic);
+fail:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_nicvf_map[] = {
+	{
+		.class_id = RTE_CLASS_ANY_ID,
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS1_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS1_NICVF,
+	},
+	{
+		.class_id = RTE_CLASS_ANY_ID,
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVICE_ID_THUNDERX_PASS2_NICVF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUB_DEVICE_ID_THUNDERX_PASS2_NICVF,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct eth_driver rte_nicvf_pmd = {
+	.pci_drv = {
+		.name = "rte_nicvf_pmd",
+		.id_table = pci_id_nicvf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = nicvf_eth_dev_init,
+	.dev_private_size = sizeof(struct nicvf),
+};
+
+static int
+rte_nicvf_pmd_init(const char *name __rte_unused, const char *para __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	PMD_INIT_LOG(INFO, "librte_pmd_thunderx nicvf version %s",
+			THUNDERX_NICVF_PMD_VERSION);
+
+	rte_eth_driver_register(&rte_nicvf_pmd);
+	return 0;
+}
+
+static struct rte_driver rte_nicvf_driver = {
+	.name = "nicvf_driver",
+	.type = PMD_PDEV,
+	.init = rte_nicvf_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_nicvf_driver);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
new file mode 100644
index 0000000..d4d2071
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -0,0 +1,48 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_ETHDEV_H__
+#define __THUNDERX_NICVF_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+#define THUNDERX_NICVF_PMD_VERSION      "1.0"
+
+#define NICVF_INTR_POLL_INTERVAL_MS	50
+
+static inline struct nicvf *
+nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+#endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_logs.h b/drivers/net/thunderx/nicvf_logs.h
new file mode 100644
index 0000000..0667d46
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_logs.h
@@ -0,0 +1,83 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_LOGS__
+#define __THUNDERX_NICVF_LOGS__
+
+#include <assert.h>
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+#else
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_RX_ASSERT(x) assert(x)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_RX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define NICVF_TX_ASSERT(x) assert(x)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#define NICVF_TX_ASSERT(x) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX
+#define PMD_MBOX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_MBOX_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
+#define PMD_MBOX_FUNC_TRACE() do { } while (0)
+#endif
+
+#endif /* __THUNDERX_NICVF_LOGS__ */
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
new file mode 100644
index 0000000..c52545d
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -0,0 +1,124 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _THUNDERX_NICVF_STRUCT_H
+#define _THUNDERX_NICVF_STRUCT_H
+
+#include <stdint.h>
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_ethdev.h>
+#include <rte_memory.h>
+
+struct nicvf_rbdr {
+	uint64_t rbdr_status;
+	uint64_t rbdr_door;
+	struct rbdr_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	uint32_t buffsz;
+	uint32_t tail;
+	uint32_t next_tail;
+	uint32_t head;
+	uint32_t qlen_mask;
+} __rte_cache_aligned;
+
+struct nicvf_txq {
+	union sq_entry_t *desc;
+	nicvf_phys_addr_t phys;
+	struct rte_mbuf **txbuffs;
+	uint64_t sq_head;
+	uint64_t sq_door;
+	struct rte_mempool *pool;
+	struct nicvf *nic;
+	void (*pool_free)(struct nicvf_txq *sq);
+	uint32_t head;
+	uint32_t tail;
+	int32_t xmit_bufs;
+	uint32_t qlen_mask;
+	uint32_t txq_flags;
+	uint16_t queue_id;
+	uint16_t tx_free_thresh;
+} __rte_cache_aligned;
+
+struct nicvf_rxq {
+	uint64_t mbuf_phys_off;
+	uint64_t cq_status;
+	uint64_t cq_door;
+	nicvf_phys_addr_t phys;
+	union cq_entry_t *desc;
+	struct nicvf_rbdr *shared_rbdr;
+	struct nicvf *nic;
+	struct rte_mempool *pool;
+	uint32_t head;
+	uint32_t qlen_mask;
+	int32_t available_space;
+	int32_t recv_buffers;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t precharge_cnt;
+	uint8_t rx_drop_en;
+	uint8_t  port_id;
+	uint8_t  rbptr_offset;
+} __rte_cache_aligned;
+
+struct nicvf {
+	uint8_t vf_id;
+	uint8_t node;
+	uintptr_t reg_base;
+	bool tns_mode;
+	bool sqs_mode;
+	bool loopback_supported;
+	bool pf_acked:1;
+	bool pf_nacked:1;
+	uint64_t hwcap;
+	uint8_t link_up;
+	uint8_t	duplex;
+	uint32_t speed;
+	uint32_t msg_enable;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+	struct nicvf_rbdr *rbdr;
+	struct nicvf_rss_reta_info rss_info;
+	struct rte_eth_dev *eth_dev;
+	struct rte_intr_handle intr_handle;
+	uint8_t cpi_alg;
+	uint16_t mtu;
+	bool vlan_filter_en;
+	uint8_t mac_addr[ETHER_ADDR_LEN];
+} __rte_cache_aligned;
+
+#endif /* _THUNDERX_NICVF_STRUCT_H */
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
new file mode 100644
index 0000000..1901bcb
--- /dev/null
+++ b/drivers/net/thunderx/rte_pmd_thunderx_nicvf_version.map
@@ -0,0 +1,4 @@
+DPDK_16.07 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c62ba64..0284f55 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -120,6 +120,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP)       += -lrte_pmd_pcap -lpcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD)       += -lrte_pmd_qede -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING)       += -lrte_pmd_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2)   += -lrte_pmd_szedata2 -lsze2
+_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD)     += -lrte_pmd_virtio
 ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VHOST)      += -lrte_pmd_vhost
-- 
2.5.5

* [PATCH v6 09/27] net/thunderx: add link status and link update support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (7 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 08/27] net/thunderx: add pmd skeleton Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 10/27] net/thunderx: add registers dump support Jerin Jacob
                               ` (18 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Extended the nicvf_interrupt function to respond to the
NIC_MBOX_MSG_BGX_LINK_CHANGE mbox message from the PF and to update
struct rte_eth_link accordingly.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 53 ++++++++++++++++++++++++++++++++++++-
 drivers/net/thunderx/nicvf_ethdev.h |  4 +++
 2 files changed, 56 insertions(+), 1 deletion(-)
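
As a usage sketch (not part of this patch; the port id and callback body
are illustrative), an application built against the 16.07 ethdev API can
consume this either by polling or via the LSC event the interrupt
handler raises:

    static void
    lsc_event_cb(uint8_t port_id, enum rte_eth_event_type type, void *param)
    {
            struct rte_eth_link link;

            RTE_SET_USED(type);
            RTE_SET_USED(param);
            /* Non-blocking read of the link state cached by the PMD */
            rte_eth_link_get_nowait(port_id, &link);
            printf("port %u link %s, %u Mbps\n", port_id,
                   link.link_status ? "up" : "down", link.link_speed);
    }

    /* In the init path, with dev_conf.intr_conf.lsc = 1: */
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                  lsc_event_cb, NULL);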

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ba78ff2..ec5407b 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,12 +69,45 @@
 
 #include "nicvf_logs.h"
 
+static inline int
+nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
+			       struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+		*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline void
+nicvf_set_eth_link_status(struct nicvf *nic, struct rte_eth_link *link)
+{
+	link->link_status = nic->link_up;
+	link->link_duplex = ETH_LINK_AUTONEG;
+	if (nic->duplex == NICVF_HALF_DUPLEX)
+		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+	else if (nic->duplex == NICVF_FULL_DUPLEX)
+		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = nic->speed;
+	link->link_autoneg = ETH_LINK_SPEED_AUTONEG;
+}
+
 static void
 nicvf_interrupt(void *arg)
 {
 	struct nicvf *nic = arg;
 
-	nicvf_reg_poll_interrupts(nic);
+	if (nicvf_reg_poll_interrupts(nic) == NIC_MBOX_MSG_BGX_LINK_CHANGE) {
+		if (nic->eth_dev->data->dev_conf.intr_conf.lsc)
+			nicvf_set_eth_link_status(nic,
+					&nic->eth_dev->data->dev_link);
+		_rte_eth_dev_callback_process(nic->eth_dev,
+				RTE_ETH_EVENT_INTR_LSC);
+	}
 
 	rte_eal_alarm_set(NICVF_INTR_POLL_INTERVAL_MS * 1000,
 				nicvf_interrupt, nic);
@@ -93,8 +126,26 @@ nicvf_periodic_alarm_stop(struct nicvf *nic)
 	return rte_eal_alarm_cancel(nicvf_interrupt, nic);
 }
 
+/*
+ * Return 0 means link status changed, -1 means not changed
+ */
+static int
+nicvf_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	struct rte_eth_link link;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	memset(&link, 0, sizeof(link));
+	nicvf_set_eth_link_status(nic, &link);
+	return nicvf_atomic_write_link_status(dev, &link);
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.link_update              = nicvf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index d4d2071..8189856 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -38,6 +38,10 @@
 #define THUNDERX_NICVF_PMD_VERSION      "1.0"
 
 #define NICVF_INTR_POLL_INTERVAL_MS	50
+#define NICVF_HALF_DUPLEX		0x00
+#define NICVF_FULL_DUPLEX		0x01
+#define NICVF_UNKNOWN_DUPLEX		0xff
+
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

* [PATCH v6 10/27] net/thunderx: add registers dump support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (8 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 09/27] net/thunderx: add link status and link update support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 11/27] net/thunderx: add ethdev configure support Jerin Jacob
                               ` (17 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
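
A hedged sketch of driving this op from an application through the
generic ethdev register-dump API (buffer handling and port id are
illustrative, not part of the patch):

    struct rte_dev_reg_info info;
    int count = rte_eth_dev_get_reg_length(port_id);

    memset(&info, 0, sizeof(info));
    info.data = calloc(count, sizeof(uint64_t));
    info.length = count;    /* the PMD only supports a full dump */
    if (info.data != NULL && rte_eth_dev_get_reg_info(port_id, &info) == 0)
            printf("dumped %d regs, version 0x%08x\n", count, info.version);
    free(info.data);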

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ec5407b..6811718 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -143,9 +143,36 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 	return nicvf_atomic_write_link_status(dev, &link);
 }
 
+static int
+nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
+{
+	return nicvf_reg_get_count();
+}
+
+static int
+nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
+{
+	uint64_t *data = regs->data;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (data == NULL)
+		return -EINVAL;
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+		(regs->length == (uint32_t)nicvf_reg_get_count())) {
+		regs->version = nic->vendor_id << 16 | nic->device_id;
+		nicvf_reg_dump(nic, data);
+		return 0;
+	}
+	return -ENOTSUP;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.link_update              = nicvf_dev_link_update,
+	.get_reg_length           = nicvf_dev_get_reg_length,
+	.get_reg                  = nicvf_dev_get_regs,
 };
 
 static int
-- 
2.5.5

* [PATCH v6 11/27] net/thunderx: add ethdev configure support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (9 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 10/27] net/thunderx: add registers dump support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 12/27] net/thunderx: add get device info support Jerin Jacob
                               ` (16 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 78 +++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)
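
Given the checks below, a minimal configuration this op accepts looks as
follows (a sketch; nb_rxq/nb_txq and the RSS hash mask are illustrative):

    static const struct rte_eth_conf port_conf = {
            .rxmode = {
                    .mq_mode = ETH_MQ_RX_RSS,
                    .hw_strip_crc = 1,    /* forced on by the PMD anyway */
            },
            .rx_adv_conf.rss_conf = {
                    .rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
            },
    };

    if (rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf) != 0)
            rte_exit(EXIT_FAILURE, "port %u configure failed\n", port_id);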

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 6811718..33344fd 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,8 +168,86 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &conf->rxmode;
+	struct rte_eth_txmode *txmode = &conf->txmode;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (!rte_eal_has_hugepages()) {
+		PMD_INIT_LOG(INFO, "Huge page is not configured");
+		return -EINVAL;
+	}
+
+	if (txmode->mq_mode) {
+		PMD_INIT_LOG(INFO, "Tx mq_mode DCB or VMDq not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
+		return -EINVAL;
+	}
+
+	if (!rxmode->hw_strip_crc) {
+		PMD_INIT_LOG(NOTICE, "Can't disable hw crc strip");
+		rxmode->hw_strip_crc = 1;
+	}
+
+	if (rxmode->hw_ip_checksum) {
+		PMD_INIT_LOG(NOTICE, "Rxcksum not supported");
+		rxmode->hw_ip_checksum = 0;
+	}
+
+	if (rxmode->split_hdr_size) {
+		PMD_INIT_LOG(INFO, "Rxmode does not support split header");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_filter) {
+		PMD_INIT_LOG(INFO, "VLAN filter not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->hw_vlan_extend) {
+		PMD_INIT_LOG(INFO, "VLAN extended not supported");
+		return -EINVAL;
+	}
+
+	if (rxmode->enable_lro) {
+		PMD_INIT_LOG(INFO, "LRO not supported");
+		return -EINVAL;
+	}
+
+	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
+		return -EINVAL;
+	}
+
+	if (conf->dcb_capability_en) {
+		PMD_INIT_LOG(INFO, "DCB enable not supported");
+		return -EINVAL;
+	}
+
+	if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+		PMD_INIT_LOG(INFO, "Flow director not supported");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
+		dev->data->port_id, nicvf_hw_cap(nic));
+
+	return 0;
+}
+
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
+	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
-- 
2.5.5

* [PATCH v6 12/27] net/thunderx: add get device info support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (10 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 11/27] net/thunderx: add ethdev configure support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 13/27] net/thunderx: add Rx queue setup and release support Jerin Jacob
                               ` (15 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 45 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h | 17 ++++++++++++++
 2 files changed, 62 insertions(+)
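
The limits advertised here are what an application reads back before
sizing its queues; for instance (sketch, illustrative port id):

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    printf("port %u: up to %u rxq / %u txq, reta size %u, max frame %u\n",
           port_id, dev_info.max_rx_queues, dev_info.max_tx_queues,
           dev_info.reta_size, dev_info.max_rx_pktlen);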

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 33344fd..1bea851 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,50 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev_info->min_rx_bufsize = ETHER_MIN_MTU;
+	dev_info->max_rx_pktlen = NIC_HW_MAX_FRS;
+	dev_info->max_rx_queues = (uint16_t)MAX_RCV_QUEUES_PER_QS;
+	dev_info->max_tx_queues = (uint16_t)MAX_SND_QUEUES_PER_QS;
+	dev_info->max_mac_addrs = 1;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	dev_info->reta_size = nic->rss_info.rss_size;
+	dev_info->hash_key_size = RSS_HASH_KEY_BYTE_SIZE;
+	dev_info->flow_type_rss_offloads = NICVF_RSS_OFFLOAD_PASS1;
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
+		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
+		.txq_flags =
+			ETH_TXQ_FLAGS_NOMULTSEGS  |
+			ETH_TXQ_FLAGS_NOREFCOUNT  |
+			ETH_TXQ_FLAGS_NOMULTMEMP  |
+			ETH_TXQ_FLAGS_NOVLANOFFL  |
+			ETH_TXQ_FLAGS_NOXSUMSCTP,
+	};
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -249,6 +293,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.dev_infos_get            = nicvf_dev_info_get,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 8189856..e31657d 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -42,6 +42,23 @@
 #define NICVF_FULL_DUPLEX		0x01
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
+#define NICVF_RSS_OFFLOAD_PASS1 ( \
+	ETH_RSS_PORT | \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NICVF_RSS_OFFLOAD_TUNNEL ( \
+	ETH_RSS_VXLAN | \
+	ETH_RSS_GENEVE | \
+	ETH_RSS_NVGRE)
+
+#define NICVF_DEFAULT_RX_FREE_THRESH    224
+#define NICVF_DEFAULT_TX_FREE_THRESH    224
+#define NICVF_TX_FREE_MPOOL_THRESH      16
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

* [PATCH v6 13/27] net/thunderx: add Rx queue setup and release support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (11 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 12/27] net/thunderx: add get device info support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 14/27] net/thunderx: add Tx " Jerin Jacob
                               ` (14 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 136 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |   2 +
 2 files changed, 138 insertions(+)
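
Note the constraint enforced below that the mempool occupies a single
memory chunk. A setup sketch against the public API (pool sizing and
port id are illustrative):

    struct rte_mempool *mp;

    /* A single memory chunk implies adequately sized huge pages */
    mp = rte_pktmbuf_pool_create("rxq0_pool", 8192, 256, 0,
                                 RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    /* nb_desc is rounded up to a supported CQ size by the PMD */
    if (rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                               NULL, mp) != 0)
            rte_exit(EXIT_FAILURE, "rxq 0 setup failed\n");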

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1bea851..52856ab 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,140 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static int
+nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union cq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
+					NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rxq->phys = rz->phys_addr;
+	rxq->desc = rz->addr;
+	rxq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static void
+nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
+{
+	rxq->head = 0;
+	rxq->available_space = 0;
+	rxq->recv_buffers = 0;
+}
+
+static void
+nicvf_dev_rx_queue_release(void *rx_queue)
+{
+	struct nicvf_rxq *rxq = rx_queue;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq)
+		rte_free(rxq);
+}
+
+static int
+nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	uint16_t rx_free_thresh;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		socket_id, nic->node);
+
+	/* Mempool memory should be contiguous */
+	if (mp->nb_mem_chunks != 1) {
+		PMD_INIT_LOG(ERR, "Non contiguous mempool, check huge page sz");
+		return -EINVAL;
+	}
+
+	/* Rx deferred start is not supported */
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Rx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_cq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value nb_desc beyond available hw cq qsize");
+		return -EINVAL;
+	}
+
+	/* Check rx_free_thresh upper bound */
+	rx_free_thresh = (uint16_t)((rx_conf->rx_free_thresh) ?
+				rx_conf->rx_free_thresh :
+				NICVF_DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh > NICVF_MAX_RX_FREE_THRESH ||
+		rx_free_thresh >= nb_desc * .75) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh greater than expected %d",
+				rx_free_thresh);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->rx_queues[qidx] != NULL) {
+		PMD_RX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_rx_queue_release(dev->data->rx_queues[qidx]);
+		dev->data->rx_queues[qidx] = NULL;
+	}
+
+	/* Allocate rxq memory */
+	rxq = rte_zmalloc_socket("ethdev rx queue", sizeof(struct nicvf_rxq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate rxq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	rxq->nic = nic;
+	rxq->pool = mp;
+	rxq->queue_id = qidx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rx_drop_en = rx_conf->rx_drop_en;
+	rxq->cq_status = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_STATUS;
+	rxq->cq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_CQ_0_7_DOOR;
+	rxq->precharge_cnt = 0;
+	rxq->rbptr_offset = NICVF_CQE_RBPTR_WORD;
+
+	/* Alloc completion queue */
+	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
+		nicvf_dev_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	nicvf_rx_queue_reset(rxq);
+
+	PMD_RX_LOG(DEBUG, "[%d] rxq=%p pool=%s nb_desc=(%d/%d) phy=%" PRIx64,
+			qidx, rxq, mp->name, nb_desc,
+			rte_mempool_count(mp), rxq->phys);
+
+	dev->data->rx_queues[qidx] = rxq;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -294,6 +428,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
+	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index e31657d..afb875a 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -59,6 +59,8 @@
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
 #define NICVF_TX_FREE_MPOOL_THRESH      16
+#define NICVF_MAX_RX_FREE_THRESH        1024
+#define NICVF_MAX_TX_FREE_THRESH        1024
 
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
-- 
2.5.5

* [PATCH v6 14/27] net/thunderx: add Tx queue setup and release support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (12 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 13/27] net/thunderx: add Rx queue setup and release support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 15/27] net/thunderx: add RSS and reta query and update support Jerin Jacob
                               ` (13 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 175 ++++++++++++++++++++++++++++++++++++
 1 file changed, 175 insertions(+)
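
The txq_flags passed here decide the Tx free threshold the queue runs
with (and, in a later patch of this series, the mbuf-release path), so
an application wanting the single-pool fast path would request (sketch,
illustrative values):

    struct rte_eth_txconf txconf = {
            .tx_free_thresh = 0,    /* 0 picks the PMD default */
            .txq_flags = ETH_TXQ_FLAGS_NOMULTMEMP |
                         ETH_TXQ_FLAGS_NOREFCOUNT,
    };

    if (rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                               &txconf) != 0)
            rte_exit(EXIT_FAILURE, "txq 0 setup failed\n");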

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 52856ab..e786481 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -191,6 +191,179 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
+		    uint32_t desc_cnt)
+{
+	const struct rte_memzone *rz;
+	uint32_t ring_size = desc_cnt * sizeof(union sq_entry_t);
+
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+				NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq hw ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	sq->phys = rz->phys_addr;
+	sq->desc = rz->addr;
+	sq->qlen_mask = desc_cnt - 1;
+
+	return 0;
+}
+
+static inline void
+nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
+{
+	uint32_t head;
+
+	head = txq->head;
+	while (head != txq->tail) {
+		if (txq->txbuffs[head]) {
+			rte_pktmbuf_free_seg(txq->txbuffs[head]);
+			txq->txbuffs[head] = NULL;
+		}
+		head++;
+		head = head & txq->qlen_mask;
+	}
+}
+
+static void
+nicvf_tx_queue_reset(struct nicvf_txq *txq)
+{
+	uint32_t txq_desc_cnt = txq->qlen_mask + 1;
+
+	memset(txq->desc, 0, sizeof(union sq_entry_t) * txq_desc_cnt);
+	memset(txq->txbuffs, 0, sizeof(struct rte_mbuf *) * txq_desc_cnt);
+	txq->tail = 0;
+	txq->head = 0;
+	txq->xmit_bufs = 0;
+}
+
+static void
+nicvf_dev_tx_queue_release(void *sq)
+{
+	struct nicvf_txq *txq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = (struct nicvf_txq *)sq;
+	if (txq) {
+		if (txq->txbuffs != NULL) {
+			nicvf_tx_queue_release_mbufs(txq);
+			rte_free(txq->txbuffs);
+			txq->txbuffs = NULL;
+		}
+		rte_free(txq);
+	}
+}
+
+static int
+nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	uint16_t tx_free_thresh;
+	uint8_t is_single_pool;
+	struct nicvf_txq *txq;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Socket id check */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY && socket_id != nic->node)
+		PMD_DRV_LOG(WARNING, "socket_id expected %d, configured %d",
+		socket_id, nic->node);
+
+	/* Tx deferred start is not supported */
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Tx deferred start not supported");
+		return -EINVAL;
+	}
+
+	/* Roundup nb_desc to available qsize and validate max number of desc */
+	nb_desc = nicvf_qsize_sq_roundup(nb_desc);
+	if (nb_desc == 0) {
+		PMD_INIT_LOG(ERR, "Value of nb_desc beyond available sq qsize");
+		return -EINVAL;
+	}
+
+	/* Validate tx_free_thresh */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+				tx_conf->tx_free_thresh :
+				NICVF_DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc) ||
+		tx_free_thresh > NICVF_MAX_TX_FREE_THRESH) {
+		PMD_INIT_LOG(ERR,
+			"tx_free_thresh must be less than the number of TX "
+			"descriptors. (tx_free_thresh=%u port=%d "
+			"queue=%d)", (unsigned int)tx_free_thresh,
+			(int)dev->data->port_id, (int)qidx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->tx_queues[qidx] != NULL) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				qidx);
+		nicvf_dev_tx_queue_release(dev->data->tx_queues[qidx]);
+		dev->data->tx_queues[qidx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nicvf_txq),
+					RTE_CACHE_LINE_SIZE, nic->node);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate txq=%d", qidx);
+		return -ENOMEM;
+	}
+
+	txq->nic = nic;
+	txq->queue_id = qidx;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->sq_head = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_HEAD;
+	txq->sq_door = nicvf_qset_base(nic, qidx) + NIC_QSET_SQ_0_7_DOOR;
+	is_single_pool = (txq->txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT &&
+				txq->txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	/* Choose optimum free threshold value for multipool case */
+	if (!is_single_pool) {
+		txq->tx_free_thresh = (uint16_t)
+		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
+				NICVF_TX_FREE_MPOOL_THRESH :
+				tx_conf->tx_free_thresh);
+	}
+
+	/* Allocate software ring */
+	txq->txbuffs = rte_zmalloc_socket("txq->txbuffs",
+				nb_desc * sizeof(struct rte_mbuf *),
+				RTE_CACHE_LINE_SIZE, nic->node);
+
+	if (txq->txbuffs == NULL) {
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
+		nicvf_dev_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	nicvf_tx_queue_reset(txq);
+
+	PMD_TX_LOG(DEBUG, "[%d] txq=%p nb_desc=%d desc=%p phys=0x%" PRIx64,
+			qidx, txq, nb_desc, txq->desc, txq->phys);
+
+	dev->data->tx_queues[qidx] = txq;
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return 0;
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -430,6 +603,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_infos_get            = nicvf_dev_info_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
+	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
 	.get_reg                  = nicvf_dev_get_regs,
 };
-- 
2.5.5

* [PATCH v6 15/27] net/thunderx: add RSS and reta query and update support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (13 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 14/27] net/thunderx: add Tx " Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 16/27] net/thunderx: add MTU set support Jerin Jacob
                               ` (12 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 172 ++++++++++++++++++++++++++++++++++++
 1 file changed, 172 insertions(+)
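
RETA entries travel in 64-entry groups through the generic API; a sketch
of spreading flows round-robin over the configured Rx queues (port_id
and nb_rxq are illustrative, not part of this patch):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_rss_reta_entry64 reta[2];    /* covers up to 128 entries */
    uint16_t i;

    rte_eth_dev_info_get(port_id, &dev_info);
    memset(reta, 0, sizeof(reta));
    for (i = 0; i < dev_info.reta_size; i++) {
            reta[i / RTE_RETA_GROUP_SIZE].mask |=
                    1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                    i % nb_rxq;
    }
    if (rte_eth_dev_rss_reta_update(port_id, reta, dev_info.reta_size) != 0)
            rte_exit(EXIT_FAILURE, "reta update failed\n");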

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index e786481..08f65b3 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -168,6 +168,174 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static inline uint64_t
+nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
+{
+	uint64_t nic_rss = 0;
+
+	if (ethdev_rss & ETH_RSS_IPV4)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_IPV6)
+		nic_rss |= RSS_IP_ENA;
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
+
+	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
+
+	if (ethdev_rss & ETH_RSS_PORT)
+		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (ethdev_rss & ETH_RSS_VXLAN)
+			nic_rss |= RSS_TUN_VXLAN_ENA;
+
+		if (ethdev_rss & ETH_RSS_GENEVE)
+			nic_rss |= RSS_TUN_GENEVE_ENA;
+
+		if (ethdev_rss & ETH_RSS_NVGRE)
+			nic_rss |= RSS_TUN_NVGRE_ENA;
+	}
+
+	return nic_rss;
+}
+
+static inline uint64_t
+nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
+{
+	uint64_t ethdev_rss = 0;
+
+	if (nic_rss & RSS_IP_ENA)
+		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
+				ETH_RSS_NONFRAG_IPV6_TCP);
+
+	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
+		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
+				ETH_RSS_NONFRAG_IPV6_UDP);
+
+	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
+		ethdev_rss |= ETH_RSS_PORT;
+
+	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
+		if (nic_rss & RSS_TUN_VXLAN_ENA)
+			ethdev_rss |= ETH_RSS_VXLAN;
+
+		if (nic_rss & RSS_TUN_GENEVE_ENA)
+			ethdev_rss |= ETH_RSS_GENEVE;
+
+		if (nic_rss & RSS_TUN_NVGRE_ENA)
+			ethdev_rss |= ETH_RSS_NVGRE;
+	}
+	return ethdev_rss;
+}
+
+static int
+nicvf_dev_reta_query(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The configured hash lookup table size "
+			"(%d) doesn't match the size the hardware supports "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				reta_conf[i].reta[j] = tbl[j];
+	}
+
+	return 0;
+}
+
+static int
+nicvf_dev_reta_update(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint8_t tbl[NIC_MAX_RSS_IDR_TBL_SIZE];
+	int ret, i, j;
+
+	if (reta_size != NIC_MAX_RSS_IDR_TBL_SIZE) {
+		RTE_LOG(ERR, PMD, "The configured hash lookup table size "
+			"(%d) doesn't match the size the hardware supports "
+			"(%d)", reta_size, NIC_MAX_RSS_IDR_TBL_SIZE);
+		return -EINVAL;
+	}
+
+	ret = nicvf_rss_reta_query(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+	if (ret)
+		return ret;
+
+	/* Copy RETA table */
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			if ((reta_conf[i].mask >> j) & 0x01)
+				tbl[j] = reta_conf[i].reta[j];
+	}
+
+	return nicvf_rss_reta_update(nic, tbl, NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
+static int
+nicvf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			    struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	if (rss_conf->rss_key)
+		nicvf_rss_get_key(nic, rss_conf->rss_key);
+
+	rss_conf->rss_key_len =  RSS_HASH_KEY_BYTE_SIZE;
+	rss_conf->rss_hf = nicvf_rss_nic_to_ethdev(nic, nicvf_rss_get_cfg(nic));
+	return 0;
+}
+
+static int
+nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t nic_rss;
+
+	if (rss_conf->rss_key &&
+		rss_conf->rss_key_len != RSS_HASH_KEY_BYTE_SIZE) {
+		RTE_LOG(ERR, PMD, "Hash key size mismatch %d",
+				rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key)
+		nicvf_rss_set_key(nic, rss_conf->rss_key);
+
+	nic_rss = nicvf_rss_ethdev_to_nic(nic, rss_conf->rss_hf);
+	nicvf_rss_set_cfg(nic, nic_rss);
+	return 0;
+}
+
 static int
 nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 		    uint32_t desc_cnt)
@@ -601,6 +769,10 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.reta_update              = nicvf_dev_reta_update,
+	.reta_query               = nicvf_dev_reta_query,
+	.rss_hash_update          = nicvf_dev_rss_hash_update,
+	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
-- 
2.5.5

* [PATCH v6 16/27] net/thunderx: add MTU set support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (14 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 15/27] net/thunderx: add RSS and reta query and update support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 17/27] net/thunderx: add promiscuous enable support Jerin Jacob
                               ` (11 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 44 +++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_ethdev.h |  2 ++
 2 files changed, 46 insertions(+)
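
The checks below tie the acceptable MTU to the Rx buffer size unless
scattered Rx is enabled; from the application side (sketch, illustrative
values):

    uint16_t mtu = 9000;

    /* Rejected with -EINVAL if the frame exceeds the HW maximum, or if
     * it does not fit the Rx buffer while scattered Rx is disabled. */
    if (rte_eth_dev_set_mtu(port_id, mtu) != 0)
            printf("port %u: MTU %u rejected\n", port_id, mtu);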

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 08f65b3..65b14c8 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -144,6 +144,49 @@ nicvf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint32_t buffsz, frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (frame_size > NIC_HW_MAX_FRS)
+		return -EINVAL;
+
+	if (frame_size < NIC_HW_MIN_FRS)
+		return -EINVAL;
+
+	buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
+
+	/*
+	 * Refuse mtu that requires the support of scattered packets
+	 * when this feature has not been enabled before.
+	 */
+	if (!dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz))
+		return -EINVAL;
+
+	/* check <seg size> * <max_seg>  >= max_frame */
+	if (dev->data->scattered_rx &&
+		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
+		return -EINVAL;
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	if (nicvf_mbox_update_hw_max_frs(nic, frame_size))
+		return -EINVAL;
+
+	/* Update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)frame_size;
+	nic->mtu = mtu;
+	return 0;
+}
+
+static int
 nicvf_dev_get_reg_length(struct rte_eth_dev *dev  __rte_unused)
 {
 	return nicvf_reg_get_count();
@@ -769,6 +812,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index afb875a..b1af468 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -62,6 +62,8 @@
 #define NICVF_MAX_RX_FREE_THRESH        1024
 #define NICVF_MAX_TX_FREE_THRESH        1024
 
+#define VLAN_TAG_SIZE                   4	/* 802.3ac tag */
+
 static inline struct nicvf *
 nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 {
-- 
2.5.5

* [PATCH v6 17/27] net/thunderx: add promiscuous enable support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (15 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 16/27] net/thunderx: add MTU set support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 18/27] net/thunderx: add stats support Jerin Jacob
                               ` (10 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 7 +++++++
 1 file changed, 7 insertions(+)
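
Since the LMAC-to-VF 1:1 mapping leaves the VF effectively promiscuous
already, the callback below is an intentional no-op; the generic call
still behaves as expected (sketch, illustrative port id):

    rte_eth_promiscuous_enable(port_id);
    /* ethdev records the state even though the VF op itself is empty */
    printf("promisc: %d\n", rte_eth_promiscuous_get(port_id));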

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 65b14c8..9c95ce8 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -211,6 +211,12 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+/* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
+static void
+nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
 static inline uint64_t
 nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
@@ -811,6 +817,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
-- 
2.5.5

* [PATCH v6 18/27] net/thunderx: add stats support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (16 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 17/27] net/thunderx: add promiscuous enable support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 19/27] net/thunderx: add single and multi segment Tx functions Jerin Jacob
                               ` (9 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 66 +++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
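
A readout sketch; per-queue counters are filled only up to
RTE_ETHDEV_QUEUE_STAT_CNTRS, matching the loops below (port id is
illustrative):

    struct rte_eth_stats stats;

    rte_eth_stats_get(port_id, &stats);
    printf("port %u: rx %" PRIu64 " pkts (%" PRIu64 " missed), "
           "tx %" PRIu64 " pkts\n", port_id, stats.ipackets,
           stats.imissed, stats.opackets);
    rte_eth_stats_reset(port_id);    /* clears port and queue HW counters */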

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 9c95ce8..e20f0d9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -211,6 +211,70 @@ nicvf_dev_get_regs(struct rte_eth_dev *dev, struct rte_dev_reg_info *regs)
 	return -ENOTSUP;
 }
 
+static void
+nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t qidx;
+	struct nicvf_hw_rx_qstats rx_qstats;
+	struct nicvf_hw_tx_qstats tx_qstats;
+	struct nicvf_hw_stats port_stats;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	/* Reading per RX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_rx_qstats(nic, &rx_qstats, qidx);
+		stats->q_ibytes[qidx] = rx_qstats.q_rx_bytes;
+		stats->q_ipackets[qidx] = rx_qstats.q_rx_packets;
+	}
+
+	/* Reading per TX ring stats */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
+		if (qidx == RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			break;
+
+		nicvf_hw_get_tx_qstats(nic, &tx_qstats, qidx);
+		stats->q_obytes[qidx] = tx_qstats.q_tx_bytes;
+		stats->q_opackets[qidx] = tx_qstats.q_tx_packets;
+	}
+
+	nicvf_hw_get_stats(nic, &port_stats);
+	stats->ibytes = port_stats.rx_bytes;
+	stats->ipackets = port_stats.rx_ucast_frames;
+	stats->ipackets += port_stats.rx_bcast_frames;
+	stats->ipackets += port_stats.rx_mcast_frames;
+	stats->ierrors = port_stats.rx_l2_errors;
+	stats->imissed = port_stats.rx_drop_red;
+	stats->imissed += port_stats.rx_drop_overrun;
+	stats->imissed += port_stats.rx_drop_bcast;
+	stats->imissed += port_stats.rx_drop_mcast;
+	stats->imissed += port_stats.rx_drop_l3_bcast;
+	stats->imissed += port_stats.rx_drop_l3_mcast;
+
+	stats->obytes = port_stats.tx_bytes_ok;
+	stats->opackets = port_stats.tx_ucast_frames_ok;
+	stats->opackets += port_stats.tx_bcast_frames_ok;
+	stats->opackets += port_stats.tx_mcast_frames_ok;
+	stats->oerrors = port_stats.tx_drops;
+}
+
+static void
+nicvf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	int i;
+	uint16_t rxqs = 0, txqs = 0;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		rxqs |= (0x3 << (i * 2));
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		txqs |= (0x3 << (i * 2));
+
+	nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, rxqs, txqs);
+}
+
 /* Promiscuous mode enabled by default in LMAC to VF 1:1 map configuration */
 static void
 nicvf_dev_promisc_enable(struct rte_eth_dev *dev __rte_unused)
@@ -817,6 +881,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
 	.link_update              = nicvf_dev_link_update,
+	.stats_get                = nicvf_dev_stats_get,
+	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
-- 
2.5.5

* [PATCH v6 19/27] net/thunderx: add single and multi segment Tx functions
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (17 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 18/27] net/thunderx: add stats support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-21 13:34               ` Ferruh Yigit
  2016-06-17 13:29             ` [PATCH v6 20/27] net/thunderx: add single and multi segment Rx functions Jerin Jacob
                               ` (8 subsequent siblings)
  27 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/Makefile       |   2 +
 drivers/net/thunderx/nicvf_ethdev.c |   5 +-
 drivers/net/thunderx/nicvf_rxtx.c   | 255 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |  93 +++++++++++++
 4 files changed, 354 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.c
 create mode 100644 drivers/net/thunderx/nicvf_rxtx.h
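
The single-segment path consumes a fixed two SQ descriptors per packet
(header + gather), which keeps the free-descriptor accounting O(1); the
multi-segment path charges nb_segs + 1. Either way the burst API
contract is the usual one (sketch; pkts[] and nb assumed filled by the
caller):

    uint16_t i, sent;

    sent = rte_eth_tx_burst(port_id, 0, pkts, nb);
    /* The PMD owns only the first 'sent' mbufs; free or retry the rest */
    for (i = sent; i < nb; i++)
            rte_pktmbuf_free(pkts[i]);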

diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
index eb9f100..9079b5b 100644
--- a/drivers/net/thunderx/Makefile
+++ b/drivers/net/thunderx/Makefile
@@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
+SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
 SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
 
+CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index e20f0d9..c727ce0 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -66,7 +66,7 @@
 #include "base/nicvf_plat.h"
 
 #include "nicvf_ethdev.h"
-
+#include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
 static inline int
@@ -617,6 +617,9 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 		(tx_conf->tx_free_thresh == NICVF_DEFAULT_TX_FREE_THRESH ?
 				NICVF_TX_FREE_MPOOL_THRESH :
 				tx_conf->tx_free_thresh);
+		txq->pool_free = nicvf_multi_pool_free_xmited_buffers;
+	} else {
+		txq->pool_free = nicvf_single_pool_free_xmited_buffers;
 	}
 
 	/* Allocate software ring */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
new file mode 100644
index 0000000..88a5152
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -0,0 +1,255 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_log.h>
+#include <rte_mbuf.h>
+#include <rte_prefetch.h>
+
+#include "base/nicvf_plat.h"
+
+#include "nicvf_ethdev.h"
+#include "nicvf_rxtx.h"
+#include "nicvf_logs.h"
+
+static inline void __hot
+fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Local variable sqe to avoid a read from sq desc memory */
+	union sq_entry_t sqe;
+	uint64_t ol_flags;
+
+	/* Fill SQ header descriptor */
+	sqe.buff[0] = 0;
+	sqe.hdr.subdesc_type = SQ_DESC_TYPE_HEADER;
+	/* Number of sub-descriptors following this one */
+	sqe.hdr.subdesc_cnt = pkt->nb_segs;
+	sqe.hdr.tot_len = pkt->pkt_len;
+
+	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
+	if (unlikely(ol_flags)) {
+		/* L4 cksum */
+		if (ol_flags & PKT_TX_TCP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
+		else if (ol_flags & PKT_TX_UDP_CKSUM)
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
+		else
+			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
+		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
+
+		/* L3 cksum */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			sqe.hdr.csum_l3 = 1;
+			sqe.hdr.l3_offset = pkt->l2_len;
+		}
+	}
+
+	entry->buff[0] = sqe.buff[0];
+}
+
+void __hot
+nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	int j = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	void *obj_p[NICVF_MAX_TX_FREE_THRESH] __rte_cache_aligned;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head])
+			obj_p[j++] = txbuffs[head];
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	rte_mempool_put_bulk(sq->pool, obj_p, j);
+	sq->head = curr_head;
+	sq->xmit_bufs -= j;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+void __hot
+nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq)
+{
+	uint32_t n = 0;
+	uint32_t curr_head;
+	uint32_t head = sq->head;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+
+	curr_head = nicvf_addr_read(sq->sq_head) >> 4;
+	while (head != curr_head) {
+		if (txbuffs[head]) {
+			rte_pktmbuf_free_seg(txbuffs[head]);
+			n++;
+		}
+
+		head = (head + 1) & sq->qlen_mask;
+	}
+
+	sq->head = curr_head;
+	sq->xmit_bufs -= n;
+	NICVF_TX_ASSERT(sq->xmit_bufs >= 0);
+}
+
+static inline uint32_t __hot
+nicvf_free_tx_desc(struct nicvf_txq *sq)
+{
+	return ((sq->head - sq->tail - 1) & sq->qlen_mask);
+}
+
+/* Send Header + Packet */
+#define TX_DESC_PER_PKT 2
+
+static inline uint32_t __hot
+nicvf_free_xmitted_buffers(struct nicvf_txq *sq, struct rte_mbuf **tx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint32_t free_desc = nicvf_free_tx_desc(sq);
+
+	if (free_desc < nb_pkts * TX_DESC_PER_PKT ||
+			sq->xmit_bufs > sq->tx_free_thresh) {
+		if (unlikely(sq->pool == NULL))
+			sq->pool = tx_pkts[0]->pool;
+
+		sq->pool_free(sq);
+		/* Freed now, let see the number of free descs again */
+		free_desc = nicvf_free_tx_desc(sq);
+	}
+	return free_desc;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+	uint32_t free_desc;
+	uint32_t tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt;
+	uint32_t qlen_mask = sq->qlen_mask;
+
+	tail = sq->tail;
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts && (int)free_desc >= TX_DESC_PER_PKT; i++) {
+		pkt = tx_pkts[i];
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+		free_desc -= TX_DESC_PER_PKT;
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += i;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, i * TX_DESC_PER_PKT);
+	return i;
+}
+
+uint16_t __hot
+nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	int i, k;
+	uint32_t used_desc, next_used_desc, used_bufs, free_desc, tail;
+	struct nicvf_txq *sq = tx_queue;
+	union sq_entry_t *desc_ptr = sq->desc;
+	struct rte_mbuf **txbuffs = sq->txbuffs;
+	struct rte_mbuf *pkt, *seg;
+	uint32_t qlen_mask = sq->qlen_mask;
+	uint16_t nb_segs;
+
+	tail = sq->tail;
+	used_desc = 0;
+	used_bufs = 0;
+
+	free_desc = nicvf_free_xmitted_buffers(sq, tx_pkts, nb_pkts);
+
+	for (i = 0; i < nb_pkts; i++) {
+		pkt = tx_pkts[i];
+
+		nb_segs = pkt->nb_segs;
+
+		next_used_desc = used_desc + nb_segs + 1;
+		if (next_used_desc > free_desc)
+			break;
+		used_desc = next_used_desc;
+		used_bufs += nb_segs;
+
+		txbuffs[tail] = NULL;
+		fill_sq_desc_header(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		txbuffs[tail] = pkt;
+		fill_sq_desc_gather(desc_ptr + tail, pkt);
+		tail = (tail + 1) & qlen_mask;
+
+		seg = pkt->next;
+		for (k = 1; k < nb_segs; k++) {
+			txbuffs[tail] = seg;
+			fill_sq_desc_gather(desc_ptr + tail, seg);
+			tail = (tail + 1) & qlen_mask;
+			seg = seg->next;
+		}
+	}
+
+	sq->tail = tail;
+	sq->xmit_bufs += used_bufs;
+	rte_wmb();
+
+	/* Inform HW to xmit the packets */
+	nicvf_addr_write(sq->sq_door, used_desc);
+	return nb_pkts;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
new file mode 100644
index 0000000..b1fdc69
--- /dev/null
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -0,0 +1,93 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __THUNDERX_NICVF_RXTX_H__
+#define __THUNDERX_NICVF_RXTX_H__
+
+#include <rte_ethdev.h>
+
+#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+
+#ifndef __hot
+#define __hot	__attribute__((hot))
+#endif
+
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
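+/*
+ * Receive-buffer sizes are packed four 16-bit fields per 64-bit word;
+ * on big-endian hosts the fields appear reversed within each word, so
+ * mirror index i inside its group of four.
+ */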
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return (i & ~3) + 3 - (i & 3);
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	/* Use a local sqe to avoid reading back from SQ descriptor memory */
+	union sq_entry_t sqe;
+
+	/* Fill the SQ gather entry */
+	sqe.buff[0] = 0; sqe.buff[1] = 0;
+	sqe.gather.subdesc_type = SQ_DESC_TYPE_GATHER;
+	sqe.gather.ld_type = NIC_SEND_LD_TYPE_E_LDT;
+	sqe.gather.size = pkt->data_len;
+	sqe.gather.addr = rte_mbuf_data_dma_addr(pkt);
+
+	entry->buff[0] = sqe.buff[0];
+	entry->buff[1] = sqe.buff[1];
+}
+
+#else
+
+static inline uint16_t __attribute__((const))
+nicvf_frag_num(uint16_t i)
+{
+	return i;
+}
+
+static inline void __hot
+fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
+{
+	entry->buff[0] = (uint64_t)SQ_DESC_TYPE_GATHER << 60 |
+			 (uint64_t)NIC_SEND_LD_TYPE_E_LDT << 58 |
+			 pkt->data_len;
+	entry->buff[1] = rte_mbuf_data_dma_addr(pkt);
+}
+#endif
+
+uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
+uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
+				  uint16_t pkts);
+
+void nicvf_single_pool_free_xmited_buffers(struct nicvf_txq *sq);
+void nicvf_multi_pool_free_xmited_buffers(struct nicvf_txq *sq);
+
+#endif /* __THUNDERX_NICVF_RXTX_H__  */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 20/27] net/thunderx: add single and multi segment Rx functions
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (18 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 19/27] net/thunderx: add single and multi segment Tx functions Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 21/27] net/thunderx: add supported packet type get Jerin Jacob
                               ` (7 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 drivers/net/thunderx/nicvf_ethdev.h |  33 ++++
 drivers/net/thunderx/nicvf_rxtx.c   | 317 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.h   |   5 +
 3 files changed, 355 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b1af468..59fa19c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -70,4 +70,37 @@ nicvf_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+static inline uint64_t
+nicvf_mempool_phy_offset(struct rte_mempool *mp)
+{
+	struct rte_mempool_memhdr *hdr;
+
+	hdr = STAILQ_FIRST(&mp->mem_list);
+	assert(hdr != NULL);
+	return (uint64_t)((uintptr_t)hdr->addr - hdr->phys_addr);
+}
+
+static inline uint16_t
+nicvf_mbuff_meta_length(struct rte_mbuf *mbuf)
+{
+	return (uint16_t)((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+}
+
+/*
+ * Simple phy2virt functions assuming mbufs are in a single huge page
+ * V = P + offset
+ * P = V - offset
+ */
+static inline uintptr_t
+nicvf_mbuff_phy2virt(phys_addr_t phy, uint64_t mbuf_phys_off)
+{
+	return (uintptr_t)(phy + mbuf_phys_off);
+}
+
+static inline uintptr_t
+nicvf_mbuff_virt2phy(uintptr_t virt, uint64_t mbuf_phys_off)
+{
+	return (phys_addr_t)(virt - mbuf_phys_off);
+}
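+
+/*
+ * Example with hypothetical values: a hugepage mapped at virtual
+ * 0x7f0000001000 and physical 0x100001000 gives a mbuf_phys_off of
+ * 0x7eff00000000, so each conversion is a single add or subtract.
+ */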
+
 #endif /* __THUNDERX_NICVF_ETHDEV_H__  */
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 88a5152..fed0859 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -253,3 +253,320 @@ nicvf_xmit_pkts_multiseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nicvf_addr_write(sq->sq_door, used_desc);
 	return nb_pkts;
 }
+
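+/* Map hardware CQE (l3_type, l4_type) pairs to DPDK packet types. */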
+static const uint32_t ptype_table[16][16] __rte_cache_aligned = {
+	[L3_NONE][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_NONE][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_NONE][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_NONE][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_NONE][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_NONE][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_NONE][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_NONE][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4][L4_NONE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV4][L4_IPFRAG] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4][L4_IPCOMP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4][L4_TCP] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[L3_IPV4][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_GRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[L3_IPV4][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4][L4_NVGRE] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV4_OPT][L4_NONE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_L3_IPV4,
+	[L3_IPV4_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV4_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV4_OPT][L4_TCP] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV4_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_GRE] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV4_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV4_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV4_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV4_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV4_EXT |
+				RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6][L4_NONE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_IPSEC_ESP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L3_IPV4,
+	[L3_IPV6][L4_IPFRAG] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6][L4_IPCOMP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6][L4_TCP] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[L3_IPV6][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_GRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[L3_IPV6][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6][L4_NVGRE] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_IPV6_OPT][L4_NONE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_IPSEC_ESP] =  RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_L3_IPV4,
+	[L3_IPV6_OPT][L4_IPFRAG] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_FRAG,
+	[L3_IPV6_OPT][L4_IPCOMP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_UNKNOWN,
+	[L3_IPV6_OPT][L4_TCP] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[L3_IPV6_OPT][L4_UDP_PASS1] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_GRE] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_TUNNEL_GRE,
+	[L3_IPV6_OPT][L4_UDP_PASS2] = RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[L3_IPV6_OPT][L4_UDP_GENEVE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_IPV6_OPT][L4_UDP_VXLAN] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_IPV6_OPT][L4_NVGRE] = RTE_PTYPE_L3_IPV6_EXT |
+					RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_ET_STOP][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_ET_STOP][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_ET_STOP][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_ET_STOP][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_ET_STOP][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_ET_STOP][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_ET_STOP][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_ET_STOP][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+
+	[L3_OTHER][L4_NONE] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPSEC_ESP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_IPFRAG] = RTE_PTYPE_L4_FRAG,
+	[L3_OTHER][L4_IPCOMP] = RTE_PTYPE_UNKNOWN,
+	[L3_OTHER][L4_TCP] = RTE_PTYPE_L4_TCP,
+	[L3_OTHER][L4_UDP_PASS1] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_GRE] = RTE_PTYPE_TUNNEL_GRE,
+	[L3_OTHER][L4_UDP_PASS2] = RTE_PTYPE_L4_UDP,
+	[L3_OTHER][L4_UDP_GENEVE] = RTE_PTYPE_TUNNEL_GENEVE,
+	[L3_OTHER][L4_UDP_VXLAN] = RTE_PTYPE_TUNNEL_VXLAN,
+	[L3_OTHER][L4_NVGRE] = RTE_PTYPE_TUNNEL_NVGRE,
+};
+
+static inline uint32_t __hot
+nicvf_rx_classify_pkt(cqe_rx_word0_t cqe_rx_w0)
+{
+	return ptype_table[cqe_rx_w0.l3_type][cqe_rx_w0.l4_type];
+}
+
+static inline int __hot
+nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
+{
+	int i;
+	uint32_t ltail, next_tail;
+	struct nicvf_rbdr *rbdr = rxq->shared_rbdr;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	struct rbdr_entry_t *desc = rbdr->desc;
+	uint32_t qlen_mask = rbdr->qlen_mask;
+	uintptr_t door = rbdr->rbdr_door;
+	void *obj_p[NICVF_MAX_RX_FREE_THRESH] __rte_cache_aligned;
+
+	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
+		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		return 0;
+	}
+
+	NICVF_RX_ASSERT((unsigned int)to_fill <= (qlen_mask -
+		(nicvf_addr_read(rbdr->rbdr_status) & NICVF_RBDR_COUNT_MASK)));
+
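+	/* Atomically reserve a contiguous range of RBDR descriptors;
+	 * writers then publish in order by spinning on tail below.
+	 */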
+	next_tail = __atomic_fetch_add(&rbdr->next_tail, to_fill,
+					__ATOMIC_ACQUIRE);
+	ltail = next_tail;
+	for (i = 0; i < to_fill; i++) {
+		struct rbdr_entry_t *entry = desc + (ltail & qlen_mask);
+
+		entry->full_addr = nicvf_mbuff_virt2phy((uintptr_t)obj_p[i],
+							mbuf_phys_off);
+		ltail++;
+	}
+
+	while (__atomic_load_n(&rbdr->tail, __ATOMIC_RELAXED) != next_tail)
+		rte_pause();
+
+	__atomic_store_n(&rbdr->tail, ltail, __ATOMIC_RELEASE);
+	nicvf_addr_write(door, to_fill);
+	return to_fill;
+}
+
+static inline int32_t __hot
+nicvf_rx_pkts_to_process(struct nicvf_rxq *rxq, uint16_t nb_pkts,
+			 int32_t available_space)
+{
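+	/* Re-read the hardware CQE count only when the cached value
+	 * cannot satisfy the requested burst.
+	 */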
+	if (unlikely(available_space < nb_pkts))
+		rxq->available_space = nicvf_addr_read(rxq->cq_status)
+						& NICVF_CQ_CQE_COUNT_MASK;
+
+	return RTE_MIN(nb_pkts, available_space);
+}
+
+static inline void __hot
+nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
+		 struct rte_mbuf *pkt)
+{
+	if (likely(cqe_rx_w0.rss_alg)) {
+		pkt->hash.rss = cqe_rx_w2.rss_tag;
+		pkt->ol_flags |= PKT_RX_RSS_HASH;
+	}
+}
+
+uint16_t __hot
+nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	uint32_t i, to_process;
+	struct cqe_rx_t *cqe_rx;
+	struct rte_mbuf *pkt;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	cqe_rx_word3_t cqe_rx_w3;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t rb0_ptr, mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t cqe_head = rxq->head & cqe_mask;
+	int32_t available_space = rxq->available_space;
+	uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
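+		/* Prefetch two CQ entries ahead of the one in flight. */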
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cqe_rx = (struct cqe_rx_t *)&desc[cqe_head];
+		NICVF_RX_ASSERT(((struct cq_entry_type_t *)cqe_rx)->cqe_type
+						 == CQE_TYPE_RX);
+
+		NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+		NICVF_LOAD_PAIR(cqe_rx_w2.u64, cqe_rx_w3.u64, &cqe_rx->word2);
+		rb0_ptr = *((uint64_t *)cqe_rx + rbptr_offset);
+		pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+				(rb0_ptr - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+		pkt->ol_flags = 0;
+		pkt->port = port_id;
+		pkt->data_len = cqe_rx_w3.rb0_sz;
+		pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+		pkt->nb_segs = 1;
+		pkt->pkt_len = cqe_rx_w3.rb0_sz;
+		pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+
+		nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+		rte_mbuf_refcnt_set(pkt, 1);
+		rx_pkts[i] = pkt;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(pkt);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += to_process;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
+						rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
+
+static inline uint16_t __hot
+nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
+			uint64_t mbuf_phys_off, uint8_t port_id,
+			struct rte_mbuf **rx_pkt, uint8_t rbptr_offset)
+{
+	struct rte_mbuf *pkt, *seg, *prev;
+	cqe_rx_word0_t cqe_rx_w0;
+	cqe_rx_word1_t cqe_rx_w1;
+	cqe_rx_word2_t cqe_rx_w2;
+	uint16_t *rb_sz, nb_segs, seg_idx;
+	uint64_t *rb_ptr;
+
+	NICVF_LOAD_PAIR(cqe_rx_w0.u64, cqe_rx_w1.u64, cqe_rx);
+	NICVF_RX_ASSERT(cqe_rx_w0.cqe_type == CQE_TYPE_RX);
+	cqe_rx_w2 = cqe_rx->word2;
+	rb_sz = &cqe_rx->word3.rb0_sz;
+	rb_ptr = (uint64_t *)cqe_rx + rbptr_offset;
+	nb_segs = cqe_rx_w0.rb_cnt;
+	pkt = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[0] - cqe_rx_w1.align_pad, mbuf_phys_off);
+
+	pkt->ol_flags = 0;
+	pkt->port = port_id;
+	pkt->data_off = RTE_PKTMBUF_HEADROOM + cqe_rx_w1.align_pad;
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = cqe_rx_w1.pkt_len;
+	pkt->data_len = rb_sz[nicvf_frag_num(0)];
+	rte_mbuf_refcnt_set(pkt, 1);
+	pkt->packet_type = nicvf_rx_classify_pkt(cqe_rx_w0);
+	nicvf_rx_offload(cqe_rx_w0, cqe_rx_w2, pkt);
+
+	*rx_pkt = pkt;
+	prev = pkt;
+	for (seg_idx = 1; seg_idx < nb_segs; seg_idx++) {
+		seg = (struct rte_mbuf *)nicvf_mbuff_phy2virt
+			(rb_ptr[seg_idx], mbuf_phys_off);
+
+		prev->next = seg;
+		seg->data_len = rb_sz[nicvf_frag_num(seg_idx)];
+		seg->port = port_id;
+		seg->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_mbuf_refcnt_set(seg, 1);
+
+		prev = seg;
+	}
+	prev->next = NULL;
+	return nb_segs;
+}
+
+uint16_t __hot
+nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	union cq_entry_t *cq_entry;
+	struct cqe_rx_t *cqe_rx;
+	struct nicvf_rxq *rxq = rx_queue;
+	union cq_entry_t *desc = rxq->desc;
+	const uint64_t cqe_mask = rxq->qlen_mask;
+	uint64_t mbuf_phys_off = rxq->mbuf_phys_off;
+	uint32_t i, to_process, cqe_head, buffers_consumed = 0;
+	int32_t available_space = rxq->available_space;
+	uint16_t nb_segs;
+	const uint8_t port_id = rxq->port_id;
+	const uint8_t rbptr_offset = rxq->rbptr_offset;
+
+	cqe_head = rxq->head & cqe_mask;
+	to_process = nicvf_rx_pkts_to_process(rxq, nb_pkts, available_space);
+
+	for (i = 0; i < to_process; i++) {
+		rte_prefetch_non_temporal(&desc[cqe_head + 2]);
+		cq_entry = &desc[cqe_head];
+		cqe_rx = (struct cqe_rx_t *)cq_entry;
+		nb_segs = nicvf_process_cq_mseg_entry(cqe_rx, mbuf_phys_off,
+				port_id, rx_pkts + i, rbptr_offset);
+		buffers_consumed += nb_segs;
+		cqe_head = (cqe_head + 1) & cqe_mask;
+		nicvf_prefetch_store_keep(rx_pkts[i]);
+	}
+
+	if (likely(to_process)) {
+		rxq->available_space -= to_process;
+		rxq->head = cqe_head;
+		nicvf_addr_write(rxq->cq_door, to_process);
+		rxq->recv_buffers += buffers_consumed;
+		if (rxq->recv_buffers > rxq->rx_free_thresh) {
+			rxq->recv_buffers -=
+				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
+		}
+	}
+
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index b1fdc69..d2ca2c9 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -33,6 +33,7 @@
 #ifndef __THUNDERX_NICVF_RXTX_H__
 #define __THUNDERX_NICVF_RXTX_H__
 
+#include <rte_byteorder.h>
 #include <rte_ethdev.h>
 
 #define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
@@ -83,6 +84,10 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
+uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
+
 uint16_t nicvf_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts, uint16_t pkts);
 uint16_t nicvf_xmit_pkts_multiseg(void *txq, struct rte_mbuf **tx_pkts,
 				  uint16_t pkts);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 21/27] net/thunderx: add supported packet type get
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (19 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 20/27] net/thunderx: add single and multi segment Rx functions Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 22/27] net/thunderx: add Rx queue count support Jerin Jacob
                               ` (6 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 40 +++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index c727ce0..46f0d7b 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -260,6 +260,45 @@ nicvf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->oerrors = port_stats.tx_drops;
 }
 
+static const uint32_t *
+nicvf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	size_t copied;
+	static uint32_t ptypes[32];
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	static const uint32_t ptypes_pass1[] = {
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+	};
+	static const uint32_t ptypes_pass2[] = {
+		RTE_PTYPE_TUNNEL_GRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+	};
+	static const uint32_t ptypes_end = RTE_PTYPE_UNKNOWN;
+
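+	/* Assemble the list: pass1 types, tunnel types on PASS2
+	 * silicon, then the RTE_PTYPE_UNKNOWN terminator.
+	 */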
+	copied = sizeof(ptypes_pass1);
+	memcpy(ptypes, ptypes_pass1, copied);
+	if (nicvf_hw_version(nic) == NICVF_PASS2) {
+		memcpy((char *)ptypes + copied, ptypes_pass2,
+			sizeof(ptypes_pass2));
+		copied += sizeof(ptypes_pass2);
+	}
+
+	memcpy((char *)ptypes + copied, &ptypes_end, sizeof(ptypes_end));
+	if (dev->rx_pkt_burst == nicvf_recv_pkts ||
+		dev->rx_pkt_burst == nicvf_recv_pkts_multiseg)
+		return ptypes;
+
+	return NULL;
+}
+
 static void
 nicvf_dev_stats_reset(struct rte_eth_dev *dev)
 {
@@ -888,6 +927,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
 	.dev_infos_get            = nicvf_dev_info_get,
+	.dev_supported_ptypes_get = nicvf_dev_supported_ptypes_get,
 	.mtu_set                  = nicvf_dev_set_mtu,
 	.reta_update              = nicvf_dev_reta_update,
 	.reta_query               = nicvf_dev_reta_query,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 22/27] net/thunderx: add Rx queue count support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (20 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 21/27] net/thunderx: add supported packet type get Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 23/27] net/thunderx: add Rx queue start and stop support Jerin Jacob
                               ` (5 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 1 +
 drivers/net/thunderx/nicvf_rxtx.c   | 9 +++++++++
 drivers/net/thunderx/nicvf_rxtx.h   | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 46f0d7b..33d5fba 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -935,6 +935,7 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
+	.rx_queue_count           = nicvf_dev_rx_queue_count,
 	.tx_queue_setup           = nicvf_dev_tx_queue_setup,
 	.tx_queue_release         = nicvf_dev_tx_queue_release,
 	.get_reg_length           = nicvf_dev_get_reg_length,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index fed0859..1c6d6a8 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -570,3 +570,12 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	return to_process;
 }
+
+uint32_t
+nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+
+	rxq = dev->data->rx_queues[queue_idx];
+	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d2ca2c9..ded87f3 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -84,6 +84,8 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 }
 #endif
 
+uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 23/27] net/thunderx: add Rx queue start and stop support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (21 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 22/27] net/thunderx: add Rx queue count support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 24/27] net/thunderx: add Tx " Jerin Jacob
                               ` (4 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 167 ++++++++++++++++++++++++++++++++++++
 drivers/net/thunderx/nicvf_rxtx.c   |  18 ++++
 drivers/net/thunderx/nicvf_rxtx.h   |   1 +
 3 files changed, 186 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 33d5fba..ed69147 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -562,6 +562,54 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+
+static inline int
+nicvf_configure_cpi(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint16_t qidx, qcnt;
+	int ret;
+
+	/* Count started rx queues */
+	for (qidx = qcnt = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		if (dev->data->rx_queue_state[qidx] ==
+		    RTE_ETH_QUEUE_STATE_STARTED)
+			qcnt++;
+
+	nic->cpi_alg = CPI_ALG_NONE;
+	ret = nicvf_mbox_config_cpi(nic, qcnt);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure CPI %d", ret);
+
+	return ret;
+}
+
+static int
+nicvf_configure_rss_reta(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	unsigned int idx, qmap_size;
+	uint8_t qmap[RTE_MAX_QUEUES_PER_PORT];
+	uint8_t default_reta[NIC_MAX_RSS_IDR_TBL_SIZE];
+
+	if (nic->cpi_alg != CPI_ALG_NONE)
+		return -EINVAL;
+
+	/* Prepare queue map */
+	for (idx = 0, qmap_size = 0; idx < dev->data->nb_rx_queues; idx++) {
+		if (dev->data->rx_queue_state[idx] ==
+				RTE_ETH_QUEUE_STATE_STARTED)
+			qmap[qmap_size++] = idx;
+	}
+
+	/* Update default RSS RETA */
+	for (idx = 0; idx < NIC_MAX_RSS_IDR_TBL_SIZE; idx++)
+		default_reta[idx] = qmap[idx % qmap_size];
+
+	return nicvf_rss_reta_update(nic, default_reta,
+				     NIC_MAX_RSS_IDR_TBL_SIZE);
+}
+
 static void
 nicvf_dev_tx_queue_release(void *sq)
 {
@@ -687,6 +735,33 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	return 0;
 }
 
+static inline void
+nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+{
+	uint32_t rxq_cnt;
+	uint32_t nb_pkts, released_pkts = 0;
+	uint32_t refill_cnt = 0;
+	struct rte_eth_dev *dev = rxq->nic->eth_dev;
+	struct rte_mbuf *rx_pkts[NICVF_MAX_RX_FREE_THRESH];
+
+	if (dev->rx_pkt_burst == NULL)
+		return;
+
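+	/* Drain pending completions through the active Rx burst
+	 * handler and free the returned mbufs.
+	 */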
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, rxq->queue_id))) {
+		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
+					NICVF_MAX_RX_FREE_THRESH);
+		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
+		while (nb_pkts) {
+			rte_pktmbuf_free_seg(rx_pkts[--nb_pkts]);
+			released_pkts++;
+		}
+	}
+
+	refill_cnt += nicvf_dev_rbdr_refill(dev, rxq->queue_id);
+	PMD_DRV_LOG(INFO, "free_cnt=%d  refill_cnt=%d",
+		    released_pkts, refill_cnt);
+}
+
 static void
 nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 {
@@ -695,6 +770,69 @@ nicvf_rx_queue_reset(struct nicvf_rxq *rxq)
 	rxq->recv_buffers = 0;
 }
 
+static inline int
+nicvf_start_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	/* Update rbdr pointer to all rxq */
+	rxq = dev->data->rx_queues[qidx];
+	rxq->shared_rbdr = nic->rbdr;
+
+	ret = nicvf_qset_rq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rq %d %d", qidx, ret);
+		goto config_rq_error;
+	}
+	ret = nicvf_qset_cq_config(nic, qidx, rxq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure cq %d %d", qidx, ret);
+		goto config_cq_error;
+	}
+
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+
+config_cq_error:
+	nicvf_qset_cq_reclaim(nic, qidx);
+config_rq_error:
+	nicvf_qset_rq_reclaim(nic, qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct nicvf_rxq *rxq;
+	int ret, other_error;
+
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_rq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim rq %d %d", qidx, ret);
+
+	other_error = ret;
+	rxq = dev->data->rx_queues[qidx];
+	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_reset(rxq);
+
+	ret = nicvf_qset_cq_reclaim(nic, qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim cq %d %d", qidx, ret);
+
+	other_error |= ret;
+	dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return other_error;
+}
+
 static void
 nicvf_dev_rx_queue_release(void *rx_queue)
 {
@@ -707,6 +845,33 @@ nicvf_dev_rx_queue_release(void *rx_queue)
 }
 
 static int
+nicvf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	ret = nicvf_start_rx_queue(dev, qidx);
+	if (ret)
+		return ret;
+
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		return ret;
+
+	return nicvf_configure_rss_reta(dev);
+}
+
+static int
+nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	int ret;
+
+	ret = nicvf_stop_rx_queue(dev, qidx);
+	ret |= nicvf_configure_cpi(dev);
+	ret |= nicvf_configure_rss_reta(dev);
+	return ret;
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -933,6 +1098,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.reta_query               = nicvf_dev_reta_query,
 	.rss_hash_update          = nicvf_dev_rss_hash_update,
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
+	.rx_queue_start           = nicvf_dev_rx_queue_start,
+	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 1c6d6a8..eb51a72 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -579,3 +579,21 @@ nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
 	rxq = dev->data->rx_queues[queue_idx];
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
+
+uint32_t
+nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct nicvf_rxq *rxq;
+	uint32_t to_process;
+	uint32_t rx_free;
+
+	rxq = dev->data->rx_queues[queue_idx];
+	to_process = rxq->recv_buffers;
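+	/* Return all outstanding receive buffers to the RBDR in chunks
+	 * of at most NICVF_MAX_RX_FREE_THRESH entries.
+	 */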
+	while (rxq->recv_buffers > 0) {
+		rx_free = RTE_MIN(rxq->recv_buffers, NICVF_MAX_RX_FREE_THRESH);
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rx_free);
+	}
+
+	assert(rxq->recv_buffers == 0);
+	return to_process;
+}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index ded87f3..9dad8a5 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -85,6 +85,7 @@ fill_sq_desc_gather(union sq_entry_t *entry, struct rte_mbuf *pkt)
 #endif
 
 uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts, uint16_t pkts);
 uint16_t nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 24/27] net/thunderx: add Tx queue start and stop support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (22 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 23/27] net/thunderx: add Rx queue start and stop support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 25/27] net/thunderx: add device start, stop and close support Jerin Jacob
                               ` (3 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 59 +++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ed69147..fd5751e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -562,6 +562,51 @@ nicvf_tx_queue_reset(struct nicvf_txq *txq)
 	txq->xmit_bufs = 0;
 }
 
+static inline int
+nicvf_start_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	txq = dev->data->tx_queues[qidx];
+	txq->pool = NULL;
+	ret = nicvf_qset_sq_config(nicvf_pmd_priv(dev), qidx, txq);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure sq %d %d", qidx, ret);
+		goto config_sq_error;
+	}
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	return ret;
+
+config_sq_error:
+	nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	return ret;
+}
+
+static inline int
+nicvf_stop_tx_queue(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct nicvf_txq *txq;
+	int ret;
+
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	ret = nicvf_qset_sq_reclaim(nicvf_pmd_priv(dev), qidx);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim sq %d %d", qidx, ret);
+
+	txq = dev->data->tx_queues[qidx];
+	nicvf_tx_queue_release_mbufs(txq);
+	nicvf_tx_queue_reset(txq);
+
+	dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	return ret;
+}
 
 static inline int
 nicvf_configure_cpi(struct rte_eth_dev *dev)
@@ -872,6 +917,18 @@ nicvf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
 }
 
 static int
+nicvf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	return nicvf_start_tx_queue(dev, qidx);
+}
+
+static int
+nicvf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	return nicvf_stop_tx_queue(dev, qidx);
+}
+
+static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
 			 const struct rte_eth_rxconf *rx_conf,
@@ -1100,6 +1157,8 @@ static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.rss_hash_conf_get        = nicvf_dev_rss_hash_conf_get,
 	.rx_queue_start           = nicvf_dev_rx_queue_start,
 	.rx_queue_stop            = nicvf_dev_rx_queue_stop,
+	.tx_queue_start           = nicvf_dev_tx_queue_start,
+	.tx_queue_stop            = nicvf_dev_tx_queue_stop,
 	.rx_queue_setup           = nicvf_dev_rx_queue_setup,
 	.rx_queue_release         = nicvf_dev_rx_queue_release,
 	.rx_queue_count           = nicvf_dev_rx_queue_count,
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 25/27] net/thunderx: add device start, stop and close support
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (23 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 24/27] net/thunderx: add Tx " Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 26/27] net/thunderx: updated driver documentation and release notes Jerin Jacob
                               ` (2 subsequent siblings)
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj, Kamil Rytarowski, Zyta Szpak, Slawomir Rosek,
	Radoslaw Biernacki

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
---
 drivers/net/thunderx/nicvf_ethdev.c | 467 ++++++++++++++++++++++++++++++++++++
 1 file changed, 467 insertions(+)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fd5751e..d534312 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -69,6 +69,8 @@
 #include "nicvf_rxtx.h"
 #include "nicvf_logs.h"
 
+static void nicvf_dev_stop(struct rte_eth_dev *dev);
+
 static inline int
 nicvf_atomic_write_link_status(struct rte_eth_dev *dev,
 			       struct rte_eth_link *link)
@@ -534,6 +536,82 @@ nicvf_qset_sq_alloc(struct nicvf *nic,  struct nicvf_txq *sq, uint16_t qidx,
 	return 0;
 }
 
+static int
+nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+{
+	struct nicvf_rbdr *rbdr;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+
+	assert(nic->rbdr == NULL);
+	rbdr = rte_zmalloc_socket("rbdr", sizeof(struct nicvf_rbdr),
+				  RTE_CACHE_LINE_SIZE, nic->node);
+	if (rbdr == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr");
+		return -ENOMEM;
+	}
+
+	ring_size = sizeof(struct rbdr_entry_t) * desc_cnt;
+	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+				   NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
+	if (rz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
+		return -ENOMEM;
+	}
+
+	memset(rz->addr, 0, ring_size);
+
+	rbdr->phys = rz->phys_addr;
+	rbdr->tail = 0;
+	rbdr->next_tail = 0;
+	rbdr->desc = rz->addr;
+	rbdr->buffsz = buffsz;
+	rbdr->qlen_mask = desc_cnt - 1;
+	rbdr->rbdr_status =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_STATUS0;
+	rbdr->rbdr_door =
+		nicvf_qset_base(nic, 0) + NIC_QSET_RBDR_0_1_DOOR;
+
+	nic->rbdr = rbdr;
+	return 0;
+}
+
+static void
+nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+{
+	uint16_t qidx;
+	void *obj;
+	struct nicvf_rxq *rxq;
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		if (rxq->precharge_cnt) {
+			obj = (void *)nicvf_mbuff_phy2virt(phy,
+							   rxq->mbuf_phys_off);
+			rte_mempool_put(rxq->pool, obj);
+			rxq->precharge_cnt--;
+			break;
+		}
+	}
+}
+
+static inline void
+nicvf_rbdr_release_mbufs(struct nicvf *nic)
+{
+	uint32_t qlen_mask, head;
+	struct rbdr_entry_t *entry;
+	struct nicvf_rbdr *rbdr = nic->rbdr;
+
+	qlen_mask = rbdr->qlen_mask;
+	head = rbdr->head;
+	while (head != rbdr->tail) {
+		entry = rbdr->desc + head;
+		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		head++;
+		head = head & qlen_mask;
+	}
+}
+
 static inline void
 nicvf_tx_queue_release_mbufs(struct nicvf_txq *txq)
 {
@@ -629,6 +707,31 @@ nicvf_configure_cpi(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static inline int
+nicvf_configure_rss(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	uint64_t rsshf;
+	int ret = -EINVAL;
+
+	rsshf = nicvf_rss_ethdev_to_nic(nic,
+			dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
+	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
+		    dev->data->dev_conf.rxmode.mq_mode,
+		    nic->eth_dev->data->nb_rx_queues,
+		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+
+	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+		ret = nicvf_rss_term(nic);
+	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		ret = nicvf_rss_config(nic,
+				       nic->eth_dev->data->nb_rx_queues, rsshf);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
+
+	return ret;
+}
+
 static int
 nicvf_configure_rss_reta(struct rte_eth_dev *dev)
 {
@@ -673,6 +776,48 @@ nicvf_dev_tx_queue_release(void *sq)
 	}
 }
 
+static void
+nicvf_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct nicvf_txq *txq;
+	size_t i;
+	bool multiseg = false;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if ((txq->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
+			multiseg = true;
+			break;
+		}
+	}
+
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (multiseg) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment tx callback");
+		dev->tx_pkt_burst = nicvf_xmit_pkts;
+	}
+
+	if (txq->pool_free == nicvf_single_pool_free_xmited_buffers)
+		PMD_DRV_LOG(DEBUG, "Using single-mempool tx free method");
+	else
+		PMD_DRV_LOG(DEBUG, "Using multi-mempool tx free method");
+}
+
+static void
+nicvf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using multi-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts_multiseg;
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using single-segment rx callback");
+		dev->rx_pkt_burst = nicvf_recv_pkts;
+	}
+}
+
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
@@ -1064,6 +1209,317 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static nicvf_phys_addr_t
+rbdr_rte_mempool_get(void *opaque)
+{
+	uint16_t qidx;
+	uintptr_t mbuf;
+	struct nicvf_rxq *rxq;
+	struct nicvf *nic = nicvf_pmd_priv((struct rte_eth_dev *)opaque);
+
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = nic->eth_dev->data->rx_queues[qidx];
+		/* Maintain equal buffer count across all pools */
+		if (rxq->precharge_cnt >= rxq->qlen_mask)
+			continue;
+		rxq->precharge_cnt++;
+		mbuf = (uintptr_t)rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf)
+			return nicvf_mbuff_virt2phy(mbuf, rxq->mbuf_phys_off);
+	}
+	return 0;
+}
+
+static int
+nicvf_dev_start(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	uint32_t buffsz = 0, rbdrsz = 0;
+	uint32_t total_rxq_desc, nb_rbdr_desc, exp_buffs;
+	uint64_t mbuf_phys_off = 0;
+	struct nicvf_rxq *rxq;
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct rte_mbuf *mbuf;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+	uint16_t mtu;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Userspace process exited without proper shutdown in the last run */
+	if (nicvf_qset_rbdr_active(nic, 0))
+		nicvf_dev_stop(dev);
+
+	/*
+	 * The ThunderX nicvf PMD can support more than one pool per port only
+	 * when:
+	 * 1) the data payload size is the same across all pools on the port,
+	 * AND
+	 * 2) all mbufs in the pools come from the same hugepage,
+	 * AND
+	 * 3) the mbuf metadata size is the same across all pools on the port.
+	 *
+	 * This supports existing applications that use multiple pools per
+	 * port; using multiple pools for QoS purposes is not addressed.
+	 */
+
+	/* Validate RBDR buff size */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		mbp_priv = rte_mempool_get_priv(rxq->pool);
+		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
+		if (buffsz % 128) {
+			PMD_INIT_LOG(ERR, "rxbuf size must be multiply of 128");
+			return -EINVAL;
+		}
+		if (rbdrsz == 0)
+			rbdrsz = buffsz;
+		if (rbdrsz != buffsz) {
+			PMD_INIT_LOG(ERR, "buffsz not same, qid=%d (%d/%d)",
+				     qidx, rbdrsz, buffsz);
+			return -EINVAL;
+		}
+	}
+
+	/* Validate mempool attributes */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
+		mbuf = rte_pktmbuf_alloc(rxq->pool);
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "Failed allocate mbuf qid=%d pool=%s",
+				     qidx, rxq->pool->name);
+			return -ENOMEM;
+		}
+		rxq->mbuf_phys_off -= nicvf_mbuff_meta_length(mbuf);
+		rxq->mbuf_phys_off -= RTE_PKTMBUF_HEADROOM;
+		rte_pktmbuf_free(mbuf);
+
+		if (mbuf_phys_off == 0)
+			mbuf_phys_off = rxq->mbuf_phys_off;
+		if (mbuf_phys_off != rxq->mbuf_phys_off) {
+			PMD_INIT_LOG(ERR, "pool params not same,%s %" PRIx64,
+				     rxq->pool->name, mbuf_phys_off);
+			return -EINVAL;
+		}
+	}
+
+	/* Check the level of buffers in the pool */
+	total_rxq_desc = 0;
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
+		/* Count total numbers of rxq descs */
+		total_rxq_desc += rxq->qlen_mask + 1;
+		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
+		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		if (rte_mempool_count(rxq->pool) < exp_buffs) {
+			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
+				     rxq->pool->name,
+				     rte_mempool_count(rxq->pool),
+				     exp_buffs);
+			return -ENOENT;
+		}
+	}
+
+	/* Check RBDR desc overflow */
+	ret = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	if (ret == 0) {
+		PMD_INIT_LOG(ERR, "Reached RBDR desc limit, reduce nr desc");
+		return -ENOMEM;
+	}
+
+	/* Enable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to enable qset %d", ret);
+		return ret;
+	}
+
+	/* Allocate RBDR and RBDR ring desc */
+	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
+	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
+		goto qset_reclaim;
+	}
+
+	/* Enable and configure RBDR registers */
+	ret = nicvf_qset_rbdr_config(nic, 0);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure rbdr %d", ret);
+		goto qset_rbdr_free;
+	}
+
+	/* Fill rte_mempool buffers in RBDR pool and precharge it */
+	ret = nicvf_qset_rbdr_precharge(nic, 0, rbdr_rte_mempool_get,
+					dev, total_rxq_desc);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to fill rbdr %d", ret);
+		goto qset_rbdr_reclaim;
+	}
+
+	PMD_DRV_LOG(INFO, "Filled %d out of %d entries in RBDR",
+		     nic->rbdr->tail, nb_rbdr_desc);
+
+	/* Configure RX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+		ret = nicvf_start_rx_queue(dev, qidx);
+		if (ret)
+			goto start_rxq_error;
+	}
+
+	/* Configure VLAN Strip */
+	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
+
+	/* Configure TX queues */
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+		ret = nicvf_start_tx_queue(dev, qidx);
+		if (ret)
+			goto start_txq_error;
+	}
+
+	/* Configure CPI algorithm */
+	ret = nicvf_configure_cpi(dev);
+	if (ret)
+		goto start_txq_error;
+
+	/* Configure RSS */
+	ret = nicvf_configure_rss(dev);
+	if (ret)
+		goto qset_rss_error;
+
+	/* Configure loopback */
+	ret = nicvf_loopback_config(nic, dev->data->dev_conf.lpbk_mode);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure loopback %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Reset all statistics counters attached to this port */
+	ret = nicvf_mbox_reset_stat_counters(nic, 0x3FFF, 0x1F, 0xFFFF, 0xFFFF);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to reset stat counters %d", ret);
+		goto qset_rss_error;
+	}
+
+	/* Setup scatter mode if needed by jumbo */
+	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * VLAN_TAG_SIZE > buffsz)
+		dev->data->scattered_rx = 1;
+	if (rx_conf->enable_scatter)
+		dev->data->scattered_rx = 1;
+
+	/* Setup MTU based on max_rx_pkt_len or default */
+	mtu = dev->data->dev_conf.rxmode.jumbo_frame ?
+		dev->data->dev_conf.rxmode.max_rx_pkt_len
+			-  ETHER_HDR_LEN - ETHER_CRC_LEN
+		: ETHER_MTU;
+
+	if (nicvf_dev_set_mtu(dev, mtu)) {
+		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
+		return -EBUSY;
+	}
+
+	/* Configure callbacks based on scatter mode */
+	nicvf_set_tx_function(dev);
+	nicvf_set_rx_function(dev);
+
+	/* Done; let the PF turn the BGX Rx and Tx switches to the ON position */
+	nicvf_mbox_cfg_done(nic);
+	return 0;
+
+qset_rss_error:
+	nicvf_rss_term(nic);
+start_txq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+start_rxq_error:
+	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+qset_rbdr_reclaim:
+	nicvf_qset_rbdr_reclaim(nic, 0);
+	nicvf_rbdr_release_mbufs(nic);
+qset_rbdr_free:
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+qset_reclaim:
+	nicvf_qset_reclaim(nic);
+	return ret;
+}
+
+static void
+nicvf_dev_stop(struct rte_eth_dev *dev)
+{
+	int ret;
+	uint16_t qidx;
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Let the PF turn the BGX Rx and Tx switches to the OFF position */
+	nicvf_mbox_shutdown(nic);
+
+	/* Disable loopback */
+	ret = nicvf_loopback_config(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable loopback %d", ret);
+
+	/* Disable VLAN Strip */
+	nicvf_vlan_hw_strip(nic, 0);
+
+	/* Reclaim sq */
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
+		nicvf_stop_tx_queue(dev, qidx);
+
+	/* Reclaim rq */
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
+		nicvf_stop_rx_queue(dev, qidx);
+
+	/* Reclaim RBDR */
+	ret = nicvf_qset_rbdr_reclaim(nic, 0);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to reclaim RBDR %d", ret);
+
+	/* Move all charged buffers in RBDR back to pool */
+	if (nic->rbdr != NULL)
+		nicvf_rbdr_release_mbufs(nic);
+
+	/* Reclaim CPI configuration */
+	if (!nic->sqs_mode) {
+		ret = nicvf_mbox_config_cpi(nic, 0);
+		if (ret)
+			PMD_INIT_LOG(ERR, "Failed to reclaim CPI config");
+	}
+
+	/* Disable qset */
+	ret = nicvf_qset_config(nic);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to disable qset %d", ret);
+
+	/* Disable all interrupts */
+	nicvf_disable_all_interrupts(nic);
+
+	/* Free RBDR SW structure */
+	if (nic->rbdr) {
+		rte_free(nic->rbdr);
+		nic->rbdr = NULL;
+	}
+}
+
+static void
+nicvf_dev_close(struct rte_eth_dev *dev)
+{
+	struct nicvf *nic = nicvf_pmd_priv(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	nicvf_dev_stop(dev);
+	nicvf_periodic_alarm_stop(nic);
+}
+
 static int
 nicvf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -1144,7 +1600,10 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 /* Initialize and register driver with DPDK Application */
 static const struct eth_dev_ops nicvf_eth_dev_ops = {
 	.dev_configure            = nicvf_dev_configure,
+	.dev_start                = nicvf_dev_start,
+	.dev_stop                 = nicvf_dev_stop,
 	.link_update              = nicvf_dev_link_update,
+	.dev_close                = nicvf_dev_close,
 	.stats_get                = nicvf_dev_stats_get,
 	.stats_reset              = nicvf_dev_stats_reset,
 	.promiscuous_enable       = nicvf_dev_promisc_enable,
@@ -1179,6 +1638,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &nicvf_eth_dev_ops;
 
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		nicvf_set_tx_function(eth_dev);
+		nicvf_set_rx_function(eth_dev);
+		return 0;
+	}
+
 	pci_dev = eth_dev->pci_dev;
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 204+ messages in thread

* [PATCH v6 26/27] net/thunderx: updated driver documentation and release notes
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (24 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 25/27] net/thunderx: add device start, stop and close support Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-17 13:29             ` [PATCH v6 27/27] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
  2016-06-20 11:28             ` [PATCH v6 00/27] DPDK PMD for ThunderX NIC device Bruce Richardson
  27 siblings, 0 replies; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Slawomir Rosek

Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and the release notes.

Changed "*" to "P" in overview.rst to mark partially supported
features, since "*" was creating alignment issues with the Sphinx table.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/nics/index.rst              |   1 +
 doc/guides/nics/overview.rst           |  96 ++++-----
 doc/guides/nics/thunderx.rst           | 354 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_16_07.rst |   1 +
 4 files changed, 404 insertions(+), 48 deletions(-)
 create mode 100644 doc/guides/nics/thunderx.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index ffe011e..99ee7f1 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -51,6 +51,7 @@ Network Interface Controller Drivers
     nfp
     qede
     szedata2
+    thunderx
     virtio
     vhost
     vmxnet3
diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
index 29a6163..c1ee67b 100644
--- a/doc/guides/nics/overview.rst
+++ b/doc/guides/nics/overview.rst
@@ -74,40 +74,40 @@ Most of these differences are summarized below.
 
 .. table:: Features availability in networking drivers
 
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
-   Feature              a b b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s v v v v x
-                        f n n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h i i m e
-                        p x x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e o r r x n
-                        a 2 2 t d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d s t t n v
-                        c x x   i e 0       . v v   f e e e e k k k k     e         v   a t i i e i
-                        k   v   n           . f f       . v v   . v v               f   t   o o t r
-                        e   f   g           .   .       . f f   . f f                   a     . 3 t
-                        t                   v   v       v   v   v   v                   2     v
-                                            e   e       e   e   e   e                         e
-                                            c   c       c   c   c   c                         c
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   Feature              a b b b b c e e e i i i i i i i i i i f f f f m m m n n p q q r s t v v v v x
+                        f n n n o x 1 n n 4 4 4 4 g g x x x x m m m m l l p f u c e e i z h h i i m e
+                        p x x x n g 0 a i 0 0 0 0 b b g g g g 1 1 1 1 x x i p l a d d n e u o r r x n
+                        a 2 2 t d b 0   c e e e e   v b b b b 0 0 0 0 4 5 p   l p e e g d n s t t n v
+                        c x x   i e 0       . v v   f e e e e k k k k     e         v   a d t i i e i
+                        k   v   n           . f f       . v v   . v v               f   t e   o o t r
+                        e   f   g           .   .       . f f   . f f                   a r     . 3 t
+                        t                   v   v       v   v   v   v                   2 x     v
+                                            e   e       e   e   e   e                           e
+                                            c   c       c   c   c   c                           c
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    Speed capabilities
-   Link status            Y Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y
-   Link status event      Y Y       Y     Y Y     Y   Y Y             Y Y         Y Y     Y
-   Queue status event                                                                     Y
+   Link status            Y Y Y   Y Y   Y Y Y     Y   Y Y Y Y         Y Y         Y Y   Y Y Y Y Y
+   Link status event      Y Y       Y     Y Y     Y   Y Y             Y Y         Y Y     Y Y
+   Queue status event                                                                       Y
    Rx interrupt                     Y     Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Queue start/stop           Y   Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y   Y Y
-   MTU update                     Y Y Y           Y   Y Y Y Y         Y Y
-   Jumbo frame                    Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   Scattered Rx                   Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y   Y
+   Queue start/stop           Y   Y   Y Y Y Y Y Y     Y Y     Y Y Y Y Y Y               Y Y   Y Y
+   MTU update                     Y Y Y           Y   Y Y Y Y         Y Y                 Y
+   Jumbo frame                    Y Y Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   Scattered Rx                   Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y               Y Y   Y
    LRO                                                Y Y Y Y
    TSO                            Y   Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y
-   Promiscuous mode       Y Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y   Y Y
-   Allmulticast mode              Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y   Y Y
-   Unicast MAC filter     Y Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
-   Multicast MAC filter   Y Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y       Y Y
-   RSS hash                       Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y
-   RSS key update                     Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
-   RSS reta update            Y       Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y
+   Promiscuous mode       Y Y Y   Y Y   Y Y Y Y Y Y Y Y Y     Y Y     Y Y         Y Y   Y Y   Y Y
+   Allmulticast mode              Y Y     Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y   Y Y   Y Y
+   Unicast MAC filter     Y Y Y     Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
+   Multicast MAC filter   Y Y Y         Y Y Y Y Y             Y Y     Y Y         Y Y         Y Y
+   RSS hash                       Y   Y Y Y Y Y Y Y   Y Y Y Y Y Y Y Y Y Y         Y Y     Y
+   RSS key update                     Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
+   RSS reta update            Y       Y   Y Y Y Y Y   Y Y Y Y Y Y Y Y   Y                 Y
    VMDq                                   Y Y     Y   Y Y     Y Y
-   SR-IOV                   Y         Y   Y Y     Y   Y Y             Y Y           Y
+   SR-IOV                   Y         Y   Y Y     Y   Y Y             Y Y           Y     Y
    DCB                                    Y Y     Y   Y Y
-   VLAN filter                      Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y       Y Y
+   VLAN filter                      Y   Y Y Y Y Y Y Y Y Y Y Y Y Y     Y Y         Y Y         Y Y
    Ethertype filter                       Y Y     Y   Y Y
    N-tuple filter                                 Y   Y Y
    SYN filter                                     Y   Y Y
@@ -118,37 +118,37 @@ Most of these differences are summarized below.
    Flow control                   Y Y     Y Y     Y   Y Y                         Y Y
    Rate limitation                                    Y Y
    Traffic mirroring                      Y Y         Y Y
-   CRC offload                    Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
-   VLAN offload                   Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y
+   CRC offload                    Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     Y
+   VLAN offload                   Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y   Y         Y Y     P
    QinQ offload                     Y     Y   Y   Y Y Y   Y
-   L3 checksum offload            Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
-   L4 checksum offload            Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y
+   L3 checksum offload            Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
+   L4 checksum offload            Y Y Y Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y                 Y
    Inner L3 checksum                  Y   Y   Y       Y   Y           Y
    Inner L4 checksum                  Y   Y   Y       Y   Y           Y
-   Packet type parsing            Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y
+   Packet type parsing            Y     Y Y   Y   Y Y Y   Y   Y Y Y Y Y Y         Y Y     Y
    Timesync                               Y Y     Y   Y Y
-   Basic stats            Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Extended stats         Y Y Y       Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y Y
-   Stats per queue                Y                   Y Y     Y Y Y Y Y Y         Y Y   Y   Y Y
+   Basic stats            Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y Y
+   Extended stats         Y Y Y       Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y                   Y   Y
+   Stats per queue                Y                   Y Y     Y Y Y Y Y Y         Y Y   Y Y   Y Y
    EEPROM dump                    Y               Y   Y Y
-   Registers dump                 Y               Y Y Y Y Y Y
-   Multiprocess aware                     Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y
-   BSD nic_uio                    Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
-   Linux UIO              Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y       Y Y
-   Linux VFIO                     Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                       Y Y
+   Registers dump                 Y               Y Y Y Y Y Y                             Y
+   Multiprocess aware                     Y Y Y Y     Y Y Y Y Y Y Y Y Y Y       Y Y Y     Y
+   BSD nic_uio                    Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                         Y Y
+   Linux UIO              Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y             Y Y         Y Y
+   Linux VFIO                     Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y                     Y   Y Y
    Other kdrv                                                         Y Y               Y
-   ARMv7                                                                        Y           Y Y
-   ARMv8                                              Y Y Y Y                   Y           Y Y
+   ARMv7                                                                        Y             Y Y
+   ARMv8                                              Y Y Y Y                   Y         Y   Y Y
    Power8                                                             Y Y       Y
    TILE-Gx                                                                      Y
-   x86-32                         Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y         Y Y Y
-   x86-64                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y Y Y Y
-   Usage doc              Y Y     Y     Y                             Y Y       Y Y Y   Y   Y
+   x86-32                         Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y           Y Y Y
+   x86-64                 Y Y Y   Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y       Y Y Y   Y   Y Y Y
+   Usage doc              Y Y     Y     Y                             Y Y       Y Y Y   Y Y   Y
    Design doc
    Perf doc
-   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
+   ==================== = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
 
 .. Note::
 
-   Features marked with "*" are partially supported. Refer to the appropriate
+   Features marked with "P" are partially supported. Refer to the appropriate
    NIC guide in the following sections for details.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
new file mode 100644
index 0000000..e38f260
--- /dev/null
+++ b/doc/guides/nics/thunderx.rst
@@ -0,0 +1,354 @@
+..  BSD LICENSE
+    Copyright (C) Cavium networks Ltd. 2016.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Cavium networks nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ThunderX NICVF Poll Mode Driver
+===============================
+
+The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
+support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
+as well as their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Cavium Networks Official Website
+<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the ThunderX PMD are:
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Packet type information
+- Checksum offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Scatter/gather for TX and RX
+- VLAN stripping
+- SR-IOV VF
+- NUMA support
+
+Supported ThunderX SoCs
+-----------------------
+- CN88xx
+
+Prerequisites
+-------------
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file; a combined
+example follows this list.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``n``)
+
+  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.
+  By default it is enabled only in the defconfig_arm64-thunderx-* configs.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT`` (default ``n``)
+
+  Toggle display of initialization related messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)
+
+  Toggle display of receive fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)
+
+  Toggle display of transmit fast path run-time messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX`` (default ``n``)
+
+  Toggle display of PF mailbox related run-time check messages.
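+
+As a combined example (a sketch only; the authoritative defaults live in
+``config/defconfig_arm64-thunderx-linuxapp-gcc``), a ThunderX build with the
+PMD enabled and all debug options disabled carries:
+
+.. code-block:: console
+
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT=n
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
+   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n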
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the ThunderX NICVF PMD for the Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+   cd <DPDK-source-directory>
+   make config T=arm64-thunderx-linuxapp-gcc install
+
+Linux
+-----
+
+.. _thunderx_testpmd_example:
+
+Running testpmd
+~~~~~~~~~~~~~~~
+
+This section demonstrates how to launch ``testpmd`` with a ThunderX NIC VF
+device managed by ``librte_pmd_thunderx_nicvf`` in the Linux operating system.
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+   .. _thunderx_vfio_noiommu:
+
+#. Enable **VFIO-NOIOMMU** mode (optional):
+
+   .. code-block:: console
+
+      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+
+   .. note::
+
+      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
+      See also :ref:`SR-IOV: Prerequisites and sample Application Notes <thunderx_sriov_example>`.
+
+#. Bind the ThunderX NIC VF device to ``vfio-pci`` loaded in the previous step:
+
+   Set up VFIO permissions for regular users and then bind to ``vfio-pci``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Start ``testpmd`` with basic parameters:
+
+   .. code-block:: console
+
+      ./arm64-thunderx-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0002:01:00.2 \
+        -- -i --disable-hw-vlan-filter --crc-strip --no-flush-rx \
+        --port-topology=loop
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+
+      PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
+
+      ...
+      EAL:   probe driver: 177d:11 rte_nicvf_pmd
+      EAL:   using IOMMU type 1 (Type 1)
+      EAL:   PCI memory mapped at 0x3ffade50000
+      EAL: Trying to map BAR 4 that contains the MSI-X table.
+           Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
+      EAL:   PCI memory mapped at 0x3ffadc60000
+      PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
+      PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
+           loopback_supported=true
+      PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
+      Interactive-mode selected
+      Configuring Port 0 (socket 0)
+      ...
+
+      PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
+      Port 0: A6:C6:D9:17:78:01
+      Checking link statuses...
+      Port 0 Link Up - speed 10000 Mbps - full-duplex
+      Done
+      testpmd>
+
+.. _thunderx_sriov_example:
+
+SR-IOV: Prerequisites and sample Application Notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
+automatically to a virtual function (VF) and present it as a PCIe-like SR-IOV
+device. This section provides instructions to configure SR-IOV with Linux OS.
+
+#. Verify PF devices capabilities using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
+              ...
+              Kernel driver in use: thunder-nic
+              ...
+
+   .. note::
+
+      If the ``thunder-nic`` driver is not in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
+
+#. Verify VF devices capabilities and drivers using ``lspci``:
+
+   .. code-block:: console
+
+      lspci -vvv
+
+   Example output:
+
+   .. code-block:: console
+
+      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
+              ...
+              Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
+              ...
+              Kernel driver in use: thunder-nicvf
+              ...
+
+   .. note::
+
+      If the ``thunder-nicvf`` driver is not in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
+
+#. Verify PF/VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
+      0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
+      ...
+
+#. Load ``vfio-pci`` driver:
+
+   .. code-block:: console
+
+      modprobe vfio-pci
+
+#. Bind VF devices to ``vfio-pci`` using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.1
+      ./tools/dpdk_nic_bind.py --bind vfio-pci 0002:01:00.2
+
+#. Verify VF bind using ``dpdk_nic_bind.py``:
+
+   .. code-block:: console
+
+      ./tools/dpdk_nic_bind.py --status
+
+   Example output:
+
+   .. code-block:: console
+
+      ...
+      0002:01:00.1 'Device 0011' drv=vfio-pci unused=
+      0002:01:00.2 'Device 0011' drv=vfio-pci unused=
+      ...
+      0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
+      ...
+
+#. Pass VF device to VM context (PCIe Passthrough):
+
+   The VF devices may be passed through to the guest VM using qemu,
+   virt-manager, virsh, etc.
+   ``librte_pmd_thunderx_nicvf`` or ``thunder-nicvf`` should be used to bind
+   the VF devices in the guest VM in :ref:`VFIO-NOIOMMU <thunderx_vfio_noiommu>` mode.
+
+   Example qemu guest launch command:
+
+   .. code-block:: console
+
+      sudo qemu-system-aarch64 -name vm1 \
+      -machine virt,gic_version=3,accel=kvm,usb=off \
+      -cpu host -m 4096 \
+      -smp 4,sockets=1,cores=8,threads=1 \
+      -nographic -nodefaults \
+      -kernel <kernel image> \
+      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
+      -device vfio-pci,host=0002:01:00.1 \
+      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw  \
+      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
+      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
+      -device virtio-net-device,netdev=net0 \
+      -serial stdio \
+      -mem-path /dev/huge
+
+#. Refer to the section :ref:`Running testpmd <thunderx_testpmd_example>` for
+   instructions on how to launch the ``testpmd`` application.
+
+Limitations
+-----------
+
+CRC stripping
+~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs strip the CRC from every packet coming into the
+host interface. Consequently, the CRC will be stripped even when the
+``rxmode.hw_strip_crc`` member is set to 0 in ``struct rte_eth_conf``.
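+
+For illustration (a sketch, not driver code), the application-side request
+that this limitation overrides looks like:
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = { .rxmode = { .hw_strip_crc = 0 } };
+   /* The hardware strips the CRC regardless of this request. */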
+
+Maximum packet length
+~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support a maximum jumbo frame size of 9K. This
+value is fixed and cannot be changed, so even when the
+``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
+lower than 9200, frames up to 9200 bytes can still reach the host interface.
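+
+For example (a sketch), requesting a lower limit does not prevent larger
+frames from arriving:
+
+.. code-block:: c
+
+   struct rte_eth_conf port_conf = { .rxmode = { .max_rx_pkt_len = 1518 } };
+   /* Frames up to 9200 bytes may still reach the host interface. */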
+
+Maximum packet segments
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The ThunderX SoC family NICs support up to 12 segments per packet when working
+in scatter/gather mode. Consequently, setting the MTU will fail with
+``EINVAL`` when the resulting frame size does not fit in the maximum number of
+segments.
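+
+A sketch of the resulting behaviour at the API level (``port_id`` is a
+hypothetical port):
+
+.. code-block:: c
+
+   uint8_t port_id = 0;   /* hypothetical port id */
+   uint16_t mtu = 9200;
+
+   if (rte_eth_dev_set_mtu(port_id, mtu) == -EINVAL)
+       printf("MTU %u needs more segments than the NIC supports\n", mtu);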
+
+Limited VFs
+~~~~~~~~~~~
+
+The ThunderX SoC family NICs have 128 VFs, and each VF has 8 queues each for
+RX and TX. The current driver implementation has a one-to-one mapping between
+a physical port and a VF, hence only a limited number of VFs can be used.
diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst
index 8765ef8..570333b 100644
--- a/doc/guides/rel_notes/release_16_07.rst
+++ b/doc/guides/rel_notes/release_16_07.rst
@@ -83,6 +83,7 @@ New Features
   "Network Interface Controller Drivers" document for more details on this
   new driver.
 
+* **Added new poll-mode driver for the ThunderX nicvf inbuilt NIC device.**
 
 Resolved Issues
 ---------------
-- 
2.5.5

* [PATCH v6 27/27] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (25 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 26/27] net/thunderx: updated driver documentation and release notes Jerin Jacob
@ 2016-06-17 13:29             ` Jerin Jacob
  2016-06-20 11:23               ` Bruce Richardson
  2016-06-20 11:28             ` [PATCH v6 00/27] DPDK PMD for ThunderX NIC device Bruce Richardson
  27 siblings, 1 reply; 204+ messages in thread
From: Jerin Jacob @ 2016-06-17 13:29 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, ferruh.yigit, Jerin Jacob,
	Maciej Czekaj

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index be09a98..e41ce13 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -343,6 +343,12 @@ M: Sony Chacko <sony.chacko@qlogic.com>
 F: drivers/net/qede/
 F: doc/guides/nics/qede.rst
 
+Cavium ThunderX nicvf
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
+F: drivers/net/thunderx/
+F: doc/guides/nics/thunderx.rst
+
 RedHat virtio
 M: Huawei Xie <huawei.xie@intel.com>
 M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
-- 
2.5.5

* Re: [PATCH v6 27/27] maintainers: claim responsibility for the ThunderX nicvf PMD
  2016-06-17 13:29             ` [PATCH v6 27/27] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-06-20 11:23               ` Bruce Richardson
  0 siblings, 0 replies; 204+ messages in thread
From: Bruce Richardson @ 2016-06-20 11:23 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, ferruh.yigit, Maciej Czekaj

On Fri, Jun 17, 2016 at 06:59:54PM +0530, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> ---
>  MAINTAINERS | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index be09a98..e41ce13 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -343,6 +343,12 @@ M: Sony Chacko <sony.chacko@qlogic.com>
>  F: drivers/net/qede/
>  F: doc/guides/nics/qede.rst
>  
> +Cavium ThunderX nicvf
> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> +M: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> +F: drivers/net/thunderx/
> +F: doc/guides/nics/thunderx.rst
> +
>  RedHat virtio
>  M: Huawei Xie <huawei.xie@intel.com>
>  M: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> -- 

Minor nit - these entries are sorted according to the first letter of the
title, so this belongs after bnxt and before Chelsio cxgbe. I'll fix on apply.

/Bruce

* Re: [PATCH v6 00/27] DPDK PMD for ThunderX NIC device
  2016-06-17 13:29           ` [PATCH v6 00/27] " Jerin Jacob
                               ` (26 preceding siblings ...)
  2016-06-17 13:29             ` [PATCH v6 27/27] maintainers: claim responsibility for the ThunderX nicvf PMD Jerin Jacob
@ 2016-06-20 11:28             ` Bruce Richardson
  27 siblings, 0 replies; 204+ messages in thread
From: Bruce Richardson @ 2016-06-20 11:28 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, thomas.monjalon, ferruh.yigit

On Fri, Jun 17, 2016 at 06:59:27PM +0530, Jerin Jacob wrote:
> This patch set provides the initial version of DPDK PMD for the
> built-in NIC device in Cavium ThunderX SoC family.
> 
> Implemented features and ThunderX nicvf PMD documentation added
> in doc/guides/nics/overview.rst and doc/guides/nics/thunderx.rst
> respectively in this patch set.
> 
Patchset applied to dpdk-next-net/rel_16_07

Thanks,
/Bruce

* Re: [PATCH v6 19/27] net/thunderx: add single and multi segment Tx functions
  2016-06-17 13:29             ` [PATCH v6 19/27] net/thunderx: add single and multi segment Tx functions Jerin Jacob
@ 2016-06-21 13:34               ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-21 13:34 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

Hi Jerin,

On 6/17/2016 2:29 PM, Jerin Jacob wrote:
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
> ---

...

> diff --git a/drivers/net/thunderx/Makefile b/drivers/net/thunderx/Makefile
> index eb9f100..9079b5b 100644
> --- a/drivers/net/thunderx/Makefile
> +++ b/drivers/net/thunderx/Makefile
> @@ -51,10 +51,12 @@ VPATH += $(SRCDIR)/base
>  #
>  # all source are stored in SRCS-y
>  #
> +SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_rxtx.c
>  SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_hw.c
>  SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_mbox.c
>  SRCS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += nicvf_ethdev.c
>  
> +CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays -Ofast

With clang, getting following compile error, fyi:

== Build drivers/net/thunderx
  CC nicvf_rxtx.o
clang: error: optimization flag '-fno-prefetch-loop-arrays' is not supported
/root/development/dpdk-next-net/mk/internal/rte.compile-pre.mk:126:
recipe for target 'nicvf_rxtx.o' failed
make[4]: *** [nicvf_rxtx.o] Error 1

# clang --version
clang version 3.7.0 (tags/RELEASE_370/final)
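
One possible way to address this (a sketch, not necessarily the fix that
was applied) is to guard the gcc-only flag by toolchain in the Makefile:

ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
# -fno-prefetch-loop-arrays is understood by gcc only; skip it for clang
CFLAGS_nicvf_rxtx.o += -fno-prefetch-loop-arrays
endif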

* Re: [PATCH v6 04/27] net/thunderx/base: add mbox APIs for PF/VF communication
  2016-06-17 13:29             ` [PATCH v6 04/27] net/thunderx/base: add mbox APIs for PF/VF communication Jerin Jacob
@ 2016-06-21 13:41               ` Ferruh Yigit
  0 siblings, 0 replies; 204+ messages in thread
From: Ferruh Yigit @ 2016-06-21 13:41 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Maciej Czekaj,
	Kamil Rytarowski, Zyta Szpak, Slawomir Rosek, Radoslaw Biernacki

On 6/17/2016 2:29 PM, Jerin Jacob wrote:
> DPDK nicvf driver doesn't have access to NIC's PF address space.
> Introduce a mailbox mechanism to communicate with PF driver through
> shared 128bit register interface.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
> Signed-off-by: Kamil Rytarowski <Kamil.Rytarowski@caviumnetworks.com>
> Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
> Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
> Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>

...

> +
> +static inline const char *
> +nicvf_mbox_msg_str(int msg)
> +{
> +	assert(msg >= 0 && msg < NIC_MBOX_MSG_MAX);
> +	/* undefined messages */
> +	if (mbox_message[msg] == NULL)
> +		msg = 0;
> +	return mbox_message[msg];
> +}

With clang getting following compile error:

== Build drivers/net/thunder
  CC nicvf_mbox.o
/root/development/dpdk-next-net/drivers/net/thunderx/base/nicvf_mbox.c:68:1:
error: unused function 'nicvf_mbox_msg_str' [-Werror,-Wunused-function]
nicvf_mbox_msg_str(int msg)
^
1 error generated.
/root/development/dpdk-next-net/mk/internal/rte.compile-pre.mk:126:
recipe for target 'nicvf_mbox.o' failed


It looks like nicvf_mbox_msg_str() is only called from logging functions
which can depend on the DEBUG_X config options.
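
A possible remedy (a sketch) is to mark the helper as potentially unused,
e.g. with DPDK's __rte_unused (or plain __attribute__((unused))), so that
clang accepts it when the debug logging that calls it is compiled out:

static inline __rte_unused const char *
nicvf_mbox_msg_str(int msg)
{
	/* body unchanged from the quoted code above */
}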


Regards,
ferruh
