linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/7] ThunderX Embedded switch support
@ 2016-12-21  8:46 Satha Koteswara Rao
  2016-12-21  8:46 ` [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes work in backward compatibility mode Enable required things in Makefile Enable LZ4 dependency inside config file Satha Koteswara Rao
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

Background
==========

The proposed patch set configures the programmable ThunderX embedded network
switch to emulate the e-switch functions found in most industry-standard NICs.

The embedded switch is pre-configured by loading a firmware image which exposes
several firmware-defined tables allowing configuration of VLAN and MAC filters.

The embedded switch configuration profile and the driver introduce the
following features:

* Support for a configurable number of VFs per physical port (see num_vfs
  below)

* VLAN filters per VF

* Unicast MAC-DA filters per VF

* Multicast MAC-DA filters per VF

* Support for a dedicated VF allowing packet mirroring of all traffic
  traversing the physical port (such a VF is attached to the interface
  representing the physical port)

* Administrative VLAN enforcement per VF (i.e. inserting/overwriting the VLAN
  tag on all traffic originated by a particular VF)

Each VF operates in one of two modes: a) full filter mode, where it receives
only packets matching its registered MAC-DA/VLAN filters, and b) multicast
promiscuous mode. The latter is enabled when a VF reaches its maximum MAC-DA
filter limit: in this mode the VF receives all multicast frames plus the
registered unicast MAC frames.
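
The intended fallback behaviour, sketched in illustrative pseudo-C (the names
and the limit below are hypothetical; the real accounting lives in the
firmware-defined filter tables introduced by the later patches):

#include <stdbool.h>

#define MAX_MACDA_PER_VF	16	/* assumed per-VF MAC-DA filter budget */

struct vf_filter_state {
	unsigned int macda_in_use;	/* MAC-DA filters currently programmed */
	bool mcast_promisc;		/* multicast promiscuous fallback */
};

static void vf_add_macda(struct vf_filter_state *vf)
{
	if (vf->macda_in_use < MAX_MACDA_PER_VF) {
		vf->macda_in_use++;	/* full filter mode: program the entry */
		return;
	}
	/* Filter budget exhausted: keep the VF working by accepting all
	 * multicast frames plus the already-registered unicast addresses.
	 */
	vf->mcast_promisc = true;
}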

Special effort is made to track the association of interface switching groups
with the underlying physical ports: the /sys/class/net/<intf>/phys_port_name
entry contains a string describing the underlying physical port the interface
is attached to, in the form <node-id>-<port-group-id>-<port>.
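
A VF driver typically exposes such a string through the .ndo_get_phys_port_name
netdev op. The sketch below is illustrative only: phys_port_ids() is a
hypothetical helper, while the actual patches cache the string in struct nicvf
when the PF reports the node/BGX/LMAC mapping at READY time.

static int nicvf_get_phys_port_name(struct net_device *netdev,
				    char *name, size_t len)
{
	struct nicvf *nic = netdev_priv(netdev);
	int node, pgrp, port;

	/* node / port-group (BGX) / port (LMAC) as reported by the PF */
	phys_port_ids(nic, &node, &pgrp, &port);	/* hypothetical helper */

	if (snprintf(name, len, "%d-%d-%d", node, pgrp, port) >= len)
		return -EINVAL;
	return 0;
}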

The patch set includes the following changes:

1) Patch to the original NIC drivers to enable the internal switch and load
   the firmware image.

2) Modification of the VF driver to subscribe to interface MAC/VLAN ADD/DELETE
   notifications and forward them to the PF driver (see the mailbox sketch
   after this list).

3) Modification of the PF driver to receive MBOX interrupts from VFs for
   ADD/DELETE MAC/VLAN registrations.

4) E-switch initialization code

5) API to access the firmware-defined embedded switch tables.
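
As an illustration of 2) and 3), a VF registers or removes a MAC filter by
filling the new uc_mc_msg mailbox message from patch 1 and handing it to the
PF. Condensed from submit_uc_mc_mbox_msg() in patch 2 (here nic is the VF
context and mac the address taken from the kernel notification):

	union nic_mbx mbx = {};

	mbx.msg.msg = NIC_MBOX_MSG_UC_MC;
	mbx.uc_mc_cfg.vf_id = nic->vf_id;
	mbx.uc_mc_cfg.addr_type = 0;	/* 0: unicast, 1: multicast */
	mbx.uc_mc_cfg.is_flush = 0;	/* 1 flushes all filters of this type */
	mbx.uc_mc_cfg.is_add = 1;	/* 0 deletes the given address */
	ether_addr_copy(mbx.uc_mc_cfg.mac_addr, mac);

	if (nicvf_send_msg_to_pf(nic, &mbx))
		netdev_err(nic->netdev, "PF did not ack UC/MC filter request\n");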

The following new parameter is introduced by the driver:

num_vfs: Number of VFs attached to each physical port. The default value is 0,
         in which case the driver operates in backward-compatible switch
         bypass mode.
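
In the PF driver this maps onto a plain module parameter which, when non-zero,
switches the driver from the legacy bypass path to the e-switch (VEB) path.
Condensed from patch 1 (nic_main.c):

static unsigned int num_vfs;
module_param(num_vfs, uint, 0644);
MODULE_PARM_DESC(num_vfs, "Non zero positive value, specifies number of VF's per physical port");

static int veb_enabled;

static int __init nic_init_module(void)
{
	veb_enabled = num_vfs;	/* 0 keeps backward-compatible bypass mode */
	if (veb_enabled)
		nic_init_pf_vf_mapping();

	return pci_register_driver(&nic_driver);
}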

The patch set is based on the following git tree:

git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

The patch set was generated against the following commit:

commit 69973b830859bc6529a7a0468ba0d80ee5117826
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Dec 11 11:17:54 2016 -0800

    Linux 4.9

Thank You!

-------------------------------------------------------------------------------

Satha Koteswara Rao (7):

* Patch 1:
  * PF driver modified to enable HW filter support; changes work in
    backward compatibility mode. Enable required objects in the Makefile.
    Enable the LZ4 dependency inside the config file.

* Patch 2:
  * VF driver changes to enable hooks to get kernel notifications

* Patch 3:
  * Enable pause frame support

* Patch 4:
  * HW Filter Initialization code and register access APIs

* Patch 5:
  * Multiple VFs grouped together under a single physical port form a PF
    group. PF group maintenance APIs.

* Patch 6:
  * HW Filter Table access APIs

* Patch 7:
  * Get notifications from the PF driver and configure the filter block
    based on the requested data.


 drivers/net/ethernet/cavium/Kconfig               |    1 +
 drivers/net/ethernet/cavium/thunder/Makefile      |    2 +-
 drivers/net/ethernet/cavium/thunder/nic.h         |  203 ++-
 drivers/net/ethernet/cavium/thunder/nic_main.c    |  735 ++++++++-
 drivers/net/ethernet/cavium/thunder/nicvf_main.c  |  579 ++++++-
 drivers/net/ethernet/cavium/thunder/pf_filter.c   | 1678 +++++++++++++++++++++
 drivers/net/ethernet/cavium/thunder/pf_globals.h  |   78 +
 drivers/net/ethernet/cavium/thunder/pf_locals.h   |  365 +++++
 drivers/net/ethernet/cavium/thunder/pf_reg.c      |  660 ++++++++
 drivers/net/ethernet/cavium/thunder/pf_vf.c       |  207 +++
 drivers/net/ethernet/cavium/thunder/tbl_access.c  |  262 ++++
 drivers/net/ethernet/cavium/thunder/tbl_access.h  |   61 +
 drivers/net/ethernet/cavium/thunder/thunder_bgx.c |   25 +
 drivers/net/ethernet/cavium/thunder/thunder_bgx.h |    7 +
 14 files changed, 4712 insertions(+), 151 deletions(-)
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_filter.c
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_globals.h
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_locals.h
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_reg.c
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_vf.c
 create mode 100644 drivers/net/ethernet/cavium/thunder/tbl_access.c
 create mode 100644 drivers/net/ethernet/cavium/thunder/tbl_access.h

-- 
1.8.3.1


* [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes work in backward compatibility mode Enable required things in Makefile Enable LZ4 dependency inside config file
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
  2016-12-21 13:05   ` Sunil Kovvuri
  2016-12-21  8:46 ` [RFC PATCH 2/7] VF driver changes to enable hooks to get kernel notifications Satha Koteswara Rao
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/Kconfig            |   1 +
 drivers/net/ethernet/cavium/thunder/Makefile   |   2 +-
 drivers/net/ethernet/cavium/thunder/nic.h      | 203 ++++---
 drivers/net/ethernet/cavium/thunder/nic_main.c | 735 ++++++++++++++++++++++---
 4 files changed, 804 insertions(+), 137 deletions(-)

diff --git a/drivers/net/ethernet/cavium/Kconfig b/drivers/net/ethernet/cavium/Kconfig
index 92f411c..e4855a0 100644
--- a/drivers/net/ethernet/cavium/Kconfig
+++ b/drivers/net/ethernet/cavium/Kconfig
@@ -17,6 +17,7 @@ config THUNDER_NIC_PF
 	tristate "Thunder Physical function driver"
 	depends on 64BIT
 	select THUNDER_NIC_BGX
+        select CRYPTO_LZ4
 	---help---
 	  This driver supports Thunder's NIC physical function.
 	  The NIC provides the controller and DMA engines to
diff --git a/drivers/net/ethernet/cavium/thunder/Makefile b/drivers/net/ethernet/cavium/thunder/Makefile
index 6b4d4ad..30e4417 100644
--- a/drivers/net/ethernet/cavium/thunder/Makefile
+++ b/drivers/net/ethernet/cavium/thunder/Makefile
@@ -7,6 +7,6 @@ obj-$(CONFIG_THUNDER_NIC_BGX) += thunder_bgx.o
 obj-$(CONFIG_THUNDER_NIC_PF) += nicpf.o
 obj-$(CONFIG_THUNDER_NIC_VF) += nicvf.o
 
-nicpf-y := nic_main.o
+nicpf-y := pf_vf.o pf_reg.o pf_filter.o tbl_access.o nic_main.o
 nicvf-y := nicvf_main.o nicvf_queues.o
 nicvf-y += nicvf_ethtool.o
diff --git a/drivers/net/ethernet/cavium/thunder/nic.h b/drivers/net/ethernet/cavium/thunder/nic.h
index 86bd93c..17a29e7 100644
--- a/drivers/net/ethernet/cavium/thunder/nic.h
+++ b/drivers/net/ethernet/cavium/thunder/nic.h
@@ -54,6 +54,7 @@
 
 /* Max when CPI_ALG is IP diffserv */
 #define	NIC_MAX_CPI_PER_LMAC		64
+#define NIC_TNS_CPI_PER_LMAC		16
 
 /* NIC VF Interrupts */
 #define	NICVF_INTR_CQ			0
@@ -111,6 +112,7 @@
  * 1 tick per 0.025usec
  */
 #define NICPF_CLK_PER_INT_TICK		1
+#define NICPF_TNS_CLK_PER_INT_TICK	2
 
 /* Time to wait before we decide that a SQ is stuck.
  *
@@ -129,6 +131,7 @@ struct nicvf_cq_poll {
 
 #define NIC_MAX_RSS_HASH_BITS		8
 #define NIC_MAX_RSS_IDR_TBL_SIZE	(1 << NIC_MAX_RSS_HASH_BITS)
+#define NIC_TNS_RSS_IDR_TBL_SIZE	5
 #define RSS_HASH_KEY_SIZE		5 /* 320 bit key */
 
 struct nicvf_rss_info {
@@ -255,74 +258,6 @@ struct nicvf_drv_stats {
 	struct u64_stats_sync   syncp;
 };
 
-struct nicvf {
-	struct nicvf		*pnicvf;
-	struct net_device	*netdev;
-	struct pci_dev		*pdev;
-	void __iomem		*reg_base;
-#define	MAX_QUEUES_PER_QSET			8
-	struct queue_set	*qs;
-	struct nicvf_cq_poll	*napi[8];
-	u8			vf_id;
-	u8			sqs_id;
-	bool                    sqs_mode;
-	bool			hw_tso;
-	bool			t88;
-
-	/* Receive buffer alloc */
-	u32			rb_page_offset;
-	u16			rb_pageref;
-	bool			rb_alloc_fail;
-	bool			rb_work_scheduled;
-	struct page		*rb_page;
-	struct delayed_work	rbdr_work;
-	struct tasklet_struct	rbdr_task;
-
-	/* Secondary Qset */
-	u8			sqs_count;
-#define	MAX_SQS_PER_VF_SINGLE_NODE		5
-#define	MAX_SQS_PER_VF				11
-	struct nicvf		*snicvf[MAX_SQS_PER_VF];
-
-	/* Queue count */
-	u8			rx_queues;
-	u8			tx_queues;
-	u8			max_queues;
-
-	u8			node;
-	u8			cpi_alg;
-	bool			link_up;
-	u8			duplex;
-	u32			speed;
-	bool			tns_mode;
-	bool			loopback_supported;
-	struct nicvf_rss_info	rss_info;
-	struct tasklet_struct	qs_err_task;
-	struct work_struct	reset_task;
-
-	/* Interrupt coalescing settings */
-	u32			cq_coalesce_usecs;
-	u32			msg_enable;
-
-	/* Stats */
-	struct nicvf_hw_stats   hw_stats;
-	struct nicvf_drv_stats  __percpu *drv_stats;
-	struct bgx_stats	bgx_stats;
-
-	/* MSI-X  */
-	bool			msix_enabled;
-	u8			num_vec;
-	struct msix_entry	msix_entries[NIC_VF_MSIX_VECTORS];
-	char			irq_name[NIC_VF_MSIX_VECTORS][IFNAMSIZ + 15];
-	bool			irq_allocated[NIC_VF_MSIX_VECTORS];
-	cpumask_var_t		affinity_mask[NIC_VF_MSIX_VECTORS];
-
-	/* VF <-> PF mailbox communication */
-	bool			pf_acked;
-	bool			pf_nacked;
-	bool			set_mac_pending;
-} ____cacheline_aligned_in_smp;
-
 /* PF <--> VF Mailbox communication
  * Eight 64bit registers are shared between PF and VF.
  * Separate set for each VF.
@@ -357,6 +292,18 @@ struct nicvf {
 #define	NIC_MBOX_MSG_SNICVF_PTR		0x15	/* Send sqet nicvf ptr to PVF */
 #define	NIC_MBOX_MSG_LOOPBACK		0x16	/* Set interface in loopback */
 #define	NIC_MBOX_MSG_RESET_STAT_COUNTER 0x17	/* Reset statistics counters */
+/* Communicate regarding the added/deleted unicast/multicast address */
+#define NIC_MBOX_MSG_UC_MC              0x18
+/* Communicate regarding the setting of Promisc mode */
+#define NIC_MBOX_MSG_PROMISC		0x19
+/* Communicate with vf regarding admin vlan */
+#define NIC_MBOX_MSG_ADMIN_VLAN         0x20
+/* Communicate regarding the added/deleted VLAN */
+#define NIC_MBOX_MSG_VLAN               0x21
+/* Communicate to Pf that the VF carrier and tx queue are turned on */
+#define NIC_MBOX_MSG_OP_UP		0x22
+/* Communicate to Pf that the VF carrier and tx queue are turned off */
+#define NIC_MBOX_MSG_OP_DOWN		0x23
 #define	NIC_MBOX_MSG_CFG_DONE		0xF0	/* VF configuration done */
 #define	NIC_MBOX_MSG_SHUTDOWN		0xF1	/* VF is being shutdown */
 
@@ -367,6 +314,29 @@ struct nic_cfg_msg {
 	u8    tns_mode:1;
 	u8    sqs_mode:1;
 	u8    loopback_supported:1;
+	u8    pf_up:1;
+	bool  is_pf;
+	bool  veb_enabled;
+	bool  bgx_id;
+	u8    lmac;
+	u8    chan;
+	u8    mac_addr[ETH_ALEN];
+};
+
+/* VLAN INFO */
+struct vlan_msg {
+	u8 msg;
+	u8 vf_id;
+	bool vlan_add:1;
+	u16 vlan_id;
+};
+
+struct uc_mc_msg {
+	u8 msg;
+	u8 vf_id;
+	uint64_t addr_type:1;
+	uint64_t is_flush:1;
+	uint64_t is_add:1;
 	u8    mac_addr[ETH_ALEN];
 };
 
@@ -446,6 +416,7 @@ struct bgx_stats_msg {
 /* Physical interface link status */
 struct bgx_link_status {
 	u8    msg;
+	u8    lmac;
 	u8    link_up;
 	u8    duplex;
 	u32   speed;
@@ -498,9 +469,18 @@ struct reset_stat_cfg {
 	u16   sq_stat_mask;
 };
 
+struct promisc_info {
+	u8    msg;
+	u8    vf_id;
+	bool  on;
+};
+
 /* 128 bit shared memory between PF and each VF */
 union nic_mbx {
 	struct { u8 msg; }	msg;
+	struct promisc_info     promisc_cfg;
+	struct vlan_msg		vlan_cfg;
+	struct uc_mc_msg	uc_mc_cfg;
 	struct nic_cfg_msg	nic_cfg;
 	struct qs_cfg_msg	qs;
 	struct rq_cfg_msg	rq;
@@ -518,6 +498,93 @@ struct reset_stat_cfg {
 	struct reset_stat_cfg	reset_stat;
 };
 
+struct nicvf {
+	struct nicvf		*pnicvf;
+	struct net_device	*netdev;
+	struct pci_dev		*pdev;
+	void __iomem		*reg_base;
+#define	MAX_QUEUES_PER_QSET			8
+	struct queue_set	*qs;
+	struct nicvf_cq_poll	*napi[8];
+	u8			vf_id;
+	u8			sqs_id;
+	bool                    sqs_mode;
+	bool			hw_tso;
+	bool			t88;
+
+	/* Receive buffer alloc */
+	u32			rb_page_offset;
+	u16			rb_pageref;
+	bool			rb_alloc_fail;
+	bool			rb_work_scheduled;
+	struct page		*rb_page;
+	struct delayed_work	rbdr_work;
+	struct tasklet_struct	rbdr_task;
+
+	/* Secondary Qset */
+	u8			sqs_count;
+#define	MAX_SQS_PER_VF_SINGLE_NODE		5
+#define	MAX_SQS_PER_VF				11
+	struct nicvf		*snicvf[MAX_SQS_PER_VF];
+
+	/* Queue count */
+	u8			rx_queues;
+	u8			tx_queues;
+	u8			max_queues;
+
+	u8			node;
+	u8			cpi_alg;
+	u16			mtu;
+	bool			link_up;
+	u8			duplex;
+	u32			speed;
+	bool			tns_mode;
+	bool			loopback_supported;
+	/* In VEB mode, true_vf is directly attached to the physical port,
+	 * it acts as PF in this VF group (set of VF's attached to same
+	 * physical port).
+	 */
+	bool			true_vf;
+	struct nicvf_rss_info	rss_info;
+	struct tasklet_struct	qs_err_task;
+	struct work_struct	reset_task;
+
+	/* Interrupt coalescing settings */
+	u32			cq_coalesce_usecs;
+	u32			msg_enable;
+
+	/* Stats */
+	struct nicvf_hw_stats   hw_stats;
+	struct nicvf_drv_stats  __percpu *drv_stats;
+	struct bgx_stats	bgx_stats;
+
+	/* MSI-X  */
+	bool			msix_enabled;
+	u8			num_vec;
+	struct msix_entry	msix_entries[NIC_VF_MSIX_VECTORS];
+	char			irq_name[NIC_VF_MSIX_VECTORS][IFNAMSIZ + 15];
+	bool			irq_allocated[NIC_VF_MSIX_VECTORS];
+	cpumask_var_t		affinity_mask[NIC_VF_MSIX_VECTORS];
+
+	char			phys_port_name[IFNAMSIZ + 15];
+	/* VF <-> PF mailbox communication */
+	bool			pf_acked;
+	bool			pf_nacked;
+	bool			set_mac_pending;
+	struct netdev_hw_addr_list uc_shadow;
+	struct netdev_hw_addr_list mc_shadow;
+
+	/* work queue for handling UC MC mailbox messages */
+	bool			send_op_link_status;
+	struct delayed_work	dwork;
+	struct workqueue_struct *uc_mc_msg;
+
+	/* Admin vlan id */
+	int			admin_vlan_id;
+	bool			pf_ack_waiting;
+	bool			wait_for_ack;
+} ____cacheline_aligned_in_smp;
+
 #define NIC_NODE_ID_MASK	0x03
 #define NIC_NODE_ID_SHIFT	44
 
diff --git a/drivers/net/ethernet/cavium/thunder/nic_main.c b/drivers/net/ethernet/cavium/thunder/nic_main.c
index 6677b96..42299320 100644
--- a/drivers/net/ethernet/cavium/thunder/nic_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nic_main.c
@@ -17,6 +17,7 @@
 #include "nic.h"
 #include "q_struct.h"
 #include "thunder_bgx.h"
+#include "pf_globals.h"
 
 #define DRV_NAME	"thunder-nic"
 #define DRV_VERSION	"1.0"
@@ -70,7 +71,52 @@ struct nicpf {
 	struct msix_entry	*msix_entries;
 	bool			irq_allocated[NIC_PF_MSIX_VECTORS];
 	char			irq_name[NIC_PF_MSIX_VECTORS][20];
-};
+	bool			vf_op_enabled[MAX_NUM_VFS_SUPPORTED];
+	bool			admin_vlan[MAX_NUM_VFS_SUPPORTED];
+	u8			vnic_intf_map[MAX_NUM_VFS_SUPPORTED];
+	u64			mac[MAX_NUM_VFS_SUPPORTED];
+	struct delayed_work	notification_dwork;
+	struct workqueue_struct *notification_msg;
+
+#define MAX_VF_MBOX_MESSAGE	(2 * MAX_NUM_VFS_SUPPORTED)
+	union nic_mbx		vf_mbx_msg[MAX_VF_MBOX_MESSAGE];
+	bool			valid_vf_mbx_msg[MAX_VF_MBOX_MESSAGE];
+	/* Protect different notification messages */
+	spinlock_t		vf_mbx_msg_lock;
+} ____cacheline_aligned_in_smp;
+
+static unsigned int num_vfs;
+module_param(num_vfs, uint, 0644);
+MODULE_PARM_DESC(num_vfs, "Non zero positive value, specifies number of VF's per physical port");
+
+static u8 link_lmac[MAX_NUMNODES][TNS_MAX_LMAC];
+static int pf_speed[MAX_NUMNODES][TNS_MAX_LMAC];
+static int pf_duplex[MAX_NUMNODES][TNS_MAX_LMAC];
+
+static void nic_send_msg_to_vf(struct nicpf *nic, int vf, union nic_mbx *mbx);
+static int veb_enabled;
+
+void send_link_change_to_vf(struct nicpf *nic, void *arg)
+{
+	int start_vf, end_vf;
+	int i;
+	union nic_mbx *mbx = (union nic_mbx *)arg;
+
+	get_vf_group(nic->node, mbx->link_status.lmac, &start_vf, &end_vf);
+
+	for (i = start_vf; i <= end_vf; i++) {
+		union nic_mbx lmbx = {};
+
+		if (!nic->vf_enabled[i])
+			continue;
+		if (!nic->mbx_lock[i])
+			nic_send_msg_to_vf(nic, i, mbx);
+		lmbx.mac.vf_id = i;
+		lmbx.msg.msg = mbx->link_status.link_up ? NIC_MBOX_MSG_OP_UP :
+							 NIC_MBOX_MSG_OP_DOWN;
+		pf_notify_msg_handler(nic->node, (void *)(&lmbx));
+	}
+}
 
 /* Supported devices */
 static const struct pci_device_id nic_id_table[] = {
@@ -135,6 +181,53 @@ static u64 nic_get_mbx_addr(int vf)
 	return NIC_PF_VF_0_127_MAILBOX_0_1 + (vf << NIC_VF_NUM_SHIFT);
 }
 
+/* Set RBDR Backpressure (RBDR_BP) and CQ backpressure (CQ_BP) of vnic queues
+ * to 129 each
+ * @vf: vf to which bp needs to be set
+ * @rcv_id: receive queue of the vf
+ */
+void set_rbdr_cq_bp(struct nicpf *nic, u8 vf, u8 rcv_id)
+{
+	union nic_pf_qsx_rqx_bp_cfg bp_info;
+	u64 offset = 0;
+
+	offset = (vf & 127) * 0x200000ull + (rcv_id & 7) * 0x40000ull;
+	bp_info.u = nic_reg_read(nic,  NIC_PF_QSX_RQX_BP_CFG + offset);
+	bp_info.s.rbdr_bp = RBDR_CQ_BP;
+	bp_info.s.cq_bp = RBDR_CQ_BP;
+	nic_reg_write(nic, NIC_PF_QSX_RQX_BP_CFG + offset, bp_info.u);
+}
+
+/* Set backpressure configuration on the NIC TNS receive interface
+ * @intf: NIC interface
+ * @tns_mode: if the NIC is in TNS/BY-PASS mode
+ */
+void set_bp_id(struct nicpf *nic, u8 intf, u8 tns_mode)
+{
+	union nic_pf_intfx_bp_cfg bp_conf;
+	u8 offset = (intf & 1) * 0x100ull;
+
+	bp_conf.u = nic_reg_read(nic, NIC_PF_INTFX_BP_CFG + offset);
+	bp_conf.s.bp_id = (intf) ? ((tns_mode) ? 0x7 : 0x9) :
+					((tns_mode) ? 0x6 : 0x8);
+	nic_reg_write(nic,  NIC_PF_INTFX_BP_CFG + offset, bp_conf.u);
+}
+
+/* enable the BP bus for this interface
+ * @intf: NIC interface
+ */
+void bp_enable(struct nicpf *nic, u8 intf)
+{
+	union nic_pf_intfx_bp_cfg bp_conf;
+	u8 offset = (intf & 1) * 0x100ull;
+
+	bp_conf.u = nic_reg_read(nic, NIC_PF_INTFX_BP_CFG + offset);
+	if (!bp_conf.s.bp_ena)
+		bp_conf.s.bp_ena = 1;
+
+	nic_reg_write(nic, NIC_PF_INTFX_BP_CFG + offset, bp_conf.u);
+}
+
 /* Send a mailbox message to VF
  * @vf: vf to which this message to be sent
  * @mbx: Message to be sent
@@ -169,24 +262,53 @@ static void nic_mbx_send_ready(struct nicpf *nic, int vf)
 	union nic_mbx mbx = {};
 	int bgx_idx, lmac;
 	const char *mac;
+	int nid = nic->node;
 
 	mbx.nic_cfg.msg = NIC_MBOX_MSG_READY;
 	mbx.nic_cfg.vf_id = vf;
 
-	mbx.nic_cfg.tns_mode = NIC_TNS_BYPASS_MODE;
+	if (veb_enabled)
+		mbx.nic_cfg.tns_mode = NIC_TNS_MODE;
+	else
+		mbx.nic_cfg.tns_mode = NIC_TNS_BYPASS_MODE;
 
-	if (vf < nic->num_vf_en) {
+	if (!veb_enabled && vf < nic->num_vf_en) {
 		bgx_idx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 
-		mac = bgx_get_lmac_mac(nic->node, bgx_idx, lmac);
+		mac = bgx_get_lmac_mac(nid, bgx_idx, lmac);
 		if (mac)
 			ether_addr_copy((u8 *)&mbx.nic_cfg.mac_addr, mac);
+	} else if (veb_enabled) {
+		int lmac = 0, bgx_idx = 0;
+
+		if (get_bgx_id(nid, vf, &bgx_idx, &lmac))
+			dev_err(&nic->pdev->dev, "!!ERROR!!Wrong BGX values\n");
+
+		if (is_pf(nid, vf)) {
+			mac = bgx_get_lmac_mac(nid, bgx_idx, lmac);
+			if (mac)
+				ether_addr_copy((u8 *)&nic->mac[vf], mac);
+		} else if (is_zero_ether_addr((u8 *)&nic->mac[vf])) {
+			eth_random_addr((u8 *)&nic->mac[vf]);
+		}
+
+		ether_addr_copy((u8 *)&mbx.nic_cfg.mac_addr,
+				(u8 *)&nic->mac[vf]);
+		mbx.nic_cfg.is_pf = is_pf(nid, vf);
+		mbx.nic_cfg.lmac = lmac;
+		mbx.nic_cfg.bgx_id = bgx_idx;
+		mbx.nic_cfg.chan = (vf < 64) ? vf : (64 + vf);
 	}
-	mbx.nic_cfg.sqs_mode = (vf >= nic->num_vf_en) ? true : false;
-	mbx.nic_cfg.node_id = nic->node;
+	mbx.nic_cfg.veb_enabled = (veb_enabled == 0) ? 0 : 1;
+	mbx.nic_cfg.node_id = nid;
 
-	mbx.nic_cfg.loopback_supported = vf < nic->num_vf_en;
+	if (veb_enabled) {
+		mbx.nic_cfg.pf_up = link_lmac[nid][vf_to_pport(nid, vf)];
+	} else {
+		mbx.nic_cfg.loopback_supported = vf < nic->num_vf_en;
+		mbx.nic_cfg.sqs_mode = (vf >= nic->num_vf_en) ? true : false;
+	}
 
 	nic_send_msg_to_vf(nic, vf, &mbx);
 }
@@ -242,8 +364,15 @@ static void nic_get_bgx_stats(struct nicpf *nic, struct bgx_stats_msg *bgx)
 	int bgx_idx, lmac;
 	union nic_mbx mbx = {};
 
-	bgx_idx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[bgx->vf_id]);
-	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[bgx->vf_id]);
+	if (veb_enabled) {
+		if (get_bgx_id(nic->node, bgx->vf_id, &bgx_idx, &lmac))
+			dev_err(&nic->pdev->dev, "Unable to get BGX index\n");
+	} else {
+		bgx_idx = NIC_GET_BGX_FROM_VF_LMAC_MAP(
+				nic->vf_lmac_map[bgx->vf_id]);
+		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(
+				nic->vf_lmac_map[bgx->vf_id]);
+	}
 
 	mbx.bgx_stats.msg = NIC_MBOX_MSG_BGX_STATS;
 	mbx.bgx_stats.vf_id = bgx->vf_id;
@@ -267,6 +396,16 @@ static int nic_update_hw_frs(struct nicpf *nic, int new_frs, int vf)
 	if ((new_frs > NIC_HW_MAX_FRS) || (new_frs < NIC_HW_MIN_FRS))
 		return 1;
 
+	if (veb_enabled) {
+		new_frs += ETH_HLEN;
+		if (new_frs <= nic->pkind.maxlen)
+			return 0;
+
+		nic->pkind.maxlen = new_frs;
+		nic_reg_write(nic, NIC_PF_PKIND_0_15_CFG, *(u64 *)&nic->pkind);
+		return 0;
+	}
+
 	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
 	lmac += bgx * MAX_LMAC_PER_BGX;
@@ -302,8 +441,14 @@ static void nic_set_tx_pkt_pad(struct nicpf *nic, int size)
 	 * Hence set this value to lessthan min pkt size of MAC+IP+TCP
 	 * headers, BGX will do the padding to transmit 64 byte pkt.
 	 */
-	if (size > 52)
-		size = 52;
+	if (size > 52) {
+		if (veb_enabled) {
+			if (size > 60)
+				size = 60;
+		} else {
+			size = 52;
+		}
+	}
 
 	pci_read_config_word(nic->pdev, PCI_SUBSYSTEM_ID, &sdevid);
 	/* 81xx's RGX has only one LMAC */
@@ -331,6 +476,10 @@ static void nic_set_lmac_vf_mapping(struct nicpf *nic)
 	u64 lmac_credit;
 
 	nic->num_vf_en = 0;
+	if (veb_enabled) {
+		nic->num_vf_en = PF_END;
+		return;
+	}
 
 	for (bgx = 0; bgx < nic->hw->bgx_cnt; bgx++) {
 		if (!(bgx_map & (1 << bgx)))
@@ -386,7 +535,8 @@ static int nic_get_hw_info(struct nicpf *nic)
 		hw->chans_per_bgx = 128;
 		hw->cpi_cnt = 2048;
 		hw->rssi_cnt = 4096;
-		hw->rss_ind_tbl_size = NIC_MAX_RSS_IDR_TBL_SIZE;
+		hw->rss_ind_tbl_size = veb_enabled ? NIC_TNS_RSS_IDR_TBL_SIZE :
+						     NIC_MAX_RSS_IDR_TBL_SIZE;
 		hw->tl3_cnt = 256;
 		hw->tl2_cnt = 64;
 		hw->tl1_cnt = 2;
@@ -451,6 +601,9 @@ static int nic_init_hw(struct nicpf *nic)
 	int i, err;
 	u64 cqm_cfg;
 
+	/* Reset NIC, in case the driver is repeatedly inserted and removed */
+	nic_reg_write(nic, NIC_PF_SOFT_RESET, 1);
+
 	/* Get HW capability info */
 	err = nic_get_hw_info(nic);
 	if (err)
@@ -462,23 +615,36 @@ static int nic_init_hw(struct nicpf *nic)
 	/* Enable backpressure */
 	nic_reg_write(nic, NIC_PF_BP_CFG, (1ULL << 6) | 0x03);
 
-	/* TNS and TNS bypass modes are present only on 88xx */
-	if (nic->pdev->subsystem_device == PCI_SUBSYS_DEVID_88XX_NIC_PF) {
-		/* Disable TNS mode on both interfaces */
+	if (veb_enabled) {
 		nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG,
-			      (NIC_TNS_BYPASS_MODE << 7) | BGX0_BLOCK);
+			      (NIC_TNS_MODE << 7) | (0x03ULL << 4) | 0x06);
 		nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG | (1 << 8),
-			      (NIC_TNS_BYPASS_MODE << 7) | BGX1_BLOCK);
+			      (NIC_TNS_MODE << 7) | (0x03ULL << 4) | 0x07);
+		nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG,
+			      (1ULL << 63) | (1ULL << 4) | 0x09);
+		nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG + (1 << 8),
+			      (1ULL << 63) | (1ULL << 4) | 0x09);
+	} else {
+		/* TNS and TNS bypass modes are present only on 88xx */
+		if (nic->pdev->subsystem_device ==
+		    PCI_SUBSYS_DEVID_88XX_NIC_PF) {
+			/* Disable TNS mode on both interfaces */
+			nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG,
+				      (NIC_TNS_BYPASS_MODE << 7) | BGX0_BLOCK);
+			nic_reg_write(nic, NIC_PF_INTF_0_1_SEND_CFG | (1 << 8),
+				      (NIC_TNS_BYPASS_MODE << 7) | BGX1_BLOCK);
+		}
+		nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG,
+			      (1ULL << 63) | BGX0_BLOCK);
+		nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG + (1 << 8),
+			      (1ULL << 63) | BGX1_BLOCK);
 	}
 
-	nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG,
-		      (1ULL << 63) | BGX0_BLOCK);
-	nic_reg_write(nic, NIC_PF_INTF_0_1_BP_CFG + (1 << 8),
-		      (1ULL << 63) | BGX1_BLOCK);
-
 	/* PKIND configuration */
 	nic->pkind.minlen = 0;
-	nic->pkind.maxlen = NIC_HW_MAX_FRS + VLAN_ETH_HLEN + ETH_FCS_LEN + 4;
+	nic->pkind.maxlen = NIC_HW_MAX_FRS + ETH_HLEN;
+	if (!veb_enabled)
+		nic->pkind.maxlen += VLAN_HLEN + ETH_FCS_LEN + 4;
 	nic->pkind.lenerr_en = 1;
 	nic->pkind.rx_hdr = 0;
 	nic->pkind.hdr_sl = 0;
@@ -508,7 +674,7 @@ static int nic_init_hw(struct nicpf *nic)
 static void nic_config_cpi(struct nicpf *nic, struct cpi_cfg_msg *cfg)
 {
 	struct hw_info *hw = nic->hw;
-	u32 vnic, bgx, lmac, chan;
+	u32 vnic, bgx, lmac, chan = 0;
 	u32 padd, cpi_count = 0;
 	u64 cpi_base, cpi, rssi_base, rssi;
 	u8  qset, rq_idx = 0;
@@ -517,8 +683,17 @@ static void nic_config_cpi(struct nicpf *nic, struct cpi_cfg_msg *cfg)
 	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vnic]);
 	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vnic]);
 
-	chan = (lmac * hw->chans_per_lmac) + (bgx * hw->chans_per_bgx);
-	cpi_base = vnic * NIC_MAX_CPI_PER_LMAC;
+	if (veb_enabled) {
+		if (nic->vnic_intf_map[vnic] == 0)
+			chan = vnic;
+		else if (nic->vnic_intf_map[vnic] == 1)
+			chan = 128 + (vnic - 64);
+		cpi_base = vnic * NIC_TNS_CPI_PER_LMAC;
+	} else {
+		chan = (lmac * hw->chans_per_lmac) +
+			(bgx * hw->chans_per_bgx);
+		cpi_base = vnic * NIC_MAX_CPI_PER_LMAC;
+	}
 	rssi_base = vnic * hw->rss_ind_tbl_size;
 
 	/* Rx channel configuration */
@@ -534,7 +709,8 @@ static void nic_config_cpi(struct nicpf *nic, struct cpi_cfg_msg *cfg)
 	else if (cfg->cpi_alg == CPI_ALG_VLAN16) /* 3 bits PCP + DEI */
 		cpi_count = 16;
 	else if (cfg->cpi_alg == CPI_ALG_DIFF) /* 6bits DSCP */
-		cpi_count = NIC_MAX_CPI_PER_LMAC;
+		cpi_count = veb_enabled ? NIC_TNS_CPI_PER_LMAC :
+					  NIC_MAX_CPI_PER_LMAC;
 
 	/* RSS Qset, Qidx mapping */
 	qset = cfg->vf_id;
@@ -542,6 +718,8 @@ static void nic_config_cpi(struct nicpf *nic, struct cpi_cfg_msg *cfg)
 	for (; rssi < (rssi_base + cfg->rq_cnt); rssi++) {
 		nic_reg_write(nic, NIC_PF_RSSI_0_4097_RQ | (rssi << 3),
 			      (qset << 3) | rq_idx);
+		if (veb_enabled)
+			set_rbdr_cq_bp(nic, vnic, rq_idx);
 		rq_idx++;
 	}
 
@@ -652,8 +830,8 @@ static void nic_tx_channel_cfg(struct nicpf *nic, u8 vnic,
 			       struct sq_cfg_msg *sq)
 {
 	struct hw_info *hw = nic->hw;
-	u32 bgx, lmac, chan;
-	u32 tl2, tl3, tl4;
+	u32 bgx, lmac, chan = 0;
+	u32 tl2, tl3, tl4 = 0;
 	u32 rr_quantum;
 	u8 sq_idx = sq->sq_num;
 	u8 pqs_vnic;
@@ -670,10 +848,19 @@ static void nic_tx_channel_cfg(struct nicpf *nic, u8 vnic,
 	/* 24 bytes for FCS, IPG and preamble */
 	rr_quantum = ((NIC_HW_MAX_FRS + 24) / 4);
 
-	/* For 88xx 0-511 TL4 transmits via BGX0 and
-	 * 512-1023 TL4s transmit via BGX1.
-	 */
-	if (hw->tl1_per_bgx) {
+	if (veb_enabled) {
+		if (nic->vnic_intf_map[vnic] == 0) {
+			tl4 = (hw->tl4_cnt / hw->chans_per_bgx) * vnic;
+			chan = vnic;
+		} else if (nic->vnic_intf_map[vnic] == 1) {
+			tl4 = (hw->tl4_cnt / hw->bgx_cnt) +
+			      (hw->tl4_cnt / hw->chans_per_bgx) * (vnic - 64);
+			chan = 128 + (vnic - 64);
+		}
+	} else if (hw->tl1_per_bgx) {
+		/* For 88xx 0-511 TL4 transmits via BGX0 and
+		 * 512-1023 TL4s transmit via BGX1.
+		 */
 		tl4 = bgx * (hw->tl4_cnt / hw->bgx_cnt);
 		if (!sq->sqs_mode) {
 			tl4 += (lmac * MAX_QUEUES_PER_QSET);
@@ -686,8 +873,10 @@ static void nic_tx_channel_cfg(struct nicpf *nic, u8 vnic,
 			tl4 += (lmac * MAX_QUEUES_PER_QSET * MAX_SQS_PER_VF);
 			tl4 += (svf * MAX_QUEUES_PER_QSET);
 		}
+		chan = (lmac * hw->chans_per_lmac) + (bgx * hw->chans_per_bgx);
 	} else {
 		tl4 = (vnic * MAX_QUEUES_PER_QSET);
+		chan = (lmac * hw->chans_per_lmac) + (bgx * hw->chans_per_bgx);
 	}
 	tl4 += sq_idx;
 
@@ -706,7 +895,6 @@ static void nic_tx_channel_cfg(struct nicpf *nic, u8 vnic,
 	 * On 81xx/83xx TL3_CHAN reg should be configured with channel
 	 * within LMAC i.e 0-7 and not the actual channel number like on 88xx
 	 */
-	chan = (lmac * hw->chans_per_lmac) + (bgx * hw->chans_per_bgx);
 	if (hw->tl1_per_bgx)
 		nic_reg_write(nic, NIC_PF_TL3_0_255_CHAN | (tl3 << 3), chan);
 	else
@@ -874,6 +1062,30 @@ static void nic_enable_tunnel_parsing(struct nicpf *nic, int vf)
 		      ((0xfULL << 60) | vxlan_prot_def));
 }
 
+void send_notifications(struct work_struct *work)
+{
+	struct nicpf *nic;
+	int i;
+
+	nic = container_of(work, struct nicpf, notification_dwork.work);
+	spin_lock(&nic->vf_mbx_msg_lock);
+	for (i = 0; i < MAX_VF_MBOX_MESSAGE; i++) {
+		union nic_mbx *mbx = &nic->vf_mbx_msg[i];
+
+		if (!nic->valid_vf_mbx_msg[i])
+			continue;
+
+		spin_unlock(&nic->vf_mbx_msg_lock);
+		if (mbx->link_status.msg == NIC_MBOX_MSG_BGX_LINK_CHANGE)
+			send_link_change_to_vf(nic, (void *)mbx);
+		else
+			pf_notify_msg_handler(nic->node, (void *)mbx);
+		spin_lock(&nic->vf_mbx_msg_lock);
+		nic->valid_vf_mbx_msg[i] = false;
+	}
+	spin_unlock(&nic->vf_mbx_msg_lock);
+}
+
 static void nic_enable_vf(struct nicpf *nic, int vf, bool enable)
 {
 	int bgx, lmac;
@@ -889,6 +1101,187 @@ static void nic_enable_vf(struct nicpf *nic, int vf, bool enable)
 	bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, enable);
 }
 
+static int nic_submit_msg_notification(struct nicpf *nic, int vf,
+				       union nic_mbx *mbx)
+{
+	int i, ret = 0;
+
+	if (!veb_enabled)
+		return ret;
+
+	/* PF<->VF Communication Work, and request Validation */
+	switch (mbx->msg.msg) {
+	case NIC_MBOX_MSG_VLAN:
+		if (mbx->vlan_cfg.vlan_add &&
+		    (tns_filter_valid_entry(nic->node, NIC_MBOX_MSG_VLAN, vf,
+						mbx->vlan_cfg.vlan_id) ||
+		     nic->admin_vlan[vf])) {
+			nic_mbx_send_nack(nic, vf);
+			return 1;
+		}
+		break;
+	case NIC_MBOX_MSG_ADMIN_VLAN:
+		if ((mbx->vlan_cfg.vlan_add &&
+		     (tns_filter_valid_entry(nic->node, NIC_MBOX_MSG_ADMIN_VLAN,
+						vf, mbx->vlan_cfg.vlan_id) ||
+		      nic->admin_vlan[mbx->vlan_cfg.vf_id])) ||
+		    (!is_pf(nic->node, vf) ||
+			(get_pf(nic->node, mbx->vlan_cfg.vf_id) != vf))) {
+			nic_mbx_send_nack(nic, vf);
+			return 1;
+		}
+		break;
+	case NIC_MBOX_MSG_UC_MC:
+		if (mbx->uc_mc_cfg.is_add &&
+		    tns_filter_valid_entry(nic->node, NIC_MBOX_MSG_UC_MC,
+					   vf, 0)) {
+			dev_err(&nic->pdev->dev, "MAC filter max reached\n");
+			nic_mbx_send_nack(nic, vf);
+			return 1;
+		}
+		break;
+	case NIC_MBOX_MSG_OP_UP:
+		if (!nic->vf_enabled[vf])
+			return 0;
+		break;
+	case NIC_MBOX_MSG_OP_DOWN:
+		if (!(nic->vf_enabled[vf] && nic->vf_op_enabled[vf]))
+			return 0;
+		break;
+	case NIC_MBOX_MSG_CFG_DONE:
+	{
+		int port = vf_to_pport(nic->node, vf);
+
+		/* Last message of VF config msg sequence */
+		nic->vf_enabled[vf] = true;
+		if (is_pf(nic->node, vf)) {
+			int bgx_id, lmac;
+
+			if (get_bgx_id(nic->node, vf, &bgx_id, &lmac))
+				dev_err(&nic->pdev->dev, "Unable to get BGX index\n");
+
+			/* ENABLE PAUSE FRAME GENERATION */
+			enable_pause_frames(nic->node, bgx_id, lmac);
+
+			bgx_lmac_rx_tx_enable(nic->node, bgx_id, lmac, true);
+		}
+		if (link_lmac[nic->node][port]) {
+			union nic_mbx mbx = {};
+
+			mbx.link_status.msg = NIC_MBOX_MSG_CFG_DONE;
+			mbx.link_status.link_up = 1;
+			mbx.link_status.duplex = pf_duplex[nic->node][port];
+			mbx.link_status.speed = pf_speed[nic->node][port];
+			nic_send_msg_to_vf(nic, vf, &mbx);
+		} else {
+			nic_mbx_send_ack(nic, vf);
+		}
+
+		if (is_pf(nic->node, vf) && link_lmac[nic->node][port]) {
+			mbx->link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
+			mbx->link_status.link_up = 1;
+			mbx->link_status.speed = pf_speed[nic->node][port];
+			mbx->link_status.duplex = pf_duplex[nic->node][port];
+			mbx->link_status.lmac = port;
+			break;
+		}
+		return 1;
+	}
+	}
+
+	spin_lock(&nic->vf_mbx_msg_lock);
+	for (i = 0; i < MAX_VF_MBOX_MESSAGE; i++)
+		if (!nic->valid_vf_mbx_msg[i])
+			break;
+	if (i == MAX_VF_MBOX_MESSAGE) {
+		spin_unlock(&nic->vf_mbx_msg_lock);
+		dev_err(&nic->pdev->dev, "Notification array full msg: %d\n",
+			mbx->msg.msg);
+		return -1;
+	}
+
+	memcpy(&nic->vf_mbx_msg[i], mbx, sizeof(union nic_mbx));
+
+	switch (mbx->msg.msg) {
+	case NIC_MBOX_MSG_READY:
+		nic->vf_mbx_msg[i].msg.msg = NIC_MBOX_MSG_SET_MAC;
+		ether_addr_copy((u8 *)&nic->vf_mbx_msg[i].mac.mac_addr,
+				(u8 *)&nic->mac[vf]);
+		/* fall-through */
+	case NIC_MBOX_MSG_SET_MAC:
+		nic->vf_mbx_msg[i].mac.vf_id = vf;
+		break;
+	case NIC_MBOX_MSG_ADMIN_VLAN:
+		nic_send_msg_to_vf(nic, mbx->vlan_cfg.vf_id, mbx);
+		nic->admin_vlan[mbx->vlan_cfg.vf_id] = mbx->vlan_cfg.vlan_add;
+		break;
+	case NIC_MBOX_MSG_PROMISC:
+		ret = 1;
+	case NIC_MBOX_MSG_VLAN:
+	case NIC_MBOX_MSG_UC_MC:
+		break;
+	case NIC_MBOX_MSG_OP_UP:
+		nic->vf_op_enabled[vf] = true;
+		nic->vf_mbx_msg[i].mac.vf_id = vf;
+		break;
+	case NIC_MBOX_MSG_OP_DOWN:
+		nic->vf_mbx_msg[i].mac.vf_id = vf;
+		nic->vf_op_enabled[vf] = false;
+		break;
+	case NIC_MBOX_MSG_SHUTDOWN:
+	{
+		int submit_work = 0;
+
+#ifdef VNIC_MULTI_QSET_SUPPORT
+		if (vf >= nic->num_vf_en)
+			nic->sqs_used[vf - nic->num_vf_en] = false;
+		nic->pqs_vf[vf] = 0;
+#endif
+		if (is_pf(nic->node, vf)) {
+			int bgx_idx, lmac_idx;
+
+			if (get_bgx_id(nic->node, vf, &bgx_idx, &lmac_idx))
+				dev_err(&nic->pdev->dev, "Unable to get BGX\n");
+
+			bgx_lmac_rx_tx_enable(nic->node, bgx_idx, lmac_idx,
+					      false);
+		}
+
+		if (is_pf(nic->node, vf) &&
+		    link_lmac[nic->node][vf_to_pport(nic->node, vf)]) {
+			union nic_mbx *lmbx = &nic->vf_mbx_msg[i + 1];
+
+			lmbx->link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
+			lmbx->link_status.lmac = vf_to_pport(nic->node, vf);
+			lmbx->link_status.link_up = 0;
+			nic->valid_vf_mbx_msg[i + 1] = true;
+			submit_work = 1;
+		}
+
+		if (nic->vf_enabled[vf] && nic->vf_op_enabled[vf]) {
+			nic->vf_mbx_msg[i].mac.vf_id = vf;
+			nic->vf_enabled[vf] = false;
+			submit_work = 1;
+		}
+		if (submit_work)
+			break;
+
+		/* First msg in VF teardown sequence */
+		nic->vf_enabled[vf] = false;
+		spin_unlock(&nic->vf_mbx_msg_lock);
+		return 0;
+	}
+	default:
+		break;
+	}
+
+	nic->valid_vf_mbx_msg[i] = true;
+	spin_unlock(&nic->vf_mbx_msg_lock);
+	queue_delayed_work(nic->notification_msg, &nic->notification_dwork, 0);
+
+	return ret;
+}
+
 /* Interrupt handler to handle mailbox messages from VFs */
 static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 {
@@ -916,12 +1309,32 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 		__func__, mbx.msg.msg, vf);
 	switch (mbx.msg.msg) {
 	case NIC_MBOX_MSG_READY:
+		if (veb_enabled) {
+			if (!is_pf(nic->node, vf) &&
+			    (vf > (get_pf(nic->node, vf) + veb_enabled))) {
+				nic_mbx_send_nack(nic, vf);
+				goto unlock;
+			}
+		}
 		nic_mbx_send_ready(nic, vf);
 		if (vf < nic->num_vf_en) {
 			nic->link[vf] = 0;
 			nic->duplex[vf] = 0;
 			nic->speed[vf] = 0;
 		}
+		//PF assigning MAC address for VF, as part of VF probe init
+		//We need to notify this to filter as VF set MAC
+		if (veb_enabled)
+			nic_submit_msg_notification(nic, vf, &mbx);
+		goto unlock;
+	case NIC_MBOX_MSG_VLAN:
+	case NIC_MBOX_MSG_ADMIN_VLAN:
+	case NIC_MBOX_MSG_UC_MC:
+		if (nic_submit_msg_notification(nic, vf, &mbx))
+			goto unlock;
+		break;
+	case NIC_MBOX_MSG_PROMISC:
+		nic_submit_msg_notification(nic, vf, &mbx);
 		goto unlock;
 	case NIC_MBOX_MSG_QS_CFG:
 		reg_addr = NIC_PF_QSET_0_127_CFG |
@@ -977,10 +1390,18 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 			ret = -1; /* NACK */
 			break;
 		}
-		lmac = mbx.mac.vf_id;
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[lmac]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[lmac]);
-		bgx_set_lmac_mac(nic->node, bgx, lmac, mbx.mac.mac_addr);
+		if (veb_enabled) {
+			nic_submit_msg_notification(nic, vf, &mbx);
+		} else {
+			int vf_lmac;
+
+			lmac = mbx.mac.vf_id;
+			vf_lmac = nic->vf_lmac_map[lmac];
+			bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(vf_lmac);
+			lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(vf_lmac);
+			bgx_set_lmac_mac(nic->node, bgx, lmac,
+					 mbx.mac.mac_addr);
+		}
 		break;
 	case NIC_MBOX_MSG_SET_MAX_FRS:
 		ret = nic_update_hw_frs(nic, mbx.frs.max_frs,
@@ -996,16 +1417,28 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 	case NIC_MBOX_MSG_RSS_CFG_CONT:
 		nic_config_rss(nic, &mbx.rss_cfg);
 		break;
+	case NIC_MBOX_MSG_OP_UP:
+	case NIC_MBOX_MSG_OP_DOWN:
+		if (nic_submit_msg_notification(nic, vf, &mbx))
+			goto unlock;
+		break;
 	case NIC_MBOX_MSG_CFG_DONE:
 		/* Last message of VF config msg sequence */
-		nic_enable_vf(nic, vf, true);
+		if (veb_enabled)
+			nic_submit_msg_notification(nic, vf, &mbx);
+		else
+			nic_enable_vf(nic, vf, true);
 		goto unlock;
 	case NIC_MBOX_MSG_SHUTDOWN:
-		/* First msg in VF teardown sequence */
-		if (vf >= nic->num_vf_en)
-			nic->sqs_used[vf - nic->num_vf_en] = false;
-		nic->pqs_vf[vf] = 0;
-		nic_enable_vf(nic, vf, false);
+		if (veb_enabled) {
+			nic_submit_msg_notification(nic, vf, &mbx);
+		} else {
+			/* First msg in VF teardown sequence */
+			if (vf >= nic->num_vf_en)
+				nic->sqs_used[vf - nic->num_vf_en] = false;
+			nic->pqs_vf[vf] = 0;
+			nic_enable_vf(nic, vf, false);
+		}
 		break;
 	case NIC_MBOX_MSG_ALLOC_SQS:
 		nic_alloc_sqs(nic, &mbx.sqs_alloc);
@@ -1228,47 +1661,148 @@ static void nic_poll_for_link(struct work_struct *work)
 	union nic_mbx mbx = {};
 	struct nicpf *nic;
 	struct bgx_link_status link;
-	u8 vf, bgx, lmac;
+	int vf, bgx, lmac;
 
 	nic = container_of(work, struct nicpf, dwork.work);
 
 	mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
 
-	for (vf = 0; vf < nic->num_vf_en; vf++) {
-		/* Poll only if VF is UP */
-		if (!nic->vf_enabled[vf])
-			continue;
+	if (veb_enabled) {
+		int port = 0, i;
+		union nic_mbx *mbxp;
 
-		/* Get BGX, LMAC indices for the VF */
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		/* Get interface link status */
-		bgx_get_lmac_link_state(nic->node, bgx, lmac, &link);
+		for (port = 0; port < TNS_MAX_LMAC; port++) {
+			int start_vf, end_vf;
 
-		/* Inform VF only if link status changed */
-		if (nic->link[vf] == link.link_up)
-			continue;
+			if (phy_port_to_bgx_lmac(nic->node, port, &bgx, &lmac))
+				continue;
 
-		if (!nic->mbx_lock[vf]) {
-			nic->link[vf] = link.link_up;
-			nic->duplex[vf] = link.duplex;
-			nic->speed[vf] = link.speed;
+			get_vf_group(nic->node, port, &start_vf, &end_vf);
+			if (!nic->vf_enabled[start_vf])
+				continue;
 
-			/* Send a mbox message to VF with current link status */
-			mbx.link_status.link_up = link.link_up;
-			mbx.link_status.duplex = link.duplex;
-			mbx.link_status.speed = link.speed;
-			nic_send_msg_to_vf(nic, vf, &mbx);
+			bgx_get_lmac_link_state(nic->node, bgx, lmac, &link);
+
+			if (link_lmac[nic->node][port] == link.link_up)
+				continue;
+
+			link_lmac[nic->node][port] = link.link_up;
+			pf_speed[nic->node][port] = link.speed;
+			pf_duplex[nic->node][port] = link.duplex;
+
+			spin_lock(&nic->vf_mbx_msg_lock);
+			for (i = 0; i < MAX_VF_MBOX_MESSAGE; i++)
+				if (!nic->valid_vf_mbx_msg[i])
+					break;
+
+			if (i == MAX_VF_MBOX_MESSAGE) {
+				spin_unlock(&nic->vf_mbx_msg_lock);
+				return;
+			}
+
+			mbxp = &nic->vf_mbx_msg[i];
+			nic->valid_vf_mbx_msg[i] = true;
+			mbxp->link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
+			mbxp->link_status.link_up = link.link_up;
+			mbxp->link_status.speed = link.speed;
+			mbxp->link_status.duplex = link.duplex;
+			mbxp->link_status.lmac = port;
+			spin_unlock(&nic->vf_mbx_msg_lock);
+			queue_delayed_work(nic->notification_msg,
+					   &nic->notification_dwork, 0);
+
+			break;
+		}
+	} else {
+		for (vf = 0; vf < nic->num_vf_en; vf++) {
+			int vf_lmac = nic->vf_lmac_map[vf];
+
+			/* Poll only if VF is UP */
+			if (!nic->vf_enabled[vf])
+				continue;
+
+			/* Get BGX, LMAC indices for the VF */
+			bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(vf_lmac);
+			lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(vf_lmac);
+			/* Get interface link status */
+			bgx_get_lmac_link_state(nic->node, bgx, lmac, &link);
+
+			/* Inform VF only if link status changed */
+			if (nic->link[vf] == link.link_up)
+				continue;
+
+			if (!nic->mbx_lock[vf]) {
+				nic->link[vf] = link.link_up;
+				nic->duplex[vf] = link.duplex;
+				nic->speed[vf] = link.speed;
+
+				/* Send a mbox message to VF with current
+				 * link status
+				 */
+				mbx.link_status.link_up = link.link_up;
+				mbx.link_status.duplex = link.duplex;
+				mbx.link_status.speed = link.speed;
+				nic_send_msg_to_vf(nic, vf, &mbx);
+			}
 		}
 	}
 	queue_delayed_work(nic->check_link, &nic->dwork, HZ * 2);
 }
 
+static void set_tns_config(struct nicpf *nic)
+{
+	int i;
+	u32 vf_count;
+
+	bp_enable(nic, 0);
+	bp_enable(nic, 1);
+	vf_count = PF_END;
+	for (i = 0; i < vf_count; i++) {
+		if (i < 64)
+			nic->vnic_intf_map[i]		= 0;
+		else
+			nic->vnic_intf_map[i]		= 1;
+	}
+
+	set_bp_id(nic, 0, 1);
+	set_bp_id(nic, 1, 1);
+
+	nic->num_vf_en = vf_count;
+}
+
+static inline bool firmware_image_available(const struct firmware **fw,
+					    struct device *dev)
+{
+	int ret = 0;
+
+	ret = request_firmware(fw, FW_NAME, dev);
+	if (ret) {
+		dev_err(dev, "firmware file %s not found\n", FW_NAME);
+		dev_err(dev, "Fall back to backward compatible mode\n");
+		return false;
+	}
+
+	return true;
+}
+
+static int tns_init_done;
+
+void nic_enable_valid_vf(int max_vf_cnt)
+{
+	if (veb_enabled > (max_vf_cnt - 1)) {
+		veb_enabled = max_vf_cnt - 1;
+		pr_info("Number of VF's per physical port set to %d\n",
+			veb_enabled);
+		num_vfs = veb_enabled;
+	}
+}
+
 static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct device *dev = &pdev->dev;
 	struct nicpf *nic;
 	int    err;
+	const struct firmware *fw;
 
 	BUILD_BUG_ON(sizeof(union nic_mbx) > 16);
 
@@ -1319,6 +1853,19 @@ static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_release_regions;
 	}
 
+	if (veb_enabled && !tns_init_done) {
+		u16 sdevid;
+
+		pci_read_config_word(nic->pdev, PCI_SUBSYSTEM_ID, &sdevid);
+		if (sdevid == PCI_SUBSYS_DEVID_88XX_NIC_PF &&
+		    firmware_image_available(&fw, dev)) {
+			pr_info("Number Of VF's %d enabled per physical port\n",
+				num_vfs);
+		} else {
+			veb_enabled = 0;
+			num_vfs = 0;
+		}
+	}
 	nic->node = nic_get_node_id(pdev);
 
 	/* Initialize hardware */
@@ -1326,13 +1873,48 @@ static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (err)
 		goto err_release_regions;
 
-	nic_set_lmac_vf_mapping(nic);
+	if (veb_enabled) {
+		nic_set_pf_vf_mapping(nic->node);
+		/* init TNS function pointers */
+		set_tns_config(nic);
+	} else {
+		nic_set_lmac_vf_mapping(nic);
+	}
 
 	/* Register interrupts */
 	err = nic_register_interrupts(nic);
 	if (err)
 		goto err_release_regions;
 
+	if (veb_enabled) {
+		int i;
+
+		for (i = 0; i < TNS_MAX_LMAC; i++)
+			link_lmac[nic->node][i] = 0;
+
+		spin_lock_init(&nic->vf_mbx_msg_lock);
+
+		nic->notification_msg = alloc_workqueue("notification_work",
+						WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
+		if (!nic->notification_msg) {
+			err = -ENOMEM;
+			goto err_unregister_interrupts;
+		}
+		INIT_DELAYED_WORK(&nic->notification_dwork, send_notifications);
+		if (!tns_init_done) {
+			if (tns_init(fw, dev)) {
+				dev_err(dev, "Failed to init filter block\n");
+				err = -ENODEV;
+				goto err_unregister_interrupts;
+			}
+			tns_init_done = 1;
+			if (pf_filter_init()) {
+				pr_info("Failed to configure HW filter\n");
+				goto err_unregister_interrupts;
+			}
+		}
+	}
+
 	/* Configure SRIOV */
 	err = nic_sriov_init(pdev, nic);
 	if (err)
@@ -1356,6 +1938,10 @@ static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		pci_disable_sriov(pdev);
 err_unregister_interrupts:
 	nic_unregister_interrupts(nic);
+	if (veb_enabled && nic->notification_msg) {
+		cancel_delayed_work_sync(&nic->notification_dwork);
+		destroy_workqueue(nic->notification_msg);
+	}
 err_release_regions:
 	pci_release_regions(pdev);
 err_disable_device:
@@ -1379,10 +1965,14 @@ static void nic_remove(struct pci_dev *pdev)
 		cancel_delayed_work_sync(&nic->dwork);
 		destroy_workqueue(nic->check_link);
 	}
-
 	nic_unregister_interrupts(nic);
 	pci_release_regions(pdev);
 
+	if (veb_enabled && nic->notification_msg) {
+		cancel_delayed_work_sync(&nic->notification_dwork);
+		destroy_workqueue(nic->notification_msg);
+	}
+
 	nic_free_lmacmem(nic);
 	devm_kfree(&pdev->dev, nic->hw);
 	devm_kfree(&pdev->dev, nic);
@@ -1402,11 +1992,20 @@ static int __init nic_init_module(void)
 {
 	pr_info("%s, ver %s\n", DRV_NAME, DRV_VERSION);
 
+	veb_enabled = num_vfs;
+	if (veb_enabled)
+		nic_init_pf_vf_mapping();
+
 	return pci_register_driver(&nic_driver);
 }
 
 static void __exit nic_cleanup_module(void)
 {
+	if (veb_enabled) {
+		tns_init_done = 0;
+		tns_exit();
+	}
+
 	pci_unregister_driver(&nic_driver);
 }
 
-- 
1.8.3.1


* [RFC PATCH 2/7] VF driver changes to enable hooks to get kernel notifications
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
  2016-12-21  8:46 ` [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes work in backward compatibility mode Enable required things in Makefile Enable LZ4 dependency inside config file Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
  2016-12-21  8:46 ` [RFC PATCH 3/7] Enable pause frame support Satha Koteswara Rao
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/thunder/nicvf_main.c | 579 ++++++++++++++++++++++-
 1 file changed, 565 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index 8a37012..8f00bc7 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -52,6 +52,11 @@
 MODULE_VERSION(DRV_VERSION);
 MODULE_DEVICE_TABLE(pci, nicvf_id_table);
 
+static int veb_enabled;
+
+int uc_mc_list;
+module_param(uc_mc_list, int, 0644);
+
 static int debug = 0x00;
 module_param(debug, int, 0644);
 MODULE_PARM_DESC(debug, "Debug message level bitmap");
@@ -61,6 +66,132 @@
 MODULE_PARM_DESC(cpi_alg,
 		 "PFC algorithm (0=none, 1=VLAN, 2=VLAN16, 3=IP Diffserv)");
 
+/* Initialize the Shadow List */
+void nicvf_shadow_list_init(struct netdev_hw_addr_list *list)
+{
+	INIT_LIST_HEAD(&list->list);
+	list->count = 0;
+}
+
+/* Set the synced flag on all entries in the list */
+void nicvf_shadow_list_setsync(struct netdev_hw_addr_list *list, int sync)
+{
+	struct netdev_hw_addr *ha, *tmp;
+
+	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+		ha->synced = sync;
+	}
+}
+
+/*Flush the entire list */
+void nicvf_shadow_list_flush(struct netdev_hw_addr_list *list)
+{
+	struct netdev_hw_addr *ha, *tmp;
+
+	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+		list_del(&ha->list);
+		kfree(ha);
+	}
+	list->count = 0;
+}
+
+/*Return the number of items in the list */
+int nicvf_shadow_list_count(struct netdev_hw_addr_list *list)
+{
+	return list->count;
+}
+
+/*Check if the list is empty */
+int nicvf_shadow_list_empty(struct netdev_hw_addr_list *list)
+{
+	return (list->count == 0);
+}
+
+/* Add item to list */
+int nicvf_shadow_list_add(struct netdev_hw_addr_list *list, unsigned char *addr)
+{
+	struct netdev_hw_addr *ha;
+	int alloc_size;
+
+	alloc_size = sizeof(*ha);
+	ha = kmalloc(alloc_size, GFP_ATOMIC);
+	if (!ha)
+		return -ENOMEM;
+	ether_addr_copy(ha->addr, addr);
+	ha->synced = 0;
+	list_add_tail(&ha->list, &list->list);
+	list->count++;
+	return 0;
+}
+
+/* Delete item in the list given the address */
+void nicvf_shadow_list_del_ha(struct netdev_hw_addr_list *list,
+			      struct netdev_hw_addr *ha)
+{
+	list_del(&ha->list);
+	kfree(ha);
+	list->count--;
+}
+
+/* Delete item in list by address */
+int nicvf_shadow_list_del(struct netdev_hw_addr_list *list, unsigned char *addr)
+{
+	struct netdev_hw_addr *ha, *tmp;
+
+	list_for_each_entry_safe(ha, tmp, &list->list, list)
+		if (ether_addr_equal(ha->addr, addr))
+			nicvf_shadow_list_del_ha(list, ha);
+
+	return -ENOENT;
+}
+
+/* Delete the addresses that are not in the netdev list and send delete
+ * notification
+ */
+int nicvf_shadow_list_delsync(struct netdev_hw_addr_list *list,
+			      struct nicvf *nic, int addr_type)
+{
+	int is_modified = 0;
+	union nic_mbx mbx = {};
+	struct netdev_hw_addr *ha, *tmp;
+
+	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+		if (ha->synced == 1) {
+			if (!uc_mc_list) {
+				mbx.msg.msg = NIC_MBOX_MSG_UC_MC;
+				mbx.uc_mc_cfg.vf_id = nic->vf_id;
+				mbx.uc_mc_cfg.addr_type = addr_type;
+				mbx.uc_mc_cfg.is_flush = 0;
+				mbx.uc_mc_cfg.is_add = 0;
+				ether_addr_copy(mbx.uc_mc_cfg.mac_addr,
+						ha->addr);
+				if (nicvf_send_msg_to_pf(nic, &mbx)) {
+					netdev_err(nic->netdev,
+						   "PF not respond to MSG_UC_MC\n");
+				}
+			}
+			is_modified = 1;
+			nicvf_shadow_list_del_ha(list, ha);
+		}
+	}
+	return is_modified;
+}
+
+/* Check if an entry with the mac address exists in the list */
+int nicvf_shadow_list_find(struct netdev_hw_addr_list *list,
+			   unsigned char *addr)
+{
+	struct netdev_hw_addr *ha;
+
+	list_for_each_entry(ha, &list->list, list) {
+		if (ether_addr_equal(ha->addr, addr)) {
+			ha->synced = 0;
+			return 0;
+		}
+	}
+	return -ENOENT;
+}
+
 static inline u8 nicvf_netdev_qidx(struct nicvf *nic, u8 qidx)
 {
 	if (nic->sqs_mode)
@@ -113,22 +244,198 @@ static void nicvf_write_to_mbx(struct nicvf *nic, union nic_mbx *mbx)
 	nicvf_reg_write(nic, NIC_VF_PF_MAILBOX_0_1 + 8, msg[1]);
 }
 
+bool pf_ack_required(struct nicvf *nic, union nic_mbx *mbx)
+{
+	if (mbx->msg.msg == NIC_MBOX_MSG_PROMISC ||
+	    !nic->wait_for_ack)
+		return false;
+
+	return true;
+}
+
+void submit_uc_mc_mbox_msg(struct nicvf *nic, int vf, int flush, int addr_type,
+			   u8 *mac)
+{
+	union nic_mbx mbx = {};
+
+	mbx.msg.msg = NIC_MBOX_MSG_UC_MC;
+	mbx.uc_mc_cfg.vf_id = vf;
+	mbx.uc_mc_cfg.addr_type = addr_type;
+	mbx.uc_mc_cfg.is_flush = flush;
+	mbx.uc_mc_cfg.is_add = !flush;
+	if (mac)
+		ether_addr_copy(mbx.uc_mc_cfg.mac_addr, mac);
+
+	if (nicvf_send_msg_to_pf(nic, &mbx) == -EBUSY) {
+		netdev_err(nic->netdev,
+			   "PF didn't respond to MSG_UC_MC flush\n");
+	}
+}
+
+void send_uc_mc_msg(struct work_struct *work)
+{
+	struct nicvf *nic = container_of(work, struct nicvf, dwork.work);
+	struct net_device *netdev = nic->netdev;
+	union nic_mbx mbx = {};
+	int is_modified1 = 0;
+	int is_modified2 = 0;
+
+	if (nic->send_op_link_status) {
+		mbx.msg.msg = nic->link_up ? NIC_MBOX_MSG_OP_UP :
+						NIC_MBOX_MSG_OP_DOWN;
+		if (nicvf_send_msg_to_pf(nic, &mbx)) {
+			netdev_err(nic->netdev,
+				   "PF not respond to msg %d\n", mbx.msg.msg);
+		}
+		nic->send_op_link_status = false;
+		return;
+	}
+
+	/* If the netdev list is empty */
+	if (netdev_uc_empty(netdev)) {
+		/* If shadow list is not empty */
+		if (!nicvf_shadow_list_empty(&nic->uc_shadow)) {
+			/* send uc flush notification */
+			nicvf_shadow_list_flush(&nic->uc_shadow);
+			submit_uc_mc_mbox_msg(nic, nic->vf_id, 1, 0, NULL);
+		}
+	} else {
+		/* If shadow list is empty add all and notify */
+		if (nicvf_shadow_list_empty(&nic->uc_shadow)) {
+			struct netdev_hw_addr *ha;
+
+			netdev_for_each_uc_addr(ha, netdev) {
+				nicvf_shadow_list_add(&nic->uc_shadow,
+						      ha->addr);
+				submit_uc_mc_mbox_msg(nic, nic->vf_id, 0, 0,
+						      ha->addr);
+			}
+		} else {
+			struct netdev_hw_addr *ha;
+
+			nicvf_shadow_list_setsync(&nic->uc_shadow, 1);
+			/* ADD the entries which are present in netdev list
+			 * and not present in shadow list
+			 */
+			netdev_for_each_uc_addr(ha, netdev) {
+				if (nicvf_shadow_list_find(&nic->uc_shadow,
+							   ha->addr)) {
+					is_modified1 = 1;
+					nicvf_shadow_list_add(&nic->uc_shadow,
+							      ha->addr);
+					if (uc_mc_list)
+						continue;
+					submit_uc_mc_mbox_msg(nic, nic->vf_id,
+							      0, 0, ha->addr);
+				}
+			}
+			/* Delete items that are not present in netdev list and
+			 *  present in shadow list
+			 */
+			is_modified2 = nicvf_shadow_list_delsync(
+					&nic->uc_shadow, nic, 0);
+			if (uc_mc_list && (is_modified1 || is_modified2)) {
+				/* Now the shadow list is updated,
+				 * send the entire list
+				 */
+				netdev_for_each_uc_addr(ha, netdev)
+					submit_uc_mc_mbox_msg(nic, nic->vf_id,
+							      0, 0, ha->addr);
+			}
+		}
+	}
+
+	is_modified1 = 0;
+	is_modified2 = 0;
+	if (netdev_mc_empty(netdev)) { // If the netdev list is empty
+		/* If shadow list is not empty */
+		if (!nicvf_shadow_list_empty(&nic->mc_shadow)) {
+			/* send mc flush notification */
+			nicvf_shadow_list_flush(&nic->mc_shadow);
+			submit_uc_mc_mbox_msg(nic, nic->vf_id, 1, 1, NULL);
+		}
+	} else {
+		/* If shadow list is empty add all and notify */
+		if (nicvf_shadow_list_empty(&nic->mc_shadow)) {
+			struct netdev_hw_addr *ha;
+
+			netdev_for_each_mc_addr(ha, netdev) {
+				nicvf_shadow_list_add(&nic->mc_shadow,
+						      ha->addr);
+				submit_uc_mc_mbox_msg(nic, nic->vf_id, 0, 1,
+						      ha->addr);
+			}
+		} else {
+			struct netdev_hw_addr *ha;
+
+			nicvf_shadow_list_setsync(&nic->mc_shadow, 1);
+			/* ADD the entries which are present in netdev list and
+			 * not present in shadow list
+			 */
+			netdev_for_each_mc_addr(ha, netdev) {
+				if (nicvf_shadow_list_find(&nic->mc_shadow,
+							   ha->addr)) {
+					is_modified1 = 1;
+					nicvf_shadow_list_add(&nic->mc_shadow,
+							      ha->addr);
+					if (!uc_mc_list)
+						submit_uc_mc_mbox_msg(
+							nic, nic->vf_id, 0, 1,
+							ha->addr);
+				}
+			}
+			/* Delete items that are not present in netdev list and
+			 * present in shadow list
+			 */
+			is_modified2 = nicvf_shadow_list_delsync(
+					&nic->mc_shadow, nic, 1);
+			if (uc_mc_list && (is_modified1 || is_modified2)) {
+				/* Now the shadow list is updated, send the
+				 * entire list
+				 */
+				netdev_for_each_mc_addr(ha, netdev)
+					submit_uc_mc_mbox_msg(nic, nic->vf_id,
+							      0, 1, ha->addr);
+			}
+		}
+	}
+}
+
 int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
 {
 	int timeout = NIC_MBOX_MSG_TIMEOUT;
 	int sleep = 10;
 
+	if (nic->pf_ack_waiting) {
+		timeout += 20;
+		while (nic->pf_ack_waiting) {
+			msleep(sleep);
+			if (!timeout)
+				break;
+			timeout -= sleep;
+		}
+		timeout = NIC_MBOX_MSG_TIMEOUT;
+	}
 	nic->pf_acked = false;
 	nic->pf_nacked = false;
+	nic->pf_ack_waiting = true;
 
 	nicvf_write_to_mbx(nic, mbx);
 
+	if (!pf_ack_required(nic, mbx)) {
+		nic->pf_ack_waiting = false;
+		nic->pf_acked = true;
+		nic->pf_nacked = true;
+		return 0;
+	}
 	/* Wait for previous message to be acked, timeout 2sec */
 	while (!nic->pf_acked) {
 		if (nic->pf_nacked) {
-			netdev_err(nic->netdev,
-				   "PF NACK to mbox msg 0x%02x from VF%d\n",
-				   (mbx->msg.msg & 0xFF), nic->vf_id);
+			if (mbx->msg.msg != NIC_MBOX_MSG_READY)
+				netdev_info(nic->netdev,
+					    "PF NACK to mbox msg 0x%02x from VF%d\n",
+					    (mbx->msg.msg & 0xFF), nic->vf_id);
+			nic->pf_ack_waiting = false;
 			return -EINVAL;
 		}
 		msleep(sleep);
@@ -139,9 +446,11 @@ int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
 			netdev_err(nic->netdev,
 				   "PF didn't ACK to mbox msg 0x%02x from VF%d\n",
 				   (mbx->msg.msg & 0xFF), nic->vf_id);
+			nic->pf_ack_waiting = false;
 			return -EBUSY;
 		}
 	}
+	nic->pf_ack_waiting = false;
 	return 0;
 }
 
@@ -151,9 +460,14 @@ int nicvf_send_msg_to_pf(struct nicvf *nic, union nic_mbx *mbx)
 static int nicvf_check_pf_ready(struct nicvf *nic)
 {
 	union nic_mbx mbx = {};
+	int ret = 0;
 
 	mbx.msg.msg = NIC_MBOX_MSG_READY;
-	if (nicvf_send_msg_to_pf(nic, &mbx)) {
+	ret = nicvf_send_msg_to_pf(nic, &mbx);
+	if (ret == -EINVAL) {
+		/* VF disabled through module parameter */
+		return 0;
+	} else if (ret) {
 		netdev_err(nic->netdev,
 			   "PF didn't respond to READY msg\n");
 		return 0;
@@ -193,12 +507,22 @@ static void  nicvf_handle_mbx_intr(struct nicvf *nic)
 		nic->vf_id = mbx.nic_cfg.vf_id & 0x7F;
 		nic->tns_mode = mbx.nic_cfg.tns_mode & 0x7F;
 		nic->node = mbx.nic_cfg.node_id;
+		nic->true_vf = mbx.nic_cfg.is_pf;
+		if (!veb_enabled)
+			veb_enabled = mbx.nic_cfg.veb_enabled;
+		if (veb_enabled)
+			snprintf(nic->phys_port_name, IFNAMSIZ, "%d %d %d %d",
+				 nic->node, mbx.nic_cfg.bgx_id,
+				 mbx.nic_cfg.lmac, mbx.nic_cfg.chan);
 		if (!nic->set_mac_pending)
 			ether_addr_copy(nic->netdev->dev_addr,
 					mbx.nic_cfg.mac_addr);
 		nic->sqs_mode = mbx.nic_cfg.sqs_mode;
 		nic->loopback_supported = mbx.nic_cfg.loopback_supported;
-		nic->link_up = false;
+		if (veb_enabled)
+			nic->link_up = mbx.nic_cfg.pf_up;
+		else
+			nic->link_up = false;
 		nic->duplex = 0;
 		nic->speed = 0;
 		break;
@@ -208,6 +532,12 @@ static void  nicvf_handle_mbx_intr(struct nicvf *nic)
 	case NIC_MBOX_MSG_NACK:
 		nic->pf_nacked = true;
 		break;
+	case NIC_MBOX_MSG_ADMIN_VLAN:
+		if (mbx.vlan_cfg.vlan_add && nic->admin_vlan_id == -1)
+			nic->admin_vlan_id = mbx.vlan_cfg.vlan_id;
+		else if (!mbx.vlan_cfg.vlan_add)
+			nic->admin_vlan_id = -1;
+		break;
 	case NIC_MBOX_MSG_RSS_SIZE:
 		nic->rss_info.rss_size = mbx.rss_size.ind_tbl_size;
 		nic->pf_acked = true;
@@ -216,16 +546,21 @@ static void  nicvf_handle_mbx_intr(struct nicvf *nic)
 		nicvf_read_bgx_stats(nic, &mbx.bgx_stats);
 		nic->pf_acked = true;
 		break;
-	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+	case NIC_MBOX_MSG_CFG_DONE:
 		nic->pf_acked = true;
 		nic->link_up = mbx.link_status.link_up;
 		nic->duplex = mbx.link_status.duplex;
 		nic->speed = mbx.link_status.speed;
+		break;
+	case NIC_MBOX_MSG_BGX_LINK_CHANGE:
+		nic->link_up = mbx.link_status.link_up;
+		nic->duplex = mbx.link_status.duplex;
+		nic->speed = mbx.link_status.speed;
 		if (nic->link_up) {
 			netdev_info(nic->netdev, "%s: Link is Up %d Mbps %s\n",
 				    nic->netdev->name, nic->speed,
 				    nic->duplex == DUPLEX_FULL ?
-				"Full duplex" : "Half duplex");
+				    "Full duplex" : "Half duplex");
 			netif_carrier_on(nic->netdev);
 			netif_tx_start_all_queues(nic->netdev);
 		} else {
@@ -563,6 +898,14 @@ static inline void nicvf_set_rxhash(struct net_device *netdev,
 	skb_set_hash(skb, hash, hash_type);
 }
 
+static inline bool is_vf_vlan(struct nicvf *nic, u16 vid)
+{
+	if (veb_enabled && ((nic->admin_vlan_id & 0xFFF) == vid))
+		return false;
+
+	return true;
+}
+
 static void nicvf_rcv_pkt_handler(struct net_device *netdev,
 				  struct napi_struct *napi,
 				  struct cqe_rx_t *cqe_rx)
@@ -617,7 +960,8 @@ static void nicvf_rcv_pkt_handler(struct net_device *netdev,
 	skb->protocol = eth_type_trans(skb, netdev);
 
 	/* Check for stripped VLAN */
-	if (cqe_rx->vlan_found && cqe_rx->vlan_stripped)
+	if (cqe_rx->vlan_found && cqe_rx->vlan_stripped &&
+	    is_vf_vlan(nic, (ntohs(cqe_rx->vlan_tci) & 0xFFF)))
 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
 				       ntohs((__force __be16)cqe_rx->vlan_tci));
 
@@ -1151,6 +1495,8 @@ int nicvf_stop(struct net_device *netdev)
 	/* disable mailbox interrupt */
 	nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0);
 
+	/* MBOX interrupts disabled, don't expect any ACKs from PF */
+	nic->wait_for_ack = false;
 	nicvf_unregister_interrupts(nic);
 
 	nicvf_free_cq_poll(nic);
@@ -1182,6 +1528,8 @@ int nicvf_open(struct net_device *netdev)
 
 	netif_carrier_off(netdev);
 
+	/* MBOX interrupts enabled, so wait for ACK from PF */
+	nic->wait_for_ack = true;
 	err = nicvf_register_misc_interrupt(nic);
 	if (err)
 		return err;
@@ -1202,7 +1550,8 @@ int nicvf_open(struct net_device *netdev)
 	}
 
 	/* Check if we got MAC address from PF or else generate a random MAC */
-	if (!nic->sqs_mode && is_zero_ether_addr(netdev->dev_addr)) {
+	if ((veb_enabled || !nic->sqs_mode) &&
+	    is_zero_ether_addr(netdev->dev_addr)) {
 		eth_hw_addr_random(netdev);
 		nicvf_hw_set_mac_addr(nic, netdev);
 	}
@@ -1268,7 +1617,17 @@ int nicvf_open(struct net_device *netdev)
 
 	/* Send VF config done msg to PF */
 	mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE;
-	nicvf_write_to_mbx(nic, &mbx);
+	if (veb_enabled)
+		nicvf_send_msg_to_pf(nic, &mbx);
+	else
+		nicvf_write_to_mbx(nic, &mbx);
+
+	if (veb_enabled && nic->link_up) {
+		nic->send_op_link_status = true;
+		queue_delayed_work(nic->uc_mc_msg, &nic->dwork, 0);
+		netif_carrier_on(netdev);
+		netif_tx_start_all_queues(netdev);
+	}
 
 	return 0;
 cleanup:
@@ -1299,6 +1658,8 @@ static int nicvf_change_mtu(struct net_device *netdev, int new_mtu)
 		return -EINVAL;
 
 	netdev->mtu = new_mtu;
+	if (!nic->link_up)
+		return 0;
 
 	if (!netif_running(netdev))
 		return 0;
@@ -1508,6 +1869,142 @@ static int nicvf_set_features(struct net_device *netdev,
 	return 0;
 }
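+/* Hedged usage note (not part of the original patch): in veb mode the driver
+ * advertises NETIF_F_HW_VLAN_CTAG_FILTER, so the 8021q core invokes
+ * nicvf_vlan_rx_add_vid()/nicvf_vlan_rx_kill_vid() when a VLAN is registered
+ * on the VF, e.g. "ip link add link eth1 name eth1.100 type vlan id 100"
+ * (interface names are illustrative); the VID is then forwarded to the PF as
+ * a NIC_MBOX_MSG_VLAN add/delete request.
+ */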
 
+static int nicvf_vlan_rx_add_vid(struct net_device *netdev,
+				 __always_unused __be16 proto, u16 vid)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+	union nic_mbx mbx = {};
+	int ret = 0;
+
+	if (!veb_enabled)
+		return 0;
+
+	if (nic->admin_vlan_id != -1) {
+		netdev_err(nic->netdev,
+			   "VF %d could not add VLAN %d\n", nic->vf_id, vid);
+		return -1;
+	}
+	mbx.msg.msg = NIC_MBOX_MSG_VLAN;
+	mbx.vlan_cfg.vf_id = nic->vf_id;
+	mbx.vlan_cfg.vlan_id = vid;
+	mbx.vlan_cfg.vlan_add = 1;
+	ret = nicvf_send_msg_to_pf(nic, &mbx);
+	if (ret == -EINVAL) {
+		netdev_err(nic->netdev, "VF %d could not add VLAN %d\n",
+			   nic->vf_id, vid);
+	} else if (ret == -EBUSY) {
+		netdev_err(nic->netdev,
+			   "PF didn't respond to VLAN msg VLAN ID: %d VF: %d\n",
+			   vid, nic->vf_id);
+	}
+	return ret;
+}
+
+static int nicvf_vlan_rx_kill_vid(struct net_device *netdev,
+				  __always_unused __be16 proto, u16 vid)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+	union nic_mbx mbx = {};
+
+	if (!veb_enabled)
+		return 0;
+
+	mbx.msg.msg = NIC_MBOX_MSG_VLAN;
+	mbx.vlan_cfg.vf_id = nic->vf_id;
+	mbx.vlan_cfg.vlan_id = vid;
+	mbx.vlan_cfg.vlan_add = 0;
+	if (nicvf_send_msg_to_pf(nic, &mbx)) {
+		netdev_err(nic->netdev,
+			   "PF didn't respond to VLAN msg VLAN ID: %d VF: %d\n",
+			   vid, nic->vf_id);
+		return -1;
+	}
+	return 0;
+}
+
+void nicvf_set_rx_mode(struct net_device *netdev)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+
+	if (!veb_enabled)
+		return;
+
+	queue_delayed_work(nic->uc_mc_msg, &nic->dwork, 0);
+}
+
+void nicvf_change_rx_flags(struct net_device *netdev, int flags)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+	union nic_mbx mbx = {};
+
+	if (!veb_enabled)
+		return;
+
+	mbx.msg.msg = NIC_MBOX_MSG_PROMISC;
+	mbx.promisc_cfg.vf_id = nic->vf_id;
+	mbx.promisc_cfg.on = netdev->flags & IFF_PROMISC;
+	if (nicvf_send_msg_to_pf(nic, &mbx)) {
+		netdev_err(nic->netdev,
+			   "PF didn't respond to PROMISC Mode\n");
+		return;
+	}
+}
+
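+/* Hedged usage note (not part of the original patch): .ndo_set_vf_vlan is
+ * reached through the standard rtnetlink path, e.g.
+ * "ip link set <netdev> vf 2 vlan 100" (names and numbers illustrative) on
+ * the netdev exposing this op, which lands here with vf = 2, vlan = 100,
+ * qos = 0 and is forwarded to the PF as a NIC_MBOX_MSG_ADMIN_VLAN add
+ * request; "vf 2 vlan 0" clears it since is_add becomes 0.
+ */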
+int nicvf_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos,
+		      __be16 vlan_proto)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+	int is_add = (vlan | qos);
+	union nic_mbx mbx = {};
+	int ret = 0;
+
+	if (!veb_enabled)
+		return 0;
+
+	mbx.msg.msg = NIC_MBOX_MSG_ADMIN_VLAN;
+	mbx.vlan_cfg.vf_id   = vf;
+	mbx.vlan_cfg.vlan_add = is_add;
+	mbx.vlan_cfg.vlan_id = vlan;
+
+	ret = nicvf_send_msg_to_pf(nic, &mbx);
+	if (ret == -EINVAL) {
+		netdev_err(nic->netdev, "ADMIN VLAN %s failed For Vf %d\n",
+			   is_add ? "Add" : "Delete", vf);
+	} else if (ret == -EBUSY) {
+		netdev_err(nic->netdev,
+			   "PF didn't respond to ADMIN VLAN UPDATE msg\n");
+	}
+	return ret;
+}
+
+static int nicvf_get_phys_port_name(struct net_device *netdev, char *name,
+				    size_t len)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+	int plen;
+
+	plen = snprintf(name, len, "%s", nic->phys_port_name);
+
+	if (plen >= len)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int nicvf_get_phys_port_id(struct net_device *netdev,
+				  struct netdev_phys_item_id *ppid)
+{
+	struct nicvf *nic = netdev_priv(netdev);
+
+	if (veb_enabled && !nic->true_vf)
+		return -EOPNOTSUPP;
+
+	ppid->id_len = min_t(int, sizeof(netdev->dev_addr), sizeof(ppid->id));
+	memcpy(ppid->id, netdev->dev_addr, ppid->id_len);
+
+	return 0;
+}
+
 static const struct net_device_ops nicvf_netdev_ops = {
 	.ndo_open		= nicvf_open,
 	.ndo_stop		= nicvf_stop,
@@ -1518,6 +2015,13 @@ static int nicvf_set_features(struct net_device *netdev,
 	.ndo_tx_timeout         = nicvf_tx_timeout,
 	.ndo_fix_features       = nicvf_fix_features,
 	.ndo_set_features       = nicvf_set_features,
+	.ndo_vlan_rx_add_vid    = nicvf_vlan_rx_add_vid,
+	.ndo_vlan_rx_kill_vid   = nicvf_vlan_rx_kill_vid,
+	.ndo_set_rx_mode        = nicvf_set_rx_mode,
+	.ndo_change_rx_flags    = nicvf_change_rx_flags,
+	.ndo_set_vf_vlan        = nicvf_set_vf_vlan,
+	.ndo_get_phys_port_name = nicvf_get_phys_port_name,
+	.ndo_get_phys_port_id   = nicvf_get_phys_port_id,
 };
 
 static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
@@ -1576,6 +2080,7 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	nic->pdev = pdev;
 	nic->pnicvf = nic;
 	nic->max_queues = qcount;
+	nic->pf_ack_waiting = false;
 
 	/* MAP VF's configuration registers */
 	nic->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
@@ -1595,6 +2100,8 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (err)
 		goto err_free_netdev;
 
+	/* MBOX interrupts enabled, so wait for ACK from PF */
+	nic->wait_for_ack = true;
 	/* Check if PF is alive and get MAC address for this VF */
 	err = nicvf_register_misc_interrupt(nic);
 	if (err)
@@ -1619,12 +2126,13 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	netdev->hw_features = (NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_SG |
 			       NETIF_F_TSO | NETIF_F_GRO |
-			       NETIF_F_HW_VLAN_CTAG_RX);
-
-	netdev->hw_features |= NETIF_F_RXHASH;
+			       NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_RXHASH);
 
 	netdev->features |= netdev->hw_features;
-	netdev->hw_features |= NETIF_F_LOOPBACK;
+	if (veb_enabled)
+		netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+	else
+		netdev->hw_features |= NETIF_F_LOOPBACK;
 
 	netdev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
 
@@ -1642,6 +2150,37 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	nic->msg_enable = debug;
 
 	nicvf_set_ethtool_ops(netdev);
+	if (veb_enabled) {
+		int bgx, lmac, chan, node, ret;
+
+		ret = sscanf(nic->phys_port_name, "%d %d %d %d", &node, &bgx,
+			     &lmac, &chan);
+		if (nic->true_vf) {
+			dev_info(dev,
+				 "interface %s enabled with node %d VF %d channel %d directly attached to physical port n%d-bgx-%d-%d\n",
+				 netdev->name, node, nic->vf_id, chan, node,
+				 bgx, lmac);
+		} else {
+			dev_info(dev,
+				 "interface %s enabled with node %d VF %d channel %d attached to physical port n%d-bgx-%d-%d\n",
+				 netdev->name, node, nic->vf_id, chan, node,
+				 bgx, lmac);
+		}
+		snprintf(nic->phys_port_name, IFNAMSIZ, "n%d-bgx-%d-%d",
+			 node, bgx, lmac);
+		nicvf_shadow_list_init(&nic->uc_shadow);
+		nicvf_shadow_list_init(&nic->mc_shadow);
+
+		nic->admin_vlan_id = -1;
+		nic->send_op_link_status = false;
+		nic->uc_mc_msg = alloc_workqueue("uc_mc_msg", WQ_UNBOUND |
+						 WQ_MEM_RECLAIM, 1);
+		if (!nic->uc_mc_msg)
+			return -ENOMEM;
+		INIT_DELAYED_WORK(&nic->dwork, send_uc_mc_msg);
+	} else {
+		strlcpy(nic->phys_port_name, netdev->name, IFNAMSIZ);
+	}
 
 	return 0;
 
@@ -1669,6 +2208,12 @@ static void nicvf_remove(struct pci_dev *pdev)
 		return;
 
 	nic = netdev_priv(netdev);
+	if (veb_enabled) {
+		if (nicvf_shadow_list_count(&nic->uc_shadow))
+			nicvf_shadow_list_flush(&nic->uc_shadow);
+		if (nicvf_shadow_list_count(&nic->mc_shadow))
+			nicvf_shadow_list_flush(&nic->mc_shadow);
+	}
 	pnetdev = nic->pnicvf->netdev;
 
 	/* Check if this Qset is assigned to different VF.
@@ -1678,6 +2223,12 @@ static void nicvf_remove(struct pci_dev *pdev)
 		unregister_netdev(pnetdev);
 	nicvf_unregister_interrupts(nic);
 	pci_set_drvdata(pdev, NULL);
+	if (veb_enabled) {
+		if (nic->uc_mc_msg) {
+			cancel_delayed_work_sync(&nic->dwork);
+			destroy_workqueue(nic->uc_mc_msg);
+		}
+	}
 	if (nic->drv_stats)
 		free_percpu(nic->drv_stats);
 	free_netdev(netdev);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 3/7] Enable pause frame support
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
  2016-12-21  8:46 ` [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file Satha Koteswara Rao
  2016-12-21  8:46 ` [RFC PATCH 2/7] VF driver changes to enable hooks to get kernel notifications Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
       [not found]   ` <DM5PR07MB2889471E0668C95BD2A266709E930@DM5PR07MB2889.namprd07.prod.outlook.com>
  2016-12-21  8:46 ` [RFC PATCH 4/7] HW Filter Initialization code and register access APIs Satha Koteswara Rao
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/thunder/thunder_bgx.c | 25 +++++++++++++++++++++++
 drivers/net/ethernet/cavium/thunder/thunder_bgx.h |  7 +++++++
 2 files changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
index 050e21f..92d7e04 100644
--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
@@ -121,6 +121,31 @@ static int bgx_poll_reg(struct bgx *bgx, u8 lmac, u64 reg, u64 mask, bool zero)
 	return 1;
 }
 
+void enable_pause_frames(int node, int bgx_idx, int lmac)
+{
+	u64 reg_value = 0;
+	struct bgx *bgx = bgx_vnic[(node * MAX_BGX_PER_NODE) + bgx_idx];
+
+	reg_value =  bgx_reg_read(bgx, lmac, BGX_SMUX_TX_CTL);
+	/* Set L2P_BP_CONV in BGX()_SMU()_TX_CTL */
+	if (!(reg_value & L2P_BP_CONV))
+		bgx_reg_write(bgx, lmac, BGX_SMUX_TX_CTL,
+			      (reg_value | (L2P_BP_CONV)));
+
+	reg_value =  bgx_reg_read(bgx, lmac, BGX_SMUX_HG2_CTL);
+	/* Clear if BGX()_SMU()_HG2_CONTROL[HG2TX_EN] is set */
+	if (reg_value & SMUX_HG2_CTL_HG2TX_EN)
+		bgx_reg_write(bgx, lmac, BGX_SMUX_HG2_CTL,
+			      (reg_value & (~SMUX_HG2_CTL_HG2TX_EN)));
+
+	reg_value =  bgx_reg_read(bgx, lmac, BGX_SMUX_CBFC_CTL);
+	/* Clear if BGX()_SMU()_CBFC_CTL[TX_EN] is set */
+	if (reg_value & CBFC_CTL_TX_EN)
+		bgx_reg_write(bgx, lmac, BGX_SMUX_CBFC_CTL,
+			      (reg_value & (~CBFC_CTL_TX_EN)));
+}
+EXPORT_SYMBOL(enable_pause_frames);
+
 /* Return number of BGX present in HW */
 unsigned bgx_get_map(int node)
 {
diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.h b/drivers/net/ethernet/cavium/thunder/thunder_bgx.h
index 01cc7c8..5b57bd1 100644
--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.h
+++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.h
@@ -131,6 +131,11 @@
 #define BGX_SMUX_TX_CTL			0x20178
 #define  SMU_TX_CTL_DIC_EN			BIT_ULL(0)
 #define  SMU_TX_CTL_UNI_EN			BIT_ULL(1)
+#define  L2P_BP_CONV				BIT_ULL(7)
+#define  BGX_SMUX_CBFC_CTL		0x20218
+#define  CBFC_CTL_TX_EN				BIT_ULL(1)
+#define  BGX_SMUX_HG2_CTL		0x20210
+#define SMUX_HG2_CTL_HG2TX_EN			BIT_ULL(18)
 #define  SMU_TX_CTL_LNK_STATUS			(3ull << 4)
 #define BGX_SMUX_TX_THRESH		0x20180
 #define BGX_SMUX_CTL			0x20200
@@ -212,6 +217,8 @@ void bgx_lmac_internal_loopback(int node, int bgx_idx,
 
 u64 bgx_get_rx_stats(int node, int bgx_idx, int lmac, int idx);
 u64 bgx_get_tx_stats(int node, int bgx_idx, int lmac, int idx);
+void enable_pause_frames(int node, int bgx_idx, int lmac);
+
 #define BGX_RX_STATS_COUNT 11
 #define BGX_TX_STATS_COUNT 18
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 4/7] HW Filter Initialization code and register access APIs
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
                   ` (2 preceding siblings ...)
  2016-12-21  8:46 ` [RFC PATCH 3/7] Enable pause frame support Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
  2016-12-21 12:36   ` Sunil Kovvuri
  2016-12-21  8:46 ` [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's Satha Koteswara Rao
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/thunder/pf_reg.c | 660 +++++++++++++++++++++++++++
 1 file changed, 660 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_reg.c

diff --git a/drivers/net/ethernet/cavium/thunder/pf_reg.c b/drivers/net/ethernet/cavium/thunder/pf_reg.c
new file mode 100644
index 0000000..1f95c7f
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/pf_reg.c
@@ -0,0 +1,660 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/version.h>
+#include <linux/proc_fs.h>
+#include <linux/mman.h>
+#include <linux/uaccess.h>
+#include <linux/delay.h>
+#include <linux/cdev.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/firmware.h>
+#include "pf_globals.h"
+#include "pf_locals.h"
+#include "tbl_access.h"
+#include "linux/lz4.h"
+
+struct tns_table_s tbl_info[TNS_MAX_TABLE];
+
+#define TNS_TDMA_SST_ACC_CMD_ADDR	0x0000842000000270ull
+
+#define BAR0_START 0x842000000000
+#define BAR0_END   0x84200000FFFF
+#define BAR0_SIZE  (64 * 1024)
+#define BAR2_START 0x842040000000
+#define BAR2_END   0x84207FFFFFFF
+#define BAR2_SIZE  (1024 * 1024 * 1024)
+
+#define NODE1_BAR0_START 0x942000000000
+#define NODE1_BAR0_END   0x94200000FFFF
+#define NODE1_BAR0_SIZE  (64 * 1024)
+#define NODE1_BAR2_START 0x942040000000
+#define NODE1_BAR2_END   0x94207FFFFFFF
+#define NODE1_BAR2_SIZE  (1024 * 1024 * 1024)
+/* Allow a max of 4 chunks for the Indirect Read/Write */
+#define MAX_SIZE (64 * 4)
+#define CHUNK_SIZE (64)
+/* To protect register access */
+spinlock_t pf_reg_lock;
+
+u64 iomem0;
+u64 iomem2;
+u8 tns_enabled;
+u64 node1_iomem0;
+u64 node1_iomem2;
+u8 node1_tns;
+int n1_tns;
+
+int tns_write_register_indirect(int node_id, u64 address, u8 size,
+				u8 *kern_buffer)
+{
+	union tns_tdma_sst_acc_cmd acccmd;
+	union tns_tdma_sst_acc_stat_t accstat;
+	union tns_acc_data data;
+	int i, j, w = 0;
+	int cnt = 0;
+	u32 *dataw = NULL;
+	int temp = 0;
+	int k = 0;
+	int chunks = 0;
+	u64 acccmd_address;
+	u64 lmem2 = 0, lmem0 = 0;
+
+	if (size == 0 || !kern_buffer) {
+		filter_dbg(FERR, "%s data size cannot be zero\n", __func__);
+		return TNS_ERROR_INVALID_ARG;
+	}
+	if (size > MAX_SIZE) {
+		filter_dbg(FERR, "%s Max allowed size exceeded\n", __func__);
+		return TNS_ERROR_DATA_TOO_LARGE;
+	}
+	if (node_id) {
+		lmem0 = node1_iomem0;
+		lmem2 = node1_iomem2;
+	} else {
+		lmem0 = iomem0;
+		lmem2 = iomem2;
+	}
+
+	chunks = ((size + (CHUNK_SIZE - 1)) / CHUNK_SIZE);
+	acccmd_address = (address & 0x00000000ffffffff);
+	spin_lock_bh(&pf_reg_lock);
+
+	for (k = 0; k < chunks; k++) {
+		/* Should never happen */
+		if (size < 0) {
+			filter_dbg(FERR, "%s size mismatch [CHUNK %d]\n",
+				   __func__, k);
+			break;
+		}
+		temp = (size > CHUNK_SIZE) ? CHUNK_SIZE : size;
+		dataw = (u32 *)(kern_buffer + (k * CHUNK_SIZE));
+		cnt = ((temp + 3) / 4);
+		data.u = 0ULL;
+		for (j = 0, i = 0; i < cnt; i++) {
+			/* Odd words go in the upper 32 bits of the data
+			 * register
+			 */
+			if (i & 1) {
+				data.s.upper32 = dataw[i];
+				writeq_relaxed(data.u, (void *)(lmem0 +
+					       TNS_TDMA_SST_ACC_WDATX(j)));
+				data.u = 0ULL;
+				j++; /* Advance to the next data word */
+				w = 0;
+			} else {
+				/* Lower 32 bits contain words 0, 2, 4, etc. */
+				data.s.lower32 = dataw[i];
+				w = 1;
+			}
+		}
+
+		/* If the last word was a partial (< 64 bits) then
+		 * see if we need to write it.
+		 */
+		if (w)
+			writeq_relaxed(data.u, (void *)(lmem0 +
+				       TNS_TDMA_SST_ACC_WDATX(j)));
+
+		acccmd.u = 0ULL;
+		acccmd.s.go = 1; /* Cleared once the request is serviced */
+		acccmd.s.size = cnt;
+		acccmd.s.addr = (acccmd_address >> 2);
+		writeq_relaxed(acccmd.u, (void *)(lmem0 +
+			       TDMA_SST_ACC_CMD));
+		accstat.u = 0ULL;
+
+		while (!accstat.s.cmd_done && !accstat.s.error)
+			accstat.u = readq_relaxed((void *)(lmem0 +
+					  TDMA_SST_ACC_STAT));
+
+		if (accstat.s.error) {
+			data.u = readq_relaxed((void *)(lmem2 +
+					       TDMA_NB_INT_STAT));
+			filter_dbg(FERR, "%s Reading data from ", __func__);
+			filter_dbg(FERR, "0x%0lx chunk %d failed 0x%0lx",
+				   (unsigned long)address, k,
+				   (unsigned long)data.u);
+			spin_unlock_bh(&pf_reg_lock);
+			kfree(kern_buffer);
+			return TNS_ERROR_INDIRECT_WRITE;
+		}
+		/* Calculate the next offset to write */
+		acccmd_address = acccmd_address + CHUNK_SIZE;
+		size -= CHUNK_SIZE;
+	}
+	spin_unlock_bh(&pf_reg_lock);
+
+	return 0;
+}
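+
+/* Illustrative usage (an assumption, not from the original patch): writing a
+ * 96-byte firmware table row with tns_write_register_indirect(0, addr, 96,
+ * buf) splits the buffer into a 64-byte and a 32-byte chunk, packs each chunk
+ * into the ACC_WDATX registers as 32-bit words and issues one ACC_CMD per
+ * chunk (size = 16 and size = 8 words), polling ACC_STAT in between.
+ */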
+
+int tns_read_register_indirect(int node_id, u64 address, u8 size,
+			       u8 *kern_buffer)
+{
+	union tns_tdma_sst_acc_cmd acccmd;
+	union tns_tdma_sst_acc_stat_t accstat;
+	union tns_acc_data data;
+	int i, j, dcnt;
+	int cnt = 0;
+	u32 *dataw = NULL;
+	int temp = 0;
+	int k = 0;
+	int chunks = 0;
+	u64 acccmd_address;
+	u64 lmem2 = 0, lmem0 = 0;
+
+	if (size == 0 || !kern_buffer) {
+		filter_dbg(FERR, "%s data size cannot be zero\n", __func__);
+		return TNS_ERROR_INVALID_ARG;
+	}
+	if (size > MAX_SIZE) {
+		filter_dbg(FERR, "%s Max allowed size exceeded\n", __func__);
+		return TNS_ERROR_DATA_TOO_LARGE;
+	}
+	if (node_id) {
+		lmem0 = node1_iomem0;
+		lmem2 = node1_iomem2;
+	} else {
+		lmem0 = iomem0;
+		lmem2 = iomem2;
+	}
+
+	chunks = ((size + (CHUNK_SIZE - 1)) / CHUNK_SIZE);
+	acccmd_address = (address & 0x00000000ffffffff);
+	spin_lock_bh(&pf_reg_lock);
+	for (k = 0; k < chunks; k++) {
+		/* This should never happen */
+		if (size < 0) {
+			filter_dbg(FERR, "%s size mismatch [CHUNK:%d]\n",
+				   __func__, k);
+			break;
+		}
+		temp = (size > CHUNK_SIZE) ? CHUNK_SIZE : size;
+		dataw = (u32 *)(kern_buffer + (k * CHUNK_SIZE));
+		cnt = ((temp + 3) / 4);
+		acccmd.u = 0ULL;
+		acccmd.s.op = 1; /* Read operation */
+		acccmd.s.size = cnt;
+		acccmd.s.addr = (acccmd_address >> 2);
+		acccmd.s.go = 1; /* Execute */
+		writeq_relaxed(acccmd.u, (void *)(lmem0 +
+			       TDMA_SST_ACC_CMD));
+		accstat.u = 0ULL;
+
+		while (!accstat.s.cmd_done && !accstat.s.error)
+			accstat.u = readq_relaxed((void *)(lmem0 +
+						  TDMA_SST_ACC_STAT));
+
+		if (accstat.s.error) {
+			data.u = readq_relaxed((void *)(lmem2 +
+					       TDMA_NB_INT_STAT));
+			filter_dbg(FERR, "%s Reading data from", __func__);
+			filter_dbg(FERR, "0x%0lx chunk %d failed 0x%0lx",
+				   (unsigned long)address, k,
+				   (unsigned long)data.u);
+			spin_unlock_bh(&pf_reg_lock);
+			kfree(kern_buffer);
+			return TNS_ERROR_INDIRECT_READ;
+		}
+
+		dcnt = cnt / 2;
+		if (cnt & 1)
+			dcnt++;
+		for (i = 0, j = 0; (j < dcnt) && (i < cnt); j++) {
+			data.u = readq_relaxed((void *)(lmem0 +
+					       TNS_TDMA_SST_ACC_RDATX(j)));
+			dataw[i++] = data.s.lower32;
+			if (i < cnt)
+				dataw[i++] = data.s.upper32;
+		}
+		/* Calculate the next offset to read */
+		acccmd_address = acccmd_address + CHUNK_SIZE;
+		size -= CHUNK_SIZE;
+	}
+	spin_unlock_bh(&pf_reg_lock);
+	return 0;
+}
+
+u64 tns_read_register(u64 start, u64 offset)
+{
+	return readq_relaxed((void *)(start + offset));
+}
+
+void tns_write_register(u64 start, u64 offset, u64 data)
+{
+	writeq_relaxed(data, (void *)(start + offset));
+}
+
+/* Check if TNS is available. If yes return 0 else 1 */
+int is_tns_available(void)
+{
+	union tns_tdma_cap tdma_cap;
+
+	tdma_cap.u = tns_read_register(iomem0, TNS_TDMA_CAP_OFFSET);
+	tns_enabled = tdma_cap.s.switch_capable;
+	/* In multi-node systems, make sure TNS is present on both nodes */
+	if (nr_node_ids > 1) {
+		tdma_cap.u = tns_read_register(node1_iomem0,
+					       TNS_TDMA_CAP_OFFSET);
+		if (tdma_cap.s.switch_capable)
+			n1_tns = 1;
+	}
+	tns_enabled &= tdma_cap.s.switch_capable;
+	return (!tns_enabled);
+}
+
+int bist_error_check(void)
+{
+	int fail = 0, i;
+	u64 bist_stat = 0;
+
+	for (i = 0; i < 12; i++) {
+		bist_stat = tns_read_register(iomem0, (i * 16));
+		if (bist_stat) {
+			filter_dbg(FERR, "TNS BIST%d fail 0x%llx\n",
+				   i, bist_stat);
+			fail = 1;
+		}
+		if (!n1_tns)
+			continue;
+		bist_stat = tns_read_register(node1_iomem0, (i * 16));
+		if (bist_stat) {
+			filter_dbg(FERR, "TNS(N1) BIST%d fail 0x%llx\n",
+				   i, bist_stat);
+			fail = 1;
+		}
+	}
+
+	return fail;
+}
+
+int replay_indirect_trace(int node, u64 *buf_ptr, int idx)
+{
+	union _tns_sst_config cmd = (union _tns_sst_config)(buf_ptr[idx]);
+	int remaining = cmd.cmd.run;
+	u64 io_addr;
+	int word_cnt = cmd.cmd.word_cnt;
+	int size = (word_cnt + 1) / 2;
+	u64 stride = word_cnt;
+	u64 acc_cmd = cmd.copy.do_copy;
+	u64 lmem2 = 0, lmem0 = 0;
+	union tns_tdma_sst_acc_stat_t accstat;
+	union tns_acc_data data;
+
+	if (node) {
+		lmem0 = node1_iomem0;
+		lmem2 = node1_iomem2;
+	} else {
+		lmem0 = iomem0;
+		lmem2 = iomem2;
+	}
+
+	if (word_cnt == 0) {
+		word_cnt = 16;
+		stride = 16;
+		size = 8;
+	} else {
+		/* make stride the next power of 2 */
+		if (cmd.cmd.powerof2stride)
+			while ((stride & (stride - 1)) != 0)
+				stride++;
+	}
+	stride *= 4; /* convert stride from 32-bit words to bytes */
+
+	do {
+		int addr_p = 1;
+		/* extract (big endian) data from the config
+		 * into the data array
+		 */
+		while (size > 0) {
+			io_addr = lmem0 + TDMA_SST_ACC_CMD + addr_p * 16;
+			tns_write_register(io_addr, 0, buf_ptr[idx + size]);
+			addr_p += 1;
+			size--;
+		}
+		tns_write_register((lmem0 + TDMA_SST_ACC_CMD), 0, acc_cmd);
+		/* TNS block registers are accessed indirectly; issue a
+		 * memory barrier between the two writes
+		 */
+		wmb();
+		/* Check for completion */
+		accstat.u = 0ULL;
+		while (!accstat.s.cmd_done && !accstat.s.error)
+			accstat.u = readq_relaxed((void *)(lmem0 +
+							   TDMA_SST_ACC_STAT));
+
+		/* Check for error, and report it */
+		if (accstat.s.error) {
+			filter_dbg(FERR, "%s data from 0x%0llx failed 0x%llx\n",
+				   __func__, acc_cmd, accstat.u);
+			data.u = readq_relaxed((void *)(lmem2 +
+							TDMA_NB_INT_STAT));
+			filter_dbg(FERR, "Status 0x%llx\n", data.u);
+		}
+		/* update the address */
+		acc_cmd += stride;
+		size = (word_cnt + 1) / 2;
+		usleep_range(20, 30);
+	} while (remaining-- > 0);
+
+	return size;
+}
+
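+/* Trace layout as inferred from the replay loop below (a sketch, not a
+ * normative format description): the firmware blob is a stream of u64 words.
+ * The marker 0xDADADADADADADADA switches to direct mode, where words come in
+ * (BAR address, value) pairs, and 0xDEDEDEDEDEDEDEDE switches to indirect
+ * mode, where replay_indirect_trace() consumes one command word plus its
+ * data words per access.
+ */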
+void replay_tns_node(int node, u64 *buf_ptr, int reg_cnt)
+{
+	int counter = 0;
+	u64 offset = 0;
+	u64 io_address;
+	int datapathmode = 1;
+	u64 lmem2 = 0, lmem0 = 0;
+
+	if (node) {
+		lmem0 = node1_iomem0;
+		lmem2 = node1_iomem2;
+	} else {
+		lmem0 = iomem0;
+		lmem2 = iomem2;
+	}
+	for (counter = 0; counter < reg_cnt; counter++) {
+		if (buf_ptr[counter] == 0xDADADADADADADADAull) {
+			datapathmode = 1;
+			continue;
+		} else if (buf_ptr[counter] == 0xDEDEDEDEDEDEDEDEull) {
+			datapathmode = 0;
+			continue;
+		}
+		if (datapathmode == 1) {
+			if (buf_ptr[counter] >= BAR0_START &&
+			    buf_ptr[counter] <= BAR0_END) {
+				offset = buf_ptr[counter] - BAR0_START;
+				io_address = lmem0 + offset;
+			} else if (buf_ptr[counter] >= BAR2_START &&
+				   buf_ptr[counter] <= BAR2_END) {
+				offset = buf_ptr[counter] - BAR2_START;
+				io_address = lmem2 + offset;
+			} else {
+				filter_dbg(FERR, "%s Address 0x%llx invalid\n",
+					   __func__, buf_ptr[counter]);
+				return;
+			}
+
+			tns_write_register(io_address, 0, buf_ptr[counter + 1]);
+			/* TNS block regs are accessed indirectly; issue a
+			 * memory barrier between the two writes
+			 */
+			wmb();
+			counter += 1;
+			usleep_range(20, 30);
+		} else if (datapathmode == 0) {
+			int sz = replay_indirect_trace(node, buf_ptr, counter);
+
+			counter += sz;
+		}
+	}
+}
+
+int alloc_table_info(int i, struct table_static_s tbl_sdata[])
+{
+	tbl_info[i].ddata[0].bitmap = kcalloc(BITS_TO_LONGS(tbl_sdata[i].depth),
+					      sizeof(uintptr_t), GFP_KERNEL);
+	if (!tbl_info[i].ddata[0].bitmap)
+		return 1;
+
+	if (!n1_tns)
+		return 0;
+
+	tbl_info[i].ddata[1].bitmap = kcalloc(BITS_TO_LONGS(tbl_sdata[i].depth),
+					      sizeof(uintptr_t), GFP_KERNEL);
+	if (!tbl_info[i].ddata[1].bitmap) {
+		kfree(tbl_info[i].ddata[0].bitmap);
+		return 1;
+	}
+
+	return 0;
+}
+
+void tns_replay_register_trace(const struct firmware *fw, struct device *dev)
+{
+	int i;
+	int node = 0;
+	u8 *buffer = NULL;
+	u64 *buf_ptr = NULL;
+	struct tns_global_st *fw_header = NULL;
+	struct table_static_s tbl_sdata[TNS_MAX_TABLE];
+	size_t src_len;
+	size_t dest_len = TNS_FW_MAX_SIZE;
+	int rc;
+	u8 *fw2_buf = NULL;
+	unsigned char *decomp_dest = NULL;
+
+	fw2_buf = (u8 *)fw->data;
+	src_len = fw->size - 8;
+
+	decomp_dest = kcalloc((dest_len * 2), sizeof(char), GFP_KERNEL);
+	if (!decomp_dest)
+		return;
+
+	memset(decomp_dest, 0, (dest_len * 2));
+	rc = lz4_decompress_unknownoutputsize(&fw2_buf[8], src_len, decomp_dest,
+					      &dest_len);
+	if (rc) {
+		filter_dbg(FERR, "Decompress Error %d\n", rc);
+		pr_info("Uncompressed destination length %ld\n", dest_len);
+		kfree(decomp_dest);
+		return;
+	}
+	fw_header = (struct tns_global_st *)decomp_dest;
+	buffer = (u8 *)decomp_dest;
+
+	filter_dbg(FINFO, "TNS Firmware version: %s Loading...\n",
+		   fw_header->version);
+
+	memset(tbl_info, 0x0, sizeof(tbl_info));
+	buf_ptr = (u64 *)(buffer + sizeof(struct tns_global_st));
+	memcpy(tbl_sdata, fw_header->tbl_info, sizeof(fw_header->tbl_info));
+
+	for (i = 0; i < TNS_MAX_TABLE; i++) {
+		if (!tbl_sdata[i].valid)
+			continue;
+		memcpy(&tbl_info[i].sdata, &tbl_sdata[i],
+		       sizeof(struct table_static_s));
+		if (alloc_table_info(i, tbl_sdata)) {
+			kfree(decomp_dest);
+			return;
+		}
+	}
+
+	for (node = 0; node < nr_node_ids; node++)
+		replay_tns_node(node, buf_ptr, fw_header->reg_cnt);
+
+	kfree(decomp_dest);
+	release_firmware(fw);
+}
+
+int tns_init(const struct firmware *fw, struct device *dev)
+{
+	int result = 0;
+	int i = 0;
+	int temp;
+	union tns_tdma_config tdma_config;
+	union tns_tdma_lmacx_config tdma_lmac_cfg;
+	u64 reg_init_val;
+
+	spin_lock_init(&pf_reg_lock);
+
+	/* use two regions instead of a single big mapping to save
+	 * kernel virtual address space
+	 */
+	iomem0 = (u64)ioremap(BAR0_START, BAR0_SIZE);
+	if (iomem0 == 0ULL) {
+		filter_dbg(FERR, "Node0 ioremap failed for BAR0\n");
+		result = -EAGAIN;
+		goto error;
+	} else {
+		filter_dbg(FINFO, "ioremap success for BAR0\n");
+	}
+
+	if (nr_node_ids > 1) {
+		node1_iomem0 = (u64)ioremap(NODE1_BAR0_START, NODE1_BAR0_SIZE);
+		if (node1_iomem0 == 0ULL) {
+			filter_dbg(FERR, "Node1 ioremap failed for BAR0\n");
+			result = -EAGAIN;
+			goto error;
+		} else {
+			filter_dbg(FINFO, "ioremap success for BAR0\n");
+		}
+	}
+
+	if (is_tns_available()) {
+		filter_dbg(FERR, "TNS NOT AVAILABLE\n");
+		goto error;
+	}
+
+	if (bist_error_check()) {
+		filter_dbg(FERR, "BIST ERROR CHECK FAILED");
+		goto error;
+	}
+
+	/* NIC0-BGX0 is TNS, NIC1-BGX1 is TNS, DISABLE BACKPRESSURE */
+	reg_init_val = 0ULL;
+	pr_info("NIC Block configured in TNS/TNS mode");
+	tns_write_register(iomem0, TNS_RDMA_CONFIG_OFFSET, reg_init_val);
+	usleep_range(10, 20);
+	if (n1_tns) {
+		tns_write_register(node1_iomem0, TNS_RDMA_CONFIG_OFFSET,
+				   reg_init_val);
+		usleep_range(10, 20);
+	}
+
+	/* Configure each LMAC with 512 credits in BYPASS mode */
+	for (i = TNS_MIN_LMAC; i < (TNS_MIN_LMAC + TNS_MAX_LMAC); i++) {
+		tdma_lmac_cfg.u = 0ULL;
+		tdma_lmac_cfg.s.fifo_cdts = 0x200;
+		tns_write_register(iomem0, TNS_TDMA_LMACX_CONFIG_OFFSET(i),
+				   tdma_lmac_cfg.u);
+		usleep_range(10, 20);
+		if (n1_tns) {
+			tns_write_register(node1_iomem0,
+					   TNS_TDMA_LMACX_CONFIG_OFFSET(i),
+					   tdma_lmac_cfg.u);
+			usleep_range(10, 20);
+		}
+	}
+
+	/* Enable TNS clock and CSR reads */
+	temp = tns_read_register(iomem0, TNS_TDMA_CONFIG_OFFSET);
+	tdma_config.u = temp;
+	tdma_config.s.clk_2x_ena = 1;
+	tdma_config.s.clk_ena = 1;
+	tns_write_register(iomem0, TNS_TDMA_CONFIG_OFFSET, tdma_config.u);
+	if (n1_tns)
+		tns_write_register(node1_iomem0, TNS_TDMA_CONFIG_OFFSET,
+				   tdma_config.u);
+
+	temp = tns_read_register(iomem0, TNS_TDMA_CONFIG_OFFSET);
+	tdma_config.u = temp;
+	tdma_config.s.csr_access_ena = 1;
+	tns_write_register(iomem0, TNS_TDMA_CONFIG_OFFSET, tdma_config.u);
+	if (n1_tns)
+		tns_write_register(node1_iomem0, TNS_TDMA_CONFIG_OFFSET,
+				   tdma_config.u);
+
+	reg_init_val = 0ULL;
+	tns_write_register(iomem0, TNS_TDMA_RESET_CTL_OFFSET, reg_init_val);
+	if (n1_tns)
+		tns_write_register(node1_iomem0, TNS_TDMA_RESET_CTL_OFFSET,
+				   reg_init_val);
+
+	iomem2 = (u64)ioremap(BAR2_START, BAR2_SIZE);
+	if (iomem2 == 0ULL) {
+		filter_dbg(FERR, "ioremap failed for BAR2\n");
+		result = -EAGAIN;
+		goto error;
+	} else {
+		filter_dbg(FINFO, "ioremap success for BAR2\n");
+	}
+
+	if (n1_tns) {
+		node1_iomem2 = (u64)ioremap(NODE1_BAR2_START, NODE1_BAR2_SIZE);
+		if (node1_iomem2 == 0ULL) {
+			filter_dbg(FERR, "Node1 ioremap failed for BAR2\n");
+			result = -EAGAIN;
+			goto error;
+		} else {
+			filter_dbg(FINFO, "Node1 ioremap success for BAR2\n");
+		}
+	}
+	msleep(1000);
+	/* Replay the register trace to initialize the TNS block */
+	tns_replay_register_trace(fw, dev);
+
+	return 0;
+error:
+	if (iomem0 != 0)
+		iounmap((void *)iomem0);
+	if (iomem2 != 0)
+		iounmap((void *)iomem2);
+
+	if (node1_iomem0 != 0)
+		iounmap((void *)node1_iomem0);
+	if (node1_iomem2 != 0)
+		iounmap((void *)node1_iomem2);
+
+	return result;
+}
+
+void tns_exit(void)
+{
+	int i;
+
+	if (iomem0 != 0)
+		iounmap((void *)iomem0);
+	if (iomem2 != 0)
+		iounmap((void *)iomem2);
+
+	if (node1_iomem0 != 0)
+		iounmap((void *)node1_iomem0);
+	if (node1_iomem2 != 0)
+		iounmap((void *)node1_iomem2);
+
+	for (i = 0; i < TNS_MAX_TABLE; i++) {
+		if (!tbl_info[i].sdata.valid)
+			continue;
+		kfree(tbl_info[i].ddata[0].bitmap);
+		if (n1_tns)
+			kfree(tbl_info[i].ddata[1].bitmap);
+	}
+}
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
                   ` (3 preceding siblings ...)
  2016-12-21  8:46 ` [RFC PATCH 4/7] HW Filter Initialization code and register access APIs Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
  2016-12-21 12:43   ` Sunil Kovvuri
  2016-12-21  8:46 ` [RFC PATCH 6/7] HW Filter Table access API's Satha Koteswara Rao
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/thunder/pf_globals.h |  78 +++++
 drivers/net/ethernet/cavium/thunder/pf_locals.h  | 365 +++++++++++++++++++++++
 drivers/net/ethernet/cavium/thunder/pf_vf.c      | 207 +++++++++++++
 3 files changed, 650 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_globals.h
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_locals.h
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_vf.c

diff --git a/drivers/net/ethernet/cavium/thunder/pf_globals.h b/drivers/net/ethernet/cavium/thunder/pf_globals.h
new file mode 100644
index 0000000..79fab86
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/pf_globals.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#ifndef NIC_PF_H
+#define	NIC_PF_H
+
+#include <linux/netdevice.h>
+#include <linux/interrupt.h>
+#include <linux/firmware.h>
+#include "thunder_bgx.h"
+#include "tbl_access.h"
+
+#define TNS_MAX_LMAC	8
+#define TNS_MIN_LMAC    0
+
+struct tns_global_st {
+	u64 magic;
+	char     version[16];
+	u64 reg_cnt;
+	struct table_static_s tbl_info[TNS_MAX_TABLE];
+};
+
+#define PF_COUNT 3
+#define PF_1	0
+#define PF_2	64
+#define PF_3	96
+#define PF_END	128
+
+int is_pf(int node_id, int vf);
+int get_pf(int node_id, int vf);
+void get_vf_group(int node_id, int lmac, int *start_vf, int *end_vf);
+int vf_to_pport(int node_id, int vf);
+int pf_filter_init(void);
+int tns_init(const struct firmware *fw, struct device *dev);
+void tns_exit(void);
+void pf_notify_msg_handler(int node_id, void *arg);
+void nic_init_pf_vf_mapping(void);
+int nic_set_pf_vf_mapping(int node_id);
+int get_bgx_id(int node_id, int vf_id, int *bgx_id, int *lmac);
+int phy_port_to_bgx_lmac(int node, int port, int *bgx, int *lmac);
+int tns_filter_valid_entry(int node, int req_type, int vf, int vlan);
+void nic_enable_valid_vf(int max_vf_cnt);
+
+union nic_pf_qsx_rqx_bp_cfg {
+	u64 u;
+	struct nic_pf_qsx_rqx_bp_cfg_s {
+		u64 bpid		: 8;
+		u64 cq_bp		: 8;
+		u64 rbdr_bp		: 8;
+		u64 reserved_24_61	: 38;
+		u64 cq_bp_ena		: 1;
+		u64 rbdr_bp_ena		: 1;
+	} s;
+};
+
+#define NIC_PF_QSX_RQX_BP_CFG	0x20010500ul
+#define RBDR_CQ_BP		129
+
+union nic_pf_intfx_bp_cfg {
+	u64 u;
+	struct bdk_nic_pf_intfx_bp_cfg_s {
+		u64 bp_id		: 4;
+		u64 bp_type		: 1;
+		u64 reserved_5_62	: 58;
+		u64 bp_ena		: 1;
+	} s;
+};
+
+#define NIC_PF_INTFX_BP_CFG	0x208ull
+
+#define FW_NAME	"tns_firmware.bin"
+
+#endif
diff --git a/drivers/net/ethernet/cavium/thunder/pf_locals.h b/drivers/net/ethernet/cavium/thunder/pf_locals.h
new file mode 100644
index 0000000..f7e74bb
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/pf_locals.h
@@ -0,0 +1,365 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#ifndef __PF_LOCALS__
+#define __PF_LOCALS__
+
+#include <linux/printk.h>
+
+#define XP_TOTAL_PORTS	(137)
+#define MAX_SYS_PORTS	XP_TOTAL_PORTS
+/* Loopback port is invalid in the MAC filter design */
+#define TNS_MAC_FILTER_MAX_SYS_PORTS	(MAX_SYS_PORTS - 1)
+/* Maximum LMACs available */
+#define TNS_MAX_INGRESS_GROUP	8
+#define TNS_MAX_VF	(TNS_MAC_FILTER_MAX_SYS_PORTS - TNS_MAX_INGRESS_GROUP)
+#define TNS_VLAN_FILTER_MAX_INDEX	256
+#define TNS_MAC_FILTER_MAX_INDEX	1536
+#define TNS_MAX_VLAN_PER_VF	16
+
+#define TNS_NULL_VIF		152
+#define TNS_BASE_BCAST_VIF	136
+#define TNS_BASE_MCAST_VIF	144
+#define TNS_FW_MAX_SIZE         1048576
+
+/* We restrict each VF to register at most 11 filter entries
+ * (including unicast & multicast)
+ */
+#define TNS_MAX_MAC_PER_VF	11
+
+#define FERR		0
+#define FDEBUG		1
+#define FINFO		2
+
+#define FILTER_DBG_GBL		FERR
+#define filter_dbg(dbg_lvl, fmt, args...) \
+	({ \
+	if ((dbg_lvl) <= FILTER_DBG_GBL) \
+		pr_info(fmt, ##args); \
+	})
+
+typedef u8 mac_addr_t[6];		/* User-defined type for MAC address */
+typedef u8 vlan_port_bitmap_t[32];
+
+enum {
+	TNS_NO_ERR = 0,
+
+	/* Error in indirect read; check the status */
+	TNS_ERROR_INDIRECT_READ = 4,
+	/* Error in indirect write; check the status */
+	TNS_ERROR_INDIRECT_WRITE = 5,
+	/* Data too large for Read/Write */
+	TNS_ERROR_DATA_TOO_LARGE = 6,
+	/* Invalid arguments supplied to the IOCTL */
+	TNS_ERROR_INVALID_ARG = 7,
+
+	TNS_ERR_MAC_FILTER_INVALID_ENTRY,
+	TNS_ERR_MAC_FILTER_TBL_READ,
+	TNS_ERR_MAC_FILTER_TBL_WRITE,
+	TNS_ERR_MAC_EVIF_TBL_READ,
+	TNS_ERR_MAC_EVIF_TBL_WRITE,
+
+	TNS_ERR_VLAN_FILTER_INVLAID_ENTRY,
+	TNS_ERR_VLAN_FILTER_TBL_READ,
+	TNS_ERR_VLAN_FILTER_TBL_WRITE,
+	TNS_ERR_VLAN_EVIF_TBL_READ,
+	TNS_ERR_VLAN_EVIF_TBL_WRITE,
+
+	TNS_ERR_PORT_CONFIG_TBL_READ,
+	TNS_ERR_PORT_CONFIG_TBL_WRITE,
+	TNS_ERR_PORT_CONFIG_INVALID_ENTRY,
+
+	TNS_ERR_DRIVER_READ,
+	TNS_ERR_DRIVER_WRITE,
+
+	TNS_ERR_WRONG_PORT_NUMBER,
+	TNS_ERR_INVALID_TBL_ID,
+	TNS_ERR_ENTRY_NOT_FOUND,
+	TNS_ERR_DUPLICATE_MAC,
+	TNS_ERR_MAX_LIMIT,
+
+	TNS_STATUS_NUM_ENTRIES
+};
+
+struct ing_grp_gblvif {
+	u32 ingress_grp;
+	u32 pf_vf;
+	u32 bcast_vif;
+	u32 mcast_vif;
+	u32 null_vif;
+	u32 is_valid; /* Is this ingress group / LMAC valid */
+	u8 mcast_promis_grp[TNS_MAC_FILTER_MAX_SYS_PORTS];
+	u8 valid_mcast_promis_ports;
+};
+
+struct vf_register_s {
+	int filter_index[16];
+	u32 filter_count;
+	int vf_in_mcast_promis;
+	int vf_in_promis;
+	int vlan[TNS_MAX_VLAN_PER_VF];
+	u32 vlan_count;
+};
+
+union mac_filter_keymask_type_s {
+	u64 key_value;
+
+	struct {
+		u32	ingress_grp: 16;
+		mac_addr_t	mac_DA;
+	} s;
+};
+
+struct mac_filter_keymask_s {
+	u8 is_valid;
+	union mac_filter_keymask_type_s key_type;
+};
+
+union mac_filter_data_s {
+	u64 data;
+	struct {
+		u64 evif: 16;
+		u64 Reserved0 : 48;
+	} s;
+};
+
+struct mac_filter_entry {
+	struct mac_filter_keymask_s key;
+	struct mac_filter_keymask_s mask;
+	union mac_filter_data_s data;
+};
+
+union vlan_filter_keymask_type_s {
+	u64 key_value;
+
+	struct {
+		u32	ingress_grp: 16;
+		u32	vlan: 12;
+		u32	reserved: 4;
+		u32	reserved1;
+	} s;
+};
+
+struct vlan_filter_keymask_s {
+	u8 is_valid;
+	union vlan_filter_keymask_type_s key_type;
+};
+
+union vlan_filter_data_s {
+	u64 data;
+	struct {
+		u64 filter_idx: 16;
+		u64 Reserved0 : 48;
+	} s;
+};
+
+struct vlan_filter_entry {
+	struct vlan_filter_keymask_s key;
+	struct vlan_filter_keymask_s mask;
+	union vlan_filter_data_s data;
+};
+
+struct evif_entry {
+	u64	rsp_type: 2;
+	u64	truncate: 1;
+	u64	mtu_prf: 3;
+	u64	mirror_en: 1;
+	u64	q_mirror_en: 1;
+	u64	prt_bmap7_0: 8;
+	u64	rewrite_ptr0: 8;
+	u64	rewrite_ptr1: 8;
+	/* Byte 0 is data31_0[7:0] and byte 3 is data31_0[31:24] */
+	u64	data31_0: 32;
+	u64	insert_ptr0: 16;
+	u64	insert_ptr1: 16;
+	u64	insert_ptr2: 16;
+	u64	mre_ptr: 15;
+	u64	prt_bmap_8: 1;
+	u64	prt_bmap_72_9;
+	u64	prt_bmap_136_73;
+};
+
+struct itt_entry_s {
+	u32 rsvd0 : 30;
+	u32 pkt_dir : 1;
+	u32 is_admin_vlan_enabled : 1;
+	u32 reserved0 : 6;
+	u32 default_evif : 8;
+	u32 admin_vlan : 12;
+	u32 Reserved1 : 6;
+	u32 Reserved2[6];
+};
+
+static inline u64 TNS_TDMA_SST_ACC_RDATX(unsigned long param1)
+{
+	return 0x00000480ull + (param1 & 7) * 0x10ull;
+}
+
+static inline u64 TNS_TDMA_SST_ACC_WDATX(unsigned long param1)
+{
+	return 0x00000280ull + (param1 & 7) * 0x10ull;
+}
+
+union tns_tdma_sst_acc_cmd {
+	u64 u;
+	struct  tns_tdma_sst_acc_cmd_s {
+		u64 reserved_0_1	: 2;
+		u64 addr		: 30;
+		u64 size		: 4;
+		u64 op			: 1;
+		u64 go			: 1;
+		u64 reserved_38_63	: 26;
+	} s;
+};
+
+#define TDMA_SST_ACC_CMD 0x00000270ull
+
+union tns_tdma_sst_acc_stat_t {
+	u64 u;
+	struct  tns_tdma_sst_acc_stat_s {
+		u64 cmd_done		: 1;
+		u64 error		: 1;
+		u64 reserved_2_63	: 62;
+	} s;
+};
+
+#define TDMA_SST_ACC_STAT 0x00000470ull
+#define TDMA_NB_INT_STAT 0x01000110ull
+
+union tns_acc_data {
+	u64 u;
+	struct tns_acc_data_s {
+		u64 lower32 : 32;
+		u64 upper32 : 32;
+	} s;
+};
+
+union tns_tdma_config {
+	u64 u;
+	struct  tns_tdma_config_s {
+		u64 clk_ena		: 1;
+		u64 clk_2x_ena		: 1;
+		u64 reserved_2_3	: 2;
+		u64 csr_access_ena	: 1;
+		u64 reserved_5_7	: 3;
+		u64 bypass0_ena		: 1;
+		u64 bypass1_ena		: 1;
+		u64 reserved_10_63	: 54;
+	} s;
+};
+
+#define TNS_TDMA_CONFIG_OFFSET  0x00000200ull
+
+union tns_tdma_cap {
+	u64 u;
+	struct tns_tdma_cap_s {
+		u64 switch_capable	: 1;
+		u64 reserved_1_63	: 63;
+	} s;
+};
+
+#define TNS_TDMA_CAP_OFFSET 0x00000400ull
+#define TNS_RDMA_CONFIG_OFFSET 0x00001200ull
+
+union tns_tdma_lmacx_config {
+	u64 u;
+	struct tns_tdma_lmacx_config_s {
+		u64 fifo_cdts		: 14;
+		u64 reserved_14_63	: 50;
+	} s;
+};
+
+union _tns_sst_config {
+	u64 data;
+	struct {
+#ifdef __BIG_ENDIAN
+		u64 powerof2stride	: 1;
+		u64 run			: 11;
+		u64 reserved		: 14;
+		u64 req_type		: 2;
+		u64 word_cnt		: 4;
+		u64 byte_addr		: 32;
+#else
+		u64 byte_addr		: 32;
+		u64 word_cnt		: 4;
+		u64 req_type		: 2;
+		u64 reserved		: 14;
+		u64 run			: 11;
+		u64 powerof2stride	: 1;
+#endif
+	} cmd;
+	struct {
+#ifdef __BIG_ENDIAN
+		u64 do_not_copy		: 26;
+		u64 do_copy		: 38;
+#else
+		u64 do_copy		: 38;
+		u64 do_not_copy		: 26;
+#endif
+	} copy;
+	struct {
+#ifdef __BIG_ENDIAN
+		u64 magic		: 48;
+		u64 major_version_BCD	: 8;
+		u64 minor_version_BCD	: 8;
+#else
+		u64 minor_version_BCD	: 8;
+		u64 major_version_BCD	: 8;
+		u64 magic		: 48;
+#endif
+	} header;
+};
+
+static inline u64 TNS_TDMA_LMACX_CONFIG_OFFSET(unsigned long param1)
+			 __attribute__ ((pure, always_inline));
+static inline u64 TNS_TDMA_LMACX_CONFIG_OFFSET(unsigned long param1)
+{
+	return 0x00000300ull + (param1 & 7) * 0x10ull;
+}
+
+#define TNS_TDMA_RESET_CTL_OFFSET 0x00000210ull
+
+int read_register_indirect(u64 address, u8 size, u8 *kern_buffer);
+int write_register_indirect(u64 address, u8 size, u8 *kern_buffer);
+int tns_write_register_indirect(int node, u64 address, u8 size,
+				u8 *kern_buffer);
+int tns_read_register_indirect(int node, u64 address, u8 size,
+			       u8 *kern_buffer);
+u64 tns_read_register(u64 start, u64 offset);
+void tns_write_register(u64 start, u64 offset, u64 data);
+int tbl_write(int node, int tbl_id, int tbl_index, void *key, void *mask,
+	      void *data);
+int tbl_read(int node, int tbl_id, int tbl_index, void *key, void *mask,
+	     void *data);
+int invalidate_table_entry(int node, int tbl_id, int tbl_idx);
+int alloc_table_index(int node, int table_id, int *index);
+void free_table_index(int node, int table_id, int index);
+
+struct pf_vf_data {
+	int pf_id;
+	int num_vfs;
+	int lmac;
+	int sys_lmac;
+	int bgx_idx;
+};
+
+struct pf_vf_map_s {
+	bool valid;
+	int lmac_cnt;
+	struct pf_vf_data pf_vf[TNS_MAX_LMAC];
+};
+
+extern struct pf_vf_map_s pf_vf_map_data[MAX_NUMNODES];
+int tns_enable_mcast_promis(int node, int vf);
+int filter_tbl_lookup(int node, int tblid, void *entry, int *idx);
+
+#define MCAST_PROMIS(a, b, c) ingressgrp_gblvif[(a)][(b)].mcast_promis_grp[(c)]
+#define VALID_MCAST_PROMIS(a, b) \
+	ingressgrp_gblvif[(a)][(b)].valid_mcast_promis_ports
+
+#endif /*__PF_LOCALS__*/
diff --git a/drivers/net/ethernet/cavium/thunder/pf_vf.c b/drivers/net/ethernet/cavium/thunder/pf_vf.c
new file mode 100644
index 0000000..bc4f923
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/pf_vf.c
@@ -0,0 +1,207 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include "nic_reg.h"
+#include "nic.h"
+#include "pf_globals.h"
+#include "pf_locals.h"
+
+#define PFVF_DAT(gidx, lidx) \
+	pf_vf_map_data[gidx].pf_vf[lidx]
+
+struct pf_vf_map_s pf_vf_map_data[MAX_NUMNODES];
+
+void nic_init_pf_vf_mapping(void)
+{
+	int i;
+
+	for (i = 0 ; i < MAX_NUMNODES; i++) {
+		pf_vf_map_data[i].lmac_cnt = 0;
+		pf_vf_map_data[i].valid = false;
+	}
+}
+
+/* Based on the available LMACs we create a physical group called an ingress
+ * group. The first VF of the group is designated as the acting PF, called
+ * the PfVf interface.
+ */
+static inline void set_pf_vf_global_data(int node, int valid_vf_cnt)
+{
+	unsigned int bgx_map;
+	int bgx;
+	int lmac, lmac_cnt = 0;
+
+	if (pf_vf_map_data[node].valid)
+		return;
+
+	bgx_map = bgx_get_map(node);
+	for (bgx = 0; bgx < MAX_BGX_PER_CN88XX; bgx++)	{
+		if (!(bgx_map & (1 << bgx)))
+			continue;
+		pf_vf_map_data[node].valid = true;
+		lmac_cnt = bgx_get_lmac_count(node, bgx);
+
+		for (lmac = 0; lmac < lmac_cnt; lmac++)	{
+			int slc = lmac + pf_vf_map_data[node].lmac_cnt;
+
+			PFVF_DAT(node, slc).pf_id = (bgx * 64) + (lmac *
+								 valid_vf_cnt);
+			PFVF_DAT(node, slc).num_vfs = valid_vf_cnt;
+			PFVF_DAT(node, slc).lmac = lmac;
+			PFVF_DAT(node, slc).bgx_idx = bgx;
+			PFVF_DAT(node, slc).sys_lmac = bgx * MAX_LMAC_PER_BGX +
+						      lmac;
+		}
+		pf_vf_map_data[node].lmac_cnt += lmac_cnt;
+	}
+}
+
+/* There are 2 NIC pipes in each node; each NIC pipe is associated with a BGX
+ * interface. Each BGX contains at most 4 LMACs (or PHYs) and supports 64 VFs.
+ * The hardware has no physical PF; one of the VFs acts as the PF.
+ */
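+/* Worked example (a sketch, assuming every BGX has 4 active LMACs): then
+ * valid_vf_cnt = 64 / 4 = 16 VFs per LMAC group, and the acting PFs get
+ * pf_id = (bgx * 64) + (lmac * 16), i.e. VF 0, 16, 32, 48 on BGX0 and
+ * VF 64, 80, 96, 112 on BGX1.
+ */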
+int nic_set_pf_vf_mapping(int node_id)
+{
+	unsigned int bgx_map;
+	int node = 0;
+	int bgx;
+	int lmac_cnt = 0, valid_vf_cnt = 64;
+
+	do {
+		bgx_map = bgx_get_map(node);
+		/* Calculate the maximum VFs in each physical port group */
+		for (bgx = 0; bgx < MAX_BGX_PER_CN88XX; bgx++) {
+			if (!(bgx_map & (1 << bgx)))
+				continue;
+			lmac_cnt = bgx_get_lmac_count(node, bgx);
+			/* Maximum 64 VFs for each BGX */
+			if (valid_vf_cnt > (64 / lmac_cnt))
+				valid_vf_cnt = (64 / lmac_cnt);
+		}
+	} while (++node < nr_node_ids);
+
+	nic_enable_valid_vf(valid_vf_cnt);
+	node = 0;
+	do {
+		set_pf_vf_global_data(node, valid_vf_cnt);
+	} while (++node < nr_node_ids);
+
+	return 0;
+}
+
+/* Check whether a VF is an acting PF */
+int is_pf(int node, int vf)
+{
+	int i;
+
+	/* Invalid Request, Init not done properly */
+	if (!pf_vf_map_data[node].valid)
+		return 0;
+
+	for (i = 0; i < pf_vf_map_data[node].lmac_cnt; i++)
+		if (vf == PFVF_DAT(node, i).pf_id)
+			return 1;
+
+	return 0;
+}
+
+/* Get the acting PF corresponding to this VF */
+int get_pf(int node, int vf)
+{
+	int i;
+
+	/* Invalid Request, Init not done properly */
+	if (!pf_vf_map_data[node].valid)
+		return 0;
+
+	for (i = 0; i < pf_vf_map_data[node].lmac_cnt; i++)
+		if ((vf >= PFVF_DAT(node, i).pf_id) &&
+		    (vf < (PFVF_DAT(node, i).pf_id +
+			   PFVF_DAT(node, i).num_vfs)))
+			return pf_vf_map_data[node].pf_vf[i].pf_id;
+
+	return -1;
+}
+
+/* Get the starting and ending VF numbers of the LMAC group */
+void get_vf_group(int node, int lmac, int *start_vf, int *end_vf)
+{
+	int i;
+
+	/* Invalid Request, Init not done properly */
+	if (!pf_vf_map_data[node].valid)
+		return;
+
+	for (i = 0; i < pf_vf_map_data[node].lmac_cnt; i++) {
+		if (lmac == (PFVF_DAT(node, i).sys_lmac)) {
+			*start_vf = PFVF_DAT(node, i).pf_id;
+			*end_vf = PFVF_DAT(node, i).pf_id +
+				  PFVF_DAT(node, i).num_vfs;
+			return;
+		}
+	}
+}
+
+/* Get the physical port # of the given vf */
+int vf_to_pport(int node, int vf)
+{
+	int i;
+
+	/* Invalid Request, Init not done properly */
+	if (!pf_vf_map_data[node].valid)
+		return 0;
+
+	for (i = 0; i < pf_vf_map_data[node].lmac_cnt; i++)
+		if ((vf >= PFVF_DAT(node, i).pf_id) &&
+		    (vf < (PFVF_DAT(node, i).pf_id +
+		     PFVF_DAT(node, i).num_vfs)))
+			return PFVF_DAT(node, i).sys_lmac;
+
+	return -1;
+}
+
+/* Get BGX # and LMAC # corresponding to VF */
+int get_bgx_id(int node, int vf, int *bgx_idx, int *lmac)
+{
+	int i;
+
+	/* Invalid Request, Init not done properly */
+	if (!pf_vf_map_data[node].valid)
+		return 1;
+
+	for (i = 0; i < pf_vf_map_data[node].lmac_cnt; i++) {
+		if ((vf >= PFVF_DAT(node, i).pf_id) &&
+		    (vf < (PFVF_DAT(node, i).pf_id +
+			   PFVF_DAT(node, i).num_vfs))) {
+			*bgx_idx = pf_vf_map_data[node].pf_vf[i].bgx_idx;
+			*lmac = pf_vf_map_data[node].pf_vf[i].lmac;
+			return 0;
+		}
+	}
+
+	return 1;
+}
+
+/* Get BGX # and LMAC # corresponding to physical port */
+int phy_port_to_bgx_lmac(int node, int port, int *bgx, int *lmac)
+{
+	int i;
+
+	/* Invalid Request, Init not done properly */
+	if (!pf_vf_map_data[node].valid)
+		return 1;
+
+	for (i = 0; i < pf_vf_map_data[node].lmac_cnt; i++) {
+		if (port == (PFVF_DAT(node, i).sys_lmac)) {
+			*bgx = pf_vf_map_data[node].pf_vf[i].bgx_idx;
+			*lmac = pf_vf_map_data[node].pf_vf[i].lmac;
+			return 0;
+		}
+	}
+
+	return 1;
+}
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 6/7] HW Filter Table access API's
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
                   ` (4 preceding siblings ...)
  2016-12-21  8:46 ` [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
  2016-12-21  8:46 ` [RFC PATCH 7/7] Get notifications from PF driver and configure filter block based on request data Satha Koteswara Rao
  2016-12-21 12:03 ` [RFC PATCH 0/7] ThunderX Embedded switch support Sunil Kovvuri
  7 siblings, 0 replies; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/thunder/tbl_access.c | 262 +++++++++++++++++++++++
 drivers/net/ethernet/cavium/thunder/tbl_access.h |  61 ++++++
 2 files changed, 323 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/thunder/tbl_access.c
 create mode 100644 drivers/net/ethernet/cavium/thunder/tbl_access.h

diff --git a/drivers/net/ethernet/cavium/thunder/tbl_access.c b/drivers/net/ethernet/cavium/thunder/tbl_access.c
new file mode 100644
index 0000000..6be31eb
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/tbl_access.c
@@ -0,0 +1,262 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/uaccess.h>
+#include "pf_globals.h"
+#include "pf_locals.h"
+#include "tbl_access.h"
+
+struct tns_table_s *get_table_information(int table_id)
+{
+	int i;
+
+	for (i = 0; i < TNS_MAX_TABLE; i++) {
+		if (!tbl_info[i].sdata.valid)
+			continue;
+
+		if (tbl_info[i].sdata.tbl_id == table_id)
+			return &tbl_info[i];
+	}
+
+	return NULL;
+}
+
+int tbl_write(int node, int table_id, int tbl_index, void *key, void *mask,
+	      void *data)
+{
+	int i;
+	struct tns_table_s *tbl = get_table_information(table_id);
+	int bck_cnt, data_index, data_offset;
+	u64 data_entry[4];
+
+	if (!tbl) {
+		filter_dbg(FERR, "Invalid Table ID: %d\n", table_id);
+		return TNS_ERR_INVALID_TBL_ID;
+	}
+
+	bck_cnt = tbl->sdata.data_width / tbl->sdata.data_size;
+	data_index = (tbl_index / bck_cnt);
+	data_offset = (tbl_index % bck_cnt);
+	/* TCAM table: merge key & mask into a single array */
+	if (tbl->sdata.tbl_type == TNS_TBL_TYPE_TT) {
+		struct filter_keymask_s *tk = (struct filter_keymask_s *)key;
+		struct filter_keymask_s *tm = (struct filter_keymask_s *)mask;
+		u8 km[32];
+		u64 mod_key, mod_mask, temp_mask;
+		int index = 0, offset = 0;
+
+		memset(km, 0x0, 32);
+
+/* TCAM truth table data creation. Translation from data/mask to following
+ * truth table:
+ *
+ *         Mask   Data     Content
+ *          0     0         X
+ *          0     1         1
+ *          1     0         0
+ *          1     1         Always Mismatch
+ *
+ */
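+/* Hedged illustration (not part of the original patch): for one caller
+ * key/mask bit pair (tk, tm) the loop below stores a 2-bit TCAM cell as
+ * data = tk & ~tm (odd km bit) and mask = ~tk & ~tm (even km bit), so:
+ *   tm = 1          -> stored Mask = 0, Data = 0: don't care (X)
+ *   tm = 0, tk = 1  -> stored Mask = 0, Data = 1: match a 1
+ *   tm = 0, tk = 0  -> stored Mask = 1, Data = 0: match a 0
+ * i.e. a caller mask bit of 1 marks that key bit as don't-care.
+ */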
+		mod_mask = ~tk->key_value;
+		temp_mask = tm->key_value;
+		mod_key = tk->key_value;
+		mod_key = mod_key & (~temp_mask);
+		mod_mask = mod_mask & (~temp_mask);
+
+		for (i = 0; i < 64; i++) {
+			km[index] = km[index] | (((mod_mask >> i) & 0x1) <<
+						 offset);
+			km[index] = km[index] | (((mod_key >> i) & 0x1) <<
+						 (offset + 1));
+			offset += 2;
+			if (offset == 8) {
+				offset = 0;
+				index += 1;
+			}
+		}
+		km[index] = 0x2;
+		if (tns_write_register_indirect(node,
+						(tbl->sdata.key_base_addr +
+						 (tbl_index * 32)), 32,
+						(void *)&km[0])) {
+			filter_dbg(FERR, "key write failed node %d tbl ID %d",
+				   node, table_id);
+			filter_dbg(FERR, " index %d\n", tbl_index);
+			return TNS_ERR_DRIVER_WRITE;
+		}
+	}
+
+	/* Data writes are read-modify-write */
+	if (tns_read_register_indirect(node, (tbl->sdata.data_base_addr +
+					      (data_index * 32)), 32,
+				       (void *)&data_entry[0])) {
+		filter_dbg(FERR, "data read failed node %d tbl ID %d idx %d\n",
+			   node, table_id, tbl_index);
+		return TNS_ERR_DRIVER_READ;
+	}
+	memcpy(&data_entry[data_offset], data, tbl->sdata.data_size / 8);
+	if (tns_write_register_indirect(node, (tbl->sdata.data_base_addr +
+					       (data_index * 32)), 32,
+					(void *)&data_entry[0])) {
+		filter_dbg(FERR, "data write failed node %d tbl ID %d idx %d\n",
+			   node, table_id, tbl_index);
+		return TNS_ERR_DRIVER_WRITE;
+	}
+
+	return TNS_NO_ERR;
+}
+
+int tbl_read(int node, int table_id, int tbl_index, void *key, void *mask,
+	     void *data)
+{
+	struct tns_table_s *tbl = get_table_information(table_id);
+	int i, bck_cnt, data_index, data_offset;
+	u64 data_entry[4];
+	u8 km[32];
+
+	if (!tbl) {
+		filter_dbg(FERR, "Invalid Table ID: %d\n", table_id);
+		return TNS_ERR_INVALID_TBL_ID;
+	}
+
+	bck_cnt = tbl->sdata.data_width / tbl->sdata.data_size;
+	data_index = (tbl_index / bck_cnt);
+	data_offset = (tbl_index % bck_cnt);
+
+	/* TCAM table: extract key & mask from a single array */
+	if (tbl->sdata.tbl_type == TNS_TBL_TYPE_TT) {
+		memset(km, 0x0, 32);
+
+		if (tns_read_register_indirect(node, (tbl->sdata.key_base_addr +
+						      (tbl_index * 32)), 32,
+					       (void *)&km[0])) {
+			filter_dbg(FERR, "key read failed node %d tbl ID %d",
+				   node, table_id);
+			filter_dbg(FERR, " idx %d\n", tbl_index);
+			return TNS_ERR_DRIVER_READ;
+		}
+		if (km[(tbl->sdata.key_size * 2) / 8] != 0x2)
+			return TNS_ERR_MAC_FILTER_INVALID_ENTRY;
+	}
+
+	if (tns_read_register_indirect(node, (tbl->sdata.data_base_addr +
+					      (data_index * 32)), 32,
+				       (void *)&data_entry[0])) {
+		filter_dbg(FERR, "data read failed node %d tbl ID %d idx %d\n",
+			   node, table_id, tbl_index);
+		return TNS_ERR_DRIVER_READ;
+	}
+	memcpy(data, (void *)(&data_entry[data_offset]),
+	       (tbl->sdata.data_size / 8));
+
+	if (tbl->sdata.tbl_type == TNS_TBL_TYPE_TT) {
+		struct filter_keymask_s *tk = (struct filter_keymask_s *)key;
+		struct filter_keymask_s *tm = (struct filter_keymask_s *)mask;
+		u8 temp_km;
+		int index = 0, offset = 0;
+
+		tk->key_value = 0x0ull;
+		tm->key_value = 0x0ull;
+		temp_km = km[0];
+		for (i = 0; i < 64; i++) {
+			tm->key_value = tm->key_value |
+					 ((temp_km & 0x1ull) << i);
+			temp_km >>= 1;
+			tk->key_value = tk->key_value |
+					 ((temp_km & 0x1ull) << i);
+			temp_km >>= 1;
+			offset += 2;
+			if (offset == 8) {
+				offset = 0;
+				index += 1;
+				temp_km = km[index];
+			}
+		}
+		tm->key_value = ~tm->key_value & ~tk->key_value;
+		tk->is_valid = 1;
+		tm->is_valid = 0;
+	}
+
+	return TNS_NO_ERR;
+}
+
+int invalidate_table_entry(int node, int table_id, int tbl_idx)
+{
+	struct tns_table_s *tbl = get_table_information(table_id);
+
+	if (!tbl) {
+		filter_dbg(FERR, "Invalid Table ID: %d\n", table_id);
+		return TNS_ERR_INVALID_TBL_ID;
+	}
+
+	if (tbl->sdata.tbl_type == TNS_TBL_TYPE_TT) {
+		u8 km[32];
+
+		memset(km, 0x0, 32);
+		km[((tbl->sdata.key_size * 2) / 8)] = 0x1;
+
+		if (tns_write_register_indirect(node,
+						(tbl->sdata.key_base_addr +
+						 (tbl_idx * 32)), 32,
+						(void *)&km[0])) {
+			filter_dbg(FERR, "%s failed node %d tbl ID %d idx %d\n",
+				   __func__, node, table_id, tbl_idx);
+			return TNS_ERR_DRIVER_WRITE;
+		}
+	}
+
+	return TNS_NO_ERR;
+}
+
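+/* Reserve an index in a firmware table. With *index == -1 the first free
+ * index is looked up in the node's bitmap, marked as used and returned via
+ * *index; otherwise the caller-supplied index is range-checked and marked
+ * as in use (already-set indices are only reported at debug level).
+ */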
+int alloc_table_index(int node, int table_id, int *index)
+{
+	int err = 0;
+	struct tns_table_s *tbl = get_table_information(table_id);
+
+	if (!tbl) {
+		filter_dbg(FERR, "%s Invalid TableID %d\n", __func__, table_id);
+		return TNS_ERR_INVALID_TBL_ID;
+	}
+
+	if (*index == -1) {
+		*index = find_first_zero_bit(tbl->ddata[node].bitmap,
+					     tbl->sdata.depth);
+
+		if (*index < 0 || *index >= tbl->sdata.depth)
+			err = -ENOSPC;
+		else
+			__set_bit(*index, tbl->ddata[node].bitmap);
+
+		return err;
+	} else if (*index < 0 || *index >= tbl->sdata.depth) {
+		filter_dbg(FERR, "%s Invalid index %d requested [0...%d]\n",
+			   __func__, *index, tbl->sdata.depth);
+		return TNS_ERR_MAC_FILTER_INVALID_ENTRY;
+	}
+	if (test_and_set_bit(*index, tbl->ddata[node].bitmap))
+		filter_dbg(FDEBUG, "%s Entry Already exists\n", __func__);
+
+	return err;
+}
+
+void free_table_index(int node, int table_id, int index)
+{
+	struct tns_table_s *tbl = get_table_information(table_id);
+
+	if (!tbl) {
+		filter_dbg(FERR, "%s Invalid TableID %d\n", __func__, table_id);
+		return;
+	}
+	if (index < 0 || index >= tbl->sdata.depth) {
+		filter_dbg(FERR, "%s Invalid Index %d Max Limit %d\n",
+			   __func__, index, tbl->sdata.depth);
+		return;
+	}
+
+	__clear_bit(index, tbl->ddata[node].bitmap);
+}
diff --git a/drivers/net/ethernet/cavium/thunder/tbl_access.h b/drivers/net/ethernet/cavium/thunder/tbl_access.h
new file mode 100644
index 0000000..c098410
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/tbl_access.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#ifndef __TBL_ACCESS_H__
+#define __TBL_ACCESS_H__
+
+#define TNS_MAX_TABLE	8
+
+enum {
+	TNS_TBL_TYPE_DT,
+	TNS_TBL_TYPE_HT,
+	TNS_TBL_TYPE_TT,
+	TNS_TBL_TYPE_MAX
+};
+
+struct table_static_s {
+	u8 tbl_type;
+	u8 tbl_id;
+	u8 valid;
+	u8 rsvd;
+	u16 key_size;
+	u16 data_size;
+	u16 data_width;
+	u16 key_width;
+	u32 depth;
+	u64 key_base_addr;
+	u64 data_base_addr;
+	u8 tbl_name[32];
+};
+
+struct table_dynamic_s {
+	unsigned long *bitmap;
+};
+
+struct tns_table_s {
+	struct table_static_s sdata;
+	struct table_dynamic_s ddata[MAX_NUMNODES];
+};
+
+enum {
+	MAC_FILTER_TABLE = 102,
+	VLAN_FILTER_TABLE = 103,
+	MAC_EVIF_TABLE = 140,
+	VLAN_EVIF_TABLE = 201,
+	PORT_CONFIG_TABLE = 202,
+	TABLE_ID_END
+};
+
+extern struct tns_table_s	tbl_info[TNS_MAX_TABLE];
+
+struct filter_keymask_s {
+	u8 is_valid;
+	u64 key_value;
+};
+
+#endif /* __TBL_ACCESS_H__ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 7/7] Get notifications from PF driver and configure filter block based on request data
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
                   ` (5 preceding siblings ...)
  2016-12-21  8:46 ` [RFC PATCH 6/7] HW Filter Table access API's Satha Koteswara Rao
@ 2016-12-21  8:46 ` Satha Koteswara Rao
  2016-12-21 12:03 ` [RFC PATCH 0/7] ThunderX Embedded switch support Sunil Kovvuri
  7 siblings, 0 replies; 19+ messages in thread
From: Satha Koteswara Rao @ 2016-12-21  8:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: sgoutham, rric, davem, david.daney, rvatsavayi, derek.chickles,
	satha.rao, philip.romanov, netdev, linux-arm-kernel

---
 drivers/net/ethernet/cavium/thunder/pf_filter.c | 1678 +++++++++++++++++++++++
 1 file changed, 1678 insertions(+)
 create mode 100644 drivers/net/ethernet/cavium/thunder/pf_filter.c

diff --git a/drivers/net/ethernet/cavium/thunder/pf_filter.c b/drivers/net/ethernet/cavium/thunder/pf_filter.c
new file mode 100644
index 0000000..5a04da6
--- /dev/null
+++ b/drivers/net/ethernet/cavium/thunder/pf_filter.c
@@ -0,0 +1,1678 @@
+/*
+ * Copyright (C) 2015 Cavium, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/version.h>
+#include <linux/proc_fs.h>
+#include <linux/device.h>
+#include <linux/mman.h>
+#include <linux/uaccess.h>
+#include <linux/delay.h>
+#include <linux/cdev.h>
+#include <linux/err.h>
+#include <linux/device.h>
+#include "pf_globals.h"
+#include "pf_locals.h"
+#include "nic.h"
+
+u32 intr_to_ingressgrp[MAX_NUMNODES][TNS_MAC_FILTER_MAX_SYS_PORTS];
+struct vf_register_s vf_reg_data[MAX_NUMNODES][TNS_MAX_VF];
+struct ing_grp_gblvif ingressgrp_gblvif[MAX_NUMNODES][TNS_MAX_INGRESS_GROUP];
+
+u32 macfilter_freeindex[MAX_NUMNODES];
+u32 vlanfilter_freeindex[MAX_NUMNODES];
+
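+/* Pre-check whether a MAC or VLAN registration request from a VF can still
+ * be satisfied. MAC (UC/MC) requests fail once the global filter table is
+ * full or the VF already sits in multicast promiscuous mode, and a VF that
+ * hits its per-VF MAC limit is moved into multicast promiscuous mode here.
+ * VLAN requests fail at the per-VF limit; when the global VLAN table is
+ * full they only succeed if the VLAN already exists for this ingress group.
+ */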
+int tns_filter_valid_entry(int node, int req_type, int vf, int vlan)
+{
+	if (req_type == NIC_MBOX_MSG_UC_MC) {
+		if (vf_reg_data[node][vf].vf_in_mcast_promis ||
+		    (macfilter_freeindex[node] >= TNS_MAC_FILTER_MAX_INDEX))
+			return TNS_ERR_MAX_LIMIT;
+		if (vf_reg_data[node][vf].filter_count >= TNS_MAX_MAC_PER_VF) {
+			tns_enable_mcast_promis(node, vf);
+			vf_reg_data[node][vf].vf_in_mcast_promis = 1;
+			return TNS_ERR_MAX_LIMIT;
+		}
+	} else if (req_type == NIC_MBOX_MSG_VLAN ||
+		   req_type == NIC_MBOX_MSG_ADMIN_VLAN) {
+		if (vf_reg_data[node][vf].vlan_count >= TNS_MAX_VLAN_PER_VF)
+			return TNS_ERR_MAX_LIMIT;
+
+		if (vlanfilter_freeindex[node] >= TNS_VLAN_FILTER_MAX_INDEX) {
+			int ret;
+			struct vlan_filter_entry tbl_entry;
+			int vlan_tbl_idx = -1;
+
+			tbl_entry.key.is_valid = 1;
+			tbl_entry.key.key_type.key_value  = 0x0ull;
+			tbl_entry.mask.key_type.key_value = ~0x0ull;
+			tbl_entry.key.key_type.s.ingress_grp =
+				intr_to_ingressgrp[node][vf];
+			tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+			tbl_entry.key.key_type.s.vlan = vlan;
+			tbl_entry.mask.key_type.s.vlan = 0x0;
+
+			ret = filter_tbl_lookup(node, VLAN_FILTER_TABLE,
+						&tbl_entry, &vlan_tbl_idx);
+			if (ret || vlan_tbl_idx == -1)
+				return TNS_ERR_MAX_LIMIT;
+		}
+	} else {
+		filter_dbg(FERR, "Invalid Request %d VF %d\n", req_type, vf);
+	}
+
+	return TNS_NO_ERR;
+}
+
+int dump_port_cfg_etry(struct itt_entry_s *port_cfg_entry)
+{
+	filter_dbg(FINFO, "PortConfig Entry\n");
+	filter_dbg(FINFO, "pkt_dir:			0x%x\n",
+		   port_cfg_entry->pkt_dir);
+	filter_dbg(FINFO, "is_admin_vlan_enabled:	0x%x\n",
+		   port_cfg_entry->is_admin_vlan_enabled);
+	filter_dbg(FINFO, "default_evif:		0x%x\n",
+		   port_cfg_entry->default_evif);
+	filter_dbg(FINFO, "admin_vlan:			0x%x\n",
+		   port_cfg_entry->admin_vlan);
+
+	return TNS_NO_ERR;
+}
+
+int dump_evif_entry(struct evif_entry *evif_dat)
+{
+	filter_dbg(FINFO, "EVIF Entry\n");
+	filter_dbg(FINFO, "prt_bmap_136_73: 0x%llx\n",
+		   evif_dat->prt_bmap_136_73);
+	filter_dbg(FINFO, "prt_bmap_72_9:   0x%llx\n",
+		   evif_dat->prt_bmap_72_9);
+	filter_dbg(FINFO, "prt_bmap_8:      0x%x\n", evif_dat->prt_bmap_8);
+	filter_dbg(FINFO, "mre_ptr:         0x%x\n", evif_dat->mre_ptr);
+	filter_dbg(FINFO, "insert_ptr2:     0x%x\n", evif_dat->insert_ptr2);
+	filter_dbg(FINFO, "insert_ptr1:     0x%x\n", evif_dat->insert_ptr1);
+	filter_dbg(FINFO, "insert_ptr0:     0x%x\n", evif_dat->insert_ptr0);
+	filter_dbg(FINFO, "data31_0:        0x%x\n", evif_dat->data31_0);
+	filter_dbg(FINFO, "rewrite_ptr1:    0x%x\n", evif_dat->rewrite_ptr1);
+	filter_dbg(FINFO, "rewrite_ptr0:    0x%x\n", evif_dat->rewrite_ptr0);
+	filter_dbg(FINFO, "prt_bmap7_0:     0x%x\n", evif_dat->prt_bmap7_0);
+	filter_dbg(FINFO, "q_mirror_en:     0x%x\n", evif_dat->q_mirror_en);
+	filter_dbg(FINFO, "mirror_en:       0x%x\n", evif_dat->mirror_en);
+	filter_dbg(FINFO, "mtu_prf:         0x%x\n", evif_dat->mtu_prf);
+	filter_dbg(FINFO, "truncate:        0x%x\n", evif_dat->truncate);
+	filter_dbg(FINFO, "rsp_type:        0x%x\n", evif_dat->rsp_type);
+
+	return TNS_NO_ERR;
+}
+
+static inline int validate_port(int port_num)
+{
+	if (port_num < 0 || port_num >= TNS_MAC_FILTER_MAX_SYS_PORTS) {
+		filter_dbg(FERR, "%s Invalid Port: %d (Valid range 0-136)\n",
+			   __func__, port_num);
+		return TNS_ERR_WRONG_PORT_NUMBER;
+	}
+	return TNS_NO_ERR;
+}
+
+int enable_port(int port_num, struct evif_entry *tbl_entry)
+{
+	s64 port_base;
+
+	if (validate_port(port_num))
+		return TNS_ERR_WRONG_PORT_NUMBER;
+
+	if (port_num < 8) {
+		tbl_entry->prt_bmap7_0 = tbl_entry->prt_bmap7_0 |
+					 (0x1 << port_num);
+	} else if (port_num == 8) {
+		tbl_entry->prt_bmap_8 = 1;
+	} else if (port_num <= 72) {
+		port_base = port_num - 9;
+		tbl_entry->prt_bmap_72_9 = tbl_entry->prt_bmap_72_9 |
+						(0x1ull << port_base);
+	} else if (port_num <= TNS_MAC_FILTER_MAX_SYS_PORTS) {
+		port_base = port_num - 73;
+		tbl_entry->prt_bmap_136_73 = tbl_entry->prt_bmap_136_73 |
+						(0x1ull << port_base);
+	}
+
+	return TNS_NO_ERR;
+}
+
+int disable_port(int port_num, struct evif_entry *tbl_entry)
+{
+	s64 port_base;
+
+	if (validate_port(port_num))
+		return TNS_ERR_WRONG_PORT_NUMBER;
+
+	if (port_num < 8) {
+		tbl_entry->prt_bmap7_0 = tbl_entry->prt_bmap7_0 &
+					 ~(0x1 << port_num);
+	} else if (port_num == 8) {
+		tbl_entry->prt_bmap_8 = 0;
+	} else if (port_num <= 72) {
+		port_base = port_num - 9;
+		tbl_entry->prt_bmap_72_9 = tbl_entry->prt_bmap_72_9 &
+						~(0x1ull << port_base);
+	} else if (port_num <= TNS_MAC_FILTER_MAX_SYS_PORTS) {
+		port_base = port_num - 73;
+		tbl_entry->prt_bmap_136_73 = tbl_entry->prt_bmap_136_73 &
+						~(0x1ull << port_base);
+	}
+
+	return TNS_NO_ERR;
+}
+
+int disable_all_ports(struct evif_entry *tbl_entry)
+{
+	tbl_entry->prt_bmap_136_73 = 0x0ull;
+	tbl_entry->prt_bmap_72_9 = 0x0ull;
+	tbl_entry->prt_bmap_8 = 0x0;
+	tbl_entry->prt_bmap7_0 = 0x0;
+
+	return TNS_NO_ERR;
+}
+
+int is_vlan_port_enabled(int vf, vlan_port_bitmap_t vlan_vif)
+{
+	int port_base = (vf / 8), port_offset = (vf % 8);
+
+	if (validate_port(vf))
+		return TNS_ERR_WRONG_PORT_NUMBER;
+
+	if (vlan_vif[port_base] & (1 << port_offset))
+		return 1;
+
+	return 0;
+}
+
+int enable_vlan_port(int port_num, vlan_port_bitmap_t vlan_vif)
+{
+	int port_base = (port_num / 8), port_offset = (port_num % 8);
+
+	if (validate_port(port_num))
+		return TNS_ERR_WRONG_PORT_NUMBER;
+
+	vlan_vif[port_base] = vlan_vif[port_base] | (1 << port_offset);
+
+	return TNS_NO_ERR;
+}
+
+int disable_vlan_port(int port_num, vlan_port_bitmap_t vlan_vif)
+{
+	int port_base = (port_num / 8), port_offset = (port_num % 8);
+
+	if (validate_port(port_num))
+		return TNS_ERR_WRONG_PORT_NUMBER;
+
+	vlan_vif[port_base] = vlan_vif[port_base] & ~(1 << port_offset);
+
+	return TNS_NO_ERR;
+}
+
+int disable_vlan_vif_ports(vlan_port_bitmap_t vlan_vif)
+{
+	memset((void *)(&vlan_vif[0]), 0x0, sizeof(vlan_port_bitmap_t));
+
+	return TNS_NO_ERR;
+}
+
+int dump_vlan_vif_portss(vlan_port_bitmap_t vlan_vif)
+{
+	int i;
+
+	filter_dbg(FINFO, "Port Bitmap (0...135) 0x ");
+	for (i = 0; i < (TNS_MAC_FILTER_MAX_SYS_PORTS / 8); i++)
+		filter_dbg(FINFO, "%x ", vlan_vif[i]);
+	filter_dbg(FINFO, "\n");
+
+	return TNS_NO_ERR;
+}
+
+static inline int getingress_grp(int node, int vf)
+{
+	int i;
+
+	for (i = 0; i < TNS_MAX_INGRESS_GROUP; i++) {
+		if (ingressgrp_gblvif[node][i].is_valid &&
+		    (ingressgrp_gblvif[node][i].ingress_grp ==
+		     intr_to_ingressgrp[node][vf]))
+			return i;
+	}
+	return -1;
+}
+
+inline int vf_bcast_vif(int node, int vf, int *bcast_vif)
+{
+	int ing_grp = getingress_grp(node, vf);
+
+	if (ing_grp == -1)
+		return TNS_ERR_ENTRY_NOT_FOUND;
+
+	*bcast_vif = ingressgrp_gblvif[node][ing_grp].bcast_vif;
+
+	return TNS_NO_ERR;
+}
+
+inline int vf_mcast_vif(int node, int vf, int *mcast_vif)
+{
+	int ing_grp = getingress_grp(node, vf);
+
+	if (ing_grp == -1)
+		return TNS_ERR_ENTRY_NOT_FOUND;
+
+	*mcast_vif = ingressgrp_gblvif[node][ing_grp].mcast_vif;
+
+	return TNS_NO_ERR;
+}
+
+inline int vf_pfvf_id(int node, int vf, int *pfvf)
+{
+	int ing_grp = getingress_grp(node, vf);
+
+	if (ing_grp == -1)
+		return TNS_ERR_ENTRY_NOT_FOUND;
+
+	*pfvf = ingressgrp_gblvif[node][ing_grp].pf_vf;
+
+	return TNS_NO_ERR;
+}
+
+bool is_vf_registered_entry(int node, int vf, int index)
+{
+	int i;
+
+	for (i = 0; i < vf_reg_data[node][vf].filter_count; i++) {
+		if (vf_reg_data[node][vf].filter_index[i] == index)
+			return true;
+	}
+
+	return false;
+}
+
+bool is_vlan_registered(int node, int vf, int vlan)
+{
+	int i;
+
+	for (i = 0; i < vf_reg_data[node][vf].vlan_count; i++) {
+		if (vf_reg_data[node][vf].vlan[i] == vlan)
+			return true;
+	}
+
+	return false;
+}
+
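+/* Check whether an EVIF port bitmap is effectively empty: ports that are
+ * set only because their VF is in (multicast) promiscuous mode, and the
+ * LMAC port itself, are ignored. Returns 1 when no other port remains.
+ */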
+int is_empty_vif(int node, int vf, struct evif_entry *evif_dat)
+{
+	int i;
+
+	for (i = 0; i < TNS_MAX_VF; i++)
+		if (intr_to_ingressgrp[node][vf] ==
+		    intr_to_ingressgrp[node][i] &&
+		    (vf_reg_data[node][i].vf_in_promis ||
+		     vf_reg_data[node][i].vf_in_mcast_promis))
+			disable_port(i, evif_dat);
+	disable_port(intr_to_ingressgrp[node][vf], evif_dat);
+
+	if (evif_dat->prt_bmap7_0 || evif_dat->prt_bmap_8 ||
+	    evif_dat->prt_bmap_72_9 || evif_dat->prt_bmap_136_73)
+		return 0;
+
+	return 1;
+}
+
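+/* Same check for a VLAN port bitmap: the LMAC port and a promiscuous pfVf
+ * interface that never explicitly registered this VLAN are ignored, and 1
+ * is returned when no other port still references the VLAN.
+ */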
+int is_empty_vlan(int node, int vf, int vlan, vlan_port_bitmap_t vlan_vif)
+{
+	int i, pf_vf;
+	int ret;
+
+	ret = vf_pfvf_id(node, vf, &pf_vf);
+	if (ret)
+		return ret;
+
+	if (vf_reg_data[node][pf_vf].vf_in_promis &&
+	    !is_vlan_registered(node, pf_vf, vlan))
+		disable_vlan_port(pf_vf, vlan_vif);
+
+	disable_vlan_port(intr_to_ingressgrp[node][vf], vlan_vif);
+	for (i = 0; i < sizeof(vlan_port_bitmap_t); i++)
+		if (vlan_vif[i])
+			break;
+
+	if (i == sizeof(vlan_port_bitmap_t))
+		return 1;
+
+	return 0;
+}
+
+int filter_tbl_lookup(int node, int table_id, void *entry, int *index)
+{
+	switch (table_id) {
+	case MAC_FILTER_TABLE:
+	{
+		struct mac_filter_entry tbl_entry;
+		struct mac_filter_entry *inp = (struct mac_filter_entry *)entry;
+		int i;
+		int ret;
+
+		for (i = 0; i < TNS_MAC_FILTER_MAX_INDEX; i++) {
+			ret = tbl_read(node, MAC_FILTER_TABLE, i,
+				       &tbl_entry.key, &tbl_entry.mask,
+				       &tbl_entry.data);
+
+			if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+				return ret;
+			else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+				continue;
+
+			if ((tbl_entry.key.key_type.key_value ==
+			     inp->key.key_type.key_value) &&
+			    (tbl_entry.mask.key_type.key_value ==
+			     inp->mask.key_type.key_value)) {
+				//Found an Entry
+				*index = i;
+				inp->data.data = tbl_entry.data.data;
+				return TNS_NO_ERR;
+			}
+			//Unable to find entry
+			*index = -1;
+		}
+		break;
+	}
+	case VLAN_FILTER_TABLE:
+	{
+		struct vlan_filter_entry tbl_entry;
+		struct vlan_filter_entry *inp_entry;
+		int i;
+		int ret;
+
+		inp_entry = (struct vlan_filter_entry *)entry;
+		for (i = 1; i < TNS_VLAN_FILTER_MAX_INDEX; i++) {
+			ret = tbl_read(node, VLAN_FILTER_TABLE, i,
+				       &tbl_entry.key, &tbl_entry.mask,
+					&tbl_entry.data);
+			if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+				return ret;
+			else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+				continue;
+
+			if ((tbl_entry.key.key_type.key_value ==
+			     inp_entry->key.key_type.key_value) &&
+			    (tbl_entry.mask.key_type.key_value ==
+			     inp_entry->mask.key_type.key_value)) {
+				//Found an Entry
+				*index = i;
+				inp_entry->data.data = tbl_entry.data.data;
+				return TNS_NO_ERR;
+			}
+		}
+		//Unable to find entry
+		*index = -1;
+		break;
+	}
+	default:
+		filter_dbg(FERR, "Wrong Table ID: %d\n", table_id);
+		return TNS_ERR_INVALID_TBL_ID;
+	}
+
+	return TNS_NO_ERR;
+}
+
+int tns_enable_mcast_promis(int node, int vf)
+{
+	int mcast_vif;
+	int j;
+	int ret;
+	struct evif_entry evif_dat;
+	int ing_grp = getingress_grp(node, vf);
+	int pports;
+
+	if (ing_grp == -1)
+		return TNS_ERROR_INVALID_ARG;
+
+	ret = vf_mcast_vif(node, vf, &mcast_vif);
+	if (ret) {
+		filter_dbg(FERR, "Error: Unable to get multicast VIF\n");
+		return ret;
+	}
+
+	ret = tbl_read(node, MAC_EVIF_TABLE, mcast_vif, NULL, NULL, &evif_dat);
+	if (ret)
+		return ret;
+
+	enable_port(vf, &evif_dat);
+	dump_evif_entry(&evif_dat);
+	ret = tbl_write(node, MAC_EVIF_TABLE, mcast_vif, NULL, NULL,
+			(void *)&evif_dat);
+	if (ret)
+		return ret;
+
+	pports = ingressgrp_gblvif[node][ing_grp].valid_mcast_promis_ports;
+	//Enable VF in multicast MAC promiscuous group
+	for (j = 0; j < pports; j++) {
+		if (MCAST_PROMIS(node, ing_grp, j) == vf) {
+			filter_dbg(FDEBUG, "VF found in MCAST promis group\n");
+			return TNS_NO_ERR;
+		}
+	}
+	MCAST_PROMIS(node, ing_grp, pports) = vf;
+	ingressgrp_gblvif[node][ing_grp].valid_mcast_promis_ports += 1;
+	filter_dbg(FINFO, "VF %d permanently entered into MCAST promisc mode\n",
+		   vf);
+
+	return TNS_NO_ERR;
+}
+
+int remove_vf_from_regi_mcast_vif(int node, int vf)
+{
+	int ret;
+	int mcast_vif;
+	struct evif_entry evif_dat;
+
+	ret = vf_mcast_vif(node, vf, &mcast_vif);
+	if (ret) {
+		filter_dbg(FERR, "Error: Unable to get multicast VIF\n");
+		return ret;
+	}
+
+	ret = tbl_read(node, MAC_EVIF_TABLE, mcast_vif, NULL, NULL, &evif_dat);
+	if (ret)
+		return ret;
+	disable_port(vf, &evif_dat);
+	dump_evif_entry(&evif_dat);
+	ret = tbl_write(node, MAC_EVIF_TABLE, mcast_vif, NULL, NULL,
+			(void *)&evif_dat);
+	if (ret)
+		return ret;
+
+	return TNS_NO_ERR;
+}
+
+int remove_vf_from_mcast_promis_grp(int node, int vf)
+{
+	int j, k;
+	int ing_grp = getingress_grp(node, vf);
+	int pports;
+
+	if (ing_grp == -1)
+		return TNS_ERROR_INVALID_ARG;
+
+	pports = ingressgrp_gblvif[node][ing_grp].valid_mcast_promis_ports;
+	for (j = 0; j < pports; j++) {
+		if (MCAST_PROMIS(node, ing_grp, j) != vf)
+			continue;
+
+		filter_dbg(FDEBUG, "VF found in MCAST promis group %d\n",
+			   intr_to_ingressgrp[node][vf]);
+		for (k = j; k < (pports - 1); k++)
+			MCAST_PROMIS(node, ing_grp, k) =
+			 MCAST_PROMIS(node, ing_grp, (k + 1));
+		VALID_MCAST_PROMIS(node, ing_grp) -= 1;
+		remove_vf_from_regi_mcast_vif(node, vf);
+		return TNS_NO_ERR;
+	}
+	filter_dbg(FDEBUG, "VF %d not found in multicast promiscuous group\n",
+		   vf);
+
+	return TNS_ERR_ENTRY_NOT_FOUND;
+}
+
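+/* Track the MAC filter indices registered by a VF: action == 1 appends
+ * mac_idx to the VF's list, action == 0 removes it. Whenever the count is
+ * at or below the per-VF limit the VF is taken out of multicast
+ * promiscuous mode.
+ */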
+int registered_vf_filter_index(int node, int vf, int mac_idx, int action)
+{
+	int f_count = vf_reg_data[node][vf].filter_count, j;
+
+	if (!action) {
+		for (j = 0; j < f_count; j++) {
+			if (vf_reg_data[node][vf].filter_index[j] == mac_idx) {
+				int i, k = j + 1;
+
+				for (i = j; i < f_count - 1; i++, k++)
+					vf_reg_data[node][vf].filter_index[i] =
+					 vf_reg_data[node][vf].filter_index[k];
+				break;
+			}
+		}
+		if (j == vf_reg_data[node][vf].filter_count)
+			filter_dbg(FDEBUG, "Index not in VF filter list\n");
+		else
+			vf_reg_data[node][vf].filter_count -= 1;
+	} else {
+		vf_reg_data[node][vf].filter_index[f_count] = mac_idx;
+		vf_reg_data[node][vf].filter_count += 1;
+		filter_dbg(FINFO, "%s Added at Filter count %d Index %d\n",
+			   __func__, vf_reg_data[node][vf].filter_count,
+			   mac_idx);
+	}
+
+	/* Each VF may register at most 11 filter entries
+	 * (unicast and multicast combined).
+	 */
+	if (vf_reg_data[node][vf].filter_count <= TNS_MAX_MAC_PER_VF) {
+		vf_reg_data[node][vf].vf_in_mcast_promis = 0;
+		if (!vf_reg_data[node][vf].vf_in_promis)
+			remove_vf_from_mcast_promis_grp(node, vf);
+		filter_dbg(FINFO, "VF %d removed from MCAST promis mode\n", vf);
+	}
+
+	return TNS_NO_ERR;
+}
+
+int add_mac_filter_mcast_entry(int node, int table_id, int vf, int mac_idx,
+			       void *mac_DA)
+{
+	int ret;
+	struct mac_filter_entry tbl_entry;
+	struct mac_filter_keymask_s key, mask;
+	union mac_filter_data_s data;
+	int vif = -1, k, j;
+	struct evif_entry evif_dat;
+	int ing_grp = getingress_grp(node, vf);
+
+	if (ing_grp == -1)
+		return TNS_ERROR_INVALID_ARG;
+
+	if (vf_reg_data[node][vf].filter_count >= TNS_MAX_MAC_PER_VF) {
+		if (!vf_reg_data[node][vf].vf_in_mcast_promis) {
+			tns_enable_mcast_promis(node, vf);
+			vf_reg_data[node][vf].vf_in_mcast_promis = 1;
+		}
+		return TNS_ERR_MAX_LIMIT;
+	}
+
+	tbl_entry.key.is_valid = 1;
+	tbl_entry.key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+	for (j = 5, k = 0; j >= 0; j--, k++) {
+		tbl_entry.key.key_type.s.mac_DA[k] = ((u8 *)mac_DA)[j];
+		tbl_entry.mask.key_type.s.mac_DA[k] = 0x0;
+	}
+	ret = filter_tbl_lookup(node, MAC_FILTER_TABLE, &tbl_entry, &mac_idx);
+	if (ret)
+		return ret;
+	if (mac_idx != -1 &&
+	    !(mac_idx >= (TNS_MAC_FILTER_MAX_INDEX - TNS_MAX_INGRESS_GROUP) &&
+	      mac_idx < TNS_MAC_FILTER_MAX_INDEX)) {
+		int evif = tbl_entry.data.s.evif;
+
+		filter_dbg(FINFO, "Multicast MAC found at %d evif: %d\n",
+			   mac_idx, evif);
+		ret = tbl_read(node, MAC_EVIF_TABLE, evif, NULL, NULL,
+			       &evif_dat);
+		if (ret)
+			return ret;
+		if (is_vf_registered_entry(node, vf, mac_idx)) {
+			//No Need to register again
+			return TNS_NO_ERR;
+		}
+		enable_port(vf, &evif_dat);
+		ret = tbl_write(node, MAC_EVIF_TABLE, evif, NULL, NULL,
+				(void *)&evif_dat);
+		if (ret)
+			return ret;
+		registered_vf_filter_index(node, vf, mac_idx, 1);
+		dump_evif_entry(&evif_dat);
+		return TNS_NO_ERR;
+	}
+
+	//New multicast MAC registration
+	if (alloc_table_index(node, MAC_FILTER_TABLE, &mac_idx)) {
+		filter_dbg(FERR, "%s Filter Table Full\n", __func__);
+		return TNS_ERR_MAX_LIMIT;
+	}
+	key.is_valid = 1;
+	mask.is_valid = 1;
+	key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	mask.key_type.s.ingress_grp = 0;
+	for (j = 5, k = 0; j >= 0; j--, k++) {
+		key.key_type.s.mac_DA[k] = ((u8 *)mac_DA)[j];
+		mask.key_type.s.mac_DA[k] = 0x0;
+	}
+	if (alloc_table_index(node, MAC_EVIF_TABLE, &vif)) {
+		filter_dbg(FERR, "%s EVIF Table Full\n", __func__);
+		return TNS_ERR_MAX_LIMIT;
+	}
+	evif_dat.insert_ptr0 = 0xFFFF;
+	evif_dat.insert_ptr1 = 0xFFFF;
+	evif_dat.insert_ptr2 = 0xFFFF;
+	evif_dat.mre_ptr = 0x7FFF;
+	evif_dat.rewrite_ptr0 = 0xFF;
+	evif_dat.rewrite_ptr1 = 0xFF;
+	evif_dat.data31_0 = 0x0;
+	evif_dat.q_mirror_en = 0x0;
+	evif_dat.mirror_en = 0x0;
+	evif_dat.mtu_prf = 0x0;
+	evif_dat.truncate = 0x0;
+	evif_dat.rsp_type = 0x3;
+	disable_all_ports(&evif_dat);
+	for (j = 0; j < VALID_MCAST_PROMIS(node, ing_grp); j++)
+		enable_port(MCAST_PROMIS(node, ing_grp, j), &evif_dat);
+	enable_port(vf, &evif_dat);
+	ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL, NULL, &evif_dat);
+	if (ret)
+		return ret;
+	data.data = 0x0ull;
+	data.s.evif = vif;
+	ret = tbl_write(node, MAC_FILTER_TABLE, mac_idx, &key, &mask, &data);
+	if (ret)
+		return ret;
+	macfilter_freeindex[node] += 1;
+	registered_vf_filter_index(node, vf, mac_idx, 1);
+
+	return TNS_NO_ERR;
+}
+
+int del_mac_filter_entry(int node, int table_id, int vf, int mac_idx,
+			 void *mac_DA, int addr_type)
+{
+	int ret;
+	struct mac_filter_entry tbl_entry;
+	int old_mac_idx = -1, vif;
+	int j, k;
+
+	tbl_entry.key.is_valid = 1;
+	tbl_entry.key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+
+	for (j = 5, k = 0; j >= 0; j--, k++) {
+		tbl_entry.key.key_type.s.mac_DA[k] = ((u8 *)mac_DA)[j];
+		tbl_entry.mask.key_type.s.mac_DA[k] = 0x0;
+	}
+
+	ret = filter_tbl_lookup(node, MAC_FILTER_TABLE, (void *)&tbl_entry,
+				&old_mac_idx);
+	if (ret)
+		return ret;
+
+	if (old_mac_idx == -1) {
+		filter_dbg(FDEBUG, "Invalid Delete, entry not found\n");
+		return TNS_ERR_ENTRY_NOT_FOUND;
+	}
+	if (mac_idx != -1 && mac_idx != old_mac_idx) {
+		filter_dbg(FDEBUG, "Found and requested are mismatched\n");
+		return TNS_ERR_ENTRY_NOT_FOUND;
+	}
+	if (old_mac_idx == vf) {
+		filter_dbg(FDEBUG, "Primary Unicast MAC delete not allowed\n");
+		return TNS_ERR_MAC_FILTER_INVALID_ENTRY;
+	}
+
+	//Remove MAC Filter entry from VF register MAC filter list
+	registered_vf_filter_index(node, vf, old_mac_idx, 0);
+
+	//Remove VIF entry (output portmask) related to this filter entry
+	vif = tbl_entry.data.s.evif;
+	if (addr_type) {
+		struct evif_entry evif_dat;
+
+		ret = tbl_read(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+			       &evif_dat);
+		if (ret)
+			return ret;
+
+		disable_port(vf, &evif_dat);
+		ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+				&evif_dat);
+		if (ret)
+			return ret;
+
+		dump_evif_entry(&evif_dat);
+		//In case of multicast MAC check for empty portmask
+		if (!is_empty_vif(node, vf, &evif_dat))
+			return TNS_NO_ERR;
+	}
+	invalidate_table_entry(node, MAC_FILTER_TABLE, old_mac_idx);
+	free_table_index(node, MAC_FILTER_TABLE, old_mac_idx);
+	free_table_index(node, MAC_EVIF_TABLE, vif);
+	macfilter_freeindex[node] -= 1;
+
+	return TNS_NO_ERR;
+}
+
+int add_mac_filter_entry(int node, int table_id, int vf, int mac_idx,
+			 void *mac_DA)
+{
+	int ret;
+	struct mac_filter_entry tbl_entry;
+	int old_mac_idx = -1;
+	int j, k;
+	struct mac_filter_keymask_s key, mask;
+	union mac_filter_data_s data;
+
+	/* Each VF may register at most 11 filter entries
+	 * (unicast and multicast combined).
+	 */
+	if (mac_idx != vf &&
+	    vf_reg_data[node][vf].filter_count >= TNS_MAX_MAC_PER_VF) {
+		if (!vf_reg_data[node][vf].vf_in_mcast_promis) {
+			tns_enable_mcast_promis(node, vf);
+			vf_reg_data[node][vf].vf_in_mcast_promis = 1;
+		}
+		return TNS_ERR_MAX_LIMIT;
+	}
+
+	//Adding Multicast MAC will be handled differently
+	if ((((u8 *)mac_DA)[0]) & 0x1) {
+		filter_dbg(FDEBUG, "%s It is multicast MAC entry\n", __func__);
+		return add_mac_filter_mcast_entry(node, table_id, vf, mac_idx,
+						  mac_DA);
+	}
+
+	tbl_entry.key.is_valid = 1;
+	tbl_entry.key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+	for (j = 5, k = 0; j >= 0; j--, k++) {
+		tbl_entry.key.key_type.s.mac_DA[k] = ((u8 *)mac_DA)[j];
+		tbl_entry.mask.key_type.s.mac_DA[k] = 0x0;
+	}
+	ret = filter_tbl_lookup(node, MAC_FILTER_TABLE, (void *)&tbl_entry,
+				&old_mac_idx);
+	if (ret)
+		return ret;
+	if (old_mac_idx != -1) {
+		filter_dbg(FINFO, "Duplicate entry found at %d\n", old_mac_idx);
+		if (tbl_entry.data.s.evif != vf) {
+			filter_dbg(FDEBUG, "Registered VF %d Requested VF %d\n",
+				   (int)tbl_entry.data.s.evif, (int)vf);
+			return TNS_ERR_DUPLICATE_MAC;
+		}
+		return TNS_NO_ERR;
+	}
+	if (alloc_table_index(node, MAC_FILTER_TABLE, &mac_idx)) {
+		filter_dbg(FERR, "(%s) Filter Table Full\n", __func__);
+		return TNS_ERR_MAX_LIMIT;
+	}
+	if (mac_idx == -1) {
+		filter_dbg(FERR, "!!!ERROR!!! reached maximum limit\n");
+		return TNS_ERR_MAX_LIMIT;
+	}
+	key.is_valid = 1;
+	mask.is_valid = 1;
+	key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	mask.key_type.s.ingress_grp = 0;
+	for (j = 5, k = 0; j >= 0; j--, k++) {
+		key.key_type.s.mac_DA[k] = ((u8 *)mac_DA)[j];
+		mask.key_type.s.mac_DA[k] = 0x0;
+	}
+	filter_dbg(FINFO, "VF id: %d with ingress_grp: %d ", vf,
+		   key.key_type.s.ingress_grp);
+	filter_dbg(FINFO, "MAC %x:%x:%x:%x:%x:%x Added at Index: %d\n",
+		   ((u8 *)mac_DA)[0], ((u8 *)mac_DA)[1],
+		   ((u8 *)mac_DA)[2], ((u8 *)mac_DA)[3],
+		   ((u8 *)mac_DA)[4], ((u8 *)mac_DA)[5], mac_idx);
+
+	data.data = 0x0ull;
+	data.s.evif = vf;
+	ret = tbl_write(node, MAC_FILTER_TABLE, mac_idx, &key, &mask, &data);
+	if (ret)
+		return ret;
+
+	if (mac_idx != vf) {
+		registered_vf_filter_index(node, vf, mac_idx, 1);
+		macfilter_freeindex[node] += 1;
+	}
+
+	return TNS_NO_ERR;
+}
+
+int vf_interface_up(int node, int tbl_id, int vf, void *mac_DA)
+{
+	int ret;
+
+	//Enable unicast MAC entry for this VF
+	ret = add_mac_filter_entry(node, tbl_id, vf, vf, mac_DA);
+	if (ret)
+		return ret;
+
+	return TNS_NO_ERR;
+}
+
+int del_vlan_entry(int node, int vf, int vlan, int vlanx)
+{
+	int ret;
+	struct vlan_filter_entry tbl_entry;
+	int vlan_tbl_idx = -1, i;
+	vlan_port_bitmap_t vlan_vif;
+	int vlan_cnt = vf_reg_data[node][vf].vlan_count;
+
+	tbl_entry.key.is_valid = 1;
+	tbl_entry.key.key_type.key_value  = 0x0ull;
+	tbl_entry.mask.key_type.key_value = 0xFFFFFFFFFFFFFFFFull;
+	tbl_entry.key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+	tbl_entry.key.key_type.s.vlan = vlan;
+	tbl_entry.mask.key_type.s.vlan = 0x0;
+
+	filter_dbg(FINFO, "%s VF %d with ingress_grp %d VLANID %d\n",
+		   __func__, vf, tbl_entry.key.key_type.s.ingress_grp,
+		   tbl_entry.key.key_type.s.vlan);
+
+	ret = filter_tbl_lookup(node, VLAN_FILTER_TABLE, &tbl_entry,
+				&vlan_tbl_idx);
+	if (ret)
+		return ret;
+
+	if (vlan_tbl_idx == -1) {
+		filter_dbg(FINFO, "VF %d VLAN %d filter not registered\n",
+			   vf, vlan);
+		return TNS_NO_ERR;
+	}
+
+	if (vlan_tbl_idx < 1 || vlan_tbl_idx >= TNS_VLAN_FILTER_MAX_INDEX) {
+		filter_dbg(FERR, "Invalid VLAN Idx: %d\n", vlan_tbl_idx);
+		return TNS_ERR_VLAN_FILTER_INVLAID_ENTRY;
+	}
+
+	vlanx = tbl_entry.data.s.filter_idx;
+	ret = tbl_read(node, VLAN_EVIF_TABLE, vlanx, NULL, NULL,
+		       (void *)(&vlan_vif[0]));
+	if (ret)
+		return ret;
+
+	disable_vlan_port(vf, vlan_vif);
+	ret = tbl_write(node, VLAN_EVIF_TABLE, vlanx, NULL, NULL,
+			(void *)(&vlan_vif[0]));
+	if (ret)
+		return ret;
+
+	for (i = 0; i < vlan_cnt; i++) {
+		if (vf_reg_data[node][vf].vlan[i] == vlan) {
+			int j;
+
+			for (j = i; j < vlan_cnt - 1; j++)
+				vf_reg_data[node][vf].vlan[j] =
+				 vf_reg_data[node][vf].vlan[j + 1];
+			vf_reg_data[node][vf].vlan_count -= 1;
+			break;
+		}
+	}
+	if (is_empty_vlan(node, vf, vlan, vlan_vif)) {
+		free_table_index(node, VLAN_FILTER_TABLE, vlanx);
+		vlanfilter_freeindex[node] -= 1;
+		invalidate_table_entry(node, VLAN_FILTER_TABLE, vlanx);
+	}
+
+	return TNS_NO_ERR;
+}
+
+int add_vlan_entry(int node, int vf, int vlan, int vlanx)
+{
+	int ret;
+	int pf_vf;
+	struct vlan_filter_entry tbl_entry;
+	int vlan_tbl_idx = -1;
+	vlan_port_bitmap_t vlan_vif;
+
+	if (vf_reg_data[node][vf].vlan_count >= TNS_MAX_VLAN_PER_VF) {
+		filter_dbg(FDEBUG, "Reached maximum limit per VF count: %d\n",
+			   vf_reg_data[node][vf].vlan_count);
+		return TNS_ERR_MAX_LIMIT;
+	}
+
+	tbl_entry.key.is_valid = 1;
+	tbl_entry.key.key_type.key_value  = 0x0ull;
+	tbl_entry.mask.key_type.key_value = 0xFFFFFFFFFFFFFFFFull;
+	tbl_entry.key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+	tbl_entry.key.key_type.s.vlan = vlan;
+	tbl_entry.mask.key_type.s.vlan = 0x0;
+
+	ret = filter_tbl_lookup(node, VLAN_FILTER_TABLE, &tbl_entry,
+				&vlan_tbl_idx);
+	if (ret)
+		return ret;
+
+	if (vlan_tbl_idx != -1) {
+		filter_dbg(FINFO, "Duplicate entry found at %d\n",
+			   vlan_tbl_idx);
+		if (vlan_tbl_idx < 1 ||
+		    vlan_tbl_idx >= TNS_VLAN_FILTER_MAX_INDEX) {
+			filter_dbg(FDEBUG, "Invalid VLAN Idx %d\n",
+				   vlan_tbl_idx);
+			return TNS_ERR_VLAN_FILTER_INVLAID_ENTRY;
+		}
+
+		vlanx = tbl_entry.data.s.filter_idx;
+		ret = tbl_read(node, VLAN_EVIF_TABLE, vlanx, NULL, NULL,
+			       (void *)(&vlan_vif[0]));
+		if (ret)
+			return ret;
+
+		enable_vlan_port(vf, vlan_vif);
+		ret = tbl_write(node, VLAN_EVIF_TABLE, vlanx, NULL, NULL,
+				(void *)(&vlan_vif[0]));
+		if (ret)
+			return ret;
+
+		vf_reg_data[node][vf].vlan[vf_reg_data[node][vf].vlan_count] =
+		 vlan;
+		vf_reg_data[node][vf].vlan_count += 1;
+
+		return TNS_NO_ERR;
+	}
+
+	if (alloc_table_index(node, VLAN_FILTER_TABLE, &vlanx)) {
+		filter_dbg(FDEBUG, "%s VLAN Filter Table Full\n", __func__);
+		return TNS_ERR_MAX_LIMIT;
+	}
+	disable_vlan_vif_ports(vlan_vif);
+	enable_vlan_port(vf, vlan_vif);
+	enable_vlan_port(intr_to_ingressgrp[node][vf], vlan_vif);
+	ret = vf_pfvf_id(node, vf, &pf_vf);
+
+	if (ret)
+		return ret;
+
+	if (vf_reg_data[node][pf_vf].vf_in_promis)
+		enable_vlan_port(pf_vf, vlan_vif);
+
+	dump_vlan_vif_portss(vlan_vif);
+	ret = tbl_write(node, VLAN_EVIF_TABLE, vlanx, NULL, NULL,
+			(void *)(&vlan_vif[0]));
+	if (ret)
+		return ret;
+
+	tbl_entry.key.is_valid = 1;
+	tbl_entry.key.key_type.s.ingress_grp = intr_to_ingressgrp[node][vf];
+	tbl_entry.key.key_type.s.vlan = vlan;
+	tbl_entry.key.key_type.s.reserved = 0x0;
+	tbl_entry.key.key_type.s.reserved1 = 0x0;
+	tbl_entry.mask.is_valid = 1;
+	tbl_entry.mask.key_type.s.ingress_grp = 0x0;
+	tbl_entry.mask.key_type.s.vlan = 0x0;
+	tbl_entry.mask.key_type.s.reserved = 0xF;
+	tbl_entry.mask.key_type.s.reserved1 = 0xFFFFFFFF;
+	tbl_entry.data.data = 0x0ull;
+	tbl_entry.data.s.filter_idx = vlanx;
+	ret = tbl_write(node, VLAN_FILTER_TABLE, vlanx, &tbl_entry.key,
+			&tbl_entry.mask, &tbl_entry.data);
+	if (ret)
+		return ret;
+
+	filter_dbg(FINFO, "VF %d with ingress_grp %d VLAN %d Added at %d\n",
+		   vf, tbl_entry.key.key_type.s.ingress_grp,
+		   tbl_entry.key.key_type.s.vlan, vlanx);
+
+	vlanfilter_freeindex[node] += 1;
+	vf_reg_data[node][vf].vlan[vf_reg_data[node][vf].vlan_count] = vlan;
+	vf_reg_data[node][vf].vlan_count += 1;
+
+	return TNS_NO_ERR;
+}
+
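+/* Promiscuous mode handling. A pfVf interface is added to every MAC and
+ * VLAN EVIF of its ingress group and becomes the LMAC's default EVIF, so
+ * it receives all traffic on the physical port, including otherwise
+ * dropped packets. An ordinary VF is only added to the multicast entries
+ * of its group (multicast promiscuous mode).
+ */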
+int enable_promiscuous_mode(int node, int vf)
+{
+	int ret = tns_enable_mcast_promis(node, vf);
+	int pf_vf;
+
+	if (ret)
+		return ret;
+
+	vf_reg_data[node][vf].vf_in_promis = 1;
+	ret = vf_pfvf_id(node, vf, &pf_vf);
+	if (ret)
+		return ret;
+
+	if (vf == pf_vf) {
+		//PFVF interface, enable full promiscuous mode
+		int i;
+		int vif = intr_to_ingressgrp[node][vf];
+		struct evif_entry evif_dat;
+		struct itt_entry_s port_cfg_entry;
+
+		for (i = 0; i < macfilter_freeindex[node]; i++) {
+			struct mac_filter_entry tbl_entry;
+
+			ret = tbl_read(node, MAC_FILTER_TABLE, i,
+				       &tbl_entry.key, &tbl_entry.mask,
+				       &tbl_entry.data);
+			if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+				return ret;
+			else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+				continue;
+
+			if (tbl_entry.key.key_type.s.ingress_grp ==
+			    intr_to_ingressgrp[node][vf]) {
+				int vif = tbl_entry.data.s.evif;
+				struct evif_entry evif_dat;
+
+				ret = tbl_read(node, MAC_EVIF_TABLE, vif, NULL,
+					       NULL, &evif_dat);
+				if (ret)
+					return ret;
+
+				enable_port(vf, &evif_dat);
+				dump_evif_entry(&evif_dat);
+				ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL,
+						NULL, (void *)&evif_dat);
+				if (ret)
+					return ret;
+			}
+		}
+		/* If the pfVf interface enters promiscuous mode, also
+		 * forward packets destined to the corresponding LMAC to it.
+		 */
+
+		ret = tbl_read(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+			       &evif_dat);
+		if (ret)
+			return ret;
+		enable_port(vf, &evif_dat);
+		dump_evif_entry(&evif_dat);
+		ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+				(void *)&evif_dat);
+		if (ret)
+			return ret;
+
+		/* Update default_evif of the LMAC from NULLVif to the pfVf
+		 * interface, so that the pfVf also sees all dropped packets.
+		 */
+		ret = tbl_read(node, PORT_CONFIG_TABLE,
+			       intr_to_ingressgrp[node][vf], NULL, NULL,
+			       &port_cfg_entry);
+		if (ret)
+			return ret;
+
+		port_cfg_entry.default_evif = vf;
+		ret = tbl_write(node, PORT_CONFIG_TABLE,
+				intr_to_ingressgrp[node][vf], NULL, NULL,
+				(void *)&port_cfg_entry);
+		if (ret)
+			return ret;
+
+		filter_dbg(FINFO, "%s Port %d pkt_dir %d defaultVif %d",
+			   __func__, vf, port_cfg_entry.pkt_dir,
+			   port_cfg_entry.default_evif);
+		filter_dbg(FINFO, " adminVlan %d %s\n",
+			   port_cfg_entry.admin_vlan,
+			   port_cfg_entry.is_admin_vlan_enabled ? "Enable" :
+				"Disable");
+
+		for (i = 1; i < vlanfilter_freeindex[node]; i++) {
+			struct vlan_filter_entry tbl_entry;
+
+			ret = tbl_read(node, VLAN_FILTER_TABLE, i,
+				       &tbl_entry.key, &tbl_entry.mask,
+				       &tbl_entry.data);
+			if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+				return ret;
+			else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+				continue;
+
+			if (tbl_entry.key.key_type.s.ingress_grp ==
+			    intr_to_ingressgrp[node][vf]) {
+				int vlanx = tbl_entry.data.s.filter_idx;
+				vlan_port_bitmap_t vlan_vif;
+
+				ret = tbl_read(node, VLAN_EVIF_TABLE, vlanx,
+					       NULL, NULL,
+					       (void *)(&vlan_vif[0]));
+				if (ret)
+					return ret;
+				enable_vlan_port(vf, vlan_vif);
+				ret = tbl_write(node, VLAN_EVIF_TABLE, vlanx,
+						NULL, NULL,
+						(void *)(&vlan_vif[0]));
+				if (ret)
+					return ret;
+			}
+		}
+	} else {
+		//VF interface enable multicast promiscuous mode
+		int i;
+		int ret;
+
+		for (i = TNS_MAX_VF; i < macfilter_freeindex[node]; i++) {
+			struct mac_filter_entry tbl_entry;
+
+			ret = tbl_read(node, MAC_FILTER_TABLE, i,
+				       &tbl_entry.key, &tbl_entry.mask,
+				       &tbl_entry.data);
+			if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+				return ret;
+			else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+				continue;
+
+			/* We found a filter entry; check whether it is
+			 * unicast or multicast.
+			 */
+			if (((((u8 *)tbl_entry.key.key_type.s.mac_DA)[5]) &
+			       0x1) && (tbl_entry.key.key_type.s.ingress_grp ==
+					intr_to_ingressgrp[node][vf])) {
+				int vif = tbl_entry.data.s.evif;
+				struct evif_entry evif_dat;
+
+				ret = tbl_read(node, MAC_EVIF_TABLE, vif, NULL,
+					       NULL, &evif_dat);
+				if (ret)
+					return ret;
+				enable_port(vf, &evif_dat);
+				dump_evif_entry(&evif_dat);
+				ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL,
+						NULL, (void *)&evif_dat);
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	return TNS_NO_ERR;
+}
+
+int disable_promiscuous_mode(int node, int vf)
+{
+	int i, pf_vf;
+	int ret;
+
+	vf_reg_data[node][vf].vf_in_promis = 0;
+	ret = vf_pfvf_id(node, vf, &pf_vf);
+	if (ret)
+		return ret;
+
+	for (i = TNS_MAX_VF; i < macfilter_freeindex[node]; i++) {
+		struct mac_filter_entry tbl_entry;
+
+		ret = tbl_read(node, MAC_FILTER_TABLE, i, &tbl_entry.key,
+			       &tbl_entry.mask, &tbl_entry.data);
+		if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+			return ret;
+		else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+			continue;
+
+		//We found an entry belonging to this group
+		if (tbl_entry.key.key_type.s.ingress_grp ==
+		    intr_to_ingressgrp[node][vf]) {
+			int vif = tbl_entry.data.s.evif;
+			struct evif_entry evif_dat;
+
+			if (is_vf_registered_entry(node, vf, i))
+				continue;
+
+			//Keep multicast entries if still in mcast promisc mode
+			if (((((u8 *)tbl_entry.key.key_type.s.mac_DA)[5]) &
+			       0x1) && vf_reg_data[node][vf].vf_in_mcast_promis)
+				continue;
+
+			//Disable port bitmap in EVIF entry
+			ret = tbl_read(node, MAC_EVIF_TABLE, vif, NULL,
+				       NULL, &evif_dat);
+			if (ret)
+				return ret;
+			disable_port(vf, &evif_dat);
+			dump_evif_entry(&evif_dat);
+			ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+					(void *)&evif_dat);
+			if (ret)
+				return ret;
+		}
+	}
+	/* If the pfVf interface exits promiscuous mode, update the port
+	 * bitmaps associated with the LMAC accordingly.
+	 */
+	if (vf == pf_vf) {
+		int vif = intr_to_ingressgrp[node][vf];
+		struct evif_entry evif_dat;
+		struct itt_entry_s port_cfg_entry;
+
+		ret = tbl_read(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+			       &evif_dat);
+		if (ret)
+			return ret;
+
+		disable_port(vf, &evif_dat);
+		dump_evif_entry(&evif_dat);
+		ret = tbl_write(node, MAC_EVIF_TABLE, vif, NULL, NULL,
+				(void *)&evif_dat);
+		if (ret)
+			return ret;
+
+		for (i = 1; i < vlanfilter_freeindex[node]; i++) {
+			struct vlan_filter_entry tbl_entry;
+
+			ret = tbl_read(node, VLAN_FILTER_TABLE, i,
+				       &tbl_entry.key, &tbl_entry.mask,
+				       &tbl_entry.data);
+			if (ret && (ret != TNS_ERR_MAC_FILTER_INVALID_ENTRY))
+				return ret;
+			else if (ret == TNS_ERR_MAC_FILTER_INVALID_ENTRY)
+				continue;
+
+			if (tbl_entry.key.key_type.s.ingress_grp ==
+			    intr_to_ingressgrp[node][vf]) {
+				int vlanx = tbl_entry.data.s.filter_idx;
+				vlan_port_bitmap_t vlan_vif;
+				int vlan = tbl_entry.key.key_type.s.vlan;
+
+				if (!is_vlan_registered(node, vf, vlan)) {
+					ret = tbl_read(node, VLAN_EVIF_TABLE,
+						       vlanx, NULL, NULL,
+						       (void *)(&vlan_vif[0]));
+					if (ret)
+						return ret;
+					disable_vlan_port(vf, vlan_vif);
+					ret = tbl_write(node, VLAN_EVIF_TABLE,
+							vlanx, NULL, NULL,
+							(void *)(&vlan_vif[0]));
+					if (ret)
+						return ret;
+				}
+			}
+		}
+		//Update default_evif of LMAC to NULLVif
+		ret = tbl_read(node, PORT_CONFIG_TABLE,
+			       intr_to_ingressgrp[node][vf], NULL, NULL,
+			       &port_cfg_entry);
+		if (ret)
+			return ret;
+
+		port_cfg_entry.default_evif = TNS_NULL_VIF;
+		ret = tbl_write(node, PORT_CONFIG_TABLE,
+				intr_to_ingressgrp[node][vf], NULL, NULL,
+				(void *)&port_cfg_entry);
+		if (ret)
+			return ret;
+		filter_dbg(FINFO, "%s Port %d pkt_dir %d defaultVif %d ",
+			   __func__, vf, port_cfg_entry.pkt_dir,
+			   port_cfg_entry.default_evif);
+		filter_dbg(FINFO, "adminVlan %d %s\n",
+			   port_cfg_entry.admin_vlan,
+			   port_cfg_entry.is_admin_vlan_enabled ? "Enable" :
+			   "Disable");
+	}
+	if (!vf_reg_data[node][vf].vf_in_mcast_promis)
+		remove_vf_from_mcast_promis_grp(node, vf);
+
+	return TNS_NO_ERR;
+}
+
+/* CRB-1S configuration
+ * Valid LMAC's - 3 (128, 132, & 133)
+ * PFVF - 3 (0, 64, & 96)
+ * bcast_vif - 3 (136, 140, & 141)
+ * mcast_vif - 3 (144, 148, & 149)
+ * null_vif - 1 (152)
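+ *
+ * (Assuming TNS_MAX_VF is 128, this matches the code below: each valid
+ *  LMAC's ingress group is TNS_MAX_VF + sys_lmac for sys_lmac 0, 4 and 5,
+ *  and VFs 0, 64 and 96 act as the pfVf interfaces of those groups.)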
+ */
+int mac_filter_config(void)
+{
+	int node, j;
+
+	for (node = 0; node < nr_node_ids; node++) {
+		int lmac;
+
+		//Reset interface to ingress group mapping
+		for (j = 0; j < TNS_MAC_FILTER_MAX_SYS_PORTS; j++)
+			intr_to_ingressgrp[node][j] = j;
+
+		if (!pf_vf_map_data[node].valid)
+			continue;
+
+		for (j = 0; j < TNS_MAX_INGRESS_GROUP; j++)
+			ingressgrp_gblvif[node][j].is_valid = 0;
+
+		for (lmac = 0; lmac < pf_vf_map_data[node].lmac_cnt; lmac++) {
+			int slm = pf_vf_map_data[node].pf_vf[lmac].sys_lmac;
+			int valid_pf = pf_vf_map_data[node].pf_vf[lmac].pf_id;
+			int num_vfs = pf_vf_map_data[node].pf_vf[lmac].num_vfs;
+			struct evif_entry evif_dat;
+			int bvif, mvif;
+			int ret;
+
+			bvif = TNS_BASE_BCAST_VIF + slm;
+			mvif = TNS_BASE_MCAST_VIF + slm;
+
+			//Map interfaces to their ingress group
+			for (j = valid_pf; j < (valid_pf + num_vfs); j++) {
+				struct itt_entry_s port_cfg_entry;
+				int ret;
+
+				intr_to_ingressgrp[node][j] = TNS_MAX_VF + slm;
+
+				ret = tbl_read(node, PORT_CONFIG_TABLE, j, NULL,
+					       NULL, (void *)&port_cfg_entry);
+				if (ret)
+					return ret;
+				port_cfg_entry.default_evif =
+					intr_to_ingressgrp[node][j];
+				ret = tbl_write(node, PORT_CONFIG_TABLE, j,
+						NULL, NULL,
+						(void *)&port_cfg_entry);
+				if (ret)
+					return ret;
+			}
+
+			//LMAC Configuration
+			ingressgrp_gblvif[node][slm].is_valid = 1;
+			ingressgrp_gblvif[node][slm].ingress_grp = TNS_MAX_VF +
+								     slm;
+			ingressgrp_gblvif[node][slm].pf_vf = valid_pf;
+			ingressgrp_gblvif[node][slm].bcast_vif = bvif;
+			ingressgrp_gblvif[node][slm].mcast_vif = mvif;
+			ingressgrp_gblvif[node][slm].null_vif = TNS_NULL_VIF;
+			MCAST_PROMIS(node, slm, 0) = TNS_MAX_VF + slm;
+			VALID_MCAST_PROMIS(node, slm) = 1;
+
+			filter_dbg(FINFO, "lmac %d syslm %d num_vfs %d ",
+				   lmac, slm,
+				   pf_vf_map_data[node].pf_vf[lmac].num_vfs);
+			filter_dbg(FINFO, "ingress_grp %d pfVf %d bCast %d ",
+				   ingressgrp_gblvif[node][slm].ingress_grp,
+				   ingressgrp_gblvif[node][slm].pf_vf,
+				   ingressgrp_gblvif[node][slm].bcast_vif);
+			filter_dbg(FINFO, "mCast: %d\n",
+				   ingressgrp_gblvif[node][slm].mcast_vif);
+
+			ret = tbl_read(node, MAC_EVIF_TABLE, bvif, NULL, NULL,
+				       &evif_dat);
+			if (ret)
+				return ret;
+
+			evif_dat.rewrite_ptr0 = 0xFF;
+			evif_dat.rewrite_ptr1 = 0xFF;
+			enable_port(ingressgrp_gblvif[node][slm].ingress_grp,
+				    &evif_dat);
+
+			ret = tbl_write(node, MAC_EVIF_TABLE, bvif, NULL, NULL,
+					(void *)&evif_dat);
+			if (ret)
+				return ret;
+
+			ret = tbl_read(node, MAC_EVIF_TABLE, mvif, NULL, NULL,
+				       &evif_dat);
+			if (ret)
+				return ret;
+
+			evif_dat.rewrite_ptr0 = 0xFF;
+			evif_dat.rewrite_ptr1 = 0xFF;
+			enable_port(ingressgrp_gblvif[node][slm].ingress_grp,
+				    &evif_dat);
+
+			ret = tbl_write(node, MAC_EVIF_TABLE, mvif, NULL, NULL,
+					(void *)&evif_dat);
+			if (ret)
+				return ret;
+
+			ret = tbl_read(node, MAC_EVIF_TABLE, TNS_NULL_VIF, NULL,
+				       NULL, &evif_dat);
+			if (ret)
+				return ret;
+
+			evif_dat.rewrite_ptr0 = 0xFF;
+			evif_dat.rewrite_ptr1 = 0xFF;
+
+			ret = tbl_write(node, MAC_EVIF_TABLE, TNS_NULL_VIF,
+					NULL, NULL, (void *)&evif_dat);
+			if (ret)
+				return ret;
+		}
+		j = 0;
+		alloc_table_index(node, VLAN_FILTER_TABLE, &j);
+
+		for (j = 0; j < TNS_MAX_VF; j++) {
+			vf_reg_data[node][j].vf_in_mcast_promis = 0;
+			vf_reg_data[node][j].filter_count = 1;
+			vf_reg_data[node][j].filter_index[0] = j;
+			vf_reg_data[node][j].vlan_count = 0;
+			alloc_table_index(node, MAC_FILTER_TABLE, &j);
+		}
+		for (j = 0; j <= TNS_NULL_VIF; j++)
+			alloc_table_index(node, MAC_EVIF_TABLE, &j);
+		macfilter_freeindex[node] = TNS_MAX_VF;
+		vlanfilter_freeindex[node] = 1;
+	}
+
+	return TNS_NO_ERR;
+}
+
+int add_admin_vlan(int node, int vf, int vlan)
+{
+	int index = -1;
+	int ret;
+	struct itt_entry_s port_cfg_entry;
+
+	ret = add_vlan_entry(node, vf, vlan, index);
+	if (ret) {
+		filter_dbg(FERR, "Add admin VLAN for VF: %d Failed %d\n",
+			   vf, ret);
+		return ret;
+	}
+
+	ret = tbl_read(node, PORT_CONFIG_TABLE, vf, NULL, NULL,
+		       (void *)&port_cfg_entry);
+	if (ret)
+		return ret;
+	port_cfg_entry.is_admin_vlan_enabled = 1;
+	port_cfg_entry.admin_vlan = vlan;
+	ret = tbl_write(node, PORT_CONFIG_TABLE, vf, NULL, NULL,
+			(void *)&port_cfg_entry);
+	if (ret)
+		return ret;
+	filter_dbg(FINFO, "%s Port %d dir %d defaultVif %d adminVlan %d %s\n",
+		   __func__, vf, port_cfg_entry.pkt_dir,
+		   port_cfg_entry.default_evif, port_cfg_entry.admin_vlan,
+		   port_cfg_entry.is_admin_vlan_enabled ? "Enable" : "Disable");
+
+	return TNS_NO_ERR;
+}
+
+int del_admin_vlan(int node, int vf, int vlan)
+{
+	int index = -1;
+	int ret;
+	struct itt_entry_s port_cfg_entry;
+
+	ret = del_vlan_entry(node, vf, vlan, index);
+	if (ret) {
+		filter_dbg(FERR, "Delete admin VLAN: %d for VF %d failed %d\n",
+			   vlan, vf, ret);
+		return ret;
+	}
+
+	ret = tbl_read(node, PORT_CONFIG_TABLE, vf, NULL, NULL,
+		       (void *)&port_cfg_entry);
+	if (ret)
+		return ret;
+	port_cfg_entry.is_admin_vlan_enabled = 0;
+	port_cfg_entry.admin_vlan = 0x0;
+	ret = tbl_write(node, PORT_CONFIG_TABLE, vf, NULL, NULL,
+			(void *)&port_cfg_entry);
+	if (ret)
+		return ret;
+	filter_dbg(FINFO, "%s Port %d dir %d defaultVif %d adminVlan %d %s\n",
+		   __func__, vf, port_cfg_entry.pkt_dir,
+		   port_cfg_entry.default_evif, port_cfg_entry.admin_vlan,
+		   port_cfg_entry.is_admin_vlan_enabled ? "Enable" : "Disable");
+
+	return TNS_NO_ERR;
+}
+
+void link_status_notification(int node, int vf, void *arg)
+{
+	int status =  *((int *)arg);
+	int bcast_vif;
+	int ret;
+	struct evif_entry evif_dat;
+
+	filter_dbg(FINFO, "VF %d Link %s\n", vf, status ? "up " : "down");
+	if (status) {
+		ret = vf_bcast_vif(node, vf, &bcast_vif);
+		if (ret)
+			return;
+
+		ret = tbl_read(node, MAC_EVIF_TABLE, bcast_vif, NULL, NULL,
+			       &evif_dat);
+		if (ret)
+			return;
+
+		enable_port(vf, &evif_dat);
+		dump_evif_entry(&evif_dat);
+		ret = tbl_write(node, MAC_EVIF_TABLE, bcast_vif, NULL, NULL,
+				(void *)&evif_dat);
+		if (ret)
+			return;
+	} else {
+		ret = vf_bcast_vif(node, vf, &bcast_vif);
+		if (ret)
+			return;
+
+		ret = tbl_read(node, MAC_EVIF_TABLE, bcast_vif, NULL, NULL,
+			       &evif_dat);
+		if (ret)
+			return;
+
+		disable_port(vf, &evif_dat);
+		dump_evif_entry(&evif_dat);
+		ret = tbl_write(node, MAC_EVIF_TABLE, bcast_vif, NULL, NULL,
+				(void *)&evif_dat);
+		if (ret)
+			return;
+	}
+}
+
+void mac_update_notification(int node, int vf_id, void *arg)
+{
+	u8 *mac = (u8 *)arg;
+
+	filter_dbg(FINFO, "VF:%d MAC %02x:%02x:%02x:%02x:%02x:%02x Updated\n",
+		   vf_id, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
+	vf_interface_up(node, MAC_FILTER_TABLE, vf_id, arg);
+}
+
+void promisc_update_notification(int node, int vf_id, void *arg)
+{
+	int on = *(int *)arg;
+
+	filter_dbg(FERR, "VF %d %s promiscuous mode\n", vf_id,
+		   on ? "entered" : "left");
+	if (on)
+		enable_promiscuous_mode(node, vf_id);
+	else
+		disable_promiscuous_mode(node, vf_id);
+}
+
+void uc_mc_update_notification(int node, int vf_id, void *arg)
+{
+	struct uc_mc_msg *uc_mc_cfg = (struct uc_mc_msg *)arg;
+	u8 *mac;
+
+	mac = (u8 *)uc_mc_cfg->mac_addr;
+	if (uc_mc_cfg->is_flush) {
+		filter_dbg(FINFO, "\nNOTIFICATION VF:%d %s %s\n", vf_id,
+			   uc_mc_cfg->addr_type ? "mc" : "uc", "flush");
+	} else {
+		filter_dbg(FINFO, "\nNOTIFICATION VF:%d %s %s ", vf_id,
+			   uc_mc_cfg->addr_type ? "mc" : "uc",
+			   uc_mc_cfg->is_add ? "add" : "del");
+		filter_dbg(FINFO, "MAC ADDRESS %02x:%02x:%02x:%02x:%02x:%02x\n",
+			   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
+		if (uc_mc_cfg->is_add) {
+			if (uc_mc_cfg->addr_type)
+				add_mac_filter_mcast_entry(node,
+							   MAC_FILTER_TABLE,
+							   vf_id, -1, mac);
+			else
+				add_mac_filter_entry(node, MAC_FILTER_TABLE,
+						     vf_id, -1, mac);
+		} else {
+			del_mac_filter_entry(node, MAC_FILTER_TABLE, vf_id, -1,
+					     mac, uc_mc_cfg->addr_type);
+		}
+	}
+}
+
+void admin_vlan_update_notification(int node, int vf_id, void *arg)
+{
+	struct vlan_msg *vlan_cfg = (struct vlan_msg *)arg;
+
+	filter_dbg(FINFO, "\nNOTIFICATION ADMIN VF %d VLAN id %d %s\n", vf_id,
+		   vlan_cfg->vlan_id, (vlan_cfg->vlan_add) ? "add" : "del");
+	if (vlan_cfg->vlan_add)
+		add_admin_vlan(node, vf_id, vlan_cfg->vlan_id);
+	else
+		del_admin_vlan(node, vf_id, vlan_cfg->vlan_id);
+}
+
+void vlan_update_notification(int node, int vf_id, void *arg)
+{
+	struct vlan_msg *vlan_cfg = (struct vlan_msg *)arg;
+
+	filter_dbg(FINFO, "\nNOTIFICATION VF %d VLAN id %d %s\n", vf_id,
+		   vlan_cfg->vlan_id, (vlan_cfg->vlan_add) ? "add" : "del");
+	if (vlan_cfg->vlan_add && vlan_cfg->vlan_id) {
+		int index = -1;
+		int ret = add_vlan_entry(node, vf_id, vlan_cfg->vlan_id,
+					      index);
+
+		if (ret)
+			filter_dbg(FERR, "Adding VLAN failed: %d\n", ret);
+		else
+			filter_dbg(FINFO, "VF: %d with VLAN: %d added\n",
+				   vf_id, vlan_cfg->vlan_id);
+	} else if (!vlan_cfg->vlan_add && vlan_cfg->vlan_id) {
+		int index = -1;
+		int ret = del_vlan_entry(node, vf_id, vlan_cfg->vlan_id,
+						index);
+
+		if (ret)
+			filter_dbg(FERR, "Deleting VLAN failed: %d\n", ret);
+		else
+			filter_dbg(FINFO, "VF: %d with VLAN: %d deleted\n",
+				   vf_id, vlan_cfg->vlan_id);
+	}
+}
+
+void pf_notify_msg_handler(int node, void *arg)
+{
+	union nic_mbx *mbx = (union nic_mbx *)arg;
+	int status;
+
+	switch (mbx->msg.msg) {
+	case NIC_MBOX_MSG_ADMIN_VLAN:
+		admin_vlan_update_notification(node, mbx->vlan_cfg.vf_id,
+					       &mbx->vlan_cfg);
+		break;
+	case NIC_MBOX_MSG_VLAN:
+		vlan_update_notification(node, mbx->vlan_cfg.vf_id,
+					 &mbx->vlan_cfg);
+		break;
+	case NIC_MBOX_MSG_UC_MC:
+		uc_mc_update_notification(node, mbx->vlan_cfg.vf_id,
+					  &mbx->uc_mc_cfg);
+		break;
+	case NIC_MBOX_MSG_SET_MAC:
+		mac_update_notification(node, mbx->mac.vf_id,
+					(void *)mbx->mac.mac_addr);
+		break;
+	case NIC_MBOX_MSG_CFG_DONE:
+	case NIC_MBOX_MSG_OP_UP:
+		status = true;
+		link_status_notification(node, mbx->mac.vf_id, (void *)&status);
+		break;
+	case NIC_MBOX_MSG_SHUTDOWN:
+	case NIC_MBOX_MSG_OP_DOWN:
+		status = false;
+		link_status_notification(node, mbx->mac.vf_id, (void *)&status);
+		break;
+	case NIC_MBOX_MSG_PROMISC:
+		status = mbx->promisc_cfg.on;
+		promisc_update_notification(node, mbx->promisc_cfg.vf_id,
+					    (void *)&status);
+		break;
+	}
+}
+
+int pf_filter_init(void)
+{
+	mac_filter_config();
+
+	return 0;
+}
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/7] ThunderX Embedded switch support
  2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
                   ` (6 preceding siblings ...)
  2016-12-21  8:46 ` [RFC PATCH 7/7] Get notifications from PF driver and configure filter block based on request data Satha Koteswara Rao
@ 2016-12-21 12:03 ` Sunil Kovvuri
  2016-12-26 14:04   ` Koteshwar Rao, Satha
  7 siblings, 1 reply; 19+ messages in thread
From: Sunil Kovvuri @ 2016-12-21 12:03 UTC (permalink / raw)
  To: Satha Koteswara Rao
  Cc: LKML, Sunil Goutham, Robert Richter, David S. Miller,
	David Daney, rvatsavayi, derek.chickles, philip.romanov,
	Linux Netdev List, LAKML

It would be easier for anyone to review if you prepare patches based on
features rather than based on modifications to files.

Thanks,
Sunil.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 4/7] HW Filter Initialization code and register access APIs
  2016-12-21  8:46 ` [RFC PATCH 4/7] HW Filter Initialization code and register access APIs Satha Koteswara Rao
@ 2016-12-21 12:36   ` Sunil Kovvuri
  2016-12-26 14:13     ` Koteshwar Rao, Satha
  0 siblings, 1 reply; 19+ messages in thread
From: Sunil Kovvuri @ 2016-12-21 12:36 UTC (permalink / raw)
  To: Satha Koteswara Rao
  Cc: LKML, Sunil Goutham, Robert Richter, David S. Miller,
	David Daney, rvatsavayi, derek.chickles, philip.romanov,
	Linux Netdev List, LAKML

On Wed, Dec 21, 2016 at 2:16 PM, Satha Koteswara Rao
<satha.rao@caviumnetworks.com> wrote:
> ---
>  drivers/net/ethernet/cavium/thunder/pf_reg.c | 660 +++++++++++++++++++++++++++
>  1 file changed, 660 insertions(+)
>  create mode 100644 drivers/net/ethernet/cavium/thunder/pf_reg.c
>
> diff --git a/drivers/net/ethernet/cavium/thunder/pf_reg.c b/drivers/net/ethernet/cavium/thunder/pf_reg.c

Sunil>>
From the file name 'pf_reg.c', what is PF here?
TNS is not a SRIOV device right ?

> new file mode 100644
> index 0000000..1f95c7f
> --- /dev/null
> +++ b/drivers/net/ethernet/cavium/thunder/pf_reg.c
> @@ -0,0 +1,660 @@
> +/*
> + * Copyright (C) 2015 Cavium, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/fs.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/device.h>
> +#include <linux/version.h>
> +#include <linux/proc_fs.h>
> +#include <linux/device.h>
> +#include <linux/mman.h>
> +#include <linux/uaccess.h>
> +#include <linux/delay.h>
> +#include <linux/cdev.h>
> +#include <linux/err.h>
> +#include <linux/device.h>
> +#include <linux/io.h>
> +#include <linux/firmware.h>
> +#include "pf_globals.h"
> +#include "pf_locals.h"
> +#include "tbl_access.h"
> +#include "linux/lz4.h"
> +
> +struct tns_table_s tbl_info[TNS_MAX_TABLE];
> +
> +#define TNS_TDMA_SST_ACC_CMD_ADDR      0x0000842000000270ull
> +
> +#define BAR0_START 0x842000000000
> +#define BAR0_END   0x84200000FFFF
> +#define BAR0_SIZE  (64 * 1024)
> +#define BAR2_START 0x842040000000
> +#define BAR2_END   0x84207FFFFFFF
> +#define BAR2_SIZE  (1024 * 1024 * 1024)
> +
> +#define NODE1_BAR0_START 0x942000000000
> +#define NODE1_BAR0_END   0x94200000FFFF
> +#define NODE1_BAR0_SIZE  (64 * 1024)
> +#define NODE1_BAR2_START 0x942040000000
> +#define NODE1_BAR2_END   0x94207FFFFFFF
> +#define NODE1_BAR2_SIZE  (1024 * 1024 * 1024)

Sunil>> This is absurd; why are you using hardcoded HW addresses
instead of the TNS device's PCI BARs?
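
A minimal sketch of the approach being suggested, assuming the TNS block were
probed as a PCI function (the pci_dev pointer below is hypothetical, not
something the posted patch has):

	void __iomem *bar0, *bar2;

	/* Map the register windows through the device's BARs instead of
	 * hardcoding per-node physical addresses.
	 */
	bar0 = pci_ioremap_bar(pdev, 0);
	if (!bar0)
		return -ENOMEM;
	bar2 = pci_ioremap_bar(pdev, 2);
	if (!bar2) {
		iounmap(bar0);
		return -ENOMEM;
	}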

> +/* Allow a max of 4 chunks for the Indirect Read/Write */
> +#define MAX_SIZE (64 * 4)
> +#define CHUNK_SIZE (64)
> +/* To protect register access */
> +spinlock_t pf_reg_lock;
> +
> +u64 iomem0;
> +u64 iomem2;
> +u8 tns_enabled;
> +u64 node1_iomem0;
> +u64 node1_iomem2;
> +u8 node1_tns;
> +int n1_tns;

Sunil>> A simple structure would have been nicer than so many global variables.
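
A minimal sketch of that suggestion (the names below are illustrative only,
not part of the patch):

	/* Per-node TNS mapping state, replacing the loose globals */
	struct tns_node_state {
		u64	iomem0;		/* BAR0-equivalent mapping */
		u64	iomem2;		/* BAR2-equivalent mapping */
		u8	tns_capable;	/* switch_capable bit for this node */
	};

	static struct tns_node_state tns_state[MAX_NUMNODES];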

> +
> +int tns_write_register_indirect(int node_id, u64 address, u8 size,
> +                               u8 *kern_buffer)
> +{
> +       union tns_tdma_sst_acc_cmd acccmd;
> +       union tns_tdma_sst_acc_stat_t accstat;
> +       union tns_acc_data data;
> +       int i, j, w = 0;
> +       int cnt = 0;
> +       u32 *dataw = NULL;
> +       int temp = 0;
> +       int k = 0;
> +       int chunks = 0;
> +       u64 acccmd_address;
> +       u64 lmem2 = 0, lmem0 = 0;
> +
> +       if (size == 0 || !kern_buffer) {
> +               filter_dbg(FERR, "%s data size cannot be zero\n", __func__);
> +               return TNS_ERROR_INVALID_ARG;
> +       }
> +       if (size > MAX_SIZE) {
> +               filter_dbg(FERR, "%s Max allowed size exceeded\n", __func__);
> +               return TNS_ERROR_DATA_TOO_LARGE;
> +       }
> +       if (node_id) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +
> +       chunks = ((size + (CHUNK_SIZE - 1)) / CHUNK_SIZE);
> +       acccmd_address = (address & 0x00000000ffffffff);
> +       spin_lock_bh(&pf_reg_lock);
> +
> +       for (k = 0; k < chunks; k++) {

Sunil>> Why not use proper variable names instead of i, j, k, w,
temp, etc.?


> +               /* Should never happen */
> +               if (size < 0) {
> +                       filter_dbg(FERR, "%s size mismatch [CHUNK %d]\n",
> +                                  __func__, k);
> +                       break;
> +               }
> +               temp = (size > CHUNK_SIZE) ? CHUNK_SIZE : size;
> +               dataw = (u32 *)(kern_buffer + (k * CHUNK_SIZE));
> +               cnt = ((temp + 3) / 4);
> +               data.u = 0ULL;
> +               for (j = 0, i = 0; i < cnt; i++) {
> +                       /* Odd words go in the upper 32 bits of the data
> +                        * register
> +                        */
> +                       if (i & 1) {
> +                               data.s.upper32 = dataw[i];
> +                               writeq_relaxed(data.u, (void *)(lmem0 +
> +                                              TNS_TDMA_SST_ACC_WDATX(j)));
> +                               data.u = 0ULL;
> +                               j++; /* Advance to the next data word */
> +                               w = 0;
> +                       } else {
> +                               /* Lower 32 bits contain words 0, 2, 4, etc. */
> +                               data.s.lower32 = dataw[i];
> +                               w = 1;
> +                       }
> +               }
> +
> +               /* If the last word was a partial (< 64 bits) then
> +                * see if we need to write it.
> +                */
> +               if (w)
> +                       writeq_relaxed(data.u, (void *)(lmem0 +
> +                                      TNS_TDMA_SST_ACC_WDATX(j)));
> +
> +               acccmd.u = 0ULL;
> +               acccmd.s.go = 1; /* Cleared once the request is serviced */
> +               acccmd.s.size = cnt;
> +               acccmd.s.addr = (acccmd_address >> 2);
> +               writeq_relaxed(acccmd.u, (void *)(lmem0 +
> +                              TDMA_SST_ACC_CMD));
> +               accstat.u = 0ULL;
> +
> +               while (!accstat.s.cmd_done && !accstat.s.error)
> +                       accstat.u = readq_relaxed((void *)(lmem0 +
> +                                         TDMA_SST_ACC_STAT));
> +
> +               if (accstat.s.error) {
> +                       data.u = readq_relaxed((void *)(lmem2 +
> +                                              TDMA_NB_INT_STAT));
> +                       filter_dbg(FERR, "%s Reading data from ", __func__);
> +                       filter_dbg(FERR, "0x%0lx chunk %d failed 0x%0lx",
> +                                  (unsigned long)address, k,
> +                                  (unsigned long)data.u);
> +                       spin_unlock_bh(&pf_reg_lock);
> +                       kfree(kern_buffer);
> +                       return TNS_ERROR_INDIRECT_WRITE;
> +               }
> +               /* Calculate the next offset to write */
> +               acccmd_address = acccmd_address + CHUNK_SIZE;
> +               size -= CHUNK_SIZE;
> +       }
> +       spin_unlock_bh(&pf_reg_lock);
> +
> +       return 0;
> +}
> +
> +int tns_read_register_indirect(int node_id, u64 address, u8 size,
> +                              u8 *kern_buffer)
> +{
> +       union tns_tdma_sst_acc_cmd acccmd;
> +       union tns_tdma_sst_acc_stat_t accstat;
> +       union tns_acc_data data;
> +       int i, j, dcnt;
> +       int cnt = 0;
> +       u32 *dataw = NULL;
> +       int temp = 0;
> +       int k = 0;
> +       int chunks = 0;
> +       u64 acccmd_address;
> +       u64 lmem2 = 0, lmem0 = 0;
> +
> +       if (size == 0 || !kern_buffer) {
> +               filter_dbg(FERR, "%s data size cannot be zero\n", __func__);
> +               return TNS_ERROR_INVALID_ARG;
> +       }
> +       if (size > MAX_SIZE) {
> +               filter_dbg(FERR, "%s Max allowed size exceeded\n", __func__);
> +               return TNS_ERROR_DATA_TOO_LARGE;
> +       }
> +       if (node_id) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +
> +       chunks = ((size + (CHUNK_SIZE - 1)) / CHUNK_SIZE);
> +       acccmd_address = (address & 0x00000000ffffffff);
> +       spin_lock_bh(&pf_reg_lock);
> +       for (k = 0; k < chunks; k++) {
> +               /* This should never happen */
> +               if (size < 0) {
> +                       filter_dbg(FERR, "%s size mismatch [CHUNK:%d]\n",
> +                                  __func__, k);
> +                       break;
> +               }
> +               temp = (size > CHUNK_SIZE) ? CHUNK_SIZE : size;
> +               dataw = (u32 *)(kern_buffer + (k * CHUNK_SIZE));
> +               cnt = ((temp + 3) / 4);
> +               acccmd.u = 0ULL;
> +               acccmd.s.op = 1; /* Read operation */
> +               acccmd.s.size = cnt;
> +               acccmd.s.addr = (acccmd_address >> 2);
> +               acccmd.s.go = 1; /* Execute */
> +               writeq_relaxed(acccmd.u, (void *)(lmem0 +
> +                              TDMA_SST_ACC_CMD));
> +               accstat.u = 0ULL;
> +
> +               while (!accstat.s.cmd_done && !accstat.s.error)
> +                       accstat.u = readq_relaxed((void *)(lmem0 +
> +                                                 TDMA_SST_ACC_STAT));
> +
> +               if (accstat.s.error) {
> +                       data.u = readq_relaxed((void *)(lmem2 +
> +                                              TDMA_NB_INT_STAT));
> +                       filter_dbg(FERR, "%s Reading data from", __func__);
> +                       filter_dbg(FERR, "0x%0lx chunk %d failed 0x%0lx",
> +                                  (unsigned long)address, k,
> +                                  (unsigned long)data.u);
> +                       spin_unlock_bh(&pf_reg_lock);
> +                       kfree(kern_buffer);
> +                       return TNS_ERROR_INDIRECT_READ;
> +               }
> +
> +               dcnt = cnt / 2;
> +               if (cnt & 1)
> +                       dcnt++;
> +               for (i = 0, j = 0; (j < dcnt) && (i < cnt); j++) {
> +                       data.u = readq_relaxed((void *)(lmem0 +
> +                                              TNS_TDMA_SST_ACC_RDATX(j)));
> +                       dataw[i++] = data.s.lower32;
> +                       if (i < cnt)
> +                               dataw[i++] = data.s.upper32;
> +               }
> +               /* Calculate the next offset to read */
> +               acccmd_address = acccmd_address + CHUNK_SIZE;
> +               size -= CHUNK_SIZE;
> +       }
> +       spin_unlock_bh(&pf_reg_lock);
> +       return 0;
> +}
> +
> +u64 tns_read_register(u64 start, u64 offset)
> +{
> +       return readq_relaxed((void *)(start + offset));
> +}
> +
> +void tns_write_register(u64 start, u64 offset, u64 data)
> +{
> +       writeq_relaxed(data, (void *)(start + offset));
> +}
> +
> +/* Check if TNS is available. If yes return 0 else 1 */
> +int is_tns_available(void)
> +{
> +       union tns_tdma_cap tdma_cap;
> +
> +       tdma_cap.u = tns_read_register(iomem0, TNS_TDMA_CAP_OFFSET);
> +       tns_enabled = tdma_cap.s.switch_capable;
> +       /* In multi-node systems, make sure TNS should be there in both nodes */

Can't node-0 TNS work with node-0 interfaces if node-1 TNS is not detected ?

> +       if (nr_node_ids > 1) {
> +               tdma_cap.u = tns_read_register(node1_iomem0,
> +                                              TNS_TDMA_CAP_OFFSET);
> +               if (tdma_cap.s.switch_capable)
> +                       n1_tns = 1;
> +       }
> +       tns_enabled &= tdma_cap.s.switch_capable;
> +       return (!tns_enabled);
> +}
> +
> +int bist_error_check(void)
> +{
> +       int fail = 0, i;
> +       u64 bist_stat = 0;
> +
> +       for (i = 0; i < 12; i++) {
> +               bist_stat = tns_read_register(iomem0, (i * 16));
> +               if (bist_stat) {
> +                       filter_dbg(FERR, "TNS BIST%d fail 0x%llx\n",
> +                                  i, bist_stat);
> +                       fail = 1;
> +               }
> +               if (!n1_tns)
> +                       continue;
> +               bist_stat = tns_read_register(node1_iomem0, (i * 16));
> +               if (bist_stat) {
> +                       filter_dbg(FERR, "TNS(N1) BIST%d fail 0x%llx\n",
> +                                  i, bist_stat);
> +                       fail = 1;
> +               }
> +       }
> +
> +       return fail;
> +}
> +
> +int replay_indirect_trace(int node, u64 *buf_ptr, int idx)
> +{
> +       union _tns_sst_config cmd = (union _tns_sst_config)(buf_ptr[idx]);
> +       int remaining = cmd.cmd.run;
> +       u64 io_addr;
> +       int word_cnt = cmd.cmd.word_cnt;
> +       int size = (word_cnt + 1) / 2;
> +       u64 stride = word_cnt;
> +       u64 acc_cmd = cmd.copy.do_copy;
> +       u64 lmem2 = 0, lmem0 = 0;
> +       union tns_tdma_sst_acc_stat_t accstat;
> +       union tns_acc_data data;
> +
> +       if (node) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +
> +       if (word_cnt == 0) {
> +               word_cnt = 16;
> +               stride = 16;
> +               size = 8;
> +       } else {
> +               // make stride next power of 2

Please use proper commenting. Have you run checkpatch?
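
For reference, kernel style would use a block comment here, e.g.:

	/* make stride the next power of 2 */

and the patch can be checked with scripts/checkpatch.pl before posting.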

> +               if (cmd.cmd.powerof2stride)
> +                       while ((stride & (stride - 1)) != 0)
> +                               stride++;
> +       }
> +       stride *= 4; //convert stride from 32-bit words to bytes
> +
> +       do {
> +               int addr_p = 1;
> +               /* extract (big endian) data from the config
> +                * into the data array
> +                */
> +               while (size > 0) {
> +                       io_addr = lmem0 + TDMA_SST_ACC_CMD + addr_p * 16;
> +                       tns_write_register(io_addr, 0, buf_ptr[idx + size]);
> +                       addr_p += 1;
> +                       size--;
> +               }
> +               tns_write_register((lmem0 + TDMA_SST_ACC_CMD), 0, acc_cmd);
> +               /* TNS Block access registers indirectly, ran memory barrier
> +                * between two writes
> +                */
> +               wmb();
> +               /* Check for completion */
> +               accstat.u = 0ULL;
> +               while (!accstat.s.cmd_done && !accstat.s.error)
> +                       accstat.u = readq_relaxed((void *)(lmem0 +
> +                                                          TDMA_SST_ACC_STAT));
> +
> +               /* Check for error, and report it */
> +               if (accstat.s.error) {
> +                       filter_dbg(FERR, "%s data from 0x%0llx failed 0x%llx\n",
> +                                  __func__, acc_cmd, accstat.u);
> +                       data.u = readq_relaxed((void *)(lmem2 +
> +                                                       TDMA_NB_INT_STAT));
> +                       filter_dbg(FERR, "Status 0x%llx\n", data.u);
> +               }
> +               /* update the address */
> +               acc_cmd += stride;
> +               size = (word_cnt + 1) / 2;
> +               usleep_range(20, 30);
> +       } while (remaining-- > 0);
> +
> +       return size;
> +}
> +
> +void replay_tns_node(int node, u64 *buf_ptr, int reg_cnt)
> +{
> +       int counter = 0;
> +       u64 offset = 0;
> +       u64 io_address;
> +       int datapathmode = 1;
> +       u64 lmem2 = 0, lmem0 = 0;
> +
> +       if (node) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +       for (counter = 0; counter < reg_cnt; counter++) {
> +               if (buf_ptr[counter] == 0xDADADADADADADADAull) {
> +                       datapathmode = 1;
> +                       continue;
> +               } else if (buf_ptr[counter] == 0xDEDEDEDEDEDEDEDEull) {
> +                       datapathmode = 0;
> +                       continue;
> +               }
> +               if (datapathmode == 1) {
> +                       if (buf_ptr[counter] >= BAR0_START &&
> +                           buf_ptr[counter] <= BAR0_END) {
> +                               offset = buf_ptr[counter] - BAR0_START;
> +                               io_address = lmem0 + offset;
> +                       } else if (buf_ptr[counter] >= BAR2_START &&
> +                                  buf_ptr[counter] <= BAR2_END) {
> +                               offset = buf_ptr[counter] - BAR2_START;
> +                               io_address = lmem2 + offset;
> +                       } else {
> +                               filter_dbg(FERR, "%s Address 0x%llx invalid\n",
> +                                          __func__, buf_ptr[counter]);
> +                               return;
> +                       }
> +
> +                       tns_write_register(io_address, 0, buf_ptr[counter + 1]);
> +                       /* TNS Block access registers indirectly, ran memory
> +                        * barrier between two writes
> +                        */
> +                       wmb();
> +                       counter += 1;
> +                       usleep_range(20, 30);
> +               } else if (datapathmode == 0) {
> +                       int sz = replay_indirect_trace(node, buf_ptr, counter);
> +
> +                       counter += sz;
> +               }
> +       }
> +}
> +
> +int alloc_table_info(int i, struct table_static_s tbl_sdata[])
> +{
> +       tbl_info[i].ddata[0].bitmap = kcalloc(BITS_TO_LONGS(tbl_sdata[i].depth),
> +                                             sizeof(uintptr_t), GFP_KERNEL);
> +       if (!tbl_info[i].ddata[0].bitmap)
> +               return 1;
> +
> +       if (!n1_tns)
> +               return 0;
> +
> +       tbl_info[i].ddata[1].bitmap = kcalloc(BITS_TO_LONGS(tbl_sdata[i].depth),
> +                                             sizeof(uintptr_t), GFP_KERNEL);
> +       if (!tbl_info[i].ddata[1].bitmap) {
> +               kfree(tbl_info[i].ddata[0].bitmap);
> +               return 1;
> +       }
> +
> +       return 0;
> +}
> +
> +void tns_replay_register_trace(const struct firmware *fw, struct device *dev)
> +{
> +       int i;
> +       int node = 0;
> +       u8 *buffer = NULL;
> +       u64 *buf_ptr = NULL;
> +       struct tns_global_st *fw_header = NULL;
> +       struct table_static_s tbl_sdata[TNS_MAX_TABLE];
> +       size_t src_len;
> +       size_t dest_len = TNS_FW_MAX_SIZE;
> +       int rc;
> +       u8 *fw2_buf = NULL;
> +       unsigned char *decomp_dest = NULL;
> +
> +       fw2_buf = (u8 *)fw->data;
> +       src_len = fw->size - 8;
> +
> +       decomp_dest = kcalloc((dest_len * 2), sizeof(char), GFP_KERNEL);
> +       if (!decomp_dest)
> +               return;
> +
> +       memset(decomp_dest, 0, (dest_len * 2));
> +       rc = lz4_decompress_unknownoutputsize(&fw2_buf[8], src_len, decomp_dest,
> +                                             &dest_len);
> +       if (rc) {
> +               filter_dbg(FERR, "Decompress Error %d\n", rc);
> +               pr_info("Uncompressed destination length %ld\n", dest_len);
> +               kfree(decomp_dest);
> +               return;
> +       }
> +       fw_header = (struct tns_global_st *)decomp_dest;
> +       buffer = (u8 *)decomp_dest;
> +
> +       filter_dbg(FINFO, "TNS Firmware version: %s Loading...\n",
> +                  fw_header->version);
> +
> +       memset(tbl_info, 0x0, sizeof(tbl_info));
> +       buf_ptr = (u64 *)(buffer + sizeof(struct tns_global_st));
> +       memcpy(tbl_sdata, fw_header->tbl_info, sizeof(fw_header->tbl_info));
> +
> +       for (i = 0; i < TNS_MAX_TABLE; i++) {
> +               if (!tbl_sdata[i].valid)
> +                       continue;
> +               memcpy(&tbl_info[i].sdata, &tbl_sdata[i],
> +                      sizeof(struct table_static_s));
> +               if (alloc_table_info(i, tbl_sdata)) {
> +                       kfree(decomp_dest);
> +                       return;
> +               }
> +       }
> +
> +       for (node = 0; node < nr_node_ids; node++)
> +               replay_tns_node(node, buf_ptr, fw_header->reg_cnt);
> +
> +       kfree(decomp_dest);
> +       release_firmware(fw);
> +}
> +
> +int tns_init(const struct firmware *fw, struct device *dev)
> +{
> +       int result = 0;
> +       int i = 0;
> +       int temp;
> +       union tns_tdma_config tdma_config;
> +       union tns_tdma_lmacx_config tdma_lmac_cfg;
> +       u64 reg_init_val;
> +
> +       spin_lock_init(&pf_reg_lock);
> +
> +       /* use two regions insted of a single big mapping to save
> +        * the kernel virtual space
> +        */
> +       iomem0 = (u64)ioremap(BAR0_START, BAR0_SIZE);
> +       if (iomem0 == 0ULL) {
> +               filter_dbg(FERR, "Node0 ioremap failed for BAR0\n");
> +               result = -EAGAIN;
> +               goto error;
> +       } else {
> +               filter_dbg(FINFO, "ioremap success for BAR0\n");
> +       }
> +
> +       if (nr_node_ids > 1) {
> +               node1_iomem0 = (u64)ioremap(NODE1_BAR0_START, NODE1_BAR0_SIZE);
> +               if (node1_iomem0 == 0ULL) {
> +                       filter_dbg(FERR, "Node1 ioremap failed for BAR0\n");
> +                       result = -EAGAIN;
> +                       goto error;
> +               } else {
> +                       filter_dbg(FINFO, "ioremap success for BAR0\n");
> +               }
> +       }
> +
> +       if (is_tns_available()) {
> +               filter_dbg(FERR, "TNS NOT AVAILABLE\n");
> +               goto error;
> +       }
> +
> +       if (bist_error_check()) {
> +               filter_dbg(FERR, "BIST ERROR CHECK FAILED");
> +               goto error;
> +       }
> +
> +       /* NIC0-BGX0 is TNS, NIC1-BGX1 is TNS, DISABLE BACKPRESSURE */

Sunil>> Why disable backpressure, if it's in TNS mode ?

> +       reg_init_val = 0ULL;
> +       pr_info("NIC Block configured in TNS/TNS mode");
> +       tns_write_register(iomem0, TNS_RDMA_CONFIG_OFFSET, reg_init_val);
> +       usleep_range(10, 20);

Sunil>> Why sleep after every register write ?

> +       if (n1_tns) {
> +               tns_write_register(node1_iomem0, TNS_RDMA_CONFIG_OFFSET,
> +                                  reg_init_val);
> +               usleep_range(10, 20);
> +       }
> +
> +       // Configure each LMAC with 512 credits in BYPASS mode
> +       for (i = TNS_MIN_LMAC; i < (TNS_MIN_LMAC + TNS_MAX_LMAC); i++) {
> +               tdma_lmac_cfg.u = 0ULL;
> +               tdma_lmac_cfg.s.fifo_cdts = 0x200;
> +               tns_write_register(iomem0, TNS_TDMA_LMACX_CONFIG_OFFSET(i),
> +                                  tdma_lmac_cfg.u);
> +               usleep_range(10, 20);
> +               if (n1_tns) {
> +                       tns_write_register(node1_iomem0,
> +                                          TNS_TDMA_LMACX_CONFIG_OFFSET(i),
> +                                          tdma_lmac_cfg.u);
> +                       usleep_range(10, 20);
> +               }
> +       }
> +
> +       //ENABLE TNS CLOCK AND CSR READS
> +       temp = tns_read_register(iomem0, TNS_TDMA_CONFIG_OFFSET);
> +       tdma_config.u = temp;
> +       tdma_config.s.clk_2x_ena = 1;
> +       tdma_config.s.clk_ena = 1;
> +       tns_write_register(iomem0, TNS_TDMA_CONFIG_OFFSET, tdma_config.u);
> +       if (n1_tns)
> +               tns_write_register(node1_iomem0, TNS_TDMA_CONFIG_OFFSET,
> +                                  tdma_config.u);
> +
> +       temp = tns_read_register(iomem0, TNS_TDMA_CONFIG_OFFSET);
> +       tdma_config.u = temp;
> +       tdma_config.s.csr_access_ena = 1;
> +       tns_write_register(iomem0, TNS_TDMA_CONFIG_OFFSET, tdma_config.u);
> +       if (n1_tns)
> +               tns_write_register(node1_iomem0, TNS_TDMA_CONFIG_OFFSET,
> +                                  tdma_config.u);
> +
> +       reg_init_val = 0ULL;
> +       tns_write_register(iomem0, TNS_TDMA_RESET_CTL_OFFSET, reg_init_val);
> +       if (n1_tns)
> +               tns_write_register(node1_iomem0, TNS_TDMA_RESET_CTL_OFFSET,
> +                                  reg_init_val);
> +
> +       iomem2 = (u64)ioremap(BAR2_START, BAR2_SIZE);
> +       if (iomem2 == 0ULL) {
> +               filter_dbg(FERR, "ioremap failed for BAR2\n");
> +               result = -EAGAIN;
> +               goto error;
> +       } else {
> +               filter_dbg(FINFO, "ioremap success for BAR2\n");
> +       }
> +
> +       if (n1_tns) {
> +               node1_iomem2 = (u64)ioremap(NODE1_BAR2_START, NODE1_BAR2_SIZE);
> +               if (node1_iomem2 == 0ULL) {
> +                       filter_dbg(FERR, "Node1 ioremap failed for BAR2\n");
> +                       result = -EAGAIN;
> +                       goto error;
> +               } else {
> +                       filter_dbg(FINFO, "Node1 ioremap success for BAR2\n");
> +               }
> +       }
> +       msleep(1000);
> +       //We will replay register trace to initialize TNS block
> +       tns_replay_register_trace(fw, dev);
> +
> +       return 0;
> +error:
> +       if (iomem0 != 0)
> +               iounmap((void *)iomem0);
> +       if (iomem2 != 0)
> +               iounmap((void *)iomem2);
> +
> +       if (node1_iomem0 != 0)
> +               iounmap((void *)node1_iomem0);
> +       if (node1_iomem2 != 0)
> +               iounmap((void *)node1_iomem2);
> +
> +       return result;
> +}
> +
> +void tns_exit(void)
> +{
> +       int i;
> +
> +       if (iomem0 != 0)
> +               iounmap((void *)iomem0);
> +       if (iomem2 != 0)
> +               iounmap((void *)iomem2);
> +
> +       if (node1_iomem0 != 0)
> +               iounmap((void *)node1_iomem0);
> +       if (node1_iomem2 != 0)
> +               iounmap((void *)node1_iomem2);
> +
> +       for (i = 0; i < TNS_MAX_TABLE; i++) {
> +               if (!tbl_info[i].sdata.valid)
> +                       continue;
> +               kfree(tbl_info[i].ddata[0].bitmap);
> +               kfree(tbl_info[i].ddata[n1_tns].bitmap);
> +       }
> +}
> --
> 1.8.3.1
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's
  2016-12-21  8:46 ` [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's Satha Koteswara Rao
@ 2016-12-21 12:43   ` Sunil Kovvuri
  2016-12-26 14:16     ` Koteshwar Rao, Satha
  0 siblings, 1 reply; 19+ messages in thread
From: Sunil Kovvuri @ 2016-12-21 12:43 UTC (permalink / raw)
  To: Satha Koteswara Rao
  Cc: LKML, Sunil Goutham, Robert Richter, David S. Miller,
	David Daney, rvatsavayi, derek.chickles, philip.romanov,
	Linux Netdev List, LAKML

On Wed, Dec 21, 2016 at 2:16 PM, Satha Koteswara Rao
<satha.rao@caviumnetworks.com> wrote:
> +struct tns_global_st {
> +       u64 magic;
> +       char     version[16];
> +       u64 reg_cnt;
> +       struct table_static_s tbl_info[TNS_MAX_TABLE];
> +};
> +
> +#define PF_COUNT 3
> +#define PF_1   0
> +#define PF_2   64
> +#define PF_3   96
> +#define PF_END 128

Some comments please ... what are 0, 64, 96?
You can read PCI_SRIOV_TOTAL_VF from PCI config space instead of
defining PF_END as 128.
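
A minimal sketch of reading that value instead of hardcoding 128 (pdev here is
a hypothetical pci_dev for the NIC PF):

	u16 total_vfs = 0;
	int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);

	if (pos)
		pci_read_config_word(pdev, pos + PCI_SRIOV_TOTAL_VF, &total_vfs);
	/* total_vfs can then replace the hardcoded PF_END of 128 */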

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file
  2016-12-21  8:46 ` [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file Satha Koteswara Rao
@ 2016-12-21 13:05   ` Sunil Kovvuri
  2016-12-26 14:20     ` Koteshwar Rao, Satha
  0 siblings, 1 reply; 19+ messages in thread
From: Sunil Kovvuri @ 2016-12-21 13:05 UTC (permalink / raw)
  To: Satha Koteswara Rao
  Cc: LKML, Sunil Goutham, Robert Richter, David S. Miller,
	David Daney, rvatsavayi, derek.chickles, philip.romanov,
	Linux Netdev List, LAKML

>
>  #define NIC_MAX_RSS_HASH_BITS          8
>  #define NIC_MAX_RSS_IDR_TBL_SIZE       (1 << NIC_MAX_RSS_HASH_BITS)
> +#define NIC_TNS_RSS_IDR_TBL_SIZE       5

So you want to use only 5 queues per VF when TNS is enabled, is it?
There are 4096 RSS indices in total; each VF can use a maximum of 32.
I guess you wanted to set the number of hash bits to 5, not the table size.
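
A sketch of what that would look like (hypothetical macro names and values,
not from the patch):

	#define NIC_TNS_RSS_HASH_BITS		5
	#define NIC_TNS_RSS_IDR_TBL_SIZE	(1 << NIC_TNS_RSS_HASH_BITS)	/* 32 entries */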

>  #define RSS_HASH_KEY_SIZE              5 /* 320 bit key */
>
>  struct nicvf_rss_info {
> @@ -255,74 +258,6 @@ struct nicvf_drv_stats {
>         struct u64_stats_sync   syncp;
>  };
>
> -struct nicvf {
> -       struct nicvf            *pnicvf;
> -       struct net_device       *netdev;
> -       struct pci_dev          *pdev;
> -       void __iomem            *reg_base;

I didn't get why you moved this structure to the end of the file.
It looks like an unnecessary modification.


> +static unsigned int num_vfs;
> +module_param(num_vfs, uint, 0644);
> +MODULE_PARM_DESC(num_vfs, "Non zero positive value, specifies number of VF's per physical port");

So if the driver is built-in instead of a module, I can't use TNS at all?

>
> +/* Set RBDR Backpressure (RBDR_BP) and CQ backpressure (CQ_BP) of vnic queues
> + * to 129 each

Why 129?
The RBDR minimum size is 8K buffers; why do you want to assert BP when ~4K
buffers are still available? Isn't 4K a huge number at which to start
asserting backpressure?
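
For context, reading the BP level as a fraction of 256 of the ring size (an
assumption consistent with the numbers above): 8192 buffers * 129 / 256 is
roughly 4128, i.e. backpressure would already be asserted while about half of
an 8K-entry RBDR is still free.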

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH 0/7] ThunderX Embedded switch support
  2016-12-21 12:03 ` [RFC PATCH 0/7] ThunderX Embedded switch support Sunil Kovvuri
@ 2016-12-26 14:04   ` Koteshwar Rao, Satha
  2016-12-26 14:55     ` Andrew Lunn
  0 siblings, 1 reply; 19+ messages in thread
From: Koteshwar Rao, Satha @ 2016-12-26 14:04 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: LKML, Goutham, Sunil, Robert Richter, David S. Miller, Daney,
	David, Vatsavayi, Raghu, Chickles, Derek, Romanov, Philip,
	Linux Netdev List, LAKML

Hi Sunil,

In the RFC cover letter we explained the feature details; the files are organized based on the functionality they support. Let me know if you are interested in any specific details.

Thanks,
Satha

-----Original Message-----
From: Sunil Kovvuri [mailto:sunil.kovvuri@gmail.com] 
Sent: Wednesday, December 21, 2016 4:03 AM
To: Koteshwar Rao, Satha
Cc: LKML; Goutham, Sunil; Robert Richter; David S. Miller; Daney, David; Vatsavayi, Raghu; Chickles, Derek; Romanov, Philip; Linux Netdev List; LAKML
Subject: Re: [RFC PATCH 0/7] ThunderX Embedded switch support

It would be easier for anyone to review if you prepare patches based on features rather than based on modifications to files.

Thanks,
Sunil.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH 4/7] HW Filter Initialization code and register access APIs
  2016-12-21 12:36   ` Sunil Kovvuri
@ 2016-12-26 14:13     ` Koteshwar Rao, Satha
  0 siblings, 0 replies; 19+ messages in thread
From: Koteshwar Rao, Satha @ 2016-12-26 14:13 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: LKML, Goutham, Sunil, Robert Richter, David S. Miller, Daney,
	David, Vatsavayi, Raghu, Chickles, Derek, Romanov, Philip,
	Linux Netdev List, LAKML

Hi Sunil,

Thanks for review. Answers inline.

Thanks,
Satha.

-----Original Message-----
From: Sunil Kovvuri [mailto:sunil.kovvuri@gmail.com] 
Sent: Wednesday, December 21, 2016 4:36 AM
To: Koteshwar Rao, Satha
Cc: LKML; Goutham, Sunil; Robert Richter; David S. Miller; Daney, David; Vatsavayi, Raghu; Chickles, Derek; Romanov, Philip; Linux Netdev List; LAKML
Subject: Re: [RFC PATCH 4/7] HW Filter Initialization code and register access APIs

On Wed, Dec 21, 2016 at 2:16 PM, Satha Koteswara Rao <satha.rao@caviumnetworks.com> wrote:
> ---
>  drivers/net/ethernet/cavium/thunder/pf_reg.c | 660 
> +++++++++++++++++++++++++++
>  1 file changed, 660 insertions(+)
>  create mode 100644 drivers/net/ethernet/cavium/thunder/pf_reg.c
>
> diff --git a/drivers/net/ethernet/cavium/thunder/pf_reg.c 
> b/drivers/net/ethernet/cavium/thunder/pf_reg.c

Sunil>>
From the file name 'pf_reg.c', what is PF here?
TNS is not an SRIOV device, right?
SATHA>>> PF stands for the acting Physical Function. The PF in the file name can be confused with the usual NIC PF, so we plan to change the file name in the next version.
Yes, this block does not support SRIOV.

> new file mode 100644
> index 0000000..1f95c7f
> --- /dev/null
> +++ b/drivers/net/ethernet/cavium/thunder/pf_reg.c
> @@ -0,0 +1,660 @@
> +/*
> + * Copyright (C) 2015 Cavium, Inc.
> + *
> + * This program is free software; you can redistribute it and/or 
> +modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/fs.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/device.h>
> +#include <linux/version.h>
> +#include <linux/proc_fs.h>
> +#include <linux/device.h>
> +#include <linux/mman.h>
> +#include <linux/uaccess.h>
> +#include <linux/delay.h>
> +#include <linux/cdev.h>
> +#include <linux/err.h>
> +#include <linux/device.h>
> +#include <linux/io.h>
> +#include <linux/firmware.h>
> +#include "pf_globals.h"
> +#include "pf_locals.h"
> +#include "tbl_access.h"
> +#include "linux/lz4.h"
> +
> +struct tns_table_s tbl_info[TNS_MAX_TABLE];
> +
> +#define TNS_TDMA_SST_ACC_CMD_ADDR      0x0000842000000270ull
> +
> +#define BAR0_START 0x842000000000
> +#define BAR0_END   0x84200000FFFF
> +#define BAR0_SIZE  (64 * 1024)
> +#define BAR2_START 0x842040000000
> +#define BAR2_END   0x84207FFFFFFF
> +#define BAR2_SIZE  (1024 * 1024 * 1024)
> +
> +#define NODE1_BAR0_START 0x942000000000
> +#define NODE1_BAR0_END   0x94200000FFFF
> +#define NODE1_BAR0_SIZE  (64 * 1024)
> +#define NODE1_BAR2_START 0x942040000000
> +#define NODE1_BAR2_END   0x94207FFFFFFF
> +#define NODE1_BAR2_SIZE  (1024 * 1024 * 1024)

Sunil>> This is absurd; why are you using hardcoded HW addresses
instead of the TNS device's PCI BARs?
SATHA>>> Due to various considerations TNS is not treated as a PCI device by our driver (it will probably even be disabled as such later in the FW); the HW addresses mentioned are the register base addresses as per the HRM.
The BAR_X macro names are confusing; we will change this in the next revision.

> +/* Allow a max of 4 chunks for the Indirect Read/Write */ #define 
> +MAX_SIZE (64 * 4) #define CHUNK_SIZE (64)
> +/* To protect register access */
> +spinlock_t pf_reg_lock;
> +
> +u64 iomem0;
> +u64 iomem2;
> +u8 tns_enabled;
> +u64 node1_iomem0;
> +u64 node1_iomem2;
> +u8 node1_tns;
> +int n1_tns;

Sunil>> A simple structure would have been nicer than so many global variables.
SATHA>>> Good suggestion, we will do this in the next version.

> +
> +int tns_write_register_indirect(int node_id, u64 address, u8 size,
> +                               u8 *kern_buffer) {
> +       union tns_tdma_sst_acc_cmd acccmd;
> +       union tns_tdma_sst_acc_stat_t accstat;
> +       union tns_acc_data data;
> +       int i, j, w = 0;
> +       int cnt = 0;
> +       u32 *dataw = NULL;
> +       int temp = 0;
> +       int k = 0;
> +       int chunks = 0;
> +       u64 acccmd_address;
> +       u64 lmem2 = 0, lmem0 = 0;
> +
> +       if (size == 0 || !kern_buffer) {
> +               filter_dbg(FERR, "%s data size cannot be zero\n", __func__);
> +               return TNS_ERROR_INVALID_ARG;
> +       }
> +       if (size > MAX_SIZE) {
> +               filter_dbg(FERR, "%s Max allowed size exceeded\n", __func__);
> +               return TNS_ERROR_DATA_TOO_LARGE;
> +       }
> +       if (node_id) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +
> +       chunks = ((size + (CHUNK_SIZE - 1)) / CHUNK_SIZE);
> +       acccmd_address = (address & 0x00000000ffffffff);
> +       spin_lock_bh(&pf_reg_lock);
> +
> +       for (k = 0; k < chunks; k++) {

Sunil>> Why not use proper variable names instead of i, j, k, w,
temp, etc.?
SATHA>>> Will do this in the next version.

> +               /* Should never happen */
> +               if (size < 0) {
> +                       filter_dbg(FERR, "%s size mismatch [CHUNK %d]\n",
> +                                  __func__, k);
> +                       break;
> +               }
> +               temp = (size > CHUNK_SIZE) ? CHUNK_SIZE : size;
> +               dataw = (u32 *)(kern_buffer + (k * CHUNK_SIZE));
> +               cnt = ((temp + 3) / 4);
> +               data.u = 0ULL;
> +               for (j = 0, i = 0; i < cnt; i++) {
> +                       /* Odd words go in the upper 32 bits of the data
> +                        * register
> +                        */
> +                       if (i & 1) {
> +                               data.s.upper32 = dataw[i];
> +                               writeq_relaxed(data.u, (void *)(lmem0 +
> +                                              TNS_TDMA_SST_ACC_WDATX(j)));
> +                               data.u = 0ULL;
> +                               j++; /* Advance to the next data word */
> +                               w = 0;
> +                       } else {
> +                               /* Lower 32 bits contain words 0, 2, 4, etc. */
> +                               data.s.lower32 = dataw[i];
> +                               w = 1;
> +                       }
> +               }
> +
> +               /* If the last word was a partial (< 64 bits) then
> +                * see if we need to write it.
> +                */
> +               if (w)
> +                       writeq_relaxed(data.u, (void *)(lmem0 +
> +                                      TNS_TDMA_SST_ACC_WDATX(j)));
> +
> +               acccmd.u = 0ULL;
> +               acccmd.s.go = 1; /* Cleared once the request is serviced */
> +               acccmd.s.size = cnt;
> +               acccmd.s.addr = (acccmd_address >> 2);
> +               writeq_relaxed(acccmd.u, (void *)(lmem0 +
> +                              TDMA_SST_ACC_CMD));
> +               accstat.u = 0ULL;
> +
> +               while (!accstat.s.cmd_done && !accstat.s.error)
> +                       accstat.u = readq_relaxed((void *)(lmem0 +
> +                                         TDMA_SST_ACC_STAT));
> +
> +               if (accstat.s.error) {
> +                       data.u = readq_relaxed((void *)(lmem2 +
> +                                              TDMA_NB_INT_STAT));
> +                       filter_dbg(FERR, "%s Reading data from ", __func__);
> +                       filter_dbg(FERR, "0x%0lx chunk %d failed 0x%0lx",
> +                                  (unsigned long)address, k,
> +                                  (unsigned long)data.u);
> +                       spin_unlock_bh(&pf_reg_lock);
> +                       kfree(kern_buffer);
> +                       return TNS_ERROR_INDIRECT_WRITE;
> +               }
> +               /* Calculate the next offset to write */
> +               acccmd_address = acccmd_address + CHUNK_SIZE;
> +               size -= CHUNK_SIZE;
> +       }
> +       spin_unlock_bh(&pf_reg_lock);
> +
> +       return 0;
> +}
> +
> +int tns_read_register_indirect(int node_id, u64 address, u8 size,
> +                              u8 *kern_buffer) {
> +       union tns_tdma_sst_acc_cmd acccmd;
> +       union tns_tdma_sst_acc_stat_t accstat;
> +       union tns_acc_data data;
> +       int i, j, dcnt;
> +       int cnt = 0;
> +       u32 *dataw = NULL;
> +       int temp = 0;
> +       int k = 0;
> +       int chunks = 0;
> +       u64 acccmd_address;
> +       u64 lmem2 = 0, lmem0 = 0;
> +
> +       if (size == 0 || !kern_buffer) {
> +               filter_dbg(FERR, "%s data size cannot be zero\n", __func__);
> +               return TNS_ERROR_INVALID_ARG;
> +       }
> +       if (size > MAX_SIZE) {
> +               filter_dbg(FERR, "%s Max allowed size exceeded\n", __func__);
> +               return TNS_ERROR_DATA_TOO_LARGE;
> +       }
> +       if (node_id) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +
> +       chunks = ((size + (CHUNK_SIZE - 1)) / CHUNK_SIZE);
> +       acccmd_address = (address & 0x00000000ffffffff);
> +       spin_lock_bh(&pf_reg_lock);
> +       for (k = 0; k < chunks; k++) {
> +               /* This should never happen */
> +               if (size < 0) {
> +                       filter_dbg(FERR, "%s size mismatch [CHUNK:%d]\n",
> +                                  __func__, k);
> +                       break;
> +               }
> +               temp = (size > CHUNK_SIZE) ? CHUNK_SIZE : size;
> +               dataw = (u32 *)(kern_buffer + (k * CHUNK_SIZE));
> +               cnt = ((temp + 3) / 4);
> +               acccmd.u = 0ULL;
> +               acccmd.s.op = 1; /* Read operation */
> +               acccmd.s.size = cnt;
> +               acccmd.s.addr = (acccmd_address >> 2);
> +               acccmd.s.go = 1; /* Execute */
> +               writeq_relaxed(acccmd.u, (void *)(lmem0 +
> +                              TDMA_SST_ACC_CMD));
> +               accstat.u = 0ULL;
> +
> +               while (!accstat.s.cmd_done && !accstat.s.error)
> +                       accstat.u = readq_relaxed((void *)(lmem0 +
> +                                                 TDMA_SST_ACC_STAT));
> +
> +               if (accstat.s.error) {
> +                       data.u = readq_relaxed((void *)(lmem2 +
> +                                              TDMA_NB_INT_STAT));
> +                       filter_dbg(FERR, "%s Reading data from", __func__);
> +                       filter_dbg(FERR, "0x%0lx chunk %d failed 0x%0lx",
> +                                  (unsigned long)address, k,
> +                                  (unsigned long)data.u);
> +                       spin_unlock_bh(&pf_reg_lock);
> +                       kfree(kern_buffer);
> +                       return TNS_ERROR_INDIRECT_READ;
> +               }
> +
> +               dcnt = cnt / 2;
> +               if (cnt & 1)
> +                       dcnt++;
> +               for (i = 0, j = 0; (j < dcnt) && (i < cnt); j++) {
> +                       data.u = readq_relaxed((void *)(lmem0 +
> +                                              TNS_TDMA_SST_ACC_RDATX(j)));
> +                       dataw[i++] = data.s.lower32;
> +                       if (i < cnt)
> +                               dataw[i++] = data.s.upper32;
> +               }
> +               /* Calculate the next offset to read */
> +               acccmd_address = acccmd_address + CHUNK_SIZE;
> +               size -= CHUNK_SIZE;
> +       }
> +       spin_unlock_bh(&pf_reg_lock);
> +       return 0;
> +}
> +
> +u64 tns_read_register(u64 start, u64 offset) {
> +       return readq_relaxed((void *)(start + offset)); }
> +
> +void tns_write_register(u64 start, u64 offset, u64 data) {
> +       writeq_relaxed(data, (void *)(start + offset)); }
> +
> +/* Check if TNS is available. If yes return 0 else 1 */ int 
> +is_tns_available(void) {
> +       union tns_tdma_cap tdma_cap;
> +
> +       tdma_cap.u = tns_read_register(iomem0, TNS_TDMA_CAP_OFFSET);
> +       tns_enabled = tdma_cap.s.switch_capable;
> +       /* In multi-node systems, make sure TNS should be there in 
> + both nodes */

Can't node-0 TNS work with node-0 interfaces if node-1 TNS is not detected ?
SATHA>>> Presently we enable TNS only when both nodes support TNS (only in the case of a 2-node system).

> +       if (nr_node_ids > 1) {
> +               tdma_cap.u = tns_read_register(node1_iomem0,
> +                                              TNS_TDMA_CAP_OFFSET);
> +               if (tdma_cap.s.switch_capable)
> +                       n1_tns = 1;
> +       }
> +       tns_enabled &= tdma_cap.s.switch_capable;
> +       return (!tns_enabled);
> +}
> +
> +int bist_error_check(void)
> +{
> +       int fail = 0, i;
> +       u64 bist_stat = 0;
> +
> +       for (i = 0; i < 12; i++) {
> +               bist_stat = tns_read_register(iomem0, (i * 16));
> +               if (bist_stat) {
> +                       filter_dbg(FERR, "TNS BIST%d fail 0x%llx\n",
> +                                  i, bist_stat);
> +                       fail = 1;
> +               }
> +               if (!n1_tns)
> +                       continue;
> +               bist_stat = tns_read_register(node1_iomem0, (i * 16));
> +               if (bist_stat) {
> +                       filter_dbg(FERR, "TNS(N1) BIST%d fail 0x%llx\n",
> +                                  i, bist_stat);
> +                       fail = 1;
> +               }
> +       }
> +
> +       return fail;
> +}
> +
> +int replay_indirect_trace(int node, u64 *buf_ptr, int idx) {
> +       union _tns_sst_config cmd = (union _tns_sst_config)(buf_ptr[idx]);
> +       int remaining = cmd.cmd.run;
> +       u64 io_addr;
> +       int word_cnt = cmd.cmd.word_cnt;
> +       int size = (word_cnt + 1) / 2;
> +       u64 stride = word_cnt;
> +       u64 acc_cmd = cmd.copy.do_copy;
> +       u64 lmem2 = 0, lmem0 = 0;
> +       union tns_tdma_sst_acc_stat_t accstat;
> +       union tns_acc_data data;
> +
> +       if (node) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +
> +       if (word_cnt == 0) {
> +               word_cnt = 16;
> +               stride = 16;
> +               size = 8;
> +       } else {
> +               // make stride next power of 2

Please use proper commenting. Have you run checkpatch?
SATHA>>> Will change these comments in the next version. We ran checkpatch; no errors or warnings were reported.

> +               if (cmd.cmd.powerof2stride)
> +                       while ((stride & (stride - 1)) != 0)
> +                               stride++;
> +       }
> +       stride *= 4; //convert stride from 32-bit words to bytes
> +
> +       do {
> +               int addr_p = 1;
> +               /* extract (big endian) data from the config
> +                * into the data array
> +                */
> +               while (size > 0) {
> +                       io_addr = lmem0 + TDMA_SST_ACC_CMD + addr_p * 16;
> +                       tns_write_register(io_addr, 0, buf_ptr[idx + size]);
> +                       addr_p += 1;
> +                       size--;
> +               }
> +               tns_write_register((lmem0 + TDMA_SST_ACC_CMD), 0, acc_cmd);
> +               /* TNS Block access registers indirectly, ran memory barrier
> +                * between two writes
> +                */
> +               wmb();
> +               /* Check for completion */
> +               accstat.u = 0ULL;
> +               while (!accstat.s.cmd_done && !accstat.s.error)
> +                       accstat.u = readq_relaxed((void *)(lmem0 +
> +                                                          
> + TDMA_SST_ACC_STAT));
> +
> +               /* Check for error, and report it */
> +               if (accstat.s.error) {
> +                       filter_dbg(FERR, "%s data from 0x%0llx failed 0x%llx\n",
> +                                  __func__, acc_cmd, accstat.u);
> +                       data.u = readq_relaxed((void *)(lmem2 +
> +                                                       TDMA_NB_INT_STAT));
> +                       filter_dbg(FERR, "Status 0x%llx\n", data.u);
> +               }
> +               /* update the address */
> +               acc_cmd += stride;
> +               size = (word_cnt + 1) / 2;
> +               usleep_range(20, 30);
> +       } while (remaining-- > 0);
> +
> +       return size;
> +}
> +
> +void replay_tns_node(int node, u64 *buf_ptr, int reg_cnt) {
> +       int counter = 0;
> +       u64 offset = 0;
> +       u64 io_address;
> +       int datapathmode = 1;
> +       u64 lmem2 = 0, lmem0 = 0;
> +
> +       if (node) {
> +               lmem0 = node1_iomem0;
> +               lmem2 = node1_iomem2;
> +       } else {
> +               lmem0 = iomem0;
> +               lmem2 = iomem2;
> +       }
> +       for (counter = 0; counter < reg_cnt; counter++) {
> +               if (buf_ptr[counter] == 0xDADADADADADADADAull) {
> +                       datapathmode = 1;
> +                       continue;
> +               } else if (buf_ptr[counter] == 0xDEDEDEDEDEDEDEDEull) {
> +                       datapathmode = 0;
> +                       continue;
> +               }
> +               if (datapathmode == 1) {
> +                       if (buf_ptr[counter] >= BAR0_START &&
> +                           buf_ptr[counter] <= BAR0_END) {
> +                               offset = buf_ptr[counter] - BAR0_START;
> +                               io_address = lmem0 + offset;
> +                       } else if (buf_ptr[counter] >= BAR2_START &&
> +                                  buf_ptr[counter] <= BAR2_END) {
> +                               offset = buf_ptr[counter] - BAR2_START;
> +                               io_address = lmem2 + offset;
> +                       } else {
> +                               filter_dbg(FERR, "%s Address 0x%llx invalid\n",
> +                                          __func__, buf_ptr[counter]);
> +                               return;
> +                       }
> +
> +                       tns_write_register(io_address, 0, buf_ptr[counter + 1]);
> +                       /* TNS Block access registers indirectly, ran memory
> +                        * barrier between two writes
> +                        */
> +                       wmb();
> +                       counter += 1;
> +                       usleep_range(20, 30);
> +               } else if (datapathmode == 0) {
> +                       int sz = replay_indirect_trace(node, buf_ptr, 
> + counter);
> +
> +                       counter += sz;
> +               }
> +       }
> +}
> +
> +int alloc_table_info(int i, struct table_static_s tbl_sdata[]) {
> +       tbl_info[i].ddata[0].bitmap = kcalloc(BITS_TO_LONGS(tbl_sdata[i].depth),
> +                                             sizeof(uintptr_t), GFP_KERNEL);
> +       if (!tbl_info[i].ddata[0].bitmap)
> +               return 1;
> +
> +       if (!n1_tns)
> +               return 0;
> +
> +       tbl_info[i].ddata[1].bitmap = kcalloc(BITS_TO_LONGS(tbl_sdata[i].depth),
> +                                             sizeof(uintptr_t), GFP_KERNEL);
> +       if (!tbl_info[i].ddata[1].bitmap) {
> +               kfree(tbl_info[i].ddata[0].bitmap);
> +               return 1;
> +       }
> +
> +       return 0;
> +}
> +
> +void tns_replay_register_trace(const struct firmware *fw, struct 
> +device *dev) {
> +       int i;
> +       int node = 0;
> +       u8 *buffer = NULL;
> +       u64 *buf_ptr = NULL;
> +       struct tns_global_st *fw_header = NULL;
> +       struct table_static_s tbl_sdata[TNS_MAX_TABLE];
> +       size_t src_len;
> +       size_t dest_len = TNS_FW_MAX_SIZE;
> +       int rc;
> +       u8 *fw2_buf = NULL;
> +       unsigned char *decomp_dest = NULL;
> +
> +       fw2_buf = (u8 *)fw->data;
> +       src_len = fw->size - 8;
> +
> +       decomp_dest = kcalloc((dest_len * 2), sizeof(char), GFP_KERNEL);
> +       if (!decomp_dest)
> +               return;
> +
> +       memset(decomp_dest, 0, (dest_len * 2));
> +       rc = lz4_decompress_unknownoutputsize(&fw2_buf[8], src_len, decomp_dest,
> +                                             &dest_len);
> +       if (rc) {
> +               filter_dbg(FERR, "Decompress Error %d\n", rc);
> +               pr_info("Uncompressed destination length %ld\n", dest_len);
> +               kfree(decomp_dest);
> +               return;
> +       }
> +       fw_header = (struct tns_global_st *)decomp_dest;
> +       buffer = (u8 *)decomp_dest;
> +
> +       filter_dbg(FINFO, "TNS Firmware version: %s Loading...\n",
> +                  fw_header->version);
> +
> +       memset(tbl_info, 0x0, sizeof(tbl_info));
> +       buf_ptr = (u64 *)(buffer + sizeof(struct tns_global_st));
> +       memcpy(tbl_sdata, fw_header->tbl_info, 
> + sizeof(fw_header->tbl_info));
> +
> +       for (i = 0; i < TNS_MAX_TABLE; i++) {
> +               if (!tbl_sdata[i].valid)
> +                       continue;
> +               memcpy(&tbl_info[i].sdata, &tbl_sdata[i],
> +                      sizeof(struct table_static_s));
> +               if (alloc_table_info(i, tbl_sdata)) {
> +                       kfree(decomp_dest);
> +                       return;
> +               }
> +       }
> +
> +       for (node = 0; node < nr_node_ids; node++)
> +               replay_tns_node(node, buf_ptr, fw_header->reg_cnt);
> +
> +       kfree(decomp_dest);
> +       release_firmware(fw);
> +}
> +
> +int tns_init(const struct firmware *fw, struct device *dev) {
> +       int result = 0;
> +       int i = 0;
> +       int temp;
> +       union tns_tdma_config tdma_config;
> +       union tns_tdma_lmacx_config tdma_lmac_cfg;
> +       u64 reg_init_val;
> +
> +       spin_lock_init(&pf_reg_lock);
> +
> +       /* use two regions insted of a single big mapping to save
> +        * the kernel virtual space
> +        */
> +       iomem0 = (u64)ioremap(BAR0_START, BAR0_SIZE);
> +       if (iomem0 == 0ULL) {
> +               filter_dbg(FERR, "Node0 ioremap failed for BAR0\n");
> +               result = -EAGAIN;
> +               goto error;
> +       } else {
> +               filter_dbg(FINFO, "ioremap success for BAR0\n");
> +       }
> +
> +       if (nr_node_ids > 1) {
> +               node1_iomem0 = (u64)ioremap(NODE1_BAR0_START, NODE1_BAR0_SIZE);
> +               if (node1_iomem0 == 0ULL) {
> +                       filter_dbg(FERR, "Node1 ioremap failed for BAR0\n");
> +                       result = -EAGAIN;
> +                       goto error;
> +               } else {
> +                       filter_dbg(FINFO, "ioremap success for BAR0\n");
> +               }
> +       }
> +
> +       if (is_tns_available()) {
> +               filter_dbg(FERR, "TNS NOT AVAILABLE\n");
> +               goto error;
> +       }
> +
> +       if (bist_error_check()) {
> +               filter_dbg(FERR, "BIST ERROR CHECK FAILED");
> +               goto error;
> +       }
> +
> +       /* NIC0-BGX0 is TNS, NIC1-BGX1 is TNS, DISABLE BACKPRESSURE */

Sunil>> Why disable backpressure, if it's in TNS mode ?
SATHA>>> As part of the init code we disable BP; it is enabled later.

> +       reg_init_val = 0ULL;
> +       pr_info("NIC Block configured in TNS/TNS mode");
> +       tns_write_register(iomem0, TNS_RDMA_CONFIG_OFFSET, reg_init_val);
> +       usleep_range(10, 20);

Sunil>> Why sleep after every register write ?
SATHA>>> Consecutive indirect register accesses need some delay.

> +       if (n1_tns) {
> +               tns_write_register(node1_iomem0, TNS_RDMA_CONFIG_OFFSET,
> +                                  reg_init_val);
> +               usleep_range(10, 20);
> +       }
> +
> +       // Configure each LMAC with 512 credits in BYPASS mode
> +       for (i = TNS_MIN_LMAC; i < (TNS_MIN_LMAC + TNS_MAX_LMAC); i++) {
> +               tdma_lmac_cfg.u = 0ULL;
> +               tdma_lmac_cfg.s.fifo_cdts = 0x200;
> +               tns_write_register(iomem0, TNS_TDMA_LMACX_CONFIG_OFFSET(i),
> +                                  tdma_lmac_cfg.u);
> +               usleep_range(10, 20);
> +               if (n1_tns) {
> +                       tns_write_register(node1_iomem0,
> +                                          TNS_TDMA_LMACX_CONFIG_OFFSET(i),
> +                                          tdma_lmac_cfg.u);
> +                       usleep_range(10, 20);
> +               }
> +       }
> +
> +       //ENABLE TNS CLOCK AND CSR READS
> +       temp = tns_read_register(iomem0, TNS_TDMA_CONFIG_OFFSET);
> +       tdma_config.u = temp;
> +       tdma_config.s.clk_2x_ena = 1;
> +       tdma_config.s.clk_ena = 1;
> +       tns_write_register(iomem0, TNS_TDMA_CONFIG_OFFSET, tdma_config.u);
> +       if (n1_tns)
> +               tns_write_register(node1_iomem0, TNS_TDMA_CONFIG_OFFSET,
> +                                  tdma_config.u);
> +
> +       temp = tns_read_register(iomem0, TNS_TDMA_CONFIG_OFFSET);
> +       tdma_config.u = temp;
> +       tdma_config.s.csr_access_ena = 1;
> +       tns_write_register(iomem0, TNS_TDMA_CONFIG_OFFSET, tdma_config.u);
> +       if (n1_tns)
> +               tns_write_register(node1_iomem0, TNS_TDMA_CONFIG_OFFSET,
> +                                  tdma_config.u);
> +
> +       reg_init_val = 0ULL;
> +       tns_write_register(iomem0, TNS_TDMA_RESET_CTL_OFFSET, reg_init_val);
> +       if (n1_tns)
> +               tns_write_register(node1_iomem0, TNS_TDMA_RESET_CTL_OFFSET,
> +                                  reg_init_val);
> +
> +       iomem2 = (u64)ioremap(BAR2_START, BAR2_SIZE);
> +       if (iomem2 == 0ULL) {
> +               filter_dbg(FERR, "ioremap failed for BAR2\n");
> +               result = -EAGAIN;
> +               goto error;
> +       } else {
> +               filter_dbg(FINFO, "ioremap success for BAR2\n");
> +       }
> +
> +       if (n1_tns) {
> +               node1_iomem2 = (u64)ioremap(NODE1_BAR2_START, NODE1_BAR2_SIZE);
> +               if (node1_iomem2 == 0ULL) {
> +                       filter_dbg(FERR, "Node1 ioremap failed for BAR2\n");
> +                       result = -EAGAIN;
> +                       goto error;
> +               } else {
> +                       filter_dbg(FINFO, "Node1 ioremap success for BAR2\n");
> +               }
> +       }
> +       msleep(1000);
> +       //We will replay register trace to initialize TNS block
> +       tns_replay_register_trace(fw, dev);
> +
> +       return 0;
> +error:
> +       if (iomem0 != 0)
> +               iounmap((void *)iomem0);
> +       if (iomem2 != 0)
> +               iounmap((void *)iomem2);
> +
> +       if (node1_iomem0 != 0)
> +               iounmap((void *)node1_iomem0);
> +       if (node1_iomem2 != 0)
> +               iounmap((void *)node1_iomem2);
> +
> +       return result;
> +}
> +
> +void tns_exit(void)
> +{
> +       int i;
> +
> +       if (iomem0 != 0)
> +               iounmap((void *)iomem0);
> +       if (iomem2 != 0)
> +               iounmap((void *)iomem2);
> +
> +       if (node1_iomem0 != 0)
> +               iounmap((void *)node1_iomem0);
> +       if (node1_iomem2 != 0)
> +               iounmap((void *)node1_iomem2);
> +
> +       for (i = 0; i < TNS_MAX_TABLE; i++) {
> +               if (!tbl_info[i].sdata.valid)
> +                       continue;
> +               kfree(tbl_info[i].ddata[0].bitmap);
> +               kfree(tbl_info[i].ddata[n1_tns].bitmap);
> +       }
> +}
> --
> 1.8.3.1
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's
  2016-12-21 12:43   ` Sunil Kovvuri
@ 2016-12-26 14:16     ` Koteshwar Rao, Satha
  0 siblings, 0 replies; 19+ messages in thread
From: Koteshwar Rao, Satha @ 2016-12-26 14:16 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: LKML, Goutham, Sunil, Robert Richter, David S. Miller, Daney,
	David, Vatsavayi, Raghu, Chickles, Derek, Romanov, Philip,
	Linux Netdev List, LAKML

Thanks for the suggestion. Will clean up the code in the next revision.

Thanks,
Satha

-----Original Message-----
From: Sunil Kovvuri [mailto:sunil.kovvuri@gmail.com] 
Sent: Wednesday, December 21, 2016 4:44 AM
To: Koteshwar Rao, Satha
Cc: LKML; Goutham, Sunil; Robert Richter; David S. Miller; Daney, David; Vatsavayi, Raghu; Chickles, Derek; Romanov, Philip; Linux Netdev List; LAKML
Subject: Re: [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's

On Wed, Dec 21, 2016 at 2:16 PM, Satha Koteswara Rao <satha.rao@caviumnetworks.com> wrote:
> +struct tns_global_st {
> +       u64 magic;
> +       char     version[16];
> +       u64 reg_cnt;
> +       struct table_static_s tbl_info[TNS_MAX_TABLE]; };
> +
> +#define PF_COUNT 3
> +#define PF_1   0
> +#define PF_2   64
> +#define PF_3   96
> +#define PF_END 128

Some comments please ... what are 0, 64, 96 ?
You can read PCI_SRIOV_TOTAL_VF from PCI config space instead of defining PF_END as 128.
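
A minimal sketch of that suggestion, assuming the driver has the PF's struct
pci_dev at hand (the helper name is hypothetical):

#include <linux/pci.h>

static int tns_total_vfs(struct pci_dev *pdev)
{
	int total = pci_sriov_get_totalvfs(pdev);

	if (total <= 0) {
		/* fall back to reading the SR-IOV capability directly */
		int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
		u16 vfs = 0;

		if (pos)
			pci_read_config_word(pdev, pos + PCI_SRIOV_TOTAL_VF, &vfs);
		total = vfs;
	}
	return total;	/* use this instead of a hard-coded PF_END of 128 */
}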

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file
  2016-12-21 13:05   ` Sunil Kovvuri
@ 2016-12-26 14:20     ` Koteshwar Rao, Satha
  2016-12-27  4:19       ` Sunil Kovvuri
  0 siblings, 1 reply; 19+ messages in thread
From: Koteshwar Rao, Satha @ 2016-12-26 14:20 UTC (permalink / raw)
  To: Sunil Kovvuri
  Cc: LKML, Goutham, Sunil, Robert Richter, David S. Miller, Daney,
	David, Vatsavayi, Raghu, Chickles, Derek, Romanov, Philip,
	Linux Netdev List, LAKML

Responses inline

Thanks,
Satha

-----Original Message-----
From: Sunil Kovvuri [mailto:sunil.kovvuri@gmail.com] 
Sent: Wednesday, December 21, 2016 5:05 AM
To: Koteshwar Rao, Satha
Cc: LKML; Goutham, Sunil; Robert Richter; David S. Miller; Daney, David; Vatsavayi, Raghu; Chickles, Derek; Romanov, Philip; Linux Netdev List; LAKML
Subject: Re: [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file

>
>  #define NIC_MAX_RSS_HASH_BITS          8
>  #define NIC_MAX_RSS_IDR_TBL_SIZE       (1 << NIC_MAX_RSS_HASH_BITS)
> +#define NIC_TNS_RSS_IDR_TBL_SIZE       5

So you want to use only 5 queues per VF when TNS is enabled, is that it?
There are 4096 RSS indices in total; each VF can use at most 32.
I guess you wanted to set the number of hash bits to 5 instead of the table size.

SATHA>>> We enabled 8 queues per VF. Yes, the macro name is misleading; it should be the number of hash bits. Will change this in the next version.

>  #define RSS_HASH_KEY_SIZE              5 /* 320 bit key */
>
>  struct nicvf_rss_info {
> @@ -255,74 +258,6 @@ struct nicvf_drv_stats {
>         struct u64_stats_sync   syncp;
>  };
>
> -struct nicvf {
> -       struct nicvf            *pnicvf;
> -       struct net_device       *netdev;
> -       struct pci_dev          *pdev;
> -       void __iomem            *reg_base;

Didn't get why you moved this structure to the end of file.
Looks like an unnecessary modification.
SATHA>>> There was a dependency earlier; we will look into this and address it in the next version.

> +static unsigned int num_vfs;
> +module_param(num_vfs, uint, 0644);
> +MODULE_PARM_DESC(num_vfs, "Non zero positive value, specifies number 
> +of VF's per physical port");

So what if the driver is built in instead of being a module? I can't use TNS then, is it?
SATHA>>> You can still enable these features for a built-in driver by passing the boot argument "nicpf.num_vfs=X"
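
Concretely (8 is just an example value):

	modprobe nicpf num_vfs=8          # PF driver built as a module
	nicpf.num_vfs=8                   # kernel command line when built in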

>
> +/* Set RBDR Backpressure (RBDR_BP) and CQ backpressure (CQ_BP) of 
> +vnic queues
> + * to 129 each

Why 129?
The RBDR minimum size is 8K buffers; why assert BP when ~4K buffers are still available? Isn't 4K a huge number at which to start asserting backpressure?
SATHA>>> As the CQ count was 4K entries, I used the same BP value for both; will address this in the next version
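
For what it's worth, the arithmetic behind the "~4K buffers" remark, assuming
the BP level is expressed in 1/256th fractions of the ring size (an inference
from the numbers in this exchange, not from the hardware manual):

#include <stdio.h>

int main(void)
{
	unsigned int rbdr_size = 8192;	/* minimum RBDR size: 8K buffers */
	unsigned int bp_level  = 129;	/* value programmed by the patch */
	unsigned int free_at_bp = rbdr_size - rbdr_size * bp_level / 256;

	/* prints 4064: backpressure asserts with roughly 4K buffers still free */
	printf("buffers still free when BP asserts: %u\n", free_at_bp);
	return 0;
}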

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH 3/7] Enable pause frame support
       [not found]   ` <DM5PR07MB2889471E0668C95BD2A266709E930@DM5PR07MB2889.namprd07.prod.outlook.com>
@ 2016-12-26 14:21     ` Koteshwar Rao, Satha
  0 siblings, 0 replies; 19+ messages in thread
From: Koteshwar Rao, Satha @ 2016-12-26 14:21 UTC (permalink / raw)
  To: Goutham, Sunil, linux-kernel
  Cc: rric, davem, Daney, David, Vatsavayi, Raghu, Chickles, Derek,
	Romanov, Philip, netdev, linux-arm-kernel

Thanks Sunil, will fix this in the next version

Thanks,
Satha

From: Goutham, Sunil 
Sent: Wednesday, December 21, 2016 1:20 AM
To: Koteshwar Rao, Satha; linux-kernel@vger.kernel.org
Cc: rric@kernel.org; davem@davemloft.net; Daney, David; Vatsavayi, Raghu; Chickles, Derek; Romanov, Philip; netdev@vger.kernel.org; linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH 3/7] Enable pause frame support

>>+#define  BGX_SMUX_CBFC_CTL             0x20218

These macros are already defined.

If you check the 'net-next' branch, pause frame support has already been
added. You should send your patch on top of it if you have further changes
to the existing code.

Thanks,
Sunil.

________________________________________
From: Koteshwar Rao, Satha
Sent: Wednesday, December 21, 2016 2:16 PM
To: linux-kernel@vger.kernel.org
Cc: Goutham, Sunil; rric@kernel.org; davem@davemloft.net; Daney, David; Vatsavayi, Raghu; Chickles, Derek; Koteshwar Rao, Satha; Romanov, Philip; netdev@vger.kernel.org; linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 3/7] Enable pause frame support 
 
---
 drivers/net/ethernet/cavium/thunder/thunder_bgx.c | 25 +++++++++++++++++++++++
 drivers/net/ethernet/cavium/thunder/thunder_bgx.h |  7 +++++++
 2 files changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
index 050e21f..92d7e04 100644
--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
@@ -121,6 +121,31 @@ static int bgx_poll_reg(struct bgx *bgx, u8 lmac, u64 reg, u64 mask, bool zero)
         return 1;
 }
 
+void enable_pause_frames(int node, int bgx_idx, int lmac)
+{
+       u64 reg_value = 0;
+       struct bgx *bgx = bgx_vnic[(node * MAX_BGX_PER_NODE) + bgx_idx];
+
+       reg_value =  bgx_reg_read(bgx, lmac, BGX_SMUX_TX_CTL);
+       /* Enable BGX()_SMU()_TX_CTL */
+       if (!(reg_value & L2P_BP_CONV))
+               bgx_reg_write(bgx, lmac, BGX_SMUX_TX_CTL,
+                             (reg_value | (L2P_BP_CONV)));
+
+       reg_value =  bgx_reg_read(bgx, lmac, BGX_SMUX_HG2_CTL);
+       /* Clear if BGX()_SMU()_HG2_CONTROL[HG2TX_EN] is set */
+       if (reg_value & SMUX_HG2_CTL_HG2TX_EN)
+               bgx_reg_write(bgx, lmac, BGX_SMUX_HG2_CTL,
+                             (reg_value & (~SMUX_HG2_CTL_HG2TX_EN)));
+
+       reg_value =  bgx_reg_read(bgx, lmac, BGX_SMUX_CBFC_CTL);
+       /* Clear if BGX()_SMU()_CBFC_CTL[TX_EN] is set */
+       if (reg_value & CBFC_CTL_TX_EN)
+               bgx_reg_write(bgx, lmac, BGX_SMUX_CBFC_CTL,
+                             (reg_value & (~CBFC_CTL_TX_EN)));
+}
+EXPORT_SYMBOL(enable_pause_frames);
+
 /* Return number of BGX present in HW */
 unsigned bgx_get_map(int node)
 {
diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.h b/drivers/net/ethernet/cavium/thunder/thunder_bgx.h
index 01cc7c8..5b57bd1 100644
--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.h
+++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.h
@@ -131,6 +131,11 @@
 #define BGX_SMUX_TX_CTL                 0x20178
 #define  SMU_TX_CTL_DIC_EN                      BIT_ULL(0)
 #define  SMU_TX_CTL_UNI_EN                      BIT_ULL(1)
+#define  L2P_BP_CONV                           BIT_ULL(7)
+#define  BGX_SMUX_CBFC_CTL             0x20218
+#define  CBFC_CTL_TX_EN                                BIT_ULL(1)
+#define  BGX_SMUX_HG2_CTL              0x20210
+#define SMUX_HG2_CTL_HG2TX_EN                  BIT_ULL(18)
 #define  SMU_TX_CTL_LNK_STATUS                  (3ull << 4)
 #define BGX_SMUX_TX_THRESH              0x20180
 #define BGX_SMUX_CTL                    0x20200
@@ -212,6 +217,8 @@ void bgx_lmac_internal_loopback(int node, int bgx_idx,
 
 u64 bgx_get_rx_stats(int node, int bgx_idx, int lmac, int idx);
 u64 bgx_get_tx_stats(int node, int bgx_idx, int lmac, int idx);
+void enable_pause_frames(int node, int bgx_idx, int lmac);
+
 #define BGX_RX_STATS_COUNT 11
 #define BGX_TX_STATS_COUNT 18
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 0/7] ThunderX Embedded switch support
  2016-12-26 14:04   ` Koteshwar Rao, Satha
@ 2016-12-26 14:55     ` Andrew Lunn
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Lunn @ 2016-12-26 14:55 UTC (permalink / raw)
  To: Koteshwar Rao, Satha
  Cc: Sunil Kovvuri, LKML, Goutham, Sunil, Robert Richter,
	David S. Miller, Daney, David, Vatsavayi, Raghu, Chickles, Derek,
	Romanov, Philip, Linux Netdev List, LAKML

On Mon, Dec 26, 2016 at 02:04:27PM +0000, Koteshwar Rao, Satha wrote:
> Hi Sunil,
> 
> In the RFC cover letter we explained the feature details; the files are organized based on the functionality they support. Let me know if you are interested in any specific details

Please don't top post. Also, please perform correct quoting of the
email you are replying to.

As for getting patches merged, you will find it easier to get reviews
if you have lots of small patches which are obviously correct, and
each has a good change log entry describing the why as well as what.

     Andrew

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file
  2016-12-26 14:20     ` Koteshwar Rao, Satha
@ 2016-12-27  4:19       ` Sunil Kovvuri
  0 siblings, 0 replies; 19+ messages in thread
From: Sunil Kovvuri @ 2016-12-27  4:19 UTC (permalink / raw)
  To: Koteshwar Rao, Satha
  Cc: LKML, Goutham, Sunil, Robert Richter, David S. Miller, Daney,
	David, Vatsavayi, Raghu, Chickles, Derek, Romanov, Philip,
	Linux Netdev List, LAKML

>>  #define NIC_MAX_RSS_HASH_BITS          8
>>  #define NIC_MAX_RSS_IDR_TBL_SIZE       (1 << NIC_MAX_RSS_HASH_BITS)
>> +#define NIC_TNS_RSS_IDR_TBL_SIZE       5
>
> So you want to use only 5 queues per VF when TNS is enabled, is that it?
> There are 4096 RSS indices in total; each VF can use at most 32.
> I guess you wanted to set the number of hash bits to 5 instead of the table size.
>
> SATHA>>> We enabled 8 queues per VF. Yes, the macro name is misleading; it should be the number of hash bits. Will change this in the next version.

No, I am not referring to any discrepancy in naming the macro.
If you check your code

- hw->rss_ind_tbl_size = NIC_MAX_RSS_IDR_TBL_SIZE;
+ hw->rss_ind_tbl_size = veb_enabled ? NIC_TNS_RSS_IDR_TBL_SIZE :
+     NIC_MAX_RSS_IDR_TBL_SIZE;

You are setting the RSS table size to 5, i.e. the RSS hash bits will be set to 2.
Hence only 4 queues (not even the 5 I mentioned earlier). Please check
'nicvf_rss_init' and you will understand what I am saying.
Have you tested and observed packets in all 8 queues?
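
A standalone sketch of the arithmetic being described (the helper below mirrors
the rounddown_pow_of_two()/ilog2() step in nicvf_rss_init for illustration):

#include <stdio.h>

static unsigned int floor_log2(unsigned int v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned int tbl_size = 5;	/* NIC_TNS_RSS_IDR_TBL_SIZE */
	unsigned int hash_bits = floor_log2(tbl_size);

	/* prints: tbl_size 5 -> 2 hash bits -> 4 usable RSS entries (queues) */
	printf("tbl_size %u -> %u hash bits -> %u usable RSS entries (queues)\n",
	       tbl_size, hash_bits, 1u << hash_bits);
	return 0;
}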

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2016-12-27  4:20 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-21  8:46 [RFC PATCH 0/7] ThunderX Embedded switch support Satha Koteswara Rao
2016-12-21  8:46 ` [RFC PATCH 1/7] PF driver modified to enable HW filter support, changes works in backward compatibility mode Enable required things in Makefile Enable LZ4 dependecy inside config file Satha Koteswara Rao
2016-12-21 13:05   ` Sunil Kovvuri
2016-12-26 14:20     ` Koteshwar Rao, Satha
2016-12-27  4:19       ` Sunil Kovvuri
2016-12-21  8:46 ` [RFC PATCH 2/7] VF driver changes to enable hooks to get kernel notifications Satha Koteswara Rao
2016-12-21  8:46 ` [RFC PATCH 3/7] Enable pause frame support Satha Koteswara Rao
     [not found]   ` <DM5PR07MB2889471E0668C95BD2A266709E930@DM5PR07MB2889.namprd07.prod.outlook.com>
2016-12-26 14:21     ` Koteshwar Rao, Satha
2016-12-21  8:46 ` [RFC PATCH 4/7] HW Filter Initialization code and register access APIs Satha Koteswara Rao
2016-12-21 12:36   ` Sunil Kovvuri
2016-12-26 14:13     ` Koteshwar Rao, Satha
2016-12-21  8:46 ` [RFC PATCH 5/7] Multiple VF's grouped together under single physical port called PF group PF Group maintainance API's Satha Koteswara Rao
2016-12-21 12:43   ` Sunil Kovvuri
2016-12-26 14:16     ` Koteshwar Rao, Satha
2016-12-21  8:46 ` [RFC PATCH 6/7] HW Filter Table access API's Satha Koteswara Rao
2016-12-21  8:46 ` [RFC PATCH 7/7] Get notifications from PF driver and configure filter block based on request data Satha Koteswara Rao
2016-12-21 12:03 ` [RFC PATCH 0/7] ThunderX Embedded switch support Sunil Kovvuri
2016-12-26 14:04   ` Koteshwar Rao, Satha
2016-12-26 14:55     ` Andrew Lunn

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).