netdev.vger.kernel.org archive mirror
* [PATCH v3 net-next 00/22] bnx2x: support SR-IOV
@ 2012-12-10 15:46 Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device Ariel Elior
                   ` (21 more replies)
  0 siblings, 22 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior

Hi Dave,

Changes for v2:
-Remove redundant empty lines
-Remove redundant 'inline'

Changes for v3:
-Really remove all redundant empty lines
-Remove __dev* attributes from series

This patch series adds support for SR-IOV in the bnx2x driver.
In the bnx2x SR-IOV scheme the same bnx2x driver binary drives both VFs and
PFs.
The bulk of the communication between the VF drivers and the PF driver is done
via the VF <-> PF channel, a hardware-based communications channel with TLV
messages. The TLVs are designed to support different versions of VF drivers
(from multiple VMs) communicating with the same PF driver.
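
Concretely (a simplified sketch drawn from the structures patches 01-02
add below), every message is a chain of type-length-value entries
terminated by a list-end TLV:

	struct channel_tlv {
		u16 type;	/* e.g. CHANNEL_TLV_ACQUIRE */
		u16 length;	/* used to locate the next TLV in the list */
	};

	/* req points into the DMA-able mailbox, e.g.
	 * &bp->vf2pf_mbox->req.acquire; prep clears the mailbox and
	 * writes the first TLV header
	 */
	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_ACQUIRE, sizeof(*req));
	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
		      sizeof(struct channel_list_end_tlv));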

Patches:
01-03 - Probing and removing a VF driver
Includes sending the 'acquire' and 'release' messages on the VF PF channel.
Here the VF PF channel infrastructure is added, including the allocation of
the mailboxes and the definition of thin API structures (which will be
filled out with more content as the series progresses).
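For reference, the mailbox itself is one DMA-coherent buffer holding the
request and the response back to back (as patch 01 below defines it):

	struct bnx2x_vf_mbx_msg {
		union vfpf_tlvs req;
		union pfvf_tlvs resp;
	};

	/* allocated once at VF probe; its bus address is later handed
	 * to the PF firmware so it can fetch requests and post responses
	 */
	BNX2X_PCI_ALLOC(bp->vf2pf_mbox, &bp->vf2pf_mbox_mapping,
			sizeof(struct bnx2x_vf_mbx_msg));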

04-07 - Loading/Unloading a VF driver
Includes refactoring of the driver load code and differentiating the VF flow
from the PF flow. This also includes requests from the VF for the PF to open
a queue in the HW on its behalf, configure the device with MAC/VLAN/rxmode
data, etc. Likewise, the unload flow has been modified so the PF undoes these
configurations when the VF indicates it is going down.

08    - Modify fastpath flows for VFs
VFs have almost identical fastpath behavior to PFs. In this patch the VFs
prepare transmit transactions for tx-switching, and the code for acking a
fastpath interrupt has been reorganized so a VF or PF can preconfigure the
offset of the interrupt's location in its BAR (the two have BARs with
different mappings) instead of computing it in the fastpath.
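The precomputation itself is a single line in patch 01 (the IGU window
sits at offset 0 of the VF BAR, per the PXP_VF_ADDR_IGU_START definition):

	bp->igu_base_addr = IS_VF(bp) ? PXP_VF_ADDR_IGU_START : BAR_IGU_INTMEM;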

09-10 - Probe and Load a PF driver with SR-IOV
The PF driver allocates and initializes the VF database to manage and keep track
of its VFs, their resources, queues, etc.
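As a rough illustration only (the field names below are hypothetical;
patches 09-10 define the actual layout), the database amounts to an array
of per-VF records along these lines:

	/* hypothetical sketch - not the structure the patches introduce */
	struct bnx2x_vf_record {
		u8	abs_vfid;	/* the VF's id in the PF's view */
		u8	state;		/* e.g. free/acquired/enabled */
		struct vf_pf_resc_request granted; /* queues, SBs, filters */
	};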

11-19 - The PF side of the VF <-> PF channel requests
This is the PF-side implementation of the requests submitted by the VF in
patches 01 through 07. Patch 14 adds support for statistics collection by
the PF for all of its VFs (stats are DMAed by the chip directly into the
VM's guest physical memory).

20    - Support for VF function level reset
When an FLR indication is received for VFs, the PF reclaims all the resources
allocated to those VFs (interrupts, queues) and releases the allocations it
performed for the FLRed VFs (consulting the VF database to do so).

21    - Bulletin Board interface
This patch adds the PF <-> VF Bulletin Board interface: a simple interface
in which the PF is the initiator, indicating to a VF that it has a new MAC
for it to use. Each post "covers" any previous posts (hence the name).
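As a rough illustration only (patch 21 defines the actual layout; the
fields below are hypothetical), a post could look like:

	/* hypothetical sketch - the VF polls the board; newest post wins */
	struct pf_vf_bulletin_post {
		u32 crc;		/* lets the VF validate a post */
		u16 version;		/* bumped by the PF on every post */
		u8  mac[ETH_ALEN];	/* the new MAC assigned by the PF */
	};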

22    - Add the VF device IDs and enable the feature
In this patch we add the VF device IDs of the various devices driven by
bnx2x, along with the calls to pci_enable_sriov() and pci_disable_sriov().
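These are the standard PCI core entry points, roughly:

	/* a positive num_vfs turns the feature on... */
	rc = pci_enable_sriov(bp->pdev, num_vfs);
	if (rc)
		BNX2X_ERR("pci_enable_sriov failed: %d\n", rc);

	/* ...and pci_disable_sriov() tears it down */
	pci_disable_sriov(bp->pdev);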

Important: this patch series lays the groundwork for interfacing with the
infrastructure submitted to the PCI tree for dynamically controlling the
number of VFs of a physical device.

Meanwhile, in patch 9 this code is added to bnx2x_init_one():
--snip snip--
rc = bnx2x_iov_init_one(bp, int_mode, 0/*num vfs*/);
--snip snip--
Recompiling bnx2x with a non-zero number of VFs enables SR-IOV with that
many VFs.
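For example (the 4 below is an arbitrary illustrative value):
--snip snip--
rc = bnx2x_iov_init_one(bp, int_mode, 4/*num vfs*/);
--snip snip--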

Please consider applying these patches.


* [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 18:35   ` David Miller
  2012-12-10 15:46 ` [PATCH net-next v3 02/22] bnx2x: VF <-> PF channel 'acquire' at vf probe Ariel Elior
                   ` (20 subsequent siblings)
  21 siblings, 1 reply; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

To support probing and removing a bnx2x virtual function,
the following were added:
1. add bnx2x_vfpf.h: defines the VF-to-PF channel
2. add bnx2x_sriov.h: header for bnx2x SR-IOV functionality
3. enumerate VF HW types (identify VFs)
4. if driving a VF, map the VF BAR
5. if driving a VF, allocate the VF-to-PF channel
6. refactor interrupt flows to include VFs

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |   21 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c   |   25 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h   |    2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |  391 +++++++++++++--------
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h   |    9 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |   27 ++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h  |   37 ++
 7 files changed, 354 insertions(+), 158 deletions(-)
 create mode 100644 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
 create mode 100644 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 9a3b81e..b2f7425 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -49,6 +49,12 @@
 #include "bnx2x_dcb.h"
 #include "bnx2x_stats.h"
 
+enum bnx2x_int_mode {
+	BNX2X_INT_MODE_MSIX,
+	BNX2X_INT_MODE_INTX,
+	BNX2X_INT_MODE_MSI
+};
+
 /* error/debug prints */
 
 #define DRV_MODULE_NAME		"bnx2x"
@@ -954,6 +960,9 @@ struct bnx2x_port {
 extern struct workqueue_struct *bnx2x_wq;
 
 #define BNX2X_MAX_NUM_OF_VFS	64
+#define BNX2X_VF_CID_WND	0
+#define BNX2X_CIDS_PER_VF	(1 << BNX2X_VF_CID_WND)
+#define BNX2X_VF_CIDS		(BNX2X_MAX_NUM_OF_VFS * BNX2X_CIDS_PER_VF)
 #define BNX2X_VF_ID_INVALID	0xFF
 
 /*
@@ -1231,6 +1240,10 @@ struct bnx2x {
 	  (vn) * ((CHIP_IS_E1x(bp) || (CHIP_MODE_IS_4_PORT(bp))) ? 2  : 1))
 #define BP_FW_MB_IDX(bp)		BP_FW_MB_IDX_VN(bp, BP_VN(bp))
 
+	/* vf pf channel mailbox contains request and response buffers */
+	struct bnx2x_vf_mbx_msg	*vf2pf_mbox;
+	dma_addr_t		vf2pf_mbox_mapping;
+
 	struct net_device	*dev;
 	struct pci_dev		*pdev;
 
@@ -1318,8 +1331,6 @@ struct bnx2x {
 #define DISABLE_MSI_FLAG		(1 << 7)
 #define TPA_ENABLE_FLAG			(1 << 8)
 #define NO_MCP_FLAG			(1 << 9)
-
-#define BP_NOMCP(bp)			(bp->flags & NO_MCP_FLAG)
 #define GRO_ENABLE_FLAG			(1 << 10)
 #define MF_FUNC_DIS			(1 << 11)
 #define OWN_CNIC_IRQ			(1 << 12)
@@ -1330,6 +1341,11 @@ struct bnx2x {
 #define BC_SUPPORTS_FCOE_FEATURES	(1 << 19)
 #define USING_SINGLE_MSIX_FLAG		(1 << 20)
 #define BC_SUPPORTS_DCBX_MSG_NON_PMF	(1 << 21)
+#define IS_VF_FLAG			(1 << 22)
+
+#define BP_NOMCP(bp)			((bp)->flags & NO_MCP_FLAG)
+#define IS_VF(bp)			((bp)->flags & IS_VF_FLAG)
+#define IS_PF(bp)			(!((bp)->flags & IS_VF_FLAG))
 
 #define NO_ISCSI(bp)		((bp)->flags & NO_ISCSI_FLAG)
 #define NO_ISCSI_OOO(bp)	((bp)->flags & NO_ISCSI_OOO_FLAG)
@@ -1432,6 +1448,7 @@ struct bnx2x {
 	u8			igu_sb_cnt;
 	u8			min_msix_vec_cnt;
 
+	u32			igu_base_addr;
 	dma_addr_t		def_status_blk_mapping;
 
 	struct bnx2x_slowpath	*slowpath;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 67baddd..0a493f4 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -1424,12 +1424,15 @@ void bnx2x_free_irq(struct bnx2x *bp)
 
 int bnx2x_enable_msix(struct bnx2x *bp)
 {
-	int msix_vec = 0, i, rc, req_cnt;
+	int msix_vec = 0, i, rc;
 
-	bp->msix_table[msix_vec].entry = msix_vec;
-	BNX2X_DEV_INFO("msix_table[0].entry = %d (slowpath)\n",
-	   bp->msix_table[0].entry);
-	msix_vec++;
+	/* VFs don't have a default status block */
+	if (IS_PF(bp)) {
+		bp->msix_table[msix_vec].entry = msix_vec;
+		BNX2X_DEV_INFO("msix_table[0].entry = %d (slowpath)\n",
+			       bp->msix_table[0].entry);
+		msix_vec++;
+	}
 
 	/* Cnic requires an msix vector for itself */
 	if (CNIC_SUPPORT(bp)) {
@@ -1447,9 +1450,10 @@ int bnx2x_enable_msix(struct bnx2x *bp)
 		msix_vec++;
 	}
 
-	req_cnt = BNX2X_NUM_ETH_QUEUES(bp) + CNIC_SUPPORT(bp) + 1;
+	DP(BNX2X_MSG_SP, "about to request enable msix with %d vectors",
+	   msix_vec);
 
-	rc = pci_enable_msix(bp->pdev, &bp->msix_table[0], req_cnt);
+	rc = pci_enable_msix(bp->pdev, &bp->msix_table[0], msix_vec);
 
 	/*
 	 * reconfigure number of tx/rx queues according to available
@@ -1457,7 +1461,7 @@ int bnx2x_enable_msix(struct bnx2x *bp)
 	 */
 	if (rc >= BNX2X_MIN_MSIX_VEC_CNT(bp)) {
 		/* how less vectors we will have? */
-		int diff = req_cnt - rc;
+		int diff = msix_vec - rc;
 
 		BNX2X_DEV_INFO("Trying to use less MSI-X vectors: %d\n", rc);
 
@@ -3889,7 +3893,10 @@ int bnx2x_alloc_mem_bp(struct bnx2x *bp)
 	 * The biggest MSI-X table we might need is as a maximum number of fast
 	 * path IGU SBs plus default SB (for PF).
 	 */
-	msix_table_size = bp->igu_sb_cnt + 1;
+	msix_table_size = bp->igu_sb_cnt;
+	if (IS_PF(bp))
+		msix_table_size++;
+	BNX2X_DEV_INFO("msix_table_size %d", msix_table_size);
 
 	/* fp array: RSS plus CNIC related L2 queues */
 	fp_array_size = BNX2X_MAX_RSS_COUNT(bp) + CNIC_SUPPORT(bp);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 0991534..bca371e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -863,7 +863,7 @@ static inline void bnx2x_del_all_napi(struct bnx2x *bp)
 		netif_napi_del(&bnx2x_fp(bp, i, napi));
 }
 
-void bnx2x_set_int_mode(struct bnx2x *bp);
+int bnx2x_set_int_mode(struct bnx2x *bp);
 
 static inline void bnx2x_disable_msi(struct bnx2x *bp)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 940ef85..b9bc677 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -59,6 +59,8 @@
 #include "bnx2x_init.h"
 #include "bnx2x_init_ops.h"
 #include "bnx2x_cmn.h"
+#include "bnx2x_vfpf.h"
+#include "bnx2x_sriov.h"
 #include "bnx2x_dcb.h"
 #include "bnx2x_sp.h"
 
@@ -133,39 +135,49 @@ enum bnx2x_board_type {
 	BCM57711E,
 	BCM57712,
 	BCM57712_MF,
+	BCM57712_VF,
 	BCM57800,
 	BCM57800_MF,
+	BCM57800_VF,
 	BCM57810,
 	BCM57810_MF,
-	BCM57840_O,
+	BCM57810_VF,
 	BCM57840_4_10,
 	BCM57840_2_20,
-	BCM57840_MFO,
 	BCM57840_MF,
+	BCM57840_VF,
 	BCM57811,
-	BCM57811_MF
+	BCM57811_MF,
+	BCM57840_O,
+	BCM57840_MFO,
+	BCM57811_VF
 };
 
 /* indexed by board_type, above */
 static struct {
 	char *name;
 } board_info[] = {
-	{ "Broadcom NetXtreme II BCM57710 10 Gigabit PCIe [Everest]" },
-	{ "Broadcom NetXtreme II BCM57711 10 Gigabit PCIe" },
-	{ "Broadcom NetXtreme II BCM57711E 10 Gigabit PCIe" },
-	{ "Broadcom NetXtreme II BCM57712 10 Gigabit Ethernet" },
-	{ "Broadcom NetXtreme II BCM57712 10 Gigabit Ethernet Multi Function" },
-	{ "Broadcom NetXtreme II BCM57800 10 Gigabit Ethernet" },
-	{ "Broadcom NetXtreme II BCM57800 10 Gigabit Ethernet Multi Function" },
-	{ "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet" },
-	{ "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function" },
-	{ "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet" },
-	{ "Broadcom NetXtreme II BCM57840 10 Gigabit Ethernet" },
-	{ "Broadcom NetXtreme II BCM57840 20 Gigabit Ethernet" },
-	{ "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Multi Function"},
-	{ "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Multi Function"},
-	{ "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet"},
-	{ "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet Multi Function"},
+	[BCM57710]	= { "Broadcom NetXtreme II BCM57710 10 Gigabit PCIe [Everest]" },
+	[BCM57711]	= { "Broadcom NetXtreme II BCM57711 10 Gigabit PCIe" },
+	[BCM57711E]	= { "Broadcom NetXtreme II BCM57711E 10 Gigabit PCIe" },
+	[BCM57712]	= { "Broadcom NetXtreme II BCM57712 10 Gigabit Ethernet" },
+	[BCM57712_MF]	= { "Broadcom NetXtreme II BCM57712 10 Gigabit Ethernet Multi Function" },
+	[BCM57712_VF]	= { "Broadcom NetXtreme II BCM57712 10 Gigabit Ethernet Virtual Function" },
+	[BCM57800]	= { "Broadcom NetXtreme II BCM57800 10 Gigabit Ethernet" },
+	[BCM57800_MF]	= { "Broadcom NetXtreme II BCM57800 10 Gigabit Ethernet Multi Function" },
+	[BCM57800_VF]	= { "Broadcom NetXtreme II BCM57800 10 Gigabit Ethernet Virtual Function" },
+	[BCM57810]	= { "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet" },
+	[BCM57810_MF]	= { "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function" },
+	[BCM57810_VF]	= { "Broadcom NetXtreme II BCM57810 10 Gigabit Ethernet Virtual Function" },
+	[BCM57840_4_10]	= { "Broadcom NetXtreme II BCM57840 10 Gigabit Ethernet" },
+	[BCM57840_2_20]	= { "Broadcom NetXtreme II BCM57840 20 Gigabit Ethernet" },
+	[BCM57840_MF]	= { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Multi Function" },
+	[BCM57840_VF]	= { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Virtual Function" },
+	[BCM57811]	= { "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet" },
+	[BCM57811_MF]	= { "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet Multi Function" },
+	[BCM57840_O]	= { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet" },
+	[BCM57840_MFO]	= { "Broadcom NetXtreme II BCM57840 10/20 Gigabit Ethernet Multi Function" },
+	[BCM57811_VF]	= { "Broadcom NetXtreme II BCM57811 10 Gigabit Ethernet Virtual Function" }
 };
 
 #ifndef PCI_DEVICE_ID_NX2_57710
@@ -7792,41 +7804,49 @@ int bnx2x_setup_leading(struct bnx2x *bp)
  *
  * In case of MSI-X it will also try to enable MSI-X.
  */
-void bnx2x_set_int_mode(struct bnx2x *bp)
+int bnx2x_set_int_mode(struct bnx2x *bp)
 {
+	int rc = 0;
+
+	if (IS_VF(bp) && int_mode != BNX2X_INT_MODE_MSIX)
+		return -EINVAL;
+
 	switch (int_mode) {
-	case INT_MODE_MSI:
+	case BNX2X_INT_MODE_MSIX:
+		/* attempt to enable msix */
+		rc = bnx2x_enable_msix(bp);
+
+		/* msix attained */
+		if (!rc)
+			return 0;
+
+		/* vfs use only msix */
+		if (rc && IS_VF(bp))
+			return rc;
+
+		/* failed to enable multiple MSI-X */
+		BNX2X_DEV_INFO("Failed to enable multiple MSI-X (%d), set number of queues to %d\n",
+			       bp->num_queues,
+			       1 + bp->num_cnic_queues);
+
+		/* falling through... */
+	case BNX2X_INT_MODE_MSI:
 		bnx2x_enable_msi(bp);
+
 		/* falling through... */
-	case INT_MODE_INTx:
+	case BNX2X_INT_MODE_INTX:
 		bp->num_ethernet_queues = 1;
 		bp->num_queues = bp->num_ethernet_queues + bp->num_cnic_queues;
 		BNX2X_DEV_INFO("set number of queues to 1\n");
 		break;
 	default:
-		/* if we can't use MSI-X we only need one fp,
-		 * so try to enable MSI-X with the requested number of fp's
-		 * and fallback to MSI or legacy INTx with one fp
-		 */
-		if (bnx2x_enable_msix(bp) ||
-		    bp->flags & USING_SINGLE_MSIX_FLAG) {
-			/* failed to enable multiple MSI-X */
-			BNX2X_DEV_INFO("Failed to enable multiple MSI-X (%d), set number of queues to %d\n",
-				       bp->num_queues,
-				       1 + bp->num_cnic_queues);
-
-			bp->num_queues = 1 + bp->num_cnic_queues;
-
-			/* Try to enable MSI */
-			if (!(bp->flags & USING_SINGLE_MSIX_FLAG) &&
-			    !(bp->flags & DISABLE_MSI_FLAG))
-				bnx2x_enable_msi(bp);
-		}
-		break;
+		BNX2X_DEV_INFO("unknown value in int_mode module parameter\n");
+		return -EINVAL;
 	}
+	return 0;
 }
 
-/* must be called prioir to any HW initializations */
+/* must be called prior to any HW initializations */
 static inline u16 bnx2x_cid_ilt_lines(struct bnx2x *bp)
 {
 	return L2_ILT_LINES(bp);
@@ -11081,9 +11101,13 @@ static int bnx2x_init_bp(struct bnx2x *bp)
 	INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task);
 	INIT_DELAYED_WORK(&bp->sp_rtnl_task, bnx2x_sp_rtnl_task);
 	INIT_DELAYED_WORK(&bp->period_task, bnx2x_period_task);
-	rc = bnx2x_get_hwinfo(bp);
-	if (rc)
-		return rc;
+	if (IS_PF(bp)) {
+		rc = bnx2x_get_hwinfo(bp);
+		if (rc)
+			return rc;
+	} else {
+		random_ether_addr(bp->dev->dev_addr);
+	}
 
 	bnx2x_set_modes_bitmap(bp);
 
@@ -11096,7 +11120,7 @@ static int bnx2x_init_bp(struct bnx2x *bp)
 	func = BP_FUNC(bp);
 
 	/* need to reset chip if undi was active */
-	if (!BP_NOMCP(bp)) {
+	if (IS_PF(bp) && !BP_NOMCP(bp)) {
 		/* init fw_seq */
 		bp->fw_seq =
 			SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
@@ -11133,6 +11157,8 @@ static int bnx2x_init_bp(struct bnx2x *bp)
 	bp->mrrs = mrrs;
 
 	bp->tx_ring_size = IS_MF_FCOE_AFEX(bp) ? 0 : MAX_TX_AVAIL;
+	if (IS_VF(bp))
+		bp->rx_ring_size = MAX_RX_AVAIL;
 
 	/* make sure that the numbers are in the right granularity */
 	bp->tx_ticks = (50 / BNX2X_BTR) * BNX2X_BTR;
@@ -11161,12 +11187,18 @@ static int bnx2x_init_bp(struct bnx2x *bp)
 		bp->cnic_base_cl_id = FP_SB_MAX_E2;
 
 	/* multiple tx priority */
-	if (CHIP_IS_E1x(bp))
+	if (IS_VF(bp))
+		bp->max_cos = 1;
+	else if (CHIP_IS_E1x(bp))
 		bp->max_cos = BNX2X_MULTI_TX_COS_E1X;
-	if (CHIP_IS_E2(bp) || CHIP_IS_E3A0(bp))
+	else if (CHIP_IS_E2(bp) || CHIP_IS_E3A0(bp))
 		bp->max_cos = BNX2X_MULTI_TX_COS_E2_E3A0;
-	if (CHIP_IS_E3B0(bp))
+	else if (CHIP_IS_E3B0(bp))
 		bp->max_cos = BNX2X_MULTI_TX_COS_E3B0;
+	else
+		BNX2X_ERR("unknown chip %x revision %x\n",
+			  CHIP_NUM(bp), CHIP_REV(bp));
+	pr_info("set bp->max_cos to %d\n", bp->max_cos);
 
 	/* We need at least one default status block for slow-path events,
 	 * second status block for the L2 queue, and a third status block for
@@ -11551,10 +11583,9 @@ static int bnx2x_set_coherency_mask(struct bnx2x *bp)
 	return 0;
 }
 
-static int bnx2x_init_dev(struct pci_dev *pdev, struct net_device *dev,
-			  unsigned long board_type)
+static int bnx2x_init_dev(struct bnx2x *bp, struct pci_dev *pdev,
+			  struct net_device *dev, unsigned long board_type)
 {
-	struct bnx2x *bp;
 	int rc;
 	u32 pci_cfg_dword;
 	bool chip_is_e1x = (board_type == BCM57710 ||
@@ -11562,11 +11593,9 @@ static int bnx2x_init_dev(struct pci_dev *pdev, struct net_device *dev,
 			    board_type == BCM57711E);
 
 	SET_NETDEV_DEV(dev, &pdev->dev);
-	bp = netdev_priv(dev);
 
 	bp->dev = dev;
 	bp->pdev = pdev;
-	bp->flags = 0;
 
 	rc = pci_enable_device(pdev);
 	if (rc) {
@@ -11582,9 +11611,8 @@ static int bnx2x_init_dev(struct pci_dev *pdev, struct net_device *dev,
 		goto err_out_disable;
 	}
 
-	if (!(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
-		dev_err(&bp->pdev->dev, "Cannot find second PCI device"
-		       " base address, aborting\n");
+	if (IS_PF(bp) && !(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
+		dev_err(&bp->pdev->dev, "Cannot find second PCI device base address, aborting\n");
 		rc = -ENODEV;
 		goto err_out_disable;
 	}
@@ -11609,12 +11637,14 @@ static int bnx2x_init_dev(struct pci_dev *pdev, struct net_device *dev,
 		pci_save_state(pdev);
 	}
 
-	bp->pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
-	if (bp->pm_cap == 0) {
-		dev_err(&bp->pdev->dev,
-			"Cannot find power management capability, aborting\n");
-		rc = -EIO;
-		goto err_out_release;
+	if (IS_PF(bp)) {
+		bp->pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
+		if (bp->pm_cap == 0) {
+			dev_err(&bp->pdev->dev,
+				"Cannot find power management capability, aborting\n");
+			rc = -EIO;
+			goto err_out_release;
+		}
 	}
 
 	if (!pci_is_pcie(pdev)) {
@@ -11665,25 +11695,28 @@ static int bnx2x_init_dev(struct pci_dev *pdev, struct net_device *dev,
 	 * Clean the following indirect addresses for all functions since it
 	 * is not used by the driver.
 	 */
-	REG_WR(bp, PXP2_REG_PGL_ADDR_88_F0, 0);
-	REG_WR(bp, PXP2_REG_PGL_ADDR_8C_F0, 0);
-	REG_WR(bp, PXP2_REG_PGL_ADDR_90_F0, 0);
-	REG_WR(bp, PXP2_REG_PGL_ADDR_94_F0, 0);
+	if (IS_PF(bp)) {
+		REG_WR(bp, PXP2_REG_PGL_ADDR_88_F0, 0);
+		REG_WR(bp, PXP2_REG_PGL_ADDR_8C_F0, 0);
+		REG_WR(bp, PXP2_REG_PGL_ADDR_90_F0, 0);
+		REG_WR(bp, PXP2_REG_PGL_ADDR_94_F0, 0);
+
+		if (chip_is_e1x) {
+			REG_WR(bp, PXP2_REG_PGL_ADDR_88_F1, 0);
+			REG_WR(bp, PXP2_REG_PGL_ADDR_8C_F1, 0);
+			REG_WR(bp, PXP2_REG_PGL_ADDR_90_F1, 0);
+			REG_WR(bp, PXP2_REG_PGL_ADDR_94_F1, 0);
+		}
 
-	if (chip_is_e1x) {
-		REG_WR(bp, PXP2_REG_PGL_ADDR_88_F1, 0);
-		REG_WR(bp, PXP2_REG_PGL_ADDR_8C_F1, 0);
-		REG_WR(bp, PXP2_REG_PGL_ADDR_90_F1, 0);
-		REG_WR(bp, PXP2_REG_PGL_ADDR_94_F1, 0);
+		/* Enable internal target-read (in case we are probed after PF
+		 * FLR). Must be done prior to any BAR read access. Only for
+		 * 57712 and up
+		 */
+		if (!chip_is_e1x)
+			REG_WR(bp,
+			       PGLUE_B_REG_INTERNAL_PFID_ENABLE_TARGET_READ, 1);
 	}
 
-	/*
-	 * Enable internal target-read (in case we are probed after PF FLR).
-	 * Must be done prior to any BAR read access. Only for 57712 and up
-	 */
-	if (!chip_is_e1x)
-		REG_WR(bp, PGLUE_B_REG_INTERNAL_PFID_ENABLE_TARGET_READ, 1);
-
 	dev->watchdog_timeo = TX_TIMEOUT;
 
 	dev->netdev_ops = &bnx2x_netdev_ops;
@@ -11734,7 +11767,8 @@ err_out:
 
 static void bnx2x_get_pcie_width_speed(struct bnx2x *bp, int *width, int *speed)
 {
-	u32 val = REG_RD(bp, PCICFG_OFFSET + PCICFG_LINK_CONTROL);
+	u32 val = 0;
+	pci_read_config_dword(bp->pdev, PCICFG_LINK_CONTROL, &val);
 
 	*width = (val & PCICFG_LINK_WIDTH) >> PCICFG_LINK_WIDTH_SHIFT;
 
@@ -12012,10 +12046,10 @@ static int bnx2x_set_qm_cid_count(struct bnx2x *bp)
  *
  */
 static int bnx2x_get_num_non_def_sbs(struct pci_dev *pdev,
-				     int cnic_cnt)
+				     int cnic_cnt, bool is_vf)
 {
-	int pos;
-	u16 control;
+	int pos, index;
+	u16 control = 0;
 
 	pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX);
 
@@ -12023,85 +12057,115 @@ static int bnx2x_get_num_non_def_sbs(struct pci_dev *pdev,
 	 * If MSI-X is not supported - return number of SBs needed to support
 	 * one fast path queue: one FP queue + SB for CNIC
 	 */
-	if (!pos)
+	if (!pos) {
+		pr_info("no msix capability found");
 		return 1 + cnic_cnt;
+	}
+
+	pr_info("msix capability found");
 
 	/*
 	 * The value in the PCI configuration space is the index of the last
 	 * entry, namely one less than the actual size of the table, which is
 	 * exactly what we want to return from this function: number of all SBs
 	 * without the default SB.
+	 * For VFs there is no default SB, then we return (index+1).
 	 */
 	pci_read_config_word(pdev, pos  + PCI_MSI_FLAGS, &control);
-	return control & PCI_MSIX_FLAGS_QSIZE;
-}
 
-struct cnic_eth_dev *bnx2x_cnic_probe(struct net_device *);
+	index = control & PCI_MSIX_FLAGS_QSIZE;
 
-static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
-{
-	struct net_device *dev = NULL;
-	struct bnx2x *bp;
-	int pcie_width, pcie_speed;
-	int rc, max_non_def_sbs;
-	int rx_count, tx_count, rss_count, doorbell_size;
-	int cnic_cnt;
-	/*
-	 * An estimated maximum supported CoS number according to the chip
-	 * version.
-	 * We will try to roughly estimate the maximum number of CoSes this chip
-	 * may support in order to minimize the memory allocated for Tx
-	 * netdev_queue's. This number will be accurately calculated during the
-	 * initialization of bp->max_cos based on the chip versions AND chip
-	 * revision in the bnx2x_init_bp().
-	 */
-	u8 max_cos_est = 0;
+	return is_vf ? index + 1 : index;
+}
 
-	switch (ent->driver_data) {
+static int set_max_cos_est(int chip_id)
+{
+	switch (chip_id) {
 	case BCM57710:
 	case BCM57711:
 	case BCM57711E:
-		max_cos_est = BNX2X_MULTI_TX_COS_E1X;
-		break;
-
+		return BNX2X_MULTI_TX_COS_E1X;
 	case BCM57712:
 	case BCM57712_MF:
-		max_cos_est = BNX2X_MULTI_TX_COS_E2_E3A0;
-		break;
-
+	case BCM57712_VF:
+		return BNX2X_MULTI_TX_COS_E2_E3A0;
 	case BCM57800:
 	case BCM57800_MF:
+	case BCM57800_VF:
 	case BCM57810:
 	case BCM57810_MF:
-	case BCM57840_O:
 	case BCM57840_4_10:
 	case BCM57840_2_20:
+	case BCM57840_O:
 	case BCM57840_MFO:
+	case BCM57810_VF:
 	case BCM57840_MF:
+	case BCM57840_VF:
 	case BCM57811:
 	case BCM57811_MF:
-		max_cos_est = BNX2X_MULTI_TX_COS_E3B0;
-		break;
-
+	case BCM57811_VF:
+		return BNX2X_MULTI_TX_COS_E3B0;
 	default:
-		pr_err("Unknown board_type (%ld), aborting\n",
-			   ent->driver_data);
+		pr_err("Unknown board_type (%d), aborting\n", chip_id);
 		return -ENODEV;
 	}
+}
 
-	cnic_cnt = 1;
-	max_non_def_sbs = bnx2x_get_num_non_def_sbs(pdev, cnic_cnt);
+static int set_is_vf(int chip_id)
+{
+	switch (chip_id) {
+	case BCM57712_VF:
+	case BCM57800_VF:
+	case BCM57810_VF:
+	case BCM57840_VF:
+	case BCM57811_VF:
+		return true;
+	default:
+		return false;
+	}
+}
+
+struct cnic_eth_dev *bnx2x_cnic_probe(struct net_device *dev);
+
+static int bnx2x_init_one(struct pci_dev *pdev,
+				    const struct pci_device_id *ent)
+{
+	struct net_device *dev = NULL;
+	struct bnx2x *bp;
+	int pcie_width, pcie_speed;
+	int rc, max_non_def_sbs;
+	int rx_count, tx_count, rss_count, doorbell_size;
+	int max_cos_est;
+	bool is_vf;
+	int cnic_cnt;
+
+	/* An estimated maximum supported CoS number according to the chip
+	 * version.
+	 * We will try to roughly estimate the maximum number of CoSes this chip
+	 * may support in order to minimize the memory allocated for Tx
+	 * netdev_queue's. This number will be accurately calculated during the
+	 * initialization of bp->max_cos based on the chip versions AND chip
+	 * revision in the bnx2x_init_bp().
+	 */
+	max_cos_est = set_max_cos_est(ent->driver_data);
+	if (max_cos_est < 0)
+		return max_cos_est;
+	is_vf = set_is_vf(ent->driver_data);
+	cnic_cnt = is_vf ? 0 : 1;
 
-	WARN_ON(!max_non_def_sbs);
+	max_non_def_sbs = bnx2x_get_num_non_def_sbs(pdev, cnic_cnt, is_vf);
 
 	/* Maximum number of RSS queues: one IGU SB goes to CNIC */
-	rss_count = max_non_def_sbs - cnic_cnt;
+	rss_count = is_vf ? 1 : max_non_def_sbs - cnic_cnt;
+
+	if (rss_count < 1)
+		return -EINVAL;
 
 	/* Maximum number of netdev Rx queues: RSS + FCoE L2 */
 	rx_count = rss_count + cnic_cnt;
 
-	/*
-	 * Maximum number of netdev Tx queues:
+	/* Maximum number of netdev Tx queues:
 	 * Maximum TSS queues * Maximum supported number of CoS  + FCoE L2
 	 */
 	tx_count = rss_count * max_cos_est + cnic_cnt;
@@ -12113,22 +12177,28 @@ static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	bp = netdev_priv(dev);
 
+	bp->flags = 0;
+	if (is_vf)
+		bp->flags |= IS_VF_FLAG;
+
 	bp->igu_sb_cnt = max_non_def_sbs;
+	bp->igu_base_addr = IS_VF(bp) ? PXP_VF_ADDR_IGU_START : BAR_IGU_INTMEM;
 	bp->msg_enable = debug;
 	bp->cnic_support = cnic_cnt;
 	bp->cnic_probe = bnx2x_cnic_probe;
 
 	pci_set_drvdata(pdev, dev);
 
-	rc = bnx2x_init_dev(pdev, dev, ent->driver_data);
+	rc = bnx2x_init_dev(bp, pdev, dev, ent->driver_data);
 	if (rc < 0) {
 		free_netdev(dev);
 		return rc;
 	}
 
+	BNX2X_DEV_INFO("This is a %s function\n",
+		       IS_PF(bp) ? "physical" : "virtual");
 	BNX2X_DEV_INFO("Cnic support is %s\n", CNIC_SUPPORT(bp) ? "on" : "off");
-	BNX2X_DEV_INFO("max_non_def_sbs %d\n", max_non_def_sbs);
-
+	BNX2X_DEV_INFO("Max num of status blocks %d\n", max_non_def_sbs);
 	BNX2X_DEV_INFO("Allocated netdev with %d tx and %d rx queues\n",
 			  tx_count, rx_count);
 
@@ -12136,19 +12206,28 @@ static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (rc)
 		goto init_one_exit;
 
-	/*
-	 * Map doorbels here as we need the real value of bp->max_cos which
-	 * is initialized in bnx2x_init_bp().
+	/* Map doorbells here as we need the real value of bp->max_cos which
+	 * is initialized in bnx2x_init_bp() to determine the number of
+	 * l2 connections.
 	 */
-	doorbell_size = BNX2X_L2_MAX_CID(bp) * (1 << BNX2X_DB_SHIFT);
-	if (doorbell_size > pci_resource_len(pdev, 2)) {
-		dev_err(&bp->pdev->dev,
-			"Cannot map doorbells, bar size too small, aborting\n");
-		rc = -ENOMEM;
-		goto init_one_exit;
+	if (IS_VF(bp)) {
+		/* vf doorbells are embedded within the regview */
+		bp->doorbells = bp->regview + PXP_VF_ADDR_DB_START;
+
+		/* allocate vf2pf mailbox for vf to pf channel */
+		BNX2X_PCI_ALLOC(bp->vf2pf_mbox, &bp->vf2pf_mbox_mapping,
+				sizeof(struct bnx2x_vf_mbx_msg));
+	} else {
+		doorbell_size = BNX2X_L2_MAX_CID(bp) * (1 << BNX2X_DB_SHIFT);
+		if (doorbell_size > pci_resource_len(pdev, 2)) {
+			dev_err(&bp->pdev->dev,
+				"Cannot map doorbells, bar size too small, aborting\n");
+			rc = -ENOMEM;
+			goto init_one_exit;
+		}
+		bp->doorbells = ioremap_nocache(pci_resource_start(pdev, 2),
+						doorbell_size);
 	}
-	bp->doorbells = ioremap_nocache(pci_resource_start(pdev, 2),
-					doorbell_size);
 	if (!bp->doorbells) {
 		dev_err(&bp->pdev->dev,
 			"Cannot map doorbell space, aborting\n");
@@ -12158,6 +12237,7 @@ static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	/* calc qm_cid_count */
 	bp->qm_cid_count = bnx2x_set_qm_cid_count(bp);
+	BNX2X_DEV_INFO("qm_cid_count %d\n", bp->qm_cid_count);
 
 	/* disable FCOE L2 queue for E1x*/
 	if (CHIP_IS_E1x(bp))
@@ -12179,13 +12259,19 @@ static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* Configure interrupt mode: try to enable MSI-X/MSI if
 	 * needed.
 	 */
-	bnx2x_set_int_mode(bp);
+	rc = bnx2x_set_int_mode(bp);
+	if (rc) {
+		dev_err(&pdev->dev, "Cannot set interrupts\n");
+		goto init_one_exit;
+	}
 
+	/* register the net device */
 	rc = register_netdev(dev);
 	if (rc) {
 		dev_err(&pdev->dev, "Cannot register net device\n");
 		goto init_one_exit;
 	}
+	BNX2X_DEV_INFO("device name after netdev register %s\n", dev->name);
 
 
 	if (!NO_FCOE(bp)) {
@@ -12196,6 +12282,8 @@ static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	bnx2x_get_pcie_width_speed(bp, &pcie_width, &pcie_speed);
+	BNX2X_DEV_INFO("got pcie width %d and speed %d\n",
+		       pcie_width, pcie_speed);
 
 	BNX2X_DEV_INFO(
 		"%s (%c%d) PCI-E x%d %s found at mem %lx, IRQ %d, node addr %pM\n",
@@ -12209,11 +12297,16 @@ static int bnx2x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	return 0;
 
+alloc_mem_err:
+	BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->vf2pf_mbox_mapping,
+		       sizeof(struct bnx2x_vf_mbx_msg));
+	rc = -ENOMEM;
+
 init_one_exit:
 	if (bp->regview)
 		iounmap(bp->regview);
 
-	if (bp->doorbells)
+	if (IS_PF(bp) && bp->doorbells)
 		iounmap(bp->doorbells);
 
 	free_netdev(dev);
@@ -12253,13 +12346,15 @@ static void bnx2x_remove_one(struct pci_dev *pdev)
 	unregister_netdev(dev);
 
 	/* Power on: we can't let PCI layer write to us while we are in D3 */
-	bnx2x_set_power_state(bp, PCI_D0);
+	if (IS_PF(bp))
+		bnx2x_set_power_state(bp, PCI_D0);
 
 	/* Disable MSI/MSI-X */
 	bnx2x_disable_msi(bp);
 
 	/* Power off */
-	bnx2x_set_power_state(bp, PCI_D3hot);
+	if (IS_PF(bp))
+		bnx2x_set_power_state(bp, PCI_D3hot);
 
 	/* Make sure RESET task is not scheduled before continuing */
 	cancel_delayed_work_sync(&bp->sp_rtnl_task);
@@ -12267,11 +12362,15 @@ static void bnx2x_remove_one(struct pci_dev *pdev)
 	if (bp->regview)
 		iounmap(bp->regview);
 
-	if (bp->doorbells)
-		iounmap(bp->doorbells);
-
-	bnx2x_release_firmware(bp);
+	/* for vf doorbells are part of the regview and were unmapped along with
+	 * it. FW is only loaded by PF.
+	 */
+	if (IS_PF(bp)) {
+		if (bp->doorbells)
+			iounmap(bp->doorbells);
 
+		bnx2x_release_firmware(bp);
+	}
 	bnx2x_free_mem_bp(bp);
 
 	free_netdev(dev);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index bc2f65b..463a984 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -6554,6 +6554,15 @@
 	(7L<<ME_REG_ABS_PF_NUM_SHIFT) /* Absolute PF Num */
 
 
+#define PXP_VF_ADDR_IGU_START				0
+#define PXP_VF_ADDR_IGU_SIZE				0x3000
+#define PXP_VF_ADDR_IGU_END\
+	((PXP_VF_ADDR_IGU_START) + (PXP_VF_ADDR_IGU_SIZE) - 1)
+#define PXP_VF_ADDR_DB_START				0x7c00
+#define PXP_VF_ADDR_DB_SIZE				0x200
+#define PXP_VF_ADDR_DB_END\
+	((PXP_VF_ADDR_DB_START) + (PXP_VF_ADDR_DB_SIZE) - 1)
+
 #define MDIO_REG_BANK_CL73_IEEEB0	0x0
 #define MDIO_CL73_IEEEB0_CL73_AN_CONTROL	0x0
 #define MDIO_CL73_IEEEB0_CL73_AN_CONTROL_RESTART_AN	0x0200
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
new file mode 100644
index 0000000..1b14745
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -0,0 +1,27 @@
+/* bnx2x_sriov.h: Broadcom Everest network driver.
+ *
+ * Copyright 2009-2012 Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2, available
+ * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a
+ * license other than the GPL, without Broadcom's express prior written
+ * consent.
+ *
+ * Maintained by: Eilon Greenstein <eilong@broadcom.com>
+ * Written by: Shmulik Ravid <shmulikr@broadcom.com>
+ *	       Ariel Elior <ariele@broadcom.com>
+ */
+#ifndef BNX2X_SRIOV_H
+#define BNX2X_SRIOV_H
+
+struct bnx2x_vf_mbx_msg {
+	union vfpf_tlvs req;
+	union pfvf_tlvs resp;
+};
+
+#endif /* bnx2x_sriov.h */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
new file mode 100644
index 0000000..bb37675
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -0,0 +1,37 @@
+/* bnx2x_vfpf.h: Broadcom Everest network driver.
+ *
+ * Copyright (c) 2011-2012 Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2, available
+ * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a
+ * license other than the GPL, without Broadcom's express prior written
+ * consent.
+ *
+ * Maintained by: Eilon Greenstein <eilong@broadcom.com>
+ * Written by: Ariel Elior <ariele@broadcom.com>
+ */
+#ifndef VF_PF_IF_H
+#define VF_PF_IF_H
+
+/* HW VF-PF channel definitions
+ * A.K.A VF-PF mailbox
+ */
+#define TLV_BUFFER_SIZE			1024
+
+struct tlv_buffer_size {
+	u8 tlv_buffer[TLV_BUFFER_SIZE];
+};
+
+union vfpf_tlvs {
+	struct tlv_buffer_size		tlv_buf_size;
+};
+
+union pfvf_tlvs {
+	struct tlv_buffer_size		tlv_buf_size;
+};
+#endif /* VF_PF_IF_H */
-- 
1.7.9.GIT


* [PATCH net-next v3 02/22] bnx2x: VF <-> PF channel 'acquire' at vf probe
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 18:37   ` David Miller
  2012-12-10 15:46 ` [PATCH net-next v3 03/22] bnx2x: Add to VF <-> PF channel the release request Ariel Elior
                   ` (19 subsequent siblings)
  21 siblings, 1 reply; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

Add the 'acquire' request to the VF <-> PF channel and use it at
VF probe. In the acquire request the VF driver lists the resources
it would like to have. In the response the PF either ratifies the
request, or denies it and supplies the maximum values supported.
The VF may then attempt another acquire request.
This patch adds the bnx2x_vfpf.c file which contains the
implementation of the VF to PF hardware channel.

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/Makefile      |    2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |   13 ++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |  186 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h   |    6 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |    5 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   79 +++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h  |  117 +++++++++++++
 7 files changed, 407 insertions(+), 1 deletions(-)
 create mode 100644 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c

diff --git a/drivers/net/ethernet/broadcom/bnx2x/Makefile b/drivers/net/ethernet/broadcom/bnx2x/Makefile
index 48fbdd4..d862ea6 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/Makefile
+++ b/drivers/net/ethernet/broadcom/bnx2x/Makefile
@@ -4,4 +4,4 @@
 
 obj-$(CONFIG_BNX2X) += bnx2x.o
 
-bnx2x-objs := bnx2x_main.o bnx2x_link.o bnx2x_cmn.o bnx2x_ethtool.o bnx2x_stats.o bnx2x_dcb.o bnx2x_sp.o
+bnx2x-objs := bnx2x_main.o bnx2x_link.o bnx2x_cmn.o bnx2x_ethtool.o bnx2x_stats.o bnx2x_dcb.o bnx2x_sp.o bnx2x_vfpf.o
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index b2f7425..a201ab2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -48,6 +48,7 @@
 #include "bnx2x_sp.h"
 #include "bnx2x_dcb.h"
 #include "bnx2x_stats.h"
+#include "bnx2x_vfpf.h"
 
 enum bnx2x_int_mode {
 	BNX2X_INT_MODE_MSIX,
@@ -1244,6 +1245,9 @@ struct bnx2x {
 	struct bnx2x_vf_mbx_msg	*vf2pf_mbox;
 	dma_addr_t		vf2pf_mbox_mapping;
 
+	/* we set aside a copy of the acquire response */
+	struct pfvf_acquire_resp_tlv acquire_resp;
+
 	struct net_device	*dev;
 	struct pci_dev		*pdev;
 
@@ -2207,6 +2211,15 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
 #define BNX2X_VPD_LEN			128
 #define VENDOR_ID_LEN			4
 
+#define VF_ACQUIRE_THRESH		3
+#define VF_ACQUIRE_MAC_FILTERS		1
+#define VF_ACQUIRE_MC_FILTERS		10
+
+#define GOOD_ME_REG(me_reg) (((me_reg) & ME_REG_VF_VALID) && \
+			    (!((me_reg) & ME_REG_VF_ERR)))
+int bnx2x_get_vf_id(struct bnx2x *bp, u32 *vf_id);
+int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping);
+int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count);
 /* Congestion management fairness mode */
 #define CMNG_FNS_NONE		0
 #define CMNG_FNS_MINMAX		1
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index b9bc677..061c25b 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -12235,6 +12235,12 @@ static int bnx2x_init_one(struct pci_dev *pdev,
 		goto init_one_exit;
 	}
 
+	if (IS_VF(bp)) {
+		rc = bnx2x_vfpf_acquire(bp, tx_count, rx_count);
+		if (rc)
+			goto init_one_exit;
+	}
+
 	/* calc qm_cid_count */
 	bp->qm_cid_count = bnx2x_set_qm_cid_count(bp);
 	BNX2X_DEV_INFO("qm_cid_count %d\n", bp->qm_cid_count);
@@ -13158,4 +13164,184 @@ struct cnic_eth_dev *bnx2x_cnic_probe(struct net_device *dev)
 	return cp;
 }
 
+int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping)
+{
+	struct cstorm_vf_zone_data __iomem *zone_data =
+		REG_ADDR(bp, PXP_VF_ADDR_CSDM_GLOBAL_START);
+	int tout = 600, interval = 100; /* wait for 60 seconds */
+
+	if (*done) {
+		BNX2X_ERR("done was non-zero before message to PF was sent\n");
+		WARN_ON(true);
+		return -EINVAL;
+	}
+
+	/* Write message address */
+	writel(U64_LO(msg_mapping),
+	       &zone_data->non_trigger.vf_pf_channel.msg_addr_lo);
+	writel(U64_HI(msg_mapping),
+	       &zone_data->non_trigger.vf_pf_channel.msg_addr_hi);
+
+	/* make sure the address is written before FW accesses it */
+	wmb();
+
+	/* Trigger the PF FW */
+	writeb(1, &zone_data->trigger.vf_pf_channel.addr_valid);
+
+	/* Wait for PF to complete */
+	while ((tout >= 0) && (!*done)) {
+		msleep(interval);
+		tout -= 1;
+
+		/* progress indicator - HV can take its own sweet time in
+		 * answering VFs...
+		 */
+		DP_CONT(BNX2X_MSG_IOV, ".");
+	}
+
+	if (!*done) {
+		BNX2X_ERR("PF response has timed out\n");
+		return -EAGAIN;
+	}
+	DP(BNX2X_MSG_SP, "Got a response from PF\n");
+	return 0;
+}
+
+int bnx2x_get_vf_id(struct bnx2x *bp, u32 *vf_id)
+{
+	u32 me_reg;
+	int tout = 10, interval = 100; /* Wait for 1 sec */
+
+	do {
+		/* pxp traps vf read of doorbells and returns me reg value */
+		me_reg = readl(bp->doorbells);
+		if (GOOD_ME_REG(me_reg))
+			break;
+
+		msleep(interval);
+
+		BNX2X_ERR("Invalid ME register value: 0x%08x. Is PF driver up?\n",
+			  me_reg);
+	} while (tout-- > 0);
+
+	if (!GOOD_ME_REG(me_reg)) {
+		BNX2X_ERR("Invalid ME register value: 0x%08x\n", me_reg);
+		return -EINVAL;
+	}
+
+	DP(BNX2X_MSG_IOV, "valid ME register value: 0x%08x\n", me_reg);
+
+	*vf_id = (me_reg & ME_REG_VF_NUM_MASK) >> ME_REG_VF_NUM_SHIFT;
 
+	return 0;
+}
+
+int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
+{
+	int rc = 0, attempts = 0;
+	struct vfpf_acquire_tlv *req = &bp->vf2pf_mbox->req.acquire;
+	struct pfvf_acquire_resp_tlv *resp = &bp->vf2pf_mbox->resp.acquire_resp;
+	u32 vf_id;
+	bool resources_acquired = false;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_ACQUIRE, sizeof(*req));
+
+	if (bnx2x_get_vf_id(bp, &vf_id))
+		return -EAGAIN;
+
+	req->vfdev_info.vf_id = vf_id;
+	req->vfdev_info.vf_os = 0;
+
+	req->resc_request.num_rxqs = rx_count;
+	req->resc_request.num_txqs = tx_count;
+	req->resc_request.num_sbs = bp->igu_sb_cnt;
+	req->resc_request.num_mac_filters = VF_ACQUIRE_MAC_FILTERS;
+	req->resc_request.num_mc_filters = VF_ACQUIRE_MC_FILTERS;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	while (!resources_acquired) {
+		DP(BNX2X_MSG_SP, "attempting to acquire resources");
+
+		/* send acquire request */
+		rc = bnx2x_send_msg2pf(bp,
+				       &resp->hdr.status,
+				       bp->vf2pf_mbox_mapping);
+
+		/* PF timeout */
+		if (rc)
+			return rc;
+
+		/* copy acquire response from buffer to bp */
+		memcpy(&bp->acquire_resp, resp, sizeof(bp->acquire_resp));
+
+		attempts++;
+
+		/* PF agrees to allocate our resources */
+		if (bp->acquire_resp.hdr.status == PFVF_STATUS_SUCCESS) {
+			DP(BNX2X_MSG_SP, "resources acquired");
+			resources_acquired = true;
+
+		/* PF refuses to allocate our resources */
+		} else if (bp->acquire_resp.hdr.status ==
+			   PFVF_STATUS_NO_RESOURCE &&
+			   attempts < VF_ACQUIRE_THRESH) {
+			DP(BNX2X_MSG_SP,
+			   "PF unwilling to fulfill resource request. Try PF recommended amount");
+
+			/* humble our request */
+			req->resc_request.num_txqs =
+				bp->acquire_resp.resc.num_txqs;
+			req->resc_request.num_rxqs =
+				bp->acquire_resp.resc.num_rxqs;
+			req->resc_request.num_sbs =
+				bp->acquire_resp.resc.num_sbs;
+			req->resc_request.num_mac_filters =
+				bp->acquire_resp.resc.num_mac_filters;
+			req->resc_request.num_vlan_filters =
+				bp->acquire_resp.resc.num_vlan_filters;
+			req->resc_request.num_mc_filters =
+				bp->acquire_resp.resc.num_mc_filters;
+
+			/* Clear response buffer */
+			memset(&bp->vf2pf_mbox->resp, 0,
+			       sizeof(union pfvf_tlvs));
+
+		/* PF reports error */
+		} else {
+			BNX2X_ERR("Failed to get the requested amount of resources: %d. Breaking...\n",
+				  bp->acquire_resp.hdr.status);
+			return -EAGAIN;
+		}
+	}
+
+	/* get HW info */
+	bp->common.chip_id |= (bp->acquire_resp.pfdev_info.chip_num & 0xffff);
+	bp->link_params.chip_id = bp->common.chip_id;
+	bp->db_size = bp->acquire_resp.pfdev_info.db_size;
+	bp->common.int_block = INT_BLOCK_IGU;
+	bp->common.chip_port_mode = CHIP_2_PORT_MODE;
+	bp->igu_dsb_id = -1;
+	bp->mf_ov = 0;
+	bp->mf_mode = 0;
+	bp->common.flash_size = 0;
+	bp->flags |=
+		NO_WOL_FLAG | NO_ISCSI_OOO_FLAG | NO_ISCSI_FLAG | NO_FCOE_FLAG;
+	bp->igu_sb_cnt = 1;
+	bp->igu_base_sb = bp->acquire_resp.resc.hw_sbs[0].hw_sb_id;
+	strlcpy(bp->fw_ver, bp->acquire_resp.pfdev_info.fw_ver,
+		sizeof(bp->fw_ver));
+
+	if (is_valid_ether_addr(bp->acquire_resp.resc.current_mac_addr))
+		memcpy(bp->dev->dev_addr,
+		       bp->acquire_resp.resc.current_mac_addr,
+		       ETH_ALEN);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 463a984..c302de4 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -6558,6 +6558,12 @@
 #define PXP_VF_ADDR_IGU_SIZE				0x3000
 #define PXP_VF_ADDR_IGU_END\
 	((PXP_VF_ADDR_IGU_START) + (PXP_VF_ADDR_IGU_SIZE) - 1)
+
+#define PXP_VF_ADDR_CSDM_GLOBAL_START			0x7600
+#define PXP_VF_ADDR_CSDM_GLOBAL_SIZE			(PXP_ADDR_REG_SIZE)
+#define PXP_VF_ADDR_CSDM_GLOBAL_END\
+	((PXP_VF_ADDR_CSDM_GLOBAL_START) + (PXP_VF_ADDR_CSDM_GLOBAL_SIZE) - 1)
+
 #define PXP_VF_ADDR_DB_START				0x7c00
 #define PXP_VF_ADDR_DB_SIZE				0x200
 #define PXP_VF_ADDR_DB_END\
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 1b14745..6d0df33 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -24,4 +24,9 @@ struct bnx2x_vf_mbx_msg {
 	union pfvf_tlvs resp;
 };
 
+void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
+		   u16 length);
+void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
+		     u16 type, u16 length);
+void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list);
 #endif /* bnx2x_sriov.h */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
new file mode 100644
index 0000000..cc9b83f
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -0,0 +1,79 @@
+/* bnx2x_vfpf.c: Broadcom Everest network driver.
+ *
+ * Copyright 2009-2012 Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2, available
+ * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a
+ * license other than the GPL, without Broadcom's express prior written
+ * consent.
+ *
+ * Maintained by: Eilon Greenstein <eilong@broadcom.com>
+ * Written by: Shmulik Ravid <shmulikr@broadcom.com>
+ *	       Ariel Elior <ariele@broadcom.com>
+ */
+
+/* VF MBX (aka vf-pf channel) */
+#include "bnx2x.h"
+#include "bnx2x_sriov.h"
+/* place a given tlv on the tlv buffer at a given offset */
+void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
+		   u16 length)
+{
+	struct channel_tlv *tl =
+		(struct channel_tlv *)(tlvs_list + offset);
+	tl->type = type;
+	tl->length = length;
+}
+
+/* Clear the mailbox and init the header of the first tlv */
+void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
+		     u16 type, u16 length)
+{
+	DP(BNX2X_MSG_IOV, "preparing to send %d tlv over vf pf channel",
+	   type);
+
+	/* Clear mailbox */
+	memset(bp->vf2pf_mbox, 0, sizeof(struct bnx2x_vf_mbx_msg));
+
+	/* init type and length */
+	bnx2x_add_tlv(bp, &first_tlv->tl, 0, type, length);
+
+	/* init first tlv header */
+	first_tlv->resp_msg_offset = sizeof(bp->vf2pf_mbox->req);
+}
+
+/* list the types and lengths of the tlvs on the buffer */
+void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list)
+{
+	int i = 1;
+	struct channel_tlv *tlv = (struct channel_tlv *)tlvs_list;
+
+	while (tlv->type != CHANNEL_TLV_LIST_END) {
+
+		/* output tlv */
+		DP(BNX2X_MSG_IOV, "TLV number %d: type %d, length %d\n", i,
+		   tlv->type, tlv->length);
+
+		/* advance to next tlv */
+		tlvs_list += tlv->length;
+
+		/* cast general tlv list pointer to channel tlv header*/
+		tlv = (struct channel_tlv *)tlvs_list;
+
+		i++;
+
+		if (i > MAX_TLVS_IN_LIST) {
+			WARN(true, "corrupt tlvs");
+			return;
+		}
+	}
+
+	/* output last tlv */
+	DP(BNX2X_MSG_IOV, "TLV number %d: type %d, length %d\n", i,
+	   tlv->type, tlv->length);
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index bb37675..f201a53 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -18,20 +18,137 @@
 #ifndef VF_PF_IF_H
 #define VF_PF_IF_H
 
+/* Common definitions for all HVs */
+struct vf_pf_resc_request {
+	u8  num_rxqs;
+	u8  num_txqs;
+	u8  num_sbs;
+	u8  num_mac_filters;
+	u8  num_vlan_filters;
+	u8  num_mc_filters; /* No limit  so superfluous */
+};
+
+struct hw_sb_info {
+	u8 hw_sb_id;	/* aka absolute igu id, used to ack the sb */
+	u8 sb_qid;	/* used to update DHC for sb */
+};
+
 /* HW VF-PF channel definitions
  * A.K.A VF-PF mailbox
  */
 #define TLV_BUFFER_SIZE			1024
 
+enum {
+	PFVF_STATUS_WAITING = 0,
+	PFVF_STATUS_SUCCESS,
+	PFVF_STATUS_FAILURE,
+	PFVF_STATUS_NOT_SUPPORTED,
+	PFVF_STATUS_NO_RESOURCE
+};
+
+/* vf pf channel tlvs */
+/* general tlv header (used for both vf->pf request and pf->vf response) */
+struct channel_tlv {
+	u16 type;
+	u16 length;
+};
+
+/* header of first vf->pf tlv carries the offset used to calculate response
+ * buffer address
+ */
+struct vfpf_first_tlv {
+	struct channel_tlv tl;
+	u32 resp_msg_offset;
+};
+
+/* header of pf->vf tlvs, carries the status of handling the request */
+struct pfvf_tlv {
+	struct channel_tlv tl;
+	u8 status;
+	u8 padding[3];
+};
+
+/* used to terminate and pad a tlv list */
+struct channel_list_end_tlv {
+	struct channel_tlv tl;
+	u8 padding[4];
+};
+
+/* Acquire */
+struct vfpf_acquire_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	struct vf_pf_vfdev_info {
+		/* the following fields are for debug purposes */
+		u8  vf_id;		/* ME register value */
+		u8  vf_os;		/* e.g. Linux, W2K8 */
+		u8 padding[2];
+	} vfdev_info;
+
+	struct vf_pf_resc_request resc_request;
+
+	aligned_u64 bulletin_addr;
+};
+
+/* acquire response tlv - carries the allocated resources */
+struct pfvf_acquire_resp_tlv {
+	struct pfvf_tlv hdr;
+	struct pf_vf_pfdev_info {
+		u32 chip_num;
+		u32 pf_cap;
+		#define PFVF_CAP_RSS		0x00000001
+		#define PFVF_CAP_DHC		0x00000002
+		#define PFVF_CAP_TPA		0x00000004
+		char fw_ver[32];
+		u16 db_size;
+		u8  indices_per_sb;
+		u8  padding;
+	} pfdev_info;
+	struct pf_vf_resc {
+		/* in case of status NO_RESOURCE in message hdr, pf will fill
+		 * this struct with suggested amount of resources for next
+		 * acquire request
+		 */
+		#define PFVF_MAX_QUEUES_PER_VF         16
+		#define PFVF_MAX_SBS_PER_VF            16
+		struct hw_sb_info hw_sbs[PFVF_MAX_SBS_PER_VF];
+		u8	hw_qid[PFVF_MAX_QUEUES_PER_VF];
+		u8	num_rxqs;
+		u8	num_txqs;
+		u8	num_sbs;
+		u8	num_mac_filters;
+		u8	num_vlan_filters;
+		u8	num_mc_filters;
+		u8	permanent_mac_addr[ETH_ALEN];
+		u8	current_mac_addr[ETH_ALEN];
+		u8	padding[2];
+	} resc;
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
 
 union vfpf_tlvs {
+	struct vfpf_first_tlv		first_tlv;
+	struct vfpf_acquire_tlv		acquire;
+	struct channel_list_end_tlv     list_end;
 	struct tlv_buffer_size		tlv_buf_size;
 };
 
 union pfvf_tlvs {
+	struct pfvf_acquire_resp_tlv	acquire_resp;
+	struct channel_list_end_tlv	list_end;
 	struct tlv_buffer_size		tlv_buf_size;
 };
+
+#define MAX_TLVS_IN_LIST 50
+
+enum channel_tlvs {
+	   CHANNEL_TLV_NONE, /* ends tlv sequence */
+	   CHANNEL_TLV_ACQUIRE,
+	   CHANNEL_TLV_LIST_END,
+	   CHANNEL_TLV_MAX
+};
+
 #endif /* VF_PF_IF_H */
-- 
1.7.9.GIT


* [PATCH net-next v3 03/22] bnx2x: Add to VF <-> PF channel the release request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 02/22] bnx2x: VF <-> PF channel 'acquire' at vf probe Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 18:39   ` David Miller
  2012-12-10 15:46 ` [PATCH net-next v3 04/22] bnx2x: Separate VF and PF logic Ariel Elior
                   ` (18 subsequent siblings)
  21 siblings, 1 reply; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The VF driver uses this request when it is removed. At that point
the PF driver reclaims all resources allocated for that VF.

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h      |    1 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c |   45 ++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h |   16 +++++++-
 3 files changed, 61 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index a201ab2..6cafa4e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -2220,6 +2220,7 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
 int bnx2x_get_vf_id(struct bnx2x *bp, u32 *vf_id);
 int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping);
 int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count);
+int bnx2x_vfpf_release(struct bnx2x *bp);
 /* Congestion management fairness mode */
 #define CMNG_FNS_NONE		0
 #define CMNG_FNS_MINMAX		1
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 061c25b..2a8c4c2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -12364,6 +12364,9 @@ static void bnx2x_remove_one(struct pci_dev *pdev)
 
 	/* Make sure RESET task is not scheduled before continuing */
 	cancel_delayed_work_sync(&bp->sp_rtnl_task);
+	/* send message via vfpf channel to release the resources of this vf */
+	if (IS_VF(bp))
+		bnx2x_vfpf_release(bp);
 
 	if (bp->regview)
 		iounmap(bp->regview);
@@ -13345,3 +13348,45 @@ int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
 
 	return 0;
 }
+
+int bnx2x_vfpf_release(struct bnx2x *bp)
+{
+	struct vfpf_release_tlv *req = &bp->vf2pf_mbox->req.release;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	u32 rc = 0, vf_id;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_RELEASE, sizeof(*req));
+
+	if (bnx2x_get_vf_id(bp, &vf_id))
+		return -EAGAIN;
+
+	req->vf_id = vf_id;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	/* send release request */
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+
+	/* PF timeout */
+	if (rc)
+		return rc;
+
+	/* PF released us */
+	if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
+		DP(BNX2X_MSG_SP, "vf released\n");
+
+	/* PF reports error */
+	} else {
+		BNX2X_ERR("PF failed our release request - are we out of sync? response status: %d\n",
+			  resp->hdr.status);
+		return -EAGAIN;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index f201a53..763d1c2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -68,6 +68,10 @@ struct pfvf_tlv {
 	u8 padding[3];
 };
 
+/* response tlv used for most tlvs */
+struct pfvf_general_resp_tlv {
+	struct pfvf_tlv hdr;
+};
 /* used to terminate and pad a tlv list */
 struct channel_list_end_tlv {
 	struct channel_tlv tl;
@@ -125,6 +129,13 @@ struct pfvf_acquire_resp_tlv {
 	} resc;
 };
 
+/* release the VF's acquired resources */
+struct vfpf_release_tlv {
+	struct vfpf_first_tlv	first_tlv;
+	u16			vf_id;
+	u8 padding[2];
+};
+
 struct tlv_buffer_size {
 	u8 tlv_buffer[TLV_BUFFER_SIZE];
 };
@@ -132,11 +143,13 @@ struct tlv_buffer_size {
 union vfpf_tlvs {
 	struct vfpf_first_tlv		first_tlv;
 	struct vfpf_acquire_tlv		acquire;
+	struct vfpf_release_tlv         release;
 	struct channel_list_end_tlv     list_end;
 	struct tlv_buffer_size		tlv_buf_size;
 };
 
 union pfvf_tlvs {
+	struct pfvf_general_resp_tlv    general_resp;
 	struct pfvf_acquire_resp_tlv	acquire_resp;
 	struct channel_list_end_tlv	list_end;
 	struct tlv_buffer_size		tlv_buf_size;
@@ -145,8 +158,9 @@ union pfvf_tlvs {
 #define MAX_TLVS_IN_LIST 50
 
 enum channel_tlvs {
-	   CHANNEL_TLV_NONE, /* ends tlv sequence */
+	   CHANNEL_TLV_NONE,
 	   CHANNEL_TLV_ACQUIRE,
+	   CHANNEL_TLV_RELEASE,
 	   CHANNEL_TLV_LIST_END,
 	   CHANNEL_TLV_MAX
 };
-- 
1.7.9.GIT

* [PATCH net-next v3 04/22] bnx2x: Separate VF and PF logic
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (2 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 03/22] bnx2x: Add to VF <-> PF channel the release request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 05/22] bnx2x: Add init, setup_q, set_mac to VF <-> PF channel Ariel Elior
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

Generally, the VF driver cannot access the chip, except through the
narrow window its BAR allows. Care had to be taken so that the VF
driver does not reach code which accesses the chip elsewhere.
Refactor the nic_load flow into parts so it is easier to separate
the VF-only logic from the PF-only logic; the resulting shape of
the flow is sketched below.
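
As a minimal standalone sketch of that shape (struct nic, nic_load and
the printouts below are illustrative stand-ins for struct bnx2x,
IS_VF()/IS_PF() and the refactored helpers, not the driver code):

#include <stdbool.h>
#include <stdio.h>

struct nic { bool is_vf; };

/* chip-touching steps run only on the PF path; the VF path instead
 * issues requests over the VF <-> PF channel
 */
static int nic_load(struct nic *bp)
{
	if (!bp->is_vf) {
		/* PF-only: talk to the MCP and initialize the HW */
		printf("PF: load request to MCP, init HW, setup queues\n");
	} else {
		/* VF-only: ask the PF to do the chip work */
		printf("VF: send init/setup_q over the VF <-> PF channel\n");
	}
	/* common: nothing here may touch the chip outside the BAR window */
	printf("common: request IRQs, init rx/tx rings, start fastpath\n");
	return 0;
}

int main(void)
{
	struct nic pf = { .is_vf = false };
	struct nic vf = { .is_vf = true };

	nic_load(&pf);
	return nic_load(&vf);
}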

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h        |    1 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c    |  611 +++++++++++++-------
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h    |   15 +-
 .../net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c    |    2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c   |   98 +++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h    |    6 +
 6 files changed, 500 insertions(+), 233 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 6cafa4e..17536ff 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -2221,6 +2221,7 @@ int bnx2x_get_vf_id(struct bnx2x *bp, u32 *vf_id);
 int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping);
 int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count);
 int bnx2x_vfpf_release(struct bnx2x *bp);
+int bnx2x_nic_load_analyze_req(struct bnx2x *bp, u32 load_code);
 /* Congestion management fairness mode */
 #define CMNG_FNS_NONE		0
 #define CMNG_FNS_MINMAX		1
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 0a493f4..0f6bb1f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -1048,7 +1048,7 @@ void __bnx2x_link_report(struct bnx2x *bp)
 	struct bnx2x_link_report_data cur_data;
 
 	/* reread mf_cfg */
-	if (!CHIP_IS_E1(bp))
+	if (IS_PF(bp) && !CHIP_IS_E1(bp))
 		bnx2x_read_mf_cfg(bp);
 
 	/* Read the current link report info */
@@ -1391,10 +1391,14 @@ static void bnx2x_free_msix_irqs(struct bnx2x *bp, int nvecs)
 
 	if (nvecs == offset)
 		return;
-	free_irq(bp->msix_table[offset].vector, bp->dev);
-	DP(NETIF_MSG_IFDOWN, "released sp irq (%d)\n",
-	   bp->msix_table[offset].vector);
-	offset++;
+
+	/* VFs don't have a default SB */
+	if (IS_PF(bp)) {
+		free_irq(bp->msix_table[offset].vector, bp->dev);
+		DP(NETIF_MSG_IFDOWN, "released sp irq (%d)\n",
+		   bp->msix_table[offset].vector);
+		offset++;
+	}
 
 	if (CNIC_SUPPORT(bp)) {
 		if (nvecs == offset)
@@ -1415,11 +1419,17 @@ static void bnx2x_free_msix_irqs(struct bnx2x *bp, int nvecs)
 void bnx2x_free_irq(struct bnx2x *bp)
 {
 	if (bp->flags & USING_MSIX_FLAG &&
-	    !(bp->flags & USING_SINGLE_MSIX_FLAG))
-		bnx2x_free_msix_irqs(bp, BNX2X_NUM_ETH_QUEUES(bp) +
-				     CNIC_SUPPORT(bp) + 1);
-	else
+	    !(bp->flags & USING_SINGLE_MSIX_FLAG)) {
+		int nvecs = BNX2X_NUM_ETH_QUEUES(bp) + CNIC_SUPPORT(bp);
+
+		/* vfs don't have a default status block */
+		if (IS_PF(bp))
+			nvecs++;
+
+		bnx2x_free_msix_irqs(bp, nvecs);
+	} else {
 		free_irq(bp->dev->irq, bp->dev);
+	}
 }
 
 int bnx2x_enable_msix(struct bnx2x *bp)
@@ -1515,12 +1525,15 @@ static int bnx2x_req_msix_irqs(struct bnx2x *bp)
 {
 	int i, rc, offset = 0;
 
-	rc = request_irq(bp->msix_table[offset++].vector,
-			 bnx2x_msix_sp_int, 0,
-			 bp->dev->name, bp->dev);
-	if (rc) {
-		BNX2X_ERR("request sp irq failed\n");
-		return -EBUSY;
+	/* no default status block for vf */
+	if (IS_PF(bp)) {
+		rc = request_irq(bp->msix_table[offset++].vector,
+				 bnx2x_msix_sp_int, 0,
+				 bp->dev->name, bp->dev);
+		if (rc) {
+			BNX2X_ERR("request sp irq failed\n");
+			return -EBUSY;
+		}
 	}
 
 	if (CNIC_SUPPORT(bp))
@@ -1544,12 +1557,20 @@ static int bnx2x_req_msix_irqs(struct bnx2x *bp)
 	}
 
 	i = BNX2X_NUM_ETH_QUEUES(bp);
-	offset = 1 + CNIC_SUPPORT(bp);
-	netdev_info(bp->dev, "using MSI-X  IRQs: sp %d  fp[%d] %d ... fp[%d] %d\n",
-	       bp->msix_table[0].vector,
-	       0, bp->msix_table[offset].vector,
-	       i - 1, bp->msix_table[offset + i - 1].vector);
-
+	if (IS_PF(bp)) {
+		offset = 1 + CNIC_SUPPORT(bp);
+		netdev_info(bp->dev,
+			    "using MSI-X  IRQs: sp %d  fp[%d] %d ... fp[%d] %d\n",
+			    bp->msix_table[0].vector,
+			    0, bp->msix_table[offset].vector,
+			    i - 1, bp->msix_table[offset + i - 1].vector);
+	} else {
+		offset = CNIC_SUPPORT(bp);
+		netdev_info(bp->dev,
+			    "using MSI-X  IRQs: fp[%d] %d ... fp[%d] %d\n",
+			    0, bp->msix_table[offset].vector,
+			    i - 1, bp->msix_table[offset + i - 1].vector);
+	}
 	return 0;
 }
 
@@ -1956,27 +1977,206 @@ static void bnx2x_squeeze_objects(struct bnx2x *bp)
 	} while (0)
 #endif /*BNX2X_STOP_ON_ERROR*/
 
-bool bnx2x_test_firmware_version(struct bnx2x *bp, bool is_err)
+static void bnx2x_free_fw_stats_mem(struct bnx2x *bp)
 {
-	/* build FW version dword */
-	u32 my_fw = (BCM_5710_FW_MAJOR_VERSION) +
-		    (BCM_5710_FW_MINOR_VERSION << 8) +
-		    (BCM_5710_FW_REVISION_VERSION << 16) +
-		    (BCM_5710_FW_ENGINEERING_VERSION << 24);
+	BNX2X_PCI_FREE(bp->fw_stats, bp->fw_stats_mapping,
+		       bp->fw_stats_data_sz + bp->fw_stats_req_sz);
+	return;
+}
+
+static int bnx2x_alloc_fw_stats_mem(struct bnx2x *bp)
+{
+	int num_groups;
+	int is_fcoe_stats = NO_FCOE(bp) ? 0 : 1;
+
+	/* number of queues for statistics is number of eth queues + FCoE */
+	u8 num_queue_stats = BNX2X_NUM_ETH_QUEUES(bp) + is_fcoe_stats;
+
+	/* Total number of FW statistics requests =
+	 * 1 for port stats + 1 for PF stats + potential 2 for FCoE (fcoe proper
+	 * and fcoe l2 queue) stats + num of queues (which includes another 1
+	 * for fcoe l2 queue if applicable)
+	 */
+	bp->fw_stats_num = 2 + is_fcoe_stats + num_queue_stats;
+	/* Request is built from stats_query_header and an array of
+	 * stats_query_cmd_group each of which contains
+	 * STATS_QUERY_CMD_COUNT rules. The real number of requests is
+	 * configured in the stats_query_header.
+	 */
+	num_groups =
+		(((bp->fw_stats_num) / STATS_QUERY_CMD_COUNT) +
+		 (((bp->fw_stats_num) % STATS_QUERY_CMD_COUNT) ?
+		 1 : 0));
+
+	DP(BNX2X_MSG_SP, "stats fw_stats_num %d, num_groups %d",
+	   bp->fw_stats_num, num_groups);
+
+	bp->fw_stats_req_sz = sizeof(struct stats_query_header) +
+		num_groups * sizeof(struct stats_query_cmd_group);
+
+	/* Data for statistics requests + stats_counter
+	 *
+	 * stats_counter holds per-STORM counters that are incremented
+	 * when STORM has finished with the current request.
+	 * Memory for FCoE offloaded statistics is counted anyway,
+	 * even if it will not be sent.
+	 * VF stats are not accounted for here as the data of VF stats is stored
+	 * in memory allocated by the VF, not here.
+	 */
+	bp->fw_stats_data_sz = sizeof(struct per_port_stats) +
+		sizeof(struct per_pf_stats) +
+		sizeof(struct fcoe_statistics_params) +
+		sizeof(struct per_queue_stats) * num_queue_stats +
+		sizeof(struct stats_counter);
+
+	BNX2X_PCI_ALLOC(bp->fw_stats, &bp->fw_stats_mapping,
+			bp->fw_stats_data_sz + bp->fw_stats_req_sz);
+
+	/* Set shortcuts */
+	bp->fw_stats_req = (struct bnx2x_fw_stats_req *)bp->fw_stats;
+	bp->fw_stats_req_mapping = bp->fw_stats_mapping;
+
+	bp->fw_stats_data = (struct bnx2x_fw_stats_data *)
+		((u8 *)bp->fw_stats + bp->fw_stats_req_sz);
+
+	bp->fw_stats_data_mapping = bp->fw_stats_mapping +
+		bp->fw_stats_req_sz;
+
+	DP(BNX2X_MSG_SP, "statistics request base address set to %x %x",
+	   U64_HI(bp->fw_stats_req_mapping),
+	   U64_LO(bp->fw_stats_req_mapping));
+
+	DP(BNX2X_MSG_SP, "statistics data base address set to %x %x",
+	   U64_HI(bp->fw_stats_data_mapping),
+	   U64_LO(bp->fw_stats_data_mapping));
+	return 0;
 
-	/* read loaded FW from chip */
-	u32 loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
+alloc_mem_err:
+	bnx2x_free_fw_stats_mem(bp);
+	BNX2X_ERR("Can't allocate FW stats memory\n");
+	return -ENOMEM;
+}
 
-	DP(NETIF_MSG_IFUP, "loaded fw %x, my fw %x\n", loaded_fw, my_fw);
+/* send load request to mcp and analyze response */
+static int bnx2x_nic_load_request(struct bnx2x *bp, u32 *load_code)
+{
+	/* init fw_seq */
+	bp->fw_seq =
+		(SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
+		 DRV_MSG_SEQ_NUMBER_MASK);
+	BNX2X_DEV_INFO("fw_seq 0x%08x\n", bp->fw_seq);
+
+	/* Get current FW pulse sequence */
+	bp->fw_drv_pulse_wr_seq =
+		(SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_pulse_mb) &
+		 DRV_PULSE_SEQ_MASK);
+	BNX2X_DEV_INFO("drv_pulse 0x%x\n", bp->fw_drv_pulse_wr_seq);
+	/* load request */
+	(*load_code) = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_REQ,
+					DRV_MSG_CODE_LOAD_REQ_WITH_LFA);
+
+	/* if mcp fails to respond we must abort */
+	if (!(*load_code)) {
+		BNX2X_ERR("MCP response failure, aborting\n");
+		return -EBUSY;
+	}
 
-	if (loaded_fw != my_fw) {
-		if (is_err)
-			BNX2X_ERR("bnx2x with FW %x was already loaded, which mismatches my %x FW. aborting\n",
+	/* If mcp refused (e.g. other port is in diagnostic mode) we
+	 * must abort
+	 */
+	if ((*load_code) == FW_MSG_CODE_DRV_LOAD_REFUSED) {
+		BNX2X_ERR("MCP refused load request, aborting\n");
+		return -EBUSY;
+	}
+	return 0;
+}
+
+/* check whether another PF has already loaded FW to the chip. In
+ * virtualized environments a pf from another VM may have already
+ * initialized the device, including loading the FW
+ */
+int bnx2x_nic_load_analyze_req(struct bnx2x *bp, u32 load_code)
+{
+	/* is another pf loaded on this engine? */
+	if (load_code != FW_MSG_CODE_DRV_LOAD_COMMON_CHIP &&
+	    load_code != FW_MSG_CODE_DRV_LOAD_COMMON) {
+
+		/* build my FW version dword */
+		u32 my_fw = (BCM_5710_FW_MAJOR_VERSION) +
+			(BCM_5710_FW_MINOR_VERSION << 8) +
+			(BCM_5710_FW_REVISION_VERSION << 16) +
+			(BCM_5710_FW_ENGINEERING_VERSION << 24);
+
+		/* read loaded FW from chip */
+		u32 loaded_fw = REG_RD(bp, XSEM_REG_PRAM);
+		DP(BNX2X_MSG_SP, "loaded fw %x, my fw %x",
+		   loaded_fw, my_fw);
+
+		/* abort nic load if version mismatch */
+		if (my_fw != loaded_fw) {
+			BNX2X_ERR("bnx2x with FW %x was already loaded which mismatches my %x FW. aborting",
 				  loaded_fw, my_fw);
-		return false;
+			return -EBUSY;
+		}
+	}
+	return 0;
+}
+
+/* returns the "mcp load_code" according to global load_count array */
+static int bnx2x_nic_load_no_mcp(struct bnx2x *bp, int port)
+{
+	int path = BP_PATH(bp);
+	DP(NETIF_MSG_IFUP, "NO MCP - load counts[%d]      %d, %d, %d\n",
+	   path, load_count[path][0], load_count[path][1],
+	   load_count[path][2]);
+	load_count[path][0]++;
+	load_count[path][1 + port]++;
+	DP(NETIF_MSG_IFUP, "NO MCP - new load counts[%d]  %d, %d, %d\n",
+	   path, load_count[path][0], load_count[path][1],
+	   load_count[path][2]);
+	if (load_count[path][0] == 1)
+		return FW_MSG_CODE_DRV_LOAD_COMMON;
+	else if (load_count[path][1 + port] == 1)
+		return FW_MSG_CODE_DRV_LOAD_PORT;
+	else
+		return FW_MSG_CODE_DRV_LOAD_FUNCTION;
+}
+
+/* mark PMF if applicable */
+static void bnx2x_nic_load_pmf(struct bnx2x *bp, u32 load_code)
+{
+	if ((load_code == FW_MSG_CODE_DRV_LOAD_COMMON) ||
+	    (load_code == FW_MSG_CODE_DRV_LOAD_COMMON_CHIP) ||
+	    (load_code == FW_MSG_CODE_DRV_LOAD_PORT)) {
+		bp->port.pmf = 1;
+		/* We need the barrier to ensure the ordering between the
+		 * writing to bp->port.pmf here and reading it from the
+		 * bnx2x_periodic_task().
+		 */
+		smp_mb();
+	} else {
+		bp->port.pmf = 0;
+	}
+
+	DP(NETIF_MSG_LINK, "pmf %d\n", bp->port.pmf);
+}
+
+static void bnx2x_nic_load_afex_dcc(struct bnx2x *bp, int load_code)
+{
+	if (((load_code == FW_MSG_CODE_DRV_LOAD_COMMON) ||
+	     (load_code == FW_MSG_CODE_DRV_LOAD_COMMON_CHIP)) &&
+	    (bp->common.shmem2_base)) {
+		if (SHMEM2_HAS(bp, dcc_support))
+			SHMEM2_WR(bp, dcc_support,
+				  (SHMEM_DCC_SUPPORT_DISABLE_ENABLE_PF_TLV |
+				   SHMEM_DCC_SUPPORT_BANDWIDTH_ALLOCATION_TLV));
+		if (SHMEM2_HAS(bp, afex_driver_support))
+			SHMEM2_WR(bp, afex_driver_support,
+				  SHMEM_AFEX_SUPPORTED_VERSION_ONE);
 	}
 
-	return true;
+	/* Set AFEX default VLAN tag to an invalid value */
+	bp->afex_def_vlan_tag = -1;
 }
 
 /**
@@ -2079,10 +2279,12 @@ int bnx2x_load_cnic(struct bnx2x *bp)
 
 	mutex_init(&bp->cnic_mutex);
 
-	rc = bnx2x_alloc_mem_cnic(bp);
-	if (rc) {
-		BNX2X_ERR("Unable to allocate bp memory for cnic\n");
-		LOAD_ERROR_EXIT_CNIC(bp, load_error_cnic0);
+	if (IS_PF(bp)) {
+		rc = bnx2x_alloc_mem_cnic(bp);
+		if (rc) {
+			BNX2X_ERR("Unable to allocate bp memory for cnic\n");
+			LOAD_ERROR_EXIT_CNIC(bp, load_error_cnic0);
+		}
 	}
 
 	rc = bnx2x_alloc_fp_mem_cnic(bp);
@@ -2109,14 +2311,16 @@ int bnx2x_load_cnic(struct bnx2x *bp)
 
 	bnx2x_nic_init_cnic(bp);
 
-	/* Enable Timer scan */
-	REG_WR(bp, TM_REG_EN_LINEAR0_TIMER + port*4, 1);
-
-	for_each_cnic_queue(bp, i) {
-		rc = bnx2x_setup_queue(bp, &bp->fp[i], 0);
-		if (rc) {
-			BNX2X_ERR("Queue setup failed\n");
-			LOAD_ERROR_EXIT(bp, load_error_cnic2);
+	if (IS_PF(bp)) {
+		/* Enable Timer scan */
+		REG_WR(bp, TM_REG_EN_LINEAR0_TIMER + port*4, 1);
+
+		for_each_cnic_queue(bp, i) {
+			rc = bnx2x_setup_queue(bp, &bp->fp[i], 0);
+			if (rc) {
+				BNX2X_ERR("Queue setup failed\n");
+				LOAD_ERROR_EXIT(bp, load_error_cnic2);
+			}
 		}
 	}
 
@@ -2162,8 +2366,7 @@ load_error_cnic0:
 int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 {
 	int port = BP_PORT(bp);
-	u32 load_code;
-	int i, rc;
+	int i, rc = 0, load_code = 0;
 
 	DP(NETIF_MSG_IFUP, "Starting NIC load\n");
 	DP(NETIF_MSG_IFUP,
@@ -2185,8 +2388,9 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 		&bp->last_reported_link.link_report_flags);
 	bnx2x_release_phy_lock(bp);
 
-	/* must be called before memory allocation and HW init */
-	bnx2x_ilt_set_info(bp);
+	if (IS_PF(bp))
+		/* must be called before memory allocation and HW init */
+		bnx2x_ilt_set_info(bp);
 
 	/*
 	 * Zero fastpath structures preserving invariants like napi, which are
@@ -2205,8 +2409,28 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	/* Set the receive queues buffer size */
 	bnx2x_set_rx_buf_size(bp);
 
-	if (bnx2x_alloc_mem(bp))
-		return -ENOMEM;
+	if (IS_PF(bp)) {
+		rc = bnx2x_alloc_mem(bp);
+		if (rc) {
+			BNX2X_ERR("Unable to allocate bp memory\n");
+			return rc;
+		}
+	}
+
+	/* Allocate memory for FW statistics */
+	if (bnx2x_alloc_fw_stats_mem(bp))
+		LOAD_ERROR_EXIT(bp, load_error0);
+
+	/* fastpath: */
+
+	/* needs to be done after alloc mem, since it's self-adjusting to
+	 * the amount of memory available for RSS queues
+	 */
+	rc = bnx2x_alloc_fp_mem(bp);
+	if (rc) {
+		BNX2X_ERR("Unable to allocate memory for fps\n");
+		LOAD_ERROR_EXIT(bp, load_error0);
+	}
 
 	/* As long as bnx2x_alloc_mem() may possibly update
 	 * bp->num_queues, bnx2x_set_real_num_queues() should always
@@ -2229,98 +2453,49 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	DP(NETIF_MSG_IFUP, "napi added\n");
 	bnx2x_napi_enable(bp);
 
-	/* set pf load just before approaching the MCP */
-	bnx2x_set_pf_load(bp);
+	if (IS_PF(bp)) {
+		/* set pf load just before approaching the MCP */
+		bnx2x_set_pf_load(bp);
 
-	/* Send LOAD_REQUEST command to MCP
-	 * Returns the type of LOAD command:
-	 * if it is the first port to be initialized
-	 * common blocks should be initialized, otherwise - not
-	 */
-	if (!BP_NOMCP(bp)) {
-		/* init fw_seq */
-		bp->fw_seq =
-			(SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
-			 DRV_MSG_SEQ_NUMBER_MASK);
-		BNX2X_DEV_INFO("fw_seq 0x%08x\n", bp->fw_seq);
-
-		/* Get current FW pulse sequence */
-		bp->fw_drv_pulse_wr_seq =
-			(SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_pulse_mb) &
-			 DRV_PULSE_SEQ_MASK);
-		BNX2X_DEV_INFO("drv_pulse 0x%x\n", bp->fw_drv_pulse_wr_seq);
-
-		load_code = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_REQ,
-					     DRV_MSG_CODE_LOAD_REQ_WITH_LFA);
-		if (!load_code) {
-			BNX2X_ERR("MCP response failure, aborting\n");
-			rc = -EBUSY;
-			LOAD_ERROR_EXIT(bp, load_error1);
-		}
-		if (load_code == FW_MSG_CODE_DRV_LOAD_REFUSED) {
-			BNX2X_ERR("Driver load refused\n");
-			rc = -EBUSY; /* other port in diagnostic mode */
-			LOAD_ERROR_EXIT(bp, load_error1);
-		}
-		if (load_code != FW_MSG_CODE_DRV_LOAD_COMMON_CHIP &&
-		    load_code != FW_MSG_CODE_DRV_LOAD_COMMON) {
-			/* abort nic load if version mismatch */
-			if (!bnx2x_test_firmware_version(bp, true)) {
-				rc = -EBUSY;
+		/* if mcp exists send load request and analyze response */
+		if (!BP_NOMCP(bp)) {
+
+			/* attempt to load pf */
+			rc = bnx2x_nic_load_request(bp, &load_code);
+			if (rc)
+				LOAD_ERROR_EXIT(bp, load_error1);
+
+			/* what did mcp say? */
+			rc = bnx2x_nic_load_analyze_req(bp, load_code);
+			if (rc) {
+				bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
 				LOAD_ERROR_EXIT(bp, load_error2);
 			}
+		} else {
+			load_code = bnx2x_nic_load_no_mcp(bp, port);
 		}
 
-	} else {
-		int path = BP_PATH(bp);
-
-		DP(NETIF_MSG_IFUP, "NO MCP - load counts[%d]      %d, %d, %d\n",
-		   path, load_count[path][0], load_count[path][1],
-		   load_count[path][2]);
-		load_count[path][0]++;
-		load_count[path][1 + port]++;
-		DP(NETIF_MSG_IFUP, "NO MCP - new load counts[%d]  %d, %d, %d\n",
-		   path, load_count[path][0], load_count[path][1],
-		   load_count[path][2]);
-		if (load_count[path][0] == 1)
-			load_code = FW_MSG_CODE_DRV_LOAD_COMMON;
-		else if (load_count[path][1 + port] == 1)
-			load_code = FW_MSG_CODE_DRV_LOAD_PORT;
-		else
-			load_code = FW_MSG_CODE_DRV_LOAD_FUNCTION;
-	}
-
-	if ((load_code == FW_MSG_CODE_DRV_LOAD_COMMON) ||
-	    (load_code == FW_MSG_CODE_DRV_LOAD_COMMON_CHIP) ||
-	    (load_code == FW_MSG_CODE_DRV_LOAD_PORT)) {
-		bp->port.pmf = 1;
-		/*
-		 * We need the barrier to ensure the ordering between the
-		 * writing to bp->port.pmf here and reading it from the
-		 * bnx2x_periodic_task().
-		 */
-		smp_mb();
-	} else
-		bp->port.pmf = 0;
-
-	DP(NETIF_MSG_IFUP, "pmf %d\n", bp->port.pmf);
+		/* mark pmf if applicable */
+		bnx2x_nic_load_pmf(bp, load_code);
 
-	/* Init Function state controlling object */
-	bnx2x__init_func_obj(bp);
+		/* Init Function state controlling object */
+		bnx2x__init_func_obj(bp);
 
-	/* Initialize HW */
-	rc = bnx2x_init_hw(bp, load_code);
-	if (rc) {
-		BNX2X_ERR("HW init failed, aborting\n");
-		bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
-		LOAD_ERROR_EXIT(bp, load_error2);
+		/* Initialize HW */
+		rc = bnx2x_init_hw(bp, load_code);
+		if (rc) {
+			BNX2X_ERR("HW init failed, aborting\n");
+			bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
+			LOAD_ERROR_EXIT(bp, load_error2);
+		}
 	}
 
 	/* Connect to IRQs */
 	rc = bnx2x_setup_irqs(bp);
 	if (rc) {
-		BNX2X_ERR("IRQs setup failed\n");
-		bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
+		BNX2X_ERR("setup irqs failed\n");
+		if (IS_PF(bp))
+			bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
 		LOAD_ERROR_EXIT(bp, load_error2);
 	}
 
@@ -2328,79 +2503,75 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	bnx2x_nic_init(bp, load_code);
 
 	/* Init per-function objects */
-	bnx2x_init_bp_objs(bp);
-
-	if (((load_code == FW_MSG_CODE_DRV_LOAD_COMMON) ||
-	    (load_code == FW_MSG_CODE_DRV_LOAD_COMMON_CHIP)) &&
-	    (bp->common.shmem2_base)) {
-		if (SHMEM2_HAS(bp, dcc_support))
-			SHMEM2_WR(bp, dcc_support,
-				  (SHMEM_DCC_SUPPORT_DISABLE_ENABLE_PF_TLV |
-				   SHMEM_DCC_SUPPORT_BANDWIDTH_ALLOCATION_TLV));
-		if (SHMEM2_HAS(bp, afex_driver_support))
-			SHMEM2_WR(bp, afex_driver_support,
-				  SHMEM_AFEX_SUPPORTED_VERSION_ONE);
-	}
+	if (IS_PF(bp)) {
+		bnx2x_init_bp_objs(bp);
 
-	/* Set AFEX default VLAN tag to an invalid value */
-	bp->afex_def_vlan_tag = -1;
 
-	bp->state = BNX2X_STATE_OPENING_WAIT4_PORT;
-	rc = bnx2x_func_start(bp);
-	if (rc) {
-		BNX2X_ERR("Function start failed!\n");
-		bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
-		LOAD_ERROR_EXIT(bp, load_error3);
-	}
+		/* Set AFEX default VLAN tag to an invalid value */
+		bp->afex_def_vlan_tag = -1;
+		bnx2x_nic_load_afex_dcc(bp, load_code);
+		bp->state = BNX2X_STATE_OPENING_WAIT4_PORT;
+		rc = bnx2x_func_start(bp);
+		if (rc) {
+			BNX2X_ERR("Function start failed!\n");
+			bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
 
-	/* Send LOAD_DONE command to MCP */
-	if (!BP_NOMCP(bp)) {
-		load_code = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE, 0);
-		if (!load_code) {
-			BNX2X_ERR("MCP response failure, aborting\n");
-			rc = -EBUSY;
 			LOAD_ERROR_EXIT(bp, load_error3);
 		}
-	}
 
-	rc = bnx2x_setup_leading(bp);
-	if (rc) {
-		BNX2X_ERR("Setup leading failed!\n");
-		LOAD_ERROR_EXIT(bp, load_error3);
-	}
+		/* Send LOAD_DONE command to MCP */
+		if (!BP_NOMCP(bp)) {
+			load_code = bnx2x_fw_command(bp,
+						     DRV_MSG_CODE_LOAD_DONE, 0);
+			if (!load_code) {
+				BNX2X_ERR("MCP response failure, aborting\n");
+				rc = -EBUSY;
+				LOAD_ERROR_EXIT(bp, load_error3);
+			}
+		}
 
-	for_each_nondefault_eth_queue(bp, i) {
-		rc = bnx2x_setup_queue(bp, &bp->fp[i], 0);
+		rc = bnx2x_setup_leading(bp);
 		if (rc) {
-			BNX2X_ERR("Queue setup failed\n");
+			BNX2X_ERR("Setup leading failed!\n");
+			LOAD_ERROR_EXIT(bp, load_error3);
+		}
+
+		for_each_nondefault_eth_queue(bp, i) {
+			rc = bnx2x_setup_queue(bp, &bp->fp[i], 0);
+			if (rc) {
+				BNX2X_ERR("Queue setup failed\n");
+				LOAD_ERROR_EXIT(bp, load_error3);
+			}
+		}
+		rc = bnx2x_init_rss_pf(bp);
+		if (rc) {
+			BNX2X_ERR("PF RSS init failed\n");
 			LOAD_ERROR_EXIT(bp, load_error3);
 		}
-	}
 
-	rc = bnx2x_init_rss_pf(bp);
-	if (rc) {
-		BNX2X_ERR("PF RSS init failed\n");
-		LOAD_ERROR_EXIT(bp, load_error3);
 	}
 
 	/* Now when Clients are configured we are ready to work */
 	bp->state = BNX2X_STATE_OPEN;
 
 	/* Configure a ucast MAC */
-	rc = bnx2x_set_eth_mac(bp, true);
+	if (IS_PF(bp))
+		rc = bnx2x_set_eth_mac(bp, true);
 	if (rc) {
 		BNX2X_ERR("Setting Ethernet MAC failed\n");
 		LOAD_ERROR_EXIT(bp, load_error3);
 	}
 
-	if (bp->pending_max) {
+	if (IS_PF(bp) && bp->pending_max) {
 		bnx2x_update_max_mf_config(bp, bp->pending_max);
 		bp->pending_max = 0;
 	}
 
-	if (bp->port.pmf)
-		bnx2x_initial_phy_init(bp, load_mode);
-	bp->link_params.feature_config_flags &= ~FEATURE_CONFIG_BOOT_FROM_SAN;
+	if (bp->port.pmf) {
+		rc = bnx2x_initial_phy_init(bp, load_mode);
+		if (rc)
+			LOAD_ERROR_EXIT(bp, load_error3);
+	}
 
 	/* Start fast path */
 
@@ -2441,8 +2612,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	if (CNIC_ENABLED(bp))
 		bnx2x_load_cnic(bp);
 
-	/* mark driver is loaded in shmem2 */
-	if (SHMEM2_HAS(bp, drv_capabilities_flag)) {
+	if (IS_PF(bp) && SHMEM2_HAS(bp, drv_capabilities_flag)) {
+		/* mark driver is loaded in shmem2 */
 		u32 val;
 		val = SHMEM2_RD(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)]);
 		SHMEM2_WR(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)],
@@ -2451,7 +2622,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	}
 
 	/* Wait for all pending SP commands to complete */
-	if (!bnx2x_wait_sp_comp(bp, ~0x0UL)) {
+	if (IS_PF(bp) && !bnx2x_wait_sp_comp(bp, ~0x0UL)) {
 		BNX2X_ERR("Timeout waiting for SP elements to complete\n");
 		bnx2x_nic_unload(bp, UNLOAD_CLOSE, false);
 		return -EBUSY;
@@ -2467,10 +2638,12 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 
 #ifndef BNX2X_STOP_ON_ERROR
 load_error3:
-	bnx2x_int_disable_sync(bp, 1);
+	if (IS_PF(bp)) {
+		bnx2x_int_disable_sync(bp, 1);
 
-	/* Clean queueable objects */
-	bnx2x_squeeze_objects(bp);
+		/* Clean queueable objects */
+		bnx2x_squeeze_objects(bp);
+	}
 
 	/* Free SKBs, SGEs, TPA pool and driver internals */
 	bnx2x_free_skbs(bp);
@@ -2480,7 +2653,7 @@ load_error3:
 	/* Release IRQs */
 	bnx2x_free_irq(bp);
 load_error2:
-	if (!BP_NOMCP(bp)) {
+	if (IS_PF(bp) && !BP_NOMCP(bp)) {
 		bnx2x_fw_command(bp, DRV_MSG_CODE_UNLOAD_REQ_WOL_MCP, 0);
 		bnx2x_fw_command(bp, DRV_MSG_CODE_UNLOAD_DONE, 0);
 	}
@@ -2488,15 +2661,34 @@ load_error2:
 	bp->port.pmf = 0;
 load_error1:
 	bnx2x_napi_disable(bp);
+
 	/* clear pf_load status, as it was already set */
-	bnx2x_clear_pf_load(bp);
+	if (IS_PF(bp))
+		bnx2x_clear_pf_load(bp);
 load_error0:
+	bnx2x_free_fp_mem(bp);
+	bnx2x_free_fw_stats_mem(bp);
 	bnx2x_free_mem(bp);
 
 	return rc;
 #endif /* ! BNX2X_STOP_ON_ERROR */
 }
 
+static int bnx2x_drain_tx_queues(struct bnx2x *bp)
+{
+	u8 rc = 0, cos, i;
+
+	/* Wait until tx fastpath tasks complete */
+	for_each_tx_queue(bp, i) {
+		struct bnx2x_fastpath *fp = &bp->fp[i];
+		for_each_cos_in_tx_queue(fp, cos)
+			rc = bnx2x_clean_tx_queue(bp, fp->txdata_ptr[cos]);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
 /* must be called with rtnl_lock */
 int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 {
@@ -2506,15 +2698,16 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 	DP(NETIF_MSG_IFUP, "Starting NIC unload\n");
 
 	/* mark driver is unloaded in shmem2 */
-	if (SHMEM2_HAS(bp, drv_capabilities_flag)) {
+	if (IS_PF(bp) && SHMEM2_HAS(bp, drv_capabilities_flag)) {
 		u32 val;
 		val = SHMEM2_RD(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)]);
 		SHMEM2_WR(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)],
 			  val & ~DRV_FLAGS_CAPABILITIES_LOADED_L2);
 	}
 
-	if ((bp->state == BNX2X_STATE_CLOSED) ||
-	    (bp->state == BNX2X_STATE_ERROR)) {
+	if (IS_PF(bp) &&
+	    (bp->state == BNX2X_STATE_CLOSED ||
+	     bp->state == BNX2X_STATE_ERROR)) {
 		/* We can get here if the driver has been unloaded
 		 * during parity error recovery and is either waiting for a
 		 * leader to complete or for other functions to unload and
@@ -2551,13 +2744,19 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 
 	del_timer_sync(&bp->timer);
 
-	/* Set ALWAYS_ALIVE bit in shmem */
-	bp->fw_drv_pulse_wr_seq |= DRV_PULSE_ALWAYS_ALIVE;
+	if (IS_PF(bp)) {
+
+		/* Set ALWAYS_ALIVE bit in shmem */
+		bp->fw_drv_pulse_wr_seq |= DRV_PULSE_ALWAYS_ALIVE;
 
-	bnx2x_drv_pulse(bp);
+		bnx2x_drv_pulse(bp);
 
-	bnx2x_stats_handle(bp, STATS_EVENT_STOP);
-	bnx2x_save_statistics(bp);
+		bnx2x_stats_handle(bp, STATS_EVENT_STOP);
+		bnx2x_save_statistics(bp);
+	}
+
+	/* wait till consumers catch up with producers in all queues */
+	bnx2x_drain_tx_queues(bp);
 
 	/* Cleanup the chip if needed */
 	if (unload_mode != UNLOAD_RECOVERY)
@@ -2593,7 +2792,8 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 	 * At this stage no more interrupts will arrive so we may safely clean
 	 * the queueable objects here in case they failed to get cleaned so far.
 	 */
-	bnx2x_squeeze_objects(bp);
+	if (IS_PF(bp))
+		bnx2x_squeeze_objects(bp);
 
 	/* There should be no more pending SP commands at this stage */
 	bp->sp_state = 0;
@@ -2607,19 +2807,22 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 	for_each_rx_queue(bp, i)
 		bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
 
-	if (CNIC_LOADED(bp)) {
+	bnx2x_free_fp_mem(bp);
+	if (CNIC_LOADED(bp))
 		bnx2x_free_fp_mem_cnic(bp);
-		bnx2x_free_mem_cnic(bp);
-	}
-	bnx2x_free_mem(bp);
 
+	if (IS_PF(bp)) {
+		bnx2x_free_mem(bp);
+		if (CNIC_LOADED(bp))
+			bnx2x_free_mem_cnic(bp);
+	}
 	bp->state = BNX2X_STATE_CLOSED;
 	bp->cnic_loaded = false;
 
 	/* Check if there are pending parity attentions. If there are - set
 	 * RECOVERY_IN_PROGRESS.
 	 */
-	if (bnx2x_chk_parity_attn(bp, &global, false)) {
+	if (IS_PF(bp) && bnx2x_chk_parity_attn(bp, &global, false)) {
 		bnx2x_set_reset_in_progress(bp);
 
 		/* Set RESET_IS_GLOBAL if needed */
@@ -2631,7 +2834,9 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 	/* The last driver must disable a "close the gate" if there is no
 	 * parity attention or "process kill" pending.
 	 */
-	if (!bnx2x_clear_pf_load(bp) && bnx2x_reset_is_done(bp, BP_PATH(bp)))
+	if (IS_PF(bp) &&
+	    !bnx2x_clear_pf_load(bp) &&
+	    bnx2x_reset_is_done(bp, BP_PATH(bp)))
 		bnx2x_disable_close_the_gate(bp);
 
 	DP(NETIF_MSG_IFUP, "Ending NIC unload\n");
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index bca371e..91e432d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -1128,11 +1128,18 @@ static inline u8 bnx2x_fp_qzone_id(struct bnx2x_fastpath *fp)
 static inline u32 bnx2x_rx_ustorm_prods_offset(struct bnx2x_fastpath *fp)
 {
 	struct bnx2x *bp = fp->bp;
-
-	if (!CHIP_IS_E1x(bp))
-		return USTORM_RX_PRODS_E2_OFFSET(fp->cl_qzone_id);
+	u32 offset = BAR_USTRORM_INTMEM;
+
+	if (IS_VF(bp))
+		return PXP_VF_ADDR_USDM_QUEUES_START +
+			bp->acquire_resp.resc.hw_qid[fp->index] *
+			sizeof(struct ustorm_queue_zone_data);
+	else if (!CHIP_IS_E1x(bp))
+		offset += USTORM_RX_PRODS_E2_OFFSET(fp->cl_qzone_id);
 	else
-		return USTORM_RX_PRODS_E1X_OFFSET(BP_PORT(bp), fp->cl_id);
+		offset += USTORM_RX_PRODS_E1X_OFFSET(BP_PORT(bp), fp->cl_id);
+
+	return offset;
 }
 
 static inline void bnx2x_init_txdata(struct bnx2x *bp,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index 277f17e..b7c82f9 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -890,7 +890,7 @@ static void bnx2x_set_msglevel(struct net_device *dev, u32 level)
 
 	if (capable(CAP_NET_ADMIN)) {
 		/* dump MCP trace */
-		if (level & BNX2X_MSG_MCP)
+		if (IS_PF(bp) && (level & BNX2X_MSG_MCP))
 			bnx2x_fw_dump_lvl(bp, KERN_INFO);
 		bp->msg_enable = level;
 	}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 2a8c4c2..158dcec 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -2460,17 +2460,49 @@ void bnx2x__link_status_update(struct bnx2x *bp)
 		return;
 
 	/* read updated dcb configuration */
-	bnx2x_dcbx_pmf_update(bp);
-
-	bnx2x_link_status_update(&bp->link_params, &bp->link_vars);
+	if (IS_PF(bp)) {
+		bnx2x_dcbx_pmf_update(bp);
+		bnx2x_link_status_update(&bp->link_params, &bp->link_vars);
+		if (bp->link_vars.link_up)
+			bnx2x_stats_handle(bp, STATS_EVENT_LINK_UP);
+		else
+			bnx2x_stats_handle(bp, STATS_EVENT_STOP);
+		/* indicate link status */
+		bnx2x_link_report(bp);
 
-	if (bp->link_vars.link_up)
+	} else { /* VF */
+		bp->port.supported[0] |= (SUPPORTED_10baseT_Half |
+					  SUPPORTED_10baseT_Full |
+					  SUPPORTED_100baseT_Half |
+					  SUPPORTED_100baseT_Full |
+					  SUPPORTED_1000baseT_Full |
+					  SUPPORTED_2500baseX_Full |
+					  SUPPORTED_10000baseT_Full |
+					  SUPPORTED_TP |
+					  SUPPORTED_FIBRE |
+					  SUPPORTED_Autoneg |
+					  SUPPORTED_Pause |
+					  SUPPORTED_Asym_Pause);
+		bp->port.advertising[0] = bp->port.supported[0];
+
+		bp->link_params.bp = bp;
+		bp->link_params.port = BP_PORT(bp);
+		bp->link_params.req_duplex[0] = DUPLEX_FULL;
+		bp->link_params.req_flow_ctrl[0] = BNX2X_FLOW_CTRL_NONE;
+		bp->link_params.req_line_speed[0] = SPEED_10000;
+		bp->link_params.speed_cap_mask[0] = 0x7f0000;
+		bp->link_params.switch_cfg = SWITCH_CFG_10G;
+		bp->link_vars.mac_type = MAC_TYPE_BMAC;
+		bp->link_vars.line_speed = SPEED_10000;
+		bp->link_vars.link_status =
+			(LINK_STATUS_LINK_UP |
+			 LINK_STATUS_SPEED_AND_DUPLEX_10GTFD);
+		bp->link_vars.link_up = 1;
+		bp->link_vars.duplex = DUPLEX_FULL;
+		bp->link_vars.flow_ctrl = BNX2X_FLOW_CTRL_NONE;
+		__bnx2x_link_report(bp);
 		bnx2x_stats_handle(bp, STATS_EVENT_LINK_UP);
-	else
-		bnx2x_stats_handle(bp, STATS_EVENT_STOP);
-
-	/* indicate link status */
-	bnx2x_link_report(bp);
+	}
 }
 
 static int bnx2x_afex_func_update(struct bnx2x *bp, u16 vifid,
@@ -5700,6 +5732,15 @@ static void bnx2x_init_eth_fp(struct bnx2x *bp, int fp_idx)
 		cids[cos] = fp->txdata_ptr[cos]->cid;
 	}
 
+	/* nothing more for vf to do here */
+	if (IS_VF(bp))
+		return;
+
+	bnx2x_init_sb(bp, fp->status_blk_mapping, BNX2X_VF_ID_INVALID, false,
+		      fp->fw_sb_id, fp->igu_sb_id);
+
+	bnx2x_update_fpsb_idx(fp);
+
 	bnx2x_init_queue_obj(bp, &bnx2x_sp_obj(bp, fp).q_obj, fp->cl_id, cids,
 			     fp->max_cos, BP_FUNC(bp), bnx2x_sp(bp, q_rdata),
 			     bnx2x_sp_mapping(bp, q_rdata), q_type);
@@ -5709,13 +5750,10 @@ static void bnx2x_init_eth_fp(struct bnx2x *bp, int fp_idx)
 	 */
 	bnx2x_init_vlan_mac_fp_objs(fp, BNX2X_OBJ_TYPE_RX_TX);
 
-	DP(NETIF_MSG_IFUP, "queue[%d]:  bnx2x_init_sb(%p,%p)  cl_id %d  fw_sb %d  igu_sb %d\n",
-		   fp_idx, bp, fp->status_blk.e2_sb, fp->cl_id, fp->fw_sb_id,
-		   fp->igu_sb_id);
-	bnx2x_init_sb(bp, fp->status_blk_mapping, BNX2X_VF_ID_INVALID, false,
-		      fp->fw_sb_id, fp->igu_sb_id);
-
-	bnx2x_update_fpsb_idx(fp);
+	DP(NETIF_MSG_IFUP,
+	   "queue[%d]:  bnx2x_init_sb(%p,%p)  cl_id %d  fw_sb %d  igu_sb %d\n",
+	   fp_idx, bp, fp->status_blk.e2_sb, fp->cl_id, fp->fw_sb_id,
+	   fp->igu_sb_id);
 }
 
 static void bnx2x_init_tx_ring_one(struct bnx2x_fp_txdata *txdata)
@@ -5752,6 +5790,7 @@ static void bnx2x_init_tx_rings_cnic(struct bnx2x *bp)
 	for_each_tx_queue_cnic(bp, i)
 		bnx2x_init_tx_ring_one(bp->fp[i].txdata_ptr[0]);
 }
+
 static void bnx2x_init_tx_rings(struct bnx2x *bp)
 {
 	int i;
@@ -5787,17 +5826,22 @@ void bnx2x_nic_init(struct bnx2x *bp, u32 load_code)
 
 	for_each_eth_queue(bp, i)
 		bnx2x_init_eth_fp(bp, i);
+
+	/* ensure status block indices were read */
+	rmb();
+	bnx2x_init_rx_rings(bp);
+	bnx2x_init_tx_rings(bp);
+
+	if (IS_VF(bp))
+		return;
+
 	/* Initialize MOD_ABS interrupts */
 	bnx2x_init_mod_abs_int(bp, &bp->link_vars, bp->common.chip_id,
 			       bp->common.shmem_base, bp->common.shmem2_base,
 			       BP_PORT(bp));
-	/* ensure status block indices were read */
-	rmb();
 
 	bnx2x_init_def_sb(bp);
 	bnx2x_update_dsb_idx(bp);
-	bnx2x_init_rx_rings(bp);
-	bnx2x_init_tx_rings(bp);
 	bnx2x_init_sp_ring(bp);
 	bnx2x_init_eq_ring(bp);
 	bnx2x_init_internal(bp, load_code);
@@ -9656,7 +9700,7 @@ static int bnx2x_prev_unload_uncommon(struct bnx2x *bp)
 	 * the one required, then FLR will be sufficient to clean any residue
 	 * left by previous driver
 	 */
-	rc = bnx2x_test_firmware_version(bp, false);
+	rc = bnx2x_nic_load_analyze_req(bp, FW_MSG_CODE_DRV_LOAD_FUNCTION);
 
 	if (!rc) {
 		/* fw version is good */
@@ -11236,17 +11280,21 @@ static int bnx2x_open(struct net_device *dev)
 
 	bnx2x_set_power_state(bp, PCI_D0);
 
-	other_load_status = bnx2x_get_load_status(bp, other_engine);
-	load_status = bnx2x_get_load_status(bp, BP_PATH(bp));
+	if (IS_PF(bp)) {
+		other_load_status = bnx2x_get_load_status(bp, other_engine);
+		load_status = bnx2x_get_load_status(bp, BP_PATH(bp));
+	}
 
 	/*
 	 * If parity had happened during the unload, then attentions
 	 * and/or RECOVERY_IN_PROGRESS may still be set. In this case we
 	 * want the first function loaded on the current engine to
 	 * complete the recovery.
+	 * Parity recovery is only relevant for the PF driver.
 	 */
-	if (!bnx2x_reset_is_done(bp, BP_PATH(bp)) ||
-	    bnx2x_chk_parity_attn(bp, &global, true))
+	if (IS_PF(bp) &&
+	    (!bnx2x_reset_is_done(bp, BP_PATH(bp)) ||
+	    bnx2x_chk_parity_attn(bp, &global, true)))
 		do {
 			/*
 			 * If there are attentions and they are in a global
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index c302de4..3f01526 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -6559,6 +6559,12 @@
 #define PXP_VF_ADDR_IGU_END\
 	((PXP_VF_ADDR_IGU_START) + (PXP_VF_ADDR_IGU_SIZE) - 1)
 
+#define PXP_VF_ADDR_USDM_QUEUES_START			0x3000
+#define PXP_VF_ADDR_USDM_QUEUES_SIZE\
+	(PXP_VF_ADRR_NUM_QUEUES * PXP_ADDR_QUEUE_SIZE)
+#define PXP_VF_ADDR_USDM_QUEUES_END\
+	((PXP_VF_ADDR_USDM_QUEUES_START) + (PXP_VF_ADDR_USDM_QUEUES_SIZE) - 1)
+
 #define PXP_VF_ADDR_CSDM_GLOBAL_START			0x7600
 #define PXP_VF_ADDR_CSDM_GLOBAL_SIZE			(PXP_ADDR_REG_SIZE)
 #define PXP_VF_ADDR_CSDM_GLOBAL_END\
-- 
1.7.9.GIT

* [PATCH net-next v3 05/22] bnx2x: Add init, setup_q, set_mac to VF <-> PF channel
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (3 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 04/22] bnx2x: Separate VF and PF logic Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 06/22] bnx2x: Add teardown_q and close " Ariel Elior
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

'init' - initialize an acquired VF, supplying its allocation GPAs to the PF.
'setup_q' - have the PF allocate a queue in the device on behalf of the VF.
'set_mac' - have the PF configure a mac in the device on behalf of the VF.
The VF driver uses these requests in the VF <-> PF channel during the
nic_load flow. All of them share the TLV construction pattern sketched
below.
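
A standalone model of that pattern - clear the mailbox, prep the first
tlv, append the list-termination tlv, then hand the buffer to the PF
(the tlv header, type values and sizes here are illustrative, modeled
loosely after struct channel_tlv; this is not the driver code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct tlv_hdr { uint16_t type; uint16_t length; };

enum { TLV_INIT = 1, TLV_LIST_END = 2 };

static uint8_t mbox[1024];	/* stands in for the vf2pf mailbox */

/* place a tlv header at 'offset' inside the mailbox */
static void add_tlv(size_t offset, uint16_t type, uint16_t length)
{
	struct tlv_hdr tlv = { .type = type, .length = length };

	memcpy(mbox + offset, &tlv, sizeof(tlv));
}

int main(void)
{
	/* first tlv: header plus a 16-byte request payload */
	uint16_t req_len = sizeof(struct tlv_hdr) + 16;

	memset(mbox, 0, sizeof(mbox));		/* clear mailbox */
	add_tlv(0, TLV_INIT, req_len);		/* prep first tlv */
	add_tlv(req_len, TLV_LIST_END,
		sizeof(struct tlv_hdr));	/* list termination */

	/* the driver would now DMA the mailbox to the PF and poll the
	 * response status
	 */
	printf("request type %d, %d bytes, list terminated\n",
	       TLV_INIT, req_len);
	return 0;
}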

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h      |    6 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c  |   17 +++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c |  167 ++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h |  123 ++++++++++++++++
 4 files changed, 313 insertions(+), 0 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 17536ff..5c71002 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -341,6 +341,9 @@ union db_prod {
 #define SGE_PAGE_SIZE		PAGE_SIZE
 #define SGE_PAGE_SHIFT		PAGE_SHIFT
 #define SGE_PAGE_ALIGN(addr)	PAGE_ALIGN((typeof(PAGE_SIZE))(addr))
+#define SGE_PAGES		(SGE_PAGE_SIZE * PAGES_PER_SGE)
+#define TPA_AGG_SIZE		min_t(u32, (min_t(u32, 8, MAX_SKB_FRAGS) * \
+					    SGE_PAGES), 0xffff)
 
 /* SGE ring related macros */
 #define NUM_RX_SGE_PAGES	2
@@ -2221,6 +2224,9 @@ int bnx2x_get_vf_id(struct bnx2x *bp, u32 *vf_id);
 int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping);
 int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count);
 int bnx2x_vfpf_release(struct bnx2x *bp);
+int bnx2x_vfpf_init(struct bnx2x *bp);
+int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx);
+int bnx2x_vfpf_set_mac(struct bnx2x *bp);
 int bnx2x_nic_load_analyze_req(struct bnx2x *bp, u32 load_code);
 /* Congestion management fairness mode */
 #define CMNG_FNS_NONE		0
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 0f6bb1f..092cbee 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -2432,6 +2432,13 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 		LOAD_ERROR_EXIT(bp, load_error0);
 	}
 
+	/* request pf to initialize status blocks */
+	if (IS_VF(bp)) {
+		rc = bnx2x_vfpf_init(bp);
+		if (rc)
+			LOAD_ERROR_EXIT(bp, load_error0);
+	}
+
 	/* As long as bnx2x_alloc_mem() may possibly update
 	 * bp->num_queues, bnx2x_set_real_num_queues() should always
 	 * come after it. At this stage cnic queues are not counted.
@@ -2549,6 +2556,14 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 			LOAD_ERROR_EXIT(bp, load_error3);
 		}
 
+	} else { /* vf */
+		for_each_eth_queue(bp, i) {
+			rc = bnx2x_vfpf_setup_q(bp, i);
+			if (rc) {
+				BNX2X_ERR("Queue setup failed\n");
+				LOAD_ERROR_EXIT(bp, load_error3);
+			}
+		}
 	}
 
 	/* Now when Clients are configured we are ready to work */
@@ -2557,6 +2572,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	/* Configure a ucast MAC */
 	if (IS_PF(bp))
 		rc = bnx2x_set_eth_mac(bp, true);
+	else /* vf */
+		rc = bnx2x_vfpf_set_mac(bp);
 	if (rc) {
 		BNX2X_ERR("Setting Ethernet MAC failed\n");
 		LOAD_ERROR_EXIT(bp, load_error3);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 158dcec..b0fa707 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -13438,3 +13438,170 @@ int bnx2x_vfpf_release(struct bnx2x *bp)
 
 	return 0;
 }
+
+/* Tell PF about SB addresses */
+int bnx2x_vfpf_init(struct bnx2x *bp)
+{
+	struct vfpf_init_tlv *req = &bp->vf2pf_mbox->req.init;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	int rc, i;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_INIT, sizeof(*req));
+
+	/* status blocks */
+	for_each_eth_queue(bp, i)
+		req->sb_addr[i] = (dma_addr_t)bnx2x_fp(bp, i,
+							  status_blk_mapping);
+
+	/* statistics - the request only supports a single queue for now */
+	req->stats_addr = bp->fw_stats_data_mapping +
+		offsetof(struct bnx2x_fw_stats_data, queue_stats);
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+	if (rc)
+		return rc;
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		BNX2X_ERR("INIT VF failed: %d. Breaking...\n",
+			  resp->hdr.status);
+		return -EAGAIN;
+	}
+
+	DP(BNX2X_MSG_SP, "INIT VF Succeeded\n");
+
+	return 0;
+}
+
+/* ask the pf to open a queue for the vf */
+int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx)
+{
+	struct vfpf_setup_q_tlv *req = &bp->vf2pf_mbox->req.setup_q;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	struct bnx2x_fastpath *fp = &bp->fp[fp_idx];
+	u16 tpa_agg_size = 0, flags = 0;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SETUP_Q, sizeof(*req));
+
+	/* select tpa mode to request */
+	if (!fp->disable_tpa) {
+		flags |= VFPF_QUEUE_FLG_TPA;
+		flags |= VFPF_QUEUE_FLG_TPA_IPV6;
+		if (fp->mode == TPA_MODE_GRO)
+			flags |= VFPF_QUEUE_FLG_TPA_GRO;
+		tpa_agg_size = TPA_AGG_SIZE;
+	}
+
+	/* calculate queue flags */
+	flags |= VFPF_QUEUE_FLG_STATS;
+	flags |= VFPF_QUEUE_FLG_CACHE_ALIGN;
+	flags |= IS_MF_SD(bp) ? VFPF_QUEUE_FLG_OV : 0;
+
+	flags |= VFPF_QUEUE_FLG_VLAN;
+	DP(NETIF_MSG_IFUP, "vlan removal enabled\n");
+
+	/* Common */
+	req->vf_qid = fp_idx;
+	req->param_valid = VFPF_RXQ_VALID | VFPF_TXQ_VALID;
+
+	/* Rx */
+	req->rxq.rcq_addr = fp->rx_comp_mapping;
+	req->rxq.rcq_np_addr = fp->rx_comp_mapping + BCM_PAGE_SIZE;
+	req->rxq.rxq_addr = fp->rx_desc_mapping;
+	req->rxq.sge_addr = fp->rx_sge_mapping;
+
+	req->rxq.vf_sb = fp_idx;
+	req->rxq.sb_index = HC_INDEX_ETH_RX_CQ_CONS;
+	req->rxq.hc_rate = bp->rx_ticks ? 1000000/bp->rx_ticks : 0;
+
+	req->rxq.mtu = bp->dev->mtu;
+	req->rxq.buf_sz = fp->rx_buf_size;
+	req->rxq.sge_buf_sz = BCM_PAGE_SIZE * PAGES_PER_SGE;
+	req->rxq.tpa_agg_sz = tpa_agg_size;
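+	/* max SGEs per mtu-sized packet: pages rounded up to whole SGEs */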
+	req->rxq.max_sge_pkt = SGE_PAGE_ALIGN(bp->dev->mtu) >> SGE_PAGE_SHIFT;
+	req->rxq.max_sge_pkt = ((req->rxq.max_sge_pkt + PAGES_PER_SGE - 1) &
+			  (~(PAGES_PER_SGE-1))) >> PAGES_PER_SGE_SHIFT;
+	req->rxq.flags = flags;
+	req->rxq.drop_flags = 0;
+	req->rxq.cache_line_log = BNX2X_RX_ALIGN_SHIFT;
+	req->rxq.stat_id = -1; /* No stats at the moment */
+
+	/* Tx */
+	req->txq.txq_addr = fp->txdata_ptr[FIRST_TX_COS_INDEX]->tx_desc_mapping;
+	req->txq.vf_sb = fp_idx;
+	req->txq.sb_index = HC_INDEX_ETH_TX_CQ_CONS_COS0;
+	req->txq.hc_rate = bp->tx_ticks ? 1000000/bp->tx_ticks : 0;
+	req->txq.flags = flags;
+	req->txq.traffic_type = LLFC_TRAFFIC_TYPE_NW;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+	if (rc)
+		BNX2X_ERR("Sending SETUP_Q message for queue[%d] failed!\n",
+			  fp_idx);
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		BNX2X_ERR("Status of SETUP_Q for queue[%d] is %d\n",
+			  fp_idx, resp->hdr.status);
+		return -EINVAL;
+	}
+	return rc;
+}
+
+/* request pf to add a mac for the vf */
+int bnx2x_vfpf_set_mac(struct bnx2x *bp)
+{
+	struct vfpf_set_q_filters_tlv *req = &bp->vf2pf_mbox->req.set_q_filters;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SET_Q_FILTERS,
+			sizeof(*req));
+
+	req->flags = VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED;
+	req->vf_qid = 0;
+	req->n_mac_vlan_filters = 1;
+
+	req->filters[0].flags =
+		VFPF_Q_FILTER_DEST_MAC_VALID | VFPF_Q_FILTER_SET_MAC;
+
+	/* copy mac from device to request */
+	memcpy(req->filters[0].mac, bp->dev->dev_addr, ETH_ALEN);
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	/* send message to pf */
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+	if (rc) {
+		BNX2X_ERR("failed to send message to pf. rc was %d\n", rc);
+		return rc;
+	}
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		BNX2X_ERR("vfpf SET MAC failed: %d\n", resp->hdr.status);
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index 763d1c2..ded37bc 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -38,6 +38,22 @@ struct hw_sb_info {
  */
 #define TLV_BUFFER_SIZE			1024
 
+#define VFPF_QUEUE_FLG_TPA		0x0001
+#define VFPF_QUEUE_FLG_TPA_IPV6		0x0002
+#define VFPF_QUEUE_FLG_TPA_GRO		0x0004
+#define VFPF_QUEUE_FLG_CACHE_ALIGN	0x0008
+#define VFPF_QUEUE_FLG_STATS		0x0010
+#define VFPF_QUEUE_FLG_OV		0x0020
+#define VFPF_QUEUE_FLG_VLAN		0x0040
+#define VFPF_QUEUE_FLG_COS		0x0080
+#define VFPF_QUEUE_FLG_HC		0x0100
+#define VFPF_QUEUE_FLG_DHC		0x0200
+
+#define VFPF_QUEUE_DROP_IP_CS_ERR	(1 << 0)
+#define VFPF_QUEUE_DROP_TCP_CS_ERR	(1 << 1)
+#define VFPF_QUEUE_DROP_TTL0		(1 << 2)
+#define VFPF_QUEUE_DROP_UDP_CS_ERR	(1 << 3)
+
 enum {
 	PFVF_STATUS_WAITING = 0,
 	PFVF_STATUS_SUCCESS,
@@ -129,6 +145,107 @@ struct pfvf_acquire_resp_tlv {
 	} resc;
 };
 
+/* Init VF */
+struct vfpf_init_tlv {
+	struct vfpf_first_tlv first_tlv;
+	aligned_u64 sb_addr[PFVF_MAX_SBS_PER_VF]; /* vf_sb based */
+	aligned_u64 spq_addr;
+	aligned_u64 stats_addr;
+};
+
+/* Setup Queue */
+struct vfpf_setup_q_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	struct vf_pf_rxq_params {
+		/* physical addresses */
+		aligned_u64 rcq_addr;
+		aligned_u64 rcq_np_addr;
+		aligned_u64 rxq_addr;
+		aligned_u64 sge_addr;
+
+		/* sb + hc info */
+		u8  vf_sb;		/* index in hw_sbs[] */
+		u8  sb_index;		/* Index in the SB */
+		u16 hc_rate;		/* desired interrupts per sec. */
+		/* valid iff VFPF_QUEUE_FLG_HC */
+		/* rx buffer info */
+		u16 mtu;
+		u16 buf_sz;
+		u16 flags;		/* VFPF_QUEUE_FLG_X flags */
+		u16 stat_id;		/* valid iff VFPF_QUEUE_FLG_STATS */
+
+		/* valid iff VFPF_QUEUE_FLG_TPA */
+		u16 sge_buf_sz;
+		u16 tpa_agg_sz;
+		u8 max_sge_pkt;
+
+		u8 drop_flags;		/* VFPF_QUEUE_DROP_X, for Linux VMs
+					 * all the flags are turned off
+					 */
+
+		u8 cache_line_log;	/* VFPF_QUEUE_FLG_CACHE_ALIGN */
+		u8 padding;
+	} rxq;
+
+	struct vf_pf_txq_params {
+		/* physical addresses */
+		aligned_u64 txq_addr;
+
+		/* sb + hc info */
+		u8  vf_sb;		/* index in hw_sbs[] */
+		u8  sb_index;		/* Index in the SB */
+		u16 hc_rate;		/* desired interrupts per sec. */
+		/* valid iff VFPF_QUEUE_FLG_HC */
+		u32 flags;		/* VFPF_QUEUE_FLG_X flags */
+		u16 stat_id;		/* valid iff VFPF_QUEUE_FLG_STATS */
+		u8  traffic_type;	/* see in setup_context() */
+		u8  padding;
+	} txq;
+
+	u8 vf_qid;			/* index in hw_qid[] */
+	u8 param_valid;
+	#define VFPF_RXQ_VALID		0x01
+	#define VFPF_TXQ_VALID		0x02
+	u8 padding[2];
+};
+
+/* Set Queue Filters */
+struct vfpf_q_mac_vlan_filter {
+	u32 flags;
+	#define VFPF_Q_FILTER_DEST_MAC_VALID	0x01
+	#define VFPF_Q_FILTER_VLAN_TAG_VALID	0x02
+	#define VFPF_Q_FILTER_SET_MAC		0x100	/* set/clear */
+	u8  mac[ETH_ALEN];
+	u16 vlan_tag;
+};
+
+/* configure queue filters */
+struct vfpf_set_q_filters_tlv {
+	struct vfpf_first_tlv first_tlv;
+
+	u32 flags;
+	#define VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED	0x01
+	#define VFPF_SET_Q_FILTERS_MULTICAST_CHANGED	0x02
+	#define VFPF_SET_Q_FILTERS_RX_MASK_CHANGED	0x04
+
+	u8 vf_qid;			/* index in hw_qid[] */
+	u8 n_mac_vlan_filters;
+	u8 n_multicast;
+	u8 padding;
+
+	#define PFVF_MAX_MAC_FILTERS                   16
+	#define PFVF_MAX_VLAN_FILTERS                  16
+	#define PFVF_MAX_FILTERS               (PFVF_MAX_MAC_FILTERS +\
+						PFVF_MAX_VLAN_FILTERS)
+	struct vfpf_q_mac_vlan_filter filters[PFVF_MAX_FILTERS];
+
+#define PFVF_MAX_MULTICAST_PER_VF              32
+	u8  multicast[PFVF_MAX_MULTICAST_PER_VF][ETH_ALEN];
+
+	u32 rx_mask;	/* see mask constants at the top of the file */
+};
+
 /* release the VF's acquired resources */
 struct vfpf_release_tlv {
 	struct vfpf_first_tlv	first_tlv;
@@ -143,6 +260,9 @@ struct tlv_buffer_size {
 union vfpf_tlvs {
 	struct vfpf_first_tlv		first_tlv;
 	struct vfpf_acquire_tlv		acquire;
+	struct vfpf_init_tlv		init;
+	struct vfpf_setup_q_tlv		setup_q;
+	struct vfpf_set_q_filters_tlv	set_q_filters;
 	struct vfpf_release_tlv         release;
 	struct channel_list_end_tlv     list_end;
 	struct tlv_buffer_size		tlv_buf_size;
@@ -160,6 +280,9 @@ union pfvf_tlvs {
 enum channel_tlvs {
 	   CHANNEL_TLV_NONE,
 	   CHANNEL_TLV_ACQUIRE,
+	   CHANNEL_TLV_INIT,
+	   CHANNEL_TLV_SETUP_Q,
+	   CHANNEL_TLV_SET_Q_FILTERS,
 	   CHANNEL_TLV_RELEASE,
 	   CHANNEL_TLV_LIST_END,
 	   CHANNEL_TLV_MAX
-- 
1.7.9.GIT

* [PATCH net-next v3 06/22] bnx2x: Add teardown_q and close to VF <-> PF channel
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (4 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 05/22] bnx2x: Add init, setup_q, set_mac to VF <-> PF channel Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 07/22] bnx2x: Support ndo_set_rxmode in VF driver Ariel Elior
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

When a VF is being closed, its queues are released via
'teardown_q' requests and the VF itself is closed with a
'close' request. These are essentially the unload counterparts
of 'setup_q' and 'init' from the load flow; the resulting
ordering is sketched below.
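
A standalone sketch of that ordering (the queue count and function
names below are illustrative, not the driver's):

#include <stdio.h>

#define NUM_QUEUES 4	/* illustrative; the real count comes from acquire */

static void vfpf_teardown_queue(int qidx)
{
	printf("teardown_q: release queue %d\n", qidx);	/* undoes setup_q */
}

static void vfpf_close_vf(void)
{
	printf("close: disable the VF function\n");	/* undoes init */
}

int main(void)
{
	int i;

	/* unload order: every queue goes down before the VF is closed */
	for (i = 0; i < NUM_QUEUES; i++)
		vfpf_teardown_queue(i);
	vfpf_close_vf();
	return 0;
}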

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h      |    2 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c  |   12 +++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c |   85 ++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h |   18 +++++
 4 files changed, 115 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 5c71002..4f1e06e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -2225,7 +2225,9 @@ int bnx2x_send_msg2pf(struct bnx2x *bp, u8 *done, dma_addr_t msg_mapping);
 int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count);
 int bnx2x_vfpf_release(struct bnx2x *bp);
 int bnx2x_vfpf_init(struct bnx2x *bp);
+void bnx2x_vfpf_close_vf(struct bnx2x *bp);
 int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx);
+int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx);
 int bnx2x_vfpf_set_mac(struct bnx2x *bp);
 int bnx2x_nic_load_analyze_req(struct bnx2x *bp, u32 load_code);
 /* Congestion management fairness mode */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 092cbee..4e87eec 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -2775,9 +2775,17 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 	/* wait till consumers catch up with producers in all queues */
 	bnx2x_drain_tx_queues(bp);
 
-	/* Cleanup the chip if needed */
-	if (unload_mode != UNLOAD_RECOVERY)
+	/* if VF, indicate to the PF that this function is going down (the PF
+	 * will delete sp elements and clear initializations)
+	 */
+	if (IS_VF(bp))
+		bnx2x_vfpf_close_vf(bp);
+
+	/* if this is a normal/close unload, need to clean up the chip */
+	else if (unload_mode != UNLOAD_RECOVERY)
 		bnx2x_chip_cleanup(bp, unload_mode, keep_link);
+
+	/* recovery */
 	else {
 		/* Send the UNLOAD_REQUEST to the MCP */
 		bnx2x_send_unload_req(bp, unload_mode);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index b0fa707..1317813 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -13480,6 +13480,55 @@ int bnx2x_vfpf_init(struct bnx2x *bp)
 	return 0;
 }
 
+/* CLOSE VF - opposite to INIT_VF */
+void bnx2x_vfpf_close_vf(struct bnx2x *bp)
+{
+	struct vfpf_close_tlv *req = &bp->vf2pf_mbox->req.close;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	int i, rc;
+	u32 vf_id;
+
+	/* If we haven't got a valid VF id, there is no sense in
+	 * continuing to send messages
+	 */
+	if (bnx2x_get_vf_id(bp, &vf_id))
+		goto free_irq;
+
+	/* Close the queues */
+	for_each_queue(bp, i)
+		bnx2x_vfpf_teardown_queue(bp, i);
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_CLOSE, sizeof(*req));
+
+	req->vf_id = vf_id;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+
+	if (rc)
+		BNX2X_ERR("Sending CLOSE failed. rc was: %d\n", rc);
+
+	else if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+		BNX2X_ERR("Sending CLOSE failed: pf response was %d\n",
+			  resp->hdr.status);
+
+free_irq:
+	/* Disable HW interrupts, NAPI */
+	bnx2x_netif_stop(bp, 0);
+	/* Delete all NAPI objects */
+	bnx2x_del_all_napi(bp);
+
+	/* Release IRQs */
+	bnx2x_free_irq(bp);
+}
+
 /* ask the pf to open a queue for the vf */
 int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx)
 {
@@ -13563,6 +13612,42 @@ int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx)
 	return rc;
 }
 
+int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx)
+{
+	struct vfpf_q_op_tlv *req = &bp->vf2pf_mbox->req.q_op;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_TEARDOWN_Q,
+			sizeof(*req));
+
+	req->vf_qid = qidx;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+
+	if (rc) {
+		BNX2X_ERR("Sending TEARDOWN for queue %d failed: %d\n", qidx,
+			  rc);
+		return rc;
+	}
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		BNX2X_ERR("TEARDOWN for queue %d failed: %d\n", qidx,
+			  resp->hdr.status);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 /* request pf to add a mac for the vf */
 int bnx2x_vfpf_set_mac(struct bnx2x *bp)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index ded37bc..116a17a 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -110,6 +110,13 @@ struct vfpf_acquire_tlv {
 	aligned_u64 bulletin_addr;
 };
 
+/* simple operation request on queue */
+struct vfpf_q_op_tlv {
+	struct vfpf_first_tlv	first_tlv;
+	u8 vf_qid;
+	u8 padding[3];
+};
+
 /* acquire response tlv - carries the allocated resources */
 struct pfvf_acquire_resp_tlv {
 	struct pfvf_tlv hdr;
@@ -246,6 +253,13 @@ struct vfpf_set_q_filters_tlv {
 	u32 rx_mask;	/* see mask constants at the top of the file */
 };
 
+/* close VF (disable VF) */
+struct vfpf_close_tlv {
+	struct vfpf_first_tlv   first_tlv;
+	u16			vf_id;  /* for debug */
+	u8 padding[2];
+};
+
 /* release the VF's acquired resources */
 struct vfpf_release_tlv {
 	struct vfpf_first_tlv	first_tlv;
@@ -261,6 +275,8 @@ union vfpf_tlvs {
 	struct vfpf_first_tlv		first_tlv;
 	struct vfpf_acquire_tlv		acquire;
 	struct vfpf_init_tlv		init;
+	struct vfpf_close_tlv		close;
+	struct vfpf_q_op_tlv		q_op;
 	struct vfpf_setup_q_tlv		setup_q;
 	struct vfpf_set_q_filters_tlv	set_q_filters;
 	struct vfpf_release_tlv         release;
@@ -283,6 +299,8 @@ enum channel_tlvs {
 	   CHANNEL_TLV_INIT,
 	   CHANNEL_TLV_SETUP_Q,
 	   CHANNEL_TLV_SET_Q_FILTERS,
+	   CHANNEL_TLV_TEARDOWN_Q,
+	   CHANNEL_TLV_CLOSE,
 	   CHANNEL_TLV_RELEASE,
 	   CHANNEL_TLV_LIST_END,
 	   CHANNEL_TLV_MAX
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 07/22] bnx2x: Support ndo_set_rxmode in VF driver
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (5 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 06/22] bnx2x: Add teardown_q and close " Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 08/22] bnx2x: VF fastpath Ariel Elior
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The VF driver uses the 'q_filter' request in the VF <-> PF channel to
have the PF configure the requested rx mode in the device. ndo_set_rx_mode
is called under the bottom-half lock, so sleeping until the response
arrives over the VF <-> PF channel is out of the question. For this reason
the VF driver returns from the ndo after scheduling a work item, which
in turn processes the rx mode request and sends the classification
information over the VF <-> PF channel, as sketched below.
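
For illustration, a minimal, standalone sketch of this deferral pattern
(my_priv, MY_FLAG_RX_MODE and my_sp_task are hypothetical names, not the
driver's own symbols, and the work item initialization is omitted):

#include <linux/netdevice.h>
#include <linux/workqueue.h>
#include <linux/bitops.h>

struct my_priv {
	unsigned long sp_state;
#define MY_FLAG_RX_MODE 0
	struct delayed_work sp_task;
};

static void my_sp_task(struct work_struct *work)
{
	struct my_priv *priv =
		container_of(work, struct my_priv, sp_task.work);

	/* sleepable context: safe to wait for the PF's response here */
	if (test_and_clear_bit(MY_FLAG_RX_MODE, &priv->sp_state))
		; /* ... send the request over the channel and wait ... */
}

static void my_ndo_set_rx_mode(struct net_device *dev)
{
	struct my_priv *priv = netdev_priv(dev);

	/* atomic context: just record the request and defer the I/O */
	smp_mb__before_clear_bit();
	set_bit(MY_FLAG_RX_MODE, &priv->sp_state);
	smp_mb__after_clear_bit();
	schedule_delayed_work(&priv->sp_task, 0);
}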

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h      |    5 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c |  174 +++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h |    7 +
 3 files changed, 180 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 4f1e06e..88cbdb9 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1191,6 +1191,8 @@ enum {
 	BNX2X_SP_RTNL_TX_TIMEOUT,
 	BNX2X_SP_RTNL_AFEX_F_UPDATE,
 	BNX2X_SP_RTNL_FAN_FAILURE,
+	BNX2X_SP_RTNL_VFPF_MCAST,
+	BNX2X_SP_RTNL_VFPF_STORM_RX_MODE,
 };
 
 
@@ -2229,6 +2231,9 @@ void bnx2x_vfpf_close_vf(struct bnx2x *bp);
 int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx);
 int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx);
 int bnx2x_vfpf_set_mac(struct bnx2x *bp);
+int bnx2x_vfpf_set_mcast(struct net_device *dev);
+int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp);
+
 int bnx2x_nic_load_analyze_req(struct bnx2x *bp, u32 load_code);
 /* Congestion management fairness mode */
 #define CMNG_FNS_NONE		0
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 1317813..6ac79ba 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -9406,6 +9406,19 @@ sp_rtnl_not_reset:
 		bnx2x_close(bp->dev);
 	}
 
+	if (test_and_clear_bit(BNX2X_SP_RTNL_VFPF_MCAST, &bp->sp_rtnl_state)) {
+		DP(BNX2X_MSG_SP,
+		   "sending set mcast vf pf channel message from rtnl sp-task\n");
+		bnx2x_vfpf_set_mcast(bp->dev);
+	}
+
+	if (test_and_clear_bit(BNX2X_SP_RTNL_VFPF_STORM_RX_MODE,
+			       &bp->sp_rtnl_state)) {
+		DP(BNX2X_MSG_SP,
+		   "sending set storm rx mode vf pf channel message from rtnl sp-task\n");
+		bnx2x_vfpf_storm_rx_mode(bp);
+	}
+
 sp_rtnl_exit:
 	rtnl_unlock();
 }
@@ -11485,12 +11498,25 @@ void bnx2x_set_rx_mode(struct net_device *dev)
 		  CHIP_IS_E1(bp)))
 		rx_mode = BNX2X_RX_MODE_ALLMULTI;
 	else {
-		/* some multicasts */
-		if (bnx2x_set_mc_list(bp) < 0)
-			rx_mode = BNX2X_RX_MODE_ALLMULTI;
+		if (IS_PF(bp)) {
+			/* some multicasts */
+			if (bnx2x_set_mc_list(bp) < 0)
+				rx_mode = BNX2X_RX_MODE_ALLMULTI;
 
-		if (bnx2x_set_uc_list(bp) < 0)
-			rx_mode = BNX2X_RX_MODE_PROMISC;
+			if (bnx2x_set_uc_list(bp) < 0)
+				rx_mode = BNX2X_RX_MODE_PROMISC;
+		} else {
+			/* configuring mcast for a vf involves sleeping (while
+			 * we wait for the pf's response). Since this function
+			 * is called from a non-sleepable context we must
+			 * schedule a work item for this purpose
+			 */
+			smp_mb__before_clear_bit();
+			set_bit(BNX2X_SP_RTNL_VFPF_MCAST,
+				&bp->sp_rtnl_state);
+			smp_mb__after_clear_bit();
+			schedule_delayed_work(&bp->sp_rtnl_task, 0);
+		}
 	}
 
 	bp->rx_mode = rx_mode;
@@ -11504,7 +11530,20 @@ void bnx2x_set_rx_mode(struct net_device *dev)
 		return;
 	}
 
-	bnx2x_set_storm_rx_mode(bp);
+	if (IS_PF(bp)) {
+		bnx2x_set_storm_rx_mode(bp);
+	} else {
+		/* configuring rx mode to storms in a vf involves sleeping
+		 * (while we wait for the pf's response). Since this function
+		 * is called from a non-sleepable context we must schedule
+		 * a work item for this purpose
+		 */
+		smp_mb__before_clear_bit();
+		set_bit(BNX2X_SP_RTNL_VFPF_STORM_RX_MODE,
+			&bp->sp_rtnl_state);
+		smp_mb__after_clear_bit();
+		schedule_delayed_work(&bp->sp_rtnl_task, 0);
+	}
 }
 
 /* called with rtnl_lock */
@@ -13690,3 +13729,126 @@ int bnx2x_vfpf_set_mac(struct bnx2x *bp)
 
 	return 0;
 }
+
+int bnx2x_vfpf_set_mcast(struct net_device *dev)
+{
+	struct bnx2x *bp = netdev_priv(dev);
+	struct vfpf_set_q_filters_tlv *req = &bp->vf2pf_mbox->req.set_q_filters;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	int rc, i = 0;
+	struct netdev_hw_addr *ha;
+
+	if (bp->state != BNX2X_STATE_OPEN) {
+		DP(NETIF_MSG_IFUP, "state is %x, returning\n", bp->state);
+		return -EINVAL;
+	}
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SET_Q_FILTERS,
+			sizeof(*req));
+
+	/* Get Rx mode requested */
+	DP(NETIF_MSG_IFUP, "dev->flags = %x\n", dev->flags);
+
+	netdev_for_each_mc_addr(ha, dev) {
+		/* bail out before overflowing the request buffer - we
+		 * support at most PFVF_MAX_MULTICAST_PER_VF addresses
+		 */
+		if (i >= PFVF_MAX_MULTICAST_PER_VF) {
+			DP(NETIF_MSG_IFUP,
+			   "VF supports no more than %d multicast MAC addresses\n",
+			   PFVF_MAX_MULTICAST_PER_VF);
+			return -EINVAL;
+		}
+
+		DP(NETIF_MSG_IFUP, "Adding mcast MAC: %pM\n",
+		   bnx2x_mc_addr(ha));
+		memcpy(req->multicast[i], bnx2x_mc_addr(ha), ETH_ALEN);
+		i++;
+	}
+
+	req->n_multicast = i;
+	req->flags |= VFPF_SET_Q_FILTERS_MULTICAST_CHANGED;
+
+	req->vf_qid = 0;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+	if (rc) {
+		BNX2X_ERR("Sending a message failed: %d\n", rc);
+		return rc;
+	}
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		BNX2X_ERR("Set Rx mode/multicast failed: %d\n",
+			  resp->hdr.status);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp)
+{
+	int mode = bp->rx_mode;
+	struct vfpf_set_q_filters_tlv *req = &bp->vf2pf_mbox->req.set_q_filters;
+	struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
+	int rc;
+
+	/* clear mailbox and prep first tlv */
+	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SET_Q_FILTERS,
+			sizeof(*req));
+
+	DP(NETIF_MSG_IFUP, "Rx mode is %d\n", mode);
+
+	switch (mode) {
+	case BNX2X_RX_MODE_NONE: /* no Rx */
+		req->rx_mask = VFPF_RX_MASK_ACCEPT_NONE;
+		break;
+	case BNX2X_RX_MODE_NORMAL:
+		req->rx_mask = VFPF_RX_MASK_ACCEPT_MATCHED_MULTICAST;
+		req->rx_mask |= VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST;
+		req->rx_mask |= VFPF_RX_MASK_ACCEPT_BROADCAST;
+		break;
+	case BNX2X_RX_MODE_ALLMULTI:
+		req->rx_mask = VFPF_RX_MASK_ACCEPT_ALL_MULTICAST;
+		req->rx_mask |= VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST;
+		req->rx_mask |= VFPF_RX_MASK_ACCEPT_BROADCAST;
+		break;
+	case BNX2X_RX_MODE_PROMISC:
+		req->rx_mask = VFPF_RX_MASK_ACCEPT_ALL_UNICAST;
+		req->rx_mask |= VFPF_RX_MASK_ACCEPT_ALL_MULTICAST;
+		req->rx_mask |= VFPF_RX_MASK_ACCEPT_BROADCAST;
+		break;
+	default:
+		BNX2X_ERR("BAD rx mode (%d)\n", mode);
+		return -EINVAL;
+	}
+
+	req->flags |= VFPF_SET_Q_FILTERS_RX_MASK_CHANGED;
+	req->vf_qid = 0;
+
+	/* add list termination tlv */
+	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	/* output tlvs list */
+	bnx2x_dp_tlv_list(bp, req);
+
+	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
+	if (rc) {
+		BNX2X_ERR("Sending a message failed: %d\n", rc);
+		return rc;
+	}
+
+	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+		BNX2X_ERR("Set Rx mode failed: %d\n", resp->hdr.status);
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index 116a17a..a289e59 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -54,6 +54,13 @@ struct hw_sb_info {
 #define VFPF_QUEUE_DROP_TTL0		(1 << 2)
 #define VFPF_QUEUE_DROP_UDP_CS_ERR	(1 << 3)
 
+#define VFPF_RX_MASK_ACCEPT_NONE		0x00000000
+#define VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST	0x00000001
+#define VFPF_RX_MASK_ACCEPT_MATCHED_MULTICAST	0x00000002
+#define VFPF_RX_MASK_ACCEPT_ALL_UNICAST		0x00000004
+#define VFPF_RX_MASK_ACCEPT_ALL_MULTICAST	0x00000008
+#define VFPF_RX_MASK_ACCEPT_BROADCAST		0x00000010
+
 enum {
 	PFVF_STATUS_WAITING = 0,
 	PFVF_STATUS_SUCCESS,
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 08/22] bnx2x: VF fastpath
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (6 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 07/22] bnx2x: Support ndo_set_rxmode in VF driver Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 09/22] bnx2x: Allocate VF database in PF when VFs are present Ariel Elior
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

When the VF driver is transmitting, it must supply the correct MAC
address in the parsing BD. The firmware uses it for validation and
enforcement and also for tx-switching.
Also refactor the interrupt ack flow to allow for the different
addresses at which the IGU appears in the PF BAR vs. the VF BAR, as
sketched below.
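
A minimal sketch of the refactored ack scheme (the names and offsets
below are hypothetical placeholders, not the real register map): the
BAR base is resolved once at load time, so the fastpath ack needs no
PF-vs-VF branch.

#include <linux/types.h>

#define MY_PF_IGU_BASE		0x00440000	/* hypothetical */
#define MY_VF_IGU_BASE		0x00000000	/* hypothetical */
#define MY_INT_ACK_BASE		0x0400		/* hypothetical */

struct my_dev {
	u32 igu_base_addr;	/* resolved once at load time */
};

static void my_setup_igu_base(struct my_dev *d, bool is_vf)
{
	/* the IGU sits at different offsets in the PF and VF BARs */
	d->igu_base_addr = is_vf ? MY_VF_IGU_BASE : MY_PF_IGU_BASE;
}

static u32 my_igu_ack_addr(struct my_dev *d, u8 igu_sb_id)
{
	/* fastpath: just base plus the per-status-block offset */
	return d->igu_base_addr + (MY_INT_ACK_BASE + igu_sb_id) * 8;
}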

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c  |   18 +++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h  |   68 +++++++++++-----------
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c |   13 +----
 3 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 4e87eec..e1bf928 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -3460,8 +3460,19 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		    cpu_to_le16(vlan_tx_tag_get(skb));
 		tx_start_bd->bd_flags.as_bitfield |=
 		    (X_ETH_OUTBAND_VLAN << ETH_TX_BD_FLAGS_VLAN_MODE_SHIFT);
-	} else
-		tx_start_bd->vlan_or_ethertype = cpu_to_le16(pkt_prod);
+	} else {
+		/* when transmitting in a vf, the start bd must hold the
+		 * ethertype for the fw to enforce it
+		 */
+		if (IS_VF(bp)) {
+			tx_start_bd->vlan_or_ethertype =
+				cpu_to_le16(ntohs(eth->h_proto));
+		} else {
+			/* used by FW for packet accounting */
+			tx_start_bd->vlan_or_ethertype = cpu_to_le16(pkt_prod);
+		}
+	}
 
 	/* turn on parsing and get a BD */
 	bd_prod = TX_BD(NEXT_TX_IDX(bd_prod));
@@ -3477,7 +3488,8 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			hlen = bnx2x_set_pbd_csum_e2(bp, skb,
 						     &pbd_e2_parsing_data,
 						     xmit_type);
-		if (IS_MF_SI(bp)) {
+
+		if (IS_MF_SI(bp) || IS_VF(bp)) {
 			/*
 			 * fill in the MAC addresses in the PBD - for local
 			 * switching
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 91e432d..6b0add2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -499,6 +499,39 @@ int bnx2x_setup_tc(struct net_device *dev, u8 num_tc);
 /* select_queue callback */
 u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb);
 
+static inline void bnx2x_update_rx_prod(struct bnx2x *bp,
+					struct bnx2x_fastpath *fp,
+					u16 bd_prod, u16 rx_comp_prod,
+					u16 rx_sge_prod)
+{
+	struct ustorm_eth_rx_producers rx_prods = {0};
+	u32 i;
+
+	/* Update producers */
+	rx_prods.bd_prod = bd_prod;
+	rx_prods.cqe_prod = rx_comp_prod;
+	rx_prods.sge_prod = rx_sge_prod;
+
+	/* Make sure that the BD and SGE data is updated before updating the
+	 * producers since FW might read the BD/SGE right after the producer
+	 * is updated.
+	 * This is only applicable for weak-ordered memory model archs such
+	 * as IA-64. The following barrier is also mandatory since the FW
+	 * assumes BDs must have buffers.
+	 */
+	wmb();
+
+	for (i = 0; i < sizeof(rx_prods)/4; i++)
+		REG_WR(bp, fp->ustorm_rx_prods_offset + i*4,
+		       ((u32 *)&rx_prods)[i]);
+
+	mmiowb(); /* keep prod updates ordered */
+
+	DP(NETIF_MSG_RX_STATUS,
+	   "queue[%d]:  wrote  bd_prod %u  cqe_prod %u  sge_prod %u\n",
+	   fp->index, bd_prod, rx_comp_prod, rx_sge_prod);
+}
+
 /* reload helper */
 int bnx2x_reload_if_running(struct net_device *dev);
 
@@ -507,9 +540,6 @@ int bnx2x_change_mac_addr(struct net_device *dev, void *p);
 /* NAPI poll Rx part */
 int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget);
 
-void bnx2x_update_rx_prod(struct bnx2x *bp, struct bnx2x_fastpath *fp,
-			u16 bd_prod, u16 rx_comp_prod, u16 rx_sge_prod);
-
 /* NAPI poll Tx part */
 int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata);
 
@@ -612,38 +642,6 @@ static inline void bnx2x_update_fpsb_idx(struct bnx2x_fastpath *fp)
 	fp->fp_hc_idx = fp->sb_running_index[SM_RX_ID];
 }
 
-static inline void bnx2x_update_rx_prod_gen(struct bnx2x *bp,
-			struct bnx2x_fastpath *fp, u16 bd_prod,
-			u16 rx_comp_prod, u16 rx_sge_prod, u32 start)
-{
-	struct ustorm_eth_rx_producers rx_prods = {0};
-	u32 i;
-
-	/* Update producers */
-	rx_prods.bd_prod = bd_prod;
-	rx_prods.cqe_prod = rx_comp_prod;
-	rx_prods.sge_prod = rx_sge_prod;
-
-	/*
-	 * Make sure that the BD and SGE data is updated before updating the
-	 * producers since FW might read the BD/SGE right after the producer
-	 * is updated.
-	 * This is only applicable for weak-ordered memory model archs such
-	 * as IA-64. The following barrier is also mandatory since FW will
-	 * assumes BDs must have buffers.
-	 */
-	wmb();
-
-	for (i = 0; i < sizeof(rx_prods)/4; i++)
-		REG_WR(bp, start + i*4, ((u32 *)&rx_prods)[i]);
-
-	mmiowb(); /* keep prod updates ordered */
-
-	DP(NETIF_MSG_RX_STATUS,
-	   "queue[%d]:  wrote  bd_prod %u  cqe_prod %u  sge_prod %u\n",
-	   fp->index, bd_prod, rx_comp_prod, rx_sge_prod);
-}
-
 static inline void bnx2x_igu_ack_sb_gen(struct bnx2x *bp, u8 igu_sb_id,
 					u8 segment, u16 index, u8 op,
 					u8 update, u32 igu_addr)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 6ac79ba..977d06a 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -1697,15 +1697,6 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
 	return;
 }
 
-void bnx2x_update_rx_prod(struct bnx2x *bp, struct bnx2x_fastpath *fp,
-			u16 bd_prod, u16 rx_comp_prod, u16 rx_sge_prod)
-{
-	u32 start = BAR_USTRORM_INTMEM + fp->ustorm_rx_prods_offset;
-
-	bnx2x_update_rx_prod_gen(bp, fp, bd_prod, rx_comp_prod, rx_sge_prod,
-				 start);
-}
-
 irqreturn_t bnx2x_interrupt(int irq, void *dev_instance)
 {
 	struct bnx2x *bp = netdev_priv(dev_instance);
@@ -4620,8 +4611,8 @@ static void bnx2x_attn_int(struct bnx2x *bp)
 void bnx2x_igu_ack_sb(struct bnx2x *bp, u8 igu_sb_id, u8 segment,
 		      u16 index, u8 op, u8 update)
 {
-	u32 igu_addr = BAR_IGU_INTMEM + (IGU_CMD_INT_ACK_BASE + igu_sb_id)*8;
-
+	u32 igu_addr = bp->igu_base_addr + (IGU_CMD_INT_ACK_BASE + igu_sb_id)*8;
+
 	bnx2x_igu_ack_sb_gen(bp, igu_sb_id, segment, index, op, update,
 			     igu_addr);
 }
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 09/22] bnx2x: Allocate VF database in PF when VFs are present
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (7 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 08/22] bnx2x: VF fastpath Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 10/22] bnx2x: Prepare device and initialize VF database Ariel Elior
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

When a PF determines that it may have to manage SR-IOV VFs, it
allocates a database for this purpose. The database is intended to
keep track of the VF state, the resources allocated for each VF
(queues, interrupt vectors, etc.) and the state of each VF's queues.
When a VF loads, the database is updated accordingly.
When a VF closes, the database is consulted to determine which
resources need to be released (close queues against the device,
reclaim interrupt vectors, etc.). A short sketch of a database
lookup follows.
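
A minimal sketch of such a lookup, built on the accessors this patch
introduces (find_vf is a hypothetical helper name; it mirrors
bnx2x_vf_idx_by_abs_fid() from the patch body):

static struct bnx2x_virtf *find_vf(struct bnx2x *bp, u16 abs_vfid)
{
	int idx;

	/* scan the vf array for the entry owning this absolute fid */
	for_each_vf(bp, idx)
		if (bnx2x_vf(bp, idx, abs_vfid) == abs_vfid)
			return BP_VF(bp, idx);

	return NULL;	/* no such VF in the database */
}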

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/Makefile      |    2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |    8 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |   31 ++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h   |    9 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  302 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |  221 +++++++++++++++
 6 files changed, 569 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c

diff --git a/drivers/net/ethernet/broadcom/bnx2x/Makefile b/drivers/net/ethernet/broadcom/bnx2x/Makefile
index d862ea6..2ef6803 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/Makefile
+++ b/drivers/net/ethernet/broadcom/bnx2x/Makefile
@@ -4,4 +4,4 @@
 
 obj-$(CONFIG_BNX2X) += bnx2x.o
 
-bnx2x-objs := bnx2x_main.o bnx2x_link.o bnx2x_cmn.o bnx2x_ethtool.o bnx2x_stats.o bnx2x_dcb.o bnx2x_sp.o bnx2x_vfpf.o
+bnx2x-objs := bnx2x_main.o bnx2x_link.o bnx2x_cmn.o bnx2x_ethtool.o bnx2x_stats.o bnx2x_dcb.o bnx2x_sp.o bnx2x_vfpf.o bnx2x_sriov.o
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 88cbdb9..254f9ee 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -13,9 +13,12 @@
 
 #ifndef BNX2X_H
 #define BNX2X_H
+
+#include <linux/pci.h>
 #include <linux/netdevice.h>
 #include <linux/dma-mapping.h>
 #include <linux/types.h>
+#include <linux/pci_regs.h>
 
 /* compilation time flags */
 
@@ -966,6 +969,7 @@ extern struct workqueue_struct *bnx2x_wq;
 #define BNX2X_MAX_NUM_OF_VFS	64
 #define BNX2X_VF_CID_WND	0
 #define BNX2X_CIDS_PER_VF	(1 << BNX2X_VF_CID_WND)
+#define BNX2X_FIRST_VF_CID	256
 #define BNX2X_VF_CIDS		(BNX2X_MAX_NUM_OF_VFS * BNX2X_CIDS_PER_VF)
 #define BNX2X_VF_ID_INVALID	0xFF
 
@@ -1117,6 +1121,7 @@ struct hw_context {
 /* forward */
 struct bnx2x_ilt;
 
+struct bnx2x_vfdb;
 
 enum bnx2x_recovery_state {
 	BNX2X_RECOVERY_DONE,
@@ -1606,6 +1611,9 @@ struct bnx2x {
 	char			fw_ver[32];
 	const struct firmware	*firmware;
 
+	struct bnx2x_vfdb	*vfdb;
+#define IS_SRIOV(bp)		((bp)->vfdb)
+
 	/* DCB support on/off */
 	u16 dcb_state;
 #define BNX2X_DCB_STATE_OFF			0
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 977d06a..569acdc 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -125,8 +125,6 @@ static int debug;
 module_param(debug, int, 0);
 MODULE_PARM_DESC(debug, " Default debug msglevel");
 
-
-
 struct workqueue_struct *bnx2x_wq;
 
 enum bnx2x_board_type {
@@ -2209,7 +2207,6 @@ u8 bnx2x_link_test(struct bnx2x *bp, u8 is_serdes)
 	return rc;
 }
 
-
 /* Calculates the sum of vn_min_rates.
    It's needed for further normalizing of the min_rates.
    Returns:
@@ -7273,12 +7270,21 @@ static int bnx2x_init_hw_func(struct bnx2x *bp)
 	ilt = BP_ILT(bp);
 	cdu_ilt_start = ilt->clients[ILT_CLIENT_CDU].start;
 
+	if (IS_SRIOV(bp))
+		cdu_ilt_start += BNX2X_FIRST_VF_CID/ILT_PAGE_CIDS;
+	cdu_ilt_start = bnx2x_iov_init_ilt(bp, cdu_ilt_start);
+
+	/* since BNX2X_FIRST_VF_CID > 0 the PF L2 cids precede
+	 * those of the VFs, so the start line should be reset
+	 */
+	cdu_ilt_start = ilt->clients[ILT_CLIENT_CDU].start;
 	for (i = 0; i < L2_ILT_LINES(bp); i++) {
 		ilt->lines[cdu_ilt_start + i].page = bp->context[i].vcxt;
 		ilt->lines[cdu_ilt_start + i].page_mapping =
 			bp->context[i].cxt_mapping;
 		ilt->lines[cdu_ilt_start + i].size = bp->context[i].size;
 	}
+
 	bnx2x_ilt_init_op(bp, INITOP_SET);
 
 	if (!CONFIGURE_NIC_MODE(bp)) {
@@ -7884,6 +7890,8 @@ int bnx2x_set_int_mode(struct bnx2x *bp)
 /* must be called prior to any HW initializations */
 static inline u16 bnx2x_cid_ilt_lines(struct bnx2x *bp)
 {
+	if (IS_SRIOV(bp))
+		return (BNX2X_FIRST_VF_CID + BNX2X_VF_CIDS)/ILT_PAGE_CIDS;
 	return L2_ILT_LINES(bp);
 }
 
@@ -12112,8 +12120,12 @@ static int bnx2x_set_qm_cid_count(struct bnx2x *bp)
 {
 	int cid_count = BNX2X_L2_MAX_CID(bp);
 
+	if (IS_SRIOV(bp))
+		cid_count += BNX2X_VF_CIDS;
+
 	if (CNIC_SUPPORT(bp))
 		cid_count += CNIC_CID_MAX;
+
 	return roundup(cid_count, QM_CID_ROUND);
 }
 
@@ -12319,6 +12331,16 @@ static int bnx2x_init_one(struct pci_dev *pdev,
 			goto init_one_exit;
 	}
 
+	/* Enable SRIOV if the capability is found in configuration space.
+	 * Once the generic SR-IOV framework makes it in from the
+	 * pci tree this will be revised to allow dynamic control
+	 * over the number of VFs. Right now, change the num-of-VFs
+	 * param below to enable SR-IOV.
+	 */
+	rc = bnx2x_iov_init_one(bp, int_mode, 0 /* num vfs */);
+	if (rc)
+		goto init_one_exit;
+
 	/* calc qm_cid_count */
 	bp->qm_cid_count = bnx2x_set_qm_cid_count(bp);
 	BNX2X_DEV_INFO("qm_cid_count %d\n", bp->qm_cid_count);
@@ -12442,6 +12464,9 @@ static void bnx2x_remove_one(struct pci_dev *pdev)
 
 	/* Make sure RESET task is not scheduled before continuing */
 	cancel_delayed_work_sync(&bp->sp_rtnl_task);
+
+	bnx2x_iov_remove_one(bp);
+
 	/* send message via vfpf channel to release the resources of this vf */
 	if (IS_VF(bp))
 		bnx2x_vfpf_release(bp);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 3f01526..00e4398 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -6305,6 +6305,15 @@
 #define PCI_PM_DATA_B					0x414
 #define PCI_ID_VAL1					0x434
 #define PCI_ID_VAL2					0x438
+#define GRC_CONFIG_REG_PF_INIT_VF		0x624
+#define GRC_CR_PF_INIT_VF_PF_FIRST_VF_NUM_MASK	0xf
+/* First VF_NUM for PF is encoded in this register.
+ * The number of VFs assigned to a PF is assumed to be a multiple of 8.
+ * Software should program these bits based on the total number of VFs
+ * programmed for each PF.
+ * Since registers from 0x000-0x7ff are split across functions, each PF will
+ * have the same location for the same 4 bits.
+ */
 
 #define PXPCS_TL_CONTROL_5		    0x814
 #define PXPCS_TL_CONTROL_5_UNKNOWNTYPE_ERR_ATTN    (1 << 29) /*WC*/
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
new file mode 100644
index 0000000..42d4c4d
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -0,0 +1,302 @@
+/* bnx2x_sriov.c: Broadcom Everest network driver.
+ *
+ * Copyright 2009-2012 Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2, available
+ * at http://www.gnu.org/licenses/old-licenses/gpl-2.0.html (the "GPL").
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a
+ * license other than the GPL, without Broadcom's express prior written
+ * consent.
+ *
+ * Maintained by: Eilon Greenstein <eilong@broadcom.com>
+ * Written by: Shmulik Ravid <shmulikr@broadcom.com>
+ *	       Ariel Elior <ariele@broadcom.com>
+ *
+ */
+#include "bnx2x.h"
+#include "bnx2x_init.h"
+#include "bnx2x_sriov.h"
+int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid)
+{
+	int idx;
+	for_each_vf(bp, idx)
+		if (bnx2x_vf(bp, idx, abs_vfid) == abs_vfid)
+			break;
+	return idx;
+}
+
+static
+struct bnx2x_virtf *bnx2x_vf_by_abs_fid(struct bnx2x *bp, u16 abs_vfid)
+{
+	u16 idx = (u16)bnx2x_vf_idx_by_abs_fid(bp, abs_vfid);
+	return (idx < BNX2X_NR_VIRTFN(bp)) ? BP_VF(bp, idx) : NULL;
+}
+
+static int bnx2x_ari_enabled(struct pci_dev *dev)
+{
+	return dev->bus->self && dev->bus->self->ari_enabled;
+}
+
+static void
+bnx2x_vf_set_igu_info(struct bnx2x *bp, u8 igu_sb_id, u8 abs_vfid)
+{
+	struct bnx2x_virtf *vf = bnx2x_vf_by_abs_fid(bp, abs_vfid);
+	if (vf) {
+		if (!vf_sb_count(vf))
+			vf->igu_base_id = igu_sb_id;
+		++vf_sb_count(vf);
+	}
+}
+
+static void
+bnx2x_get_vf_igu_cam_info(struct bnx2x *bp)
+{
+	int sb_id;
+	u32 val;
+	u8 fid;
+
+	/* IGU in normal mode - read CAM */
+	for (sb_id = 0; sb_id < IGU_REG_MAPPING_MEMORY_SIZE; sb_id++) {
+		val = REG_RD(bp, IGU_REG_MAPPING_MEMORY + sb_id * 4);
+		if (!(val & IGU_REG_MAPPING_MEMORY_VALID))
+			continue;
+		fid = GET_FIELD((val), IGU_REG_MAPPING_MEMORY_FID);
+		if (!(fid & IGU_FID_ENCODE_IS_PF))
+			bnx2x_vf_set_igu_info(bp, sb_id,
+					      (fid & IGU_FID_VF_NUM_MASK));
+
+		DP(BNX2X_MSG_IOV, "%s[%d], igu_sb_id=%d, msix=%d\n",
+		   ((fid & IGU_FID_ENCODE_IS_PF) ? "PF" : "VF"),
+		   ((fid & IGU_FID_ENCODE_IS_PF) ? (fid & IGU_FID_PF_NUM_MASK) :
+		   (fid & IGU_FID_VF_NUM_MASK)), sb_id,
+		   GET_FIELD((val), IGU_REG_MAPPING_MEMORY_VECTOR));
+	}
+}
+
+static void __bnx2x_iov_free_vfdb(struct bnx2x *bp)
+{
+	if (bp->vfdb) {
+		kfree(bp->vfdb->vfqs);
+		kfree(bp->vfdb->vfs);
+		kfree(bp->vfdb);
+	}
+	bp->vfdb = NULL;
+}
+
+static int bnx2x_sriov_pci_cfg_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
+{
+	int pos;
+	struct pci_dev *dev = bp->pdev;
+
+	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);
+	if (!pos) {
+		BNX2X_ERR("failed to find SRIOV capability in device\n");
+		return -ENODEV;
+	}
+
+	iov->pos = pos;
+	DP(BNX2X_MSG_IOV, "sriov ext pos %d\n", pos);
+	pci_read_config_word(dev, pos + PCI_SRIOV_CTRL, &iov->ctrl);
+	pci_read_config_word(dev, pos + PCI_SRIOV_TOTAL_VF, &iov->total);
+	pci_read_config_word(dev, pos + PCI_SRIOV_INITIAL_VF, &iov->initial);
+	pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &iov->offset);
+	pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &iov->stride);
+	pci_read_config_dword(dev, pos + PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
+	pci_read_config_dword(dev, pos + PCI_SRIOV_CAP, &iov->cap);
+	pci_read_config_byte(dev, pos + PCI_SRIOV_FUNC_LINK, &iov->link);
+
+	return 0;
+}
+
+static int bnx2x_sriov_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
+{
+	u32 val;
+
+	/* read the SRIOV capability structure
+	 * The fields can be read via configuration read or
+	 * directly from the device (starting at offset PCICFG_OFFSET)
+	 */
+	if (bnx2x_sriov_pci_cfg_info(bp, iov))
+		return -ENODEV;
+
+	/* get the number of SRIOV bars */
+	iov->nres = 0;
+
+	/* read the first_vfid */
+	val = REG_RD(bp, PCICFG_OFFSET + GRC_CONFIG_REG_PF_INIT_VF);
+	iov->first_vf_in_pf = ((val & GRC_CR_PF_INIT_VF_PF_FIRST_VF_NUM_MASK)
+			       * 8) - (BNX2X_MAX_NUM_OF_VFS * BP_PATH(bp));
+
+	DP(BNX2X_MSG_IOV,
+	   "IOV info[%d]: first vf %d, nres %d, cap 0x%x, ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d, stride %d, page size 0x%x\n",
+	   BP_FUNC(bp),
+	   iov->first_vf_in_pf, iov->nres, iov->cap, iov->ctrl, iov->total,
+	   iov->initial, iov->nr_virtfn, iov->offset, iov->stride, iov->pgsz);
+
+	return 0;
+}
+
+static u8 bnx2x_iov_get_max_queue_count(struct bnx2x *bp)
+{
+	int i;
+	u8 queue_count = 0;
+
+	if (IS_SRIOV(bp))
+		for_each_vf(bp, i)
+			queue_count += bnx2x_vf(bp, i, alloc_resc.num_sbs);
+
+	return queue_count;
+}
+
+/* must be called after PF bars are mapped */
+int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
+				 int num_vfs_param)
+{
+	int err, i, qcount;
+	struct bnx2x_sriov *iov;
+	struct pci_dev *dev = bp->pdev;
+
+	bp->vfdb = NULL;
+
+	/* verify sriov capability is present in configuration space */
+	if (!pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV)) {
+		DP(BNX2X_MSG_IOV, "no sriov - capability not found\n");
+		return 0;
+	}
+
+	/* verify is pf */
+	if (IS_VF(bp))
+		return 0;
+
+	/* verify chip revision */
+	if (CHIP_IS_E1x(bp))
+		return 0;
+
+	/* check if SRIOV support is turned off */
+	if (!num_vfs_param)
+		return 0;
+
+	/* SRIOV assumes that num of PF CIDs < BNX2X_FIRST_VF_CID */
+	if (BNX2X_L2_MAX_CID(bp) >= BNX2X_FIRST_VF_CID) {
+		BNX2X_ERR("PF cids %d are overspilling into vf space (starts at %d). Abort SRIOV",
+			  BNX2X_L2_MAX_CID(bp), BNX2X_FIRST_VF_CID);
+		return 0;
+	}
+
+	/* SRIOV can be enabled only with MSIX */
+	if (int_mode_param == BNX2X_INT_MODE_MSI ||
+	    int_mode_param == BNX2X_INT_MODE_INTX) {
+		BNX2X_ERR("Forced MSI/INTx mode is incompatible with SRIOV\n");
+		return 0;
+	}
+
+	/* verify ari is enabled */
+	if (!bnx2x_ari_enabled(bp->pdev)) {
+		BNX2X_ERR("ARI not supported, SRIOV can not be enabled\n");
+		return 0;
+	}
+
+	/* verify igu is in normal mode */
+	if (CHIP_INT_MODE_IS_BC(bp)) {
+		BNX2X_ERR("IGU not normal mode,  SRIOV can not be enabled\n");
+		return 0;
+	}
+
+	/* allocate the vfs database */
+	bp->vfdb = kzalloc(sizeof(*(bp->vfdb)), GFP_KERNEL);
+	if (!bp->vfdb) {
+		BNX2X_ERR("failed to allocate vf database\n");
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	/* get the sriov info - Linux already collected all the pertinent
+	 * information, however the sriov structure is for the private use
+	 * of the pci module. Also we want this information regardless
+	 * of the hypervisor.
+	 */
+	iov = &(bp->vfdb->sriov);
+	err = bnx2x_sriov_info(bp, iov);
+	if (err)
+		goto failed;
+
+	/* SR-IOV capability was enabled but there are no VFs */
+	if (iov->total == 0)
+		goto failed;
+
+	/* calculate the actual number of VFs */
+	iov->nr_virtfn = min_t(u16, iov->total, (u16)num_vfs_param);
+
+	/* allocate the vf array */
+	bp->vfdb->vfs = kzalloc(sizeof(struct bnx2x_virtf) *
+				BNX2X_NR_VIRTFN(bp), GFP_KERNEL);
+	if (!bp->vfdb->vfs) {
+		BNX2X_ERR("failed to allocate vf array\n");
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	/* Initial VF init - index and abs_vfid - nr_virtfn must be set */
+	for_each_vf(bp, i) {
+		bnx2x_vf(bp, i, index) = i;
+		bnx2x_vf(bp, i, abs_vfid) = iov->first_vf_in_pf + i;
+		bnx2x_vf(bp, i, state) = VF_FREE;
+		INIT_LIST_HEAD(&bnx2x_vf(bp, i, op_list_head));
+		mutex_init(&bnx2x_vf(bp, i, op_mutex));
+		bnx2x_vf(bp, i, op_current) = CHANNEL_TLV_NONE;
+	}
+
+	/* re-read the IGU CAM for VFs - index and abs_vfid must be set */
+	bnx2x_get_vf_igu_cam_info(bp);
+
+	/* get the total queue count and allocate the global queue arrays */
+	qcount = bnx2x_iov_get_max_queue_count(bp);
+
+	/* allocate the queue arrays for all VFs */
+	bp->vfdb->vfqs = kzalloc(qcount * sizeof(struct bnx2x_vf_queue),
+				 GFP_KERNEL);
+	if (!bp->vfdb->vfqs) {
+		BNX2X_ERR("failed to allocate vf queue array\n");
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	return 0;
+failed:
+	DP(BNX2X_MSG_IOV, "Failed err=%d\n", err);
+	__bnx2x_iov_free_vfdb(bp);
+	return err;
+}
+
+/* called by bnx2x_init_hw_func, returns the next ilt line */
+int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line)
+{
+	int i;
+	struct bnx2x_ilt *ilt = BP_ILT(bp);
+
+	if (!IS_SRIOV(bp))
+		return line;
+
+	/* set vfs ilt lines */
+	for (i = 0; i < BNX2X_VF_CIDS/ILT_PAGE_CIDS; i++) {
+		struct hw_dma *hw_cxt = BP_VF_CXT_PAGE(bp, i);
+		ilt->lines[line+i].page = hw_cxt->addr;
+		ilt->lines[line+i].page_mapping = hw_cxt->mapping;
+		ilt->lines[line+i].size = hw_cxt->size; /* doesn't matter */
+	}
+	return line + i;
+}
+
+void bnx2x_iov_remove_one(struct bnx2x *bp)
+{
+	/* if SRIOV is not enabled there's nothing to do */
+	if (!IS_SRIOV(bp))
+		return;
+
+	/* free vf database */
+	__bnx2x_iov_free_vfdb(bp);
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 6d0df33..b9b0583 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -19,11 +19,232 @@
 #ifndef BNX2X_SRIOV_H
 #define BNX2X_SRIOV_H
 
+/* The bnx2x device structure holds vfdb structure described below.
+ * The VF array is indexed by the relative vfid.
+ */
+struct bnx2x_sriov {
+	u32 first_vf_in_pf;
+
+	/* standard SRIOV capability fields, mostly for debugging */
+	int pos;		/* capability position */
+	int nres;		/* number of resources */
+	u32 cap;		/* SR-IOV Capabilities */
+	u16 ctrl;		/* SR-IOV Control */
+	u16 total;		/* total VFs associated with the PF */
+	u16 initial;		/* initial VFs associated with the PF */
+	u16 nr_virtfn;		/* number of VFs available */
+	u16 offset;		/* first VF Routing ID offset */
+	u16 stride;		/* following VF stride */
+	u32 pgsz;		/* page size for BAR alignment */
+	u8 link;		/* Function Dependency Link */
+};
+
+/* bars */
+struct bnx2x_vf_bar {
+	u64 bar;
+	u32 size;
+};
+
+/* vf queue (used both for rx or tx) */
+struct bnx2x_vf_queue {
+	struct eth_context		*cxt;
+
+	/* MACs object */
+	struct bnx2x_vlan_mac_obj	mac_obj;
+
+	/* VLANs object */
+	struct bnx2x_vlan_mac_obj	vlan_obj;
+	atomic_t vlan_count;		/* 0 means vlan-0 is set ~ untagged */
+
+	/* Queue Slow-path State object */
+	struct bnx2x_queue_sp_obj	sp_obj;
+
+	u32 cid;
+	u16 index;
+	u16 sb_idx;
+};
+
+/* struct bnx2x_vfop_qctor_params - prepare queue construction parameters:
+ * q-init, q-setup and SB index
+ */
+struct bnx2x_vfop_qctor_params {
+	struct bnx2x_queue_state_params		qstate;
+	struct bnx2x_queue_setup_params		prep_qsetup;
+};
+
+/* VFOP parameters (one copy per VF) */
+union bnx2x_vfop_params {
+	struct bnx2x_vlan_mac_ramrod_params	vlan_mac;
+	struct bnx2x_rx_mode_ramrod_params	rx_mode;
+	struct bnx2x_mcast_ramrod_params	mcast;
+	struct bnx2x_config_rss_params		rss;
+	struct bnx2x_vfop_qctor_params		qctor;
+};
+
+/* forward */
+struct bnx2x_virtf;
+/* vf context */
+struct bnx2x_virtf {
+	u16 cfg_flags;
+#define VF_CFG_STATS		0x0001
+#define VF_CFG_FW_FC		0x0002
+#define VF_CFG_TPA		0x0004
+#define VF_CFG_INT_SIMD		0x0008
+#define VF_CACHE_LINE		0x0010
+
+	u8 state;
+#define VF_FREE		0	/* VF ready to be acquired holds no resc */
+#define VF_ACQUIRED	1	/* VF acquired, but not initialized */
+#define VF_ENABLED	2	/* VF Enabled */
+#define VF_RESET	3	/* VF FLR'd, pending cleanup */
+
+	/* non 0 during flr cleanup */
+	u8 flr_clnup_stage;
+#define VF_FLR_CLN	1	/* reclaim resources and do 'final cleanup'
+				 * sans the end-wait
+				 */
+#define VF_FLR_ACK	2	/* ACK flr notification */
+#define VF_FLR_EPILOG	3	/* wait for VF remnants to dissipate in the HW
+				 * ~ final cleanup' end wait
+				 */
+
+	/* dma */
+	dma_addr_t fw_stat_map;		/* valid iff VF_CFG_STATS */
+	dma_addr_t spq_map;
+	dma_addr_t bulletin_map;
+
+	/* Allocated resources counters. Before the VF is acquired, the
+	 * counters hold the following values:
+	 *
+	 * - xxq_count = 0 as the queues memory is not allocated yet.
+	 *
+	 * - sb_count  = The number of status blocks configured for this VF in
+	 *		 the IGU CAM. Initially read during probe.
+	 *
+	 * - xx_rules_count = The number of rules statically and equally
+	 *		      allocated for each VF, during PF load.
+	 */
+	struct vf_pf_resc_request	alloc_resc;
+#define vf_rxq_count(vf)		((vf)->alloc_resc.num_rxqs)
+#define vf_txq_count(vf)		((vf)->alloc_resc.num_txqs)
+#define vf_sb_count(vf)			((vf)->alloc_resc.num_sbs)
+#define vf_mac_rules_cnt(vf)		((vf)->alloc_resc.num_mac_filters)
+#define vf_vlan_rules_cnt(vf)		((vf)->alloc_resc.num_vlan_filters)
+#define vf_mc_rules_cnt(vf)		((vf)->alloc_resc.num_mc_filters)
+
+	u8 sb_count;	/* actual number of SBs */
+	u8 igu_base_id;	/* base igu status block id */
+
+	struct bnx2x_vf_queue	*vfqs;
+#define bnx2x_vfq(vf, nr, var)	((vf)->vfqs[(nr)].var)
+
+	u8 index;	/* index in the vf array */
+	u8 abs_vfid;
+	u8 sp_cl_id;
+	u32 error;	/* 0 means all's-well */
+
+	/* BDF */
+	unsigned int bus;
+	unsigned int devfn;
+
+	/* bars */
+	struct bnx2x_vf_bar bars[PCI_SRIOV_NUM_BARS];
+
+	/* set-mac ramrod state 1-pending, 0-done */
+	unsigned long	filter_state;
+
+	/* leading rss client id ~~ the client id of the first rxq, must be
+	 * set for each txq.
+	 */
+	int leading_rss;
+
+	/* MCAST object */
+	struct bnx2x_mcast_obj		mcast_obj;
+
+	/* RSS configuration object */
+	struct bnx2x_rss_config_obj     rss_conf_obj;
+
+	/* slow-path operations */
+	atomic_t			op_in_progress;
+	int				op_rc;
+	bool				op_wait_blocking;
+	struct list_head		op_list_head;
+	union bnx2x_vfop_params		op_params;
+	struct mutex			op_mutex; /* one vfop at a time mutex */
+	enum channel_tlvs		op_current;
+};
+
+#define BNX2X_NR_VIRTFN(bp)	((bp)->vfdb->sriov.nr_virtfn)
+
+#define for_each_vf(bp, var) \
+		for ((var) = 0; (var) < BNX2X_NR_VIRTFN(bp); (var)++)
+
 struct bnx2x_vf_mbx_msg {
 	union vfpf_tlvs req;
 	union pfvf_tlvs resp;
 };
 
+struct bnx2x_vf_mbx {
+	struct bnx2x_vf_mbx_msg *msg;
+	dma_addr_t msg_mapping;
+
+	/* VF GPA address */
+	u32 vf_addr_lo;
+	u32 vf_addr_hi;
+
+	struct vfpf_first_tlv first_tlv;	/* saved VF request header */
+
+	u8 flags;
+#define VF_MSG_INPROCESS	0x1	/* failsafe - the FW should prevent
+					 * more than one pending msg
+					 */
+};
+
+struct hw_dma {
+	void *addr;
+	dma_addr_t mapping;
+	size_t size;
+};
+
+struct bnx2x_vfdb {
+
+#define BP_VFDB(bp)		((bp)->vfdb)
+	/* vf array */
+	struct bnx2x_virtf	*vfs;
+#define BP_VF(bp, idx)		(&((bp)->vfdb->vfs[(idx)]))
+#define bnx2x_vf(bp, idx, var)	((bp)->vfdb->vfs[(idx)].var)
+
+	/* queue array - for all vfs */
+	struct bnx2x_vf_queue *vfqs;
+
+	/* vf HW contexts */
+	struct hw_dma		context[BNX2X_VF_CIDS/ILT_PAGE_CIDS];
+#define	BP_VF_CXT_PAGE(bp, i)	(&(bp)->vfdb->context[(i)])
+
+	/* SR-IOV information */
+	struct bnx2x_sriov	sriov;
+	struct hw_dma		mbx_dma;
+#define BP_VF_MBX_DMA(bp)	(&((bp)->vfdb->mbx_dma))
+	struct bnx2x_vf_mbx	mbxs[BNX2X_MAX_NUM_OF_VFS];
+#define BP_VF_MBX(bp, vfid)	(&((bp)->vfdb->mbxs[(vfid)]))
+
+	struct hw_dma		sp_dma;
+#define bnx2x_vf_sp(bp, vf, field) ((bp)->vfdb->sp_dma.addr +		\
+		(vf)->index * sizeof(struct bnx2x_vf_sp) +		\
+		offsetof(struct bnx2x_vf_sp, field))
+#define bnx2x_vf_sp_map(bp, vf, field) ((bp)->vfdb->sp_dma.mapping +	\
+		(vf)->index * sizeof(struct bnx2x_vf_sp) +		\
+		offsetof(struct bnx2x_vf_sp, field))
+
+#define FLRD_VFS_DWORDS (BNX2X_MAX_NUM_OF_VFS / 32)
+	u32 flrd_vfs[FLRD_VFS_DWORDS];
+};
+
+/* global iov routines */
+int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line);
+int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param, int num_vfs_param);
+void bnx2x_iov_remove_one(struct bnx2x *bp);
+int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
 		   u16 length);
 void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 10/22] bnx2x: Prepare device and initialize VF database
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (8 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 09/22] bnx2x: Allocate VF database in PF when VFs are present Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 11/22] bnx2x: Infrastructure for VF <-> PF request on PF side Ariel Elior
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

At nic load of the PF, if VFs may be present, prepare the device
for the VFs. Initialize the VF database in preparation for VF arrival.
A sketch of the 'pretend' idiom this patch uses to configure per-VF
resources follows.
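
A minimal sketch of that idiom (enable_vf_internal is a hypothetical
wrapper; the helpers and register it uses are the ones added or exported
by this patch, with error handling omitted):

static void enable_vf_internal(struct bnx2x *bp, u8 abs_vfid)
{
	/* assume the VF's identity: PF-num:VF-valid:ABS-VFID */
	bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, abs_vfid));

	/* this write is issued as if the VF itself had performed it */
	REG_WR(bp, PGLUE_B_REG_INTERNAL_VFID_ENABLE, 1);

	/* always restore the PF's own identity afterwards */
	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
}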

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |   10 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c   |    2 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h   |    3 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |   57 +---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h   |   18 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  398 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |   56 +++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   37 ++
 8 files changed, 533 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 254f9ee..14ca777 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1633,6 +1633,10 @@ struct bnx2x {
 	int					dcb_version;
 
 	/* CAM credit pools */
+
+	/* used only in sriov */
+	struct bnx2x_credit_pool_obj		vlans_pool;
+
 	struct bnx2x_credit_pool_obj		macs_pool;
 
 	/* RX_MODE object */
@@ -1847,12 +1851,14 @@ int bnx2x_del_all_macs(struct bnx2x *bp,
 
 /* Init Function API  */
 void bnx2x_func_init(struct bnx2x *bp, struct bnx2x_func_init_params *p);
+u32 bnx2x_get_pretend_reg(struct bnx2x *bp);
 int bnx2x_get_gpio(struct bnx2x *bp, int gpio_num, u8 port);
 int bnx2x_set_gpio(struct bnx2x *bp, int gpio_num, u32 mode, u8 port);
 int bnx2x_set_mult_gpio(struct bnx2x *bp, u8 pins, u32 mode);
 int bnx2x_set_gpio_int(struct bnx2x *bp, int gpio_num, u32 mode, u8 port);
 void bnx2x_read_mf_cfg(struct bnx2x *bp);
 
+int bnx2x_pretend_func(struct bnx2x *bp, u16 pretend_func_val);
 
 /* dmae */
 void bnx2x_read_dmae(struct bnx2x *bp, u32 src_addr, u32 len32);
@@ -1864,6 +1870,7 @@ u32 bnx2x_dmae_opcode_clr_src_reset(u32 opcode);
 u32 bnx2x_dmae_opcode(struct bnx2x *bp, u8 src_type, u8 dst_type,
 		      bool with_comp, u8 comp_type);
 
+u8 bnx2x_is_pcie_pending(struct pci_dev *dev);
 
 void bnx2x_calc_fc_adv(struct bnx2x *bp);
 int bnx2x_sp_post(struct bnx2x *bp, int command, int cid,
@@ -1888,6 +1895,9 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
 	return val;
 }
 
+void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
+			    bool is_pf);
+
 #define BNX2X_ILT_ZALLOC(x, y, size) \
 	do { \
 		x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL); \
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index e1bf928..e8a05f4 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -29,6 +29,7 @@
 #include "bnx2x_sp.h"
 
 
+#include "bnx2x_sriov.h"
 
 /**
  * bnx2x_move_fp - move content of the fastpath structure.
@@ -2513,6 +2514,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	if (IS_PF(bp)) {
 		bnx2x_init_bp_objs(bp);
 
+		bnx2x_iov_nic_init(bp);
 
 		/* Set AFEX default VLAN tag to an invalid value */
 		bp->afex_def_vlan_tag = -1;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 6b0add2..9ee67fa 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -1106,6 +1106,9 @@ static inline void bnx2x_init_bp_objs(struct bnx2x *bp)
 	bnx2x_init_mac_credit_pool(bp, &bp->macs_pool, BP_FUNC(bp),
 				   bnx2x_get_path_func_num(bp));
 
+	bnx2x_init_vlan_credit_pool(bp, &bp->vlans_pool, BP_ABS_FUNC(bp)>>1,
+				    bnx2x_get_path_func_num(bp));
+
 	/* RSS configuration object */
 	bnx2x_init_rss_config_obj(bp, &bp->rss_conf_obj, bp->fp->cl_id,
 				  bp->fp->cid, BP_FUNC(bp), BP_FUNC(bp),
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 569acdc..3133692 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -1169,7 +1169,7 @@ static int bnx2x_send_final_clnup(struct bnx2x *bp, u8 clnup_func,
 	return ret;
 }
 
-static u8 bnx2x_is_pcie_pending(struct pci_dev *dev)
+u8 bnx2x_is_pcie_pending(struct pci_dev *dev)
 {
 	u16 status;
 
@@ -6269,49 +6269,6 @@ static void bnx2x_setup_fan_failure_detection(struct bnx2x *bp)
 	REG_WR(bp, MISC_REG_SPIO_EVENT_EN, val);
 }
 
-static void bnx2x_pretend_func(struct bnx2x *bp, u8 pretend_func_num)
-{
-	u32 offset = 0;
-
-	if (CHIP_IS_E1(bp))
-		return;
-	if (CHIP_IS_E1H(bp) && (pretend_func_num >= E1H_FUNC_MAX))
-		return;
-
-	switch (BP_ABS_FUNC(bp)) {
-	case 0:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F0;
-		break;
-	case 1:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F1;
-		break;
-	case 2:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F2;
-		break;
-	case 3:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F3;
-		break;
-	case 4:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F4;
-		break;
-	case 5:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F5;
-		break;
-	case 6:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F6;
-		break;
-	case 7:
-		offset = PXP2_REG_PGL_PRETEND_FUNC_F7;
-		break;
-	default:
-		return;
-	}
-
-	REG_WR(bp, offset, pretend_func_num);
-	REG_RD(bp, offset);
-	DP(NETIF_MSG_HW, "Pretending to func %d\n", pretend_func_num);
-}
-
 void bnx2x_pf_disable(struct bnx2x *bp)
 {
 	u32 val = REG_RD(bp, IGU_REG_PF_CONFIGURATION);
@@ -6568,6 +6525,8 @@ static int bnx2x_init_hw_common(struct bnx2x *bp)
 
 	bnx2x_init_block(bp, BLOCK_DMAE, PHASE_COMMON);
 
+	bnx2x_iov_init_dmae(bp);
+
 	/* clean the DMAE memory */
 	bp->dmae_ready = 1;
 	bnx2x_init_fill(bp, TSEM_REG_PRAM, 0, 8, 1);
@@ -7053,15 +7012,14 @@ static void bnx2x_ilt_wr(struct bnx2x *bp, u32 index, dma_addr_t addr)
 	REG_WR_DMAE(bp, reg, wb_write, 2);
 }
 
-static void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func,
-				   u8 idu_sb_id, bool is_Pf)
+void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id, bool is_pf)
 {
 	u32 data, ctl, cnt = 100;
 	u32 igu_addr_data = IGU_REG_COMMAND_REG_32LSB_DATA;
 	u32 igu_addr_ctl = IGU_REG_COMMAND_REG_CTRL;
 	u32 igu_addr_ack = IGU_REG_CSTORM_TYPE_0_SB_CLEANUP + (idu_sb_id/32)*4;
 	u32 sb_bit =  1 << (idu_sb_id%32);
-	u32 func_encode = func | (is_Pf ? 1 : 0) << IGU_FID_ENCODE_IS_PF_SHIFT;
+	u32 func_encode = func | (is_pf ? 1 : 0) << IGU_FID_ENCODE_IS_PF_SHIFT;
 	u32 addr_encode = IGU_CMD_E2_PROD_UPD_BASE + idu_sb_id;
 
 	/* Not supported in BC mode */
@@ -7357,6 +7315,9 @@ static int bnx2x_init_hw_func(struct bnx2x *bp)
 
 	bnx2x_init_block(bp, BLOCK_TM, init_phase);
 	bnx2x_init_block(bp, BLOCK_DORQ, init_phase);
+
+	bnx2x_iov_init_dq(bp);
+
 	bnx2x_init_block(bp, BLOCK_BRB1, init_phase);
 	bnx2x_init_block(bp, BLOCK_PRS, init_phase);
 	bnx2x_init_block(bp, BLOCK_TSDM, init_phase);
@@ -9459,7 +9420,7 @@ period_task_exit:
  * Init service functions
  */
 
-static u32 bnx2x_get_pretend_reg(struct bnx2x *bp)
+u32 bnx2x_get_pretend_reg(struct bnx2x *bp)
 {
 	u32 base = PXP2_REG_PGL_PRETEND_FUNC_F0;
 	u32 stride = PXP2_REG_PGL_PRETEND_FUNC_F1 - base;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 00e4398..cefe448 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -825,6 +825,7 @@
 /* [RW 28] The value sent to CM header in the case of CFC load error. */
 #define DORQ_REG_ERR_CMHEAD					 0x170058
 #define DORQ_REG_IF_EN						 0x170004
+#define DORQ_REG_MAX_RVFID_SIZE				 0x1701ec
 #define DORQ_REG_MODE_ACT					 0x170008
 /* [RW 5] The normal mode CID extraction offset. */
 #define DORQ_REG_NORM_CID_OFST					 0x17002c
@@ -847,6 +848,22 @@
    writes the same initial credit to the rspa_crd_cnt and rspb_crd_cnt. The
    read reads this written value. */
 #define DORQ_REG_RSP_INIT_CRD					 0x170048
+#define DORQ_REG_RSPB_CRD_CNT					 0x1700b0
+#define DORQ_REG_VF_NORM_CID_BASE				 0x1701a0
+#define DORQ_REG_VF_NORM_CID_OFST				 0x1701f4
+#define DORQ_REG_VF_NORM_CID_WND_SIZE				 0x1701a4
+#define DORQ_REG_VF_NORM_MAX_CID_COUNT				 0x1701e4
+#define DORQ_REG_VF_NORM_VF_BASE				 0x1701a8
+/* [RW 10] VF type validation mask value */
+#define DORQ_REG_VF_TYPE_MASK_0					 0x170218
+/* [RW 17] VF type validation Max MCID value */
+#define DORQ_REG_VF_TYPE_MAX_MCID_0				 0x1702d8
+/* [RW 17] VF type validation Min MCID value */
+#define DORQ_REG_VF_TYPE_MIN_MCID_0				 0x170298
+/* [RW 10] VF type validation comp value */
+#define DORQ_REG_VF_TYPE_VALUE_0				 0x170258
+#define DORQ_REG_VF_USAGE_CT_LIMIT				 0x170340
+
 /* [RW 4] Initial activity counter value on the load request; when the
    shortcut is done. */
 #define DORQ_REG_SHRT_ACT_CNT					 0x170070
@@ -2571,6 +2588,7 @@
    current task in process). */
 #define PBF_REG_DISABLE_NEW_TASK_PROC_P4			 0x14006c
 #define PBF_REG_DISABLE_PF					 0x1402e8
+#define PBF_REG_DISABLE_VF					 0x1402ec
 /* [RW 18] For port 0: For each client that is subject to WFQ (the
  * corresponding bit is 1); indicates to which of the credit registers this
  * client is mapped. For clients which are not credit blocked; their mapping
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 42d4c4d..bafb7fb 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -19,7 +19,36 @@
  */
 #include "bnx2x.h"
 #include "bnx2x_init.h"
+#include "bnx2x_cmn.h"
 #include "bnx2x_sriov.h"
+
+/* General service functions */
+static void storm_memset_vf_to_pf(struct bnx2x *bp, u16 abs_fid,
+					 u16 pf_id)
+{
+	REG_WR8(bp, BAR_XSTRORM_INTMEM + XSTORM_VF_TO_PF_OFFSET(abs_fid),
+		pf_id);
+	REG_WR8(bp, BAR_CSTRORM_INTMEM + CSTORM_VF_TO_PF_OFFSET(abs_fid),
+		pf_id);
+	REG_WR8(bp, BAR_TSTRORM_INTMEM + TSTORM_VF_TO_PF_OFFSET(abs_fid),
+		pf_id);
+	REG_WR8(bp, BAR_USTRORM_INTMEM + USTORM_VF_TO_PF_OFFSET(abs_fid),
+		pf_id);
+}
+
+static void storm_memset_func_en(struct bnx2x *bp, u16 abs_fid,
+					u8 enable)
+{
+	REG_WR8(bp, BAR_XSTRORM_INTMEM + XSTORM_FUNC_EN_OFFSET(abs_fid),
+		enable);
+	REG_WR8(bp, BAR_CSTRORM_INTMEM + CSTORM_FUNC_EN_OFFSET(abs_fid),
+		enable);
+	REG_WR8(bp, BAR_TSTRORM_INTMEM + TSTORM_FUNC_EN_OFFSET(abs_fid),
+		enable);
+	REG_WR8(bp, BAR_USTRORM_INTMEM + USTORM_FUNC_EN_OFFSET(abs_fid),
+		enable);
+}
+
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid)
 {
 	int idx;
@@ -271,6 +300,375 @@ failed:
 	__bnx2x_iov_free_vfdb(bp);
 	return err;
 }
+/* VF enable primitives
+ *
+ * when pretend is required the caller is responsible
+ * for calling pretend prior to calling these routines
+ */
+
+/* called only on E1H or E2.
+ * When pretending to be PF, the pretend value is the function number 0...7
+ * When pretending to be VF, the pretend val is the PF-num:VF-valid:ABS-VFID
+ * combination
+ */
+int bnx2x_pretend_func(struct bnx2x *bp, u16 pretend_func_val)
+{
+	u32 pretend_reg;
+
+	if (CHIP_IS_E1H(bp) && pretend_func_val >= E1H_FUNC_MAX)
+		return -1;
+
+	/* get my own pretend register */
+	pretend_reg = bnx2x_get_pretend_reg(bp);
+	REG_WR(bp, pretend_reg, pretend_func_val);
+	REG_RD(bp, pretend_reg);
+	return 0;
+}
+
+/* internal vf enable - until vf is enabled internally all transactions
+ * are blocked. this routine should always be called last with pretend.
+ */
+static void bnx2x_vf_enable_internal(struct bnx2x *bp, u8 enable)
+{
+	REG_WR(bp, PGLUE_B_REG_INTERNAL_VFID_ENABLE, enable ? 1 : 0);
+}
+
+/* clears vf error in all semi blocks */
+static void bnx2x_vf_semi_clear_err(struct bnx2x *bp, u8 abs_vfid)
+{
+	REG_WR(bp, TSEM_REG_VFPF_ERR_NUM, abs_vfid);
+	REG_WR(bp, USEM_REG_VFPF_ERR_NUM, abs_vfid);
+	REG_WR(bp, CSEM_REG_VFPF_ERR_NUM, abs_vfid);
+	REG_WR(bp, XSEM_REG_VFPF_ERR_NUM, abs_vfid);
+}
+
+static void bnx2x_vf_pglue_clear_err(struct bnx2x *bp, u8 abs_vfid)
+{
+	u32 was_err_group = (2 * BP_PATH(bp) + abs_vfid) >> 5;
+	u32 was_err_reg = 0;
+
+	switch (was_err_group) {
+	case 0:
+	    was_err_reg = PGLUE_B_REG_WAS_ERROR_VF_31_0_CLR;
+	    break;
+	case 1:
+	    was_err_reg = PGLUE_B_REG_WAS_ERROR_VF_63_32_CLR;
+	    break;
+	case 2:
+	    was_err_reg = PGLUE_B_REG_WAS_ERROR_VF_95_64_CLR;
+	    break;
+	case 3:
+	    was_err_reg = PGLUE_B_REG_WAS_ERROR_VF_127_96_CLR;
+	    break;
+	}
+	REG_WR(bp, was_err_reg, 1 << (abs_vfid & 0x1f));
+}
+
+void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid)
+{
+	/* set the VF-PF association in the FW */
+	storm_memset_vf_to_pf(bp, FW_VF_HANDLE(abs_vfid), BP_FUNC(bp));
+	storm_memset_func_en(bp, FW_VF_HANDLE(abs_vfid), 1);
+
+	/* clear vf errors*/
+	bnx2x_vf_semi_clear_err(bp, abs_vfid);
+	bnx2x_vf_pglue_clear_err(bp, abs_vfid);
+
+	/* internal vf-enable - pretend */
+	bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, abs_vfid));
+	DP(BNX2X_MSG_IOV, "enabling internal access for vf %x\n", abs_vfid);
+	bnx2x_vf_enable_internal(bp, true);
+	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+}
+
+static u8 bnx2x_vf_is_pcie_pending(struct bnx2x *bp, u8 abs_vfid)
+{
+	struct pci_dev *dev;
+	struct bnx2x_virtf *vf = bnx2x_vf_by_abs_fid(bp, abs_vfid);
+	if (!vf)
+		goto unknown_dev;
+
+	dev = pci_get_bus_and_slot(vf->bus, vf->devfn);
+	if (dev)
+		return bnx2x_is_pcie_pending(dev);
+
+unknown_dev:
+	BNX2X_ERR("Unknown device\n");
+	return false;
+}
+
+int bnx2x_vf_flr_clnup_epilog(struct bnx2x *bp, u8 abs_vfid)
+{
+	/* Wait 100ms */
+	msleep(100);
+
+	/* Verify no pending pci transactions */
+	if (bnx2x_vf_is_pcie_pending(bp, abs_vfid))
+		BNX2X_ERR("PCIE Transactions still pending\n");
+
+	return 0;
+}
+
+/* must be called after the number of PF queues and the number of VFs are
+ * both known
+ */
+static void
+bnx2x_iov_static_resc(struct bnx2x *bp, struct vf_pf_resc_request *resc)
+{
+	u16 vlan_count = 0;
+
+	/* will be set only during VF-ACQUIRE */
+	resc->num_rxqs = 0;
+	resc->num_txqs = 0;
+
+	/* no credit calculations for macs (just yet) */
+	resc->num_mac_filters = 1;
+
+	/* divvy up vlan rules */
+	vlan_count = bp->vlans_pool.check(&bp->vlans_pool);
+	vlan_count = 1 << ilog2(vlan_count);
+	resc->num_vlan_filters = vlan_count / BNX2X_NR_VIRTFN(bp);
+
+	/* no real limitation */
+	resc->num_mc_filters = 0;
+
+	/* num_sbs already set */
+}
+
+/* IOV global initialization routines  */
+void bnx2x_iov_init_dq(struct bnx2x *bp)
+{
+	if (!IS_SRIOV(bp))
+		return;
+
+	/* Set the DQ such that the CID reflects the abs_vfid */
+	REG_WR(bp, DORQ_REG_VF_NORM_VF_BASE, 0);
+	REG_WR(bp, DORQ_REG_MAX_RVFID_SIZE, ilog2(BNX2X_MAX_NUM_OF_VFS));
+
+	/* Set the VFs' starting CID. If it is > 0 the preceding CIDs
+	 * belong to the PF L2 queues
+	 */
+	REG_WR(bp, DORQ_REG_VF_NORM_CID_BASE, BNX2X_FIRST_VF_CID);
+
+	/* The VF window size is the log2 of the max number of CIDs per VF */
+	REG_WR(bp, DORQ_REG_VF_NORM_CID_WND_SIZE, BNX2X_VF_CID_WND);
+
+	/* The VF doorbell size: 0 - *B, 4 - 128B. We set it here to match
+	 * the PF doorbell size although the two are independent.
+	 */
+	REG_WR(bp, DORQ_REG_VF_NORM_CID_OFST,
+	       BNX2X_DB_SHIFT - BNX2X_DB_MIN_SHIFT);
+
+	/* No security checks for now -
+	 * configure single rule (out of 16) mask = 0x1, value = 0x0,
+	 * CID range 0 - 0x1ffff
+	 */
+	REG_WR(bp, DORQ_REG_VF_TYPE_MASK_0, 1);
+	REG_WR(bp, DORQ_REG_VF_TYPE_VALUE_0, 0);
+	REG_WR(bp, DORQ_REG_VF_TYPE_MIN_MCID_0, 0);
+	REG_WR(bp, DORQ_REG_VF_TYPE_MAX_MCID_0, 0x1ffff);
+
+	/* set the number of allowed VF doorbells to the full DQ range */
+	REG_WR(bp, DORQ_REG_VF_NORM_MAX_CID_COUNT, 0x20000);
+
+	/* set the VF doorbell threshold */
+	REG_WR(bp, DORQ_REG_VF_USAGE_CT_LIMIT, 4);
+}
+
+void bnx2x_iov_init_dmae(struct bnx2x *bp)
+{
+	DP(BNX2X_MSG_IOV, "SRIOV is %s\n", IS_SRIOV(bp) ? "ON" : "OFF");
+	if (!IS_SRIOV(bp))
+		return;
+
+	REG_WR(bp, DMAE_REG_BACKWARD_COMP_EN, 0);
+}
+
+static int bnx2x_vf_bus(struct bnx2x *bp, int vfid)
+{
+	struct pci_dev *dev = bp->pdev;
+	struct bnx2x_sriov *iov = &bp->vfdb->sriov;
+
+	return dev->bus->number + ((dev->devfn + iov->offset +
+				    iov->stride * vfid) >> 8);
+}
+
+static int bnx2x_vf_devfn(struct bnx2x *bp, int vfid)
+{
+	struct pci_dev *dev = bp->pdev;
+	struct bnx2x_sriov *iov = &bp->vfdb->sriov;
+
+	return (dev->devfn + iov->offset + iov->stride * vfid) & 0xff;
+}
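+
+/* e.g. a PF at 02:00.0 whose SR-IOV capability reports offset 16 and
+ * stride 1 would place VF 3 at routing id 16 + 3 = 0x13, i.e. 02:02.3
+ */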
+
+static void bnx2x_vf_set_bars(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	int i, n;
+	struct pci_dev *dev = bp->pdev;
+	struct bnx2x_sriov *iov = &bp->vfdb->sriov;
+
+	for (i = 0, n = 0; i < PCI_SRIOV_NUM_BARS; i += 2, n++) {
+		u64 start = pci_resource_start(dev, PCI_IOV_RESOURCES + i);
+		u32 size = pci_resource_len(dev, PCI_IOV_RESOURCES + i);
+		do_div(size, iov->total);
+		vf->bars[n].bar = start + size * vf->abs_vfid;
+		vf->bars[n].size = size;
+	}
+}
+
+void bnx2x_iov_free_mem(struct bnx2x *bp)
+{
+	int i;
+
+	if (!IS_SRIOV(bp))
+		return;
+
+	/* free vfs hw contexts */
+	for (i = 0; i < BNX2X_VF_CIDS/ILT_PAGE_CIDS; i++) {
+		struct hw_dma *cxt = &bp->vfdb->context[i];
+		BNX2X_PCI_FREE(cxt->addr, cxt->mapping, cxt->size);
+	}
+
+	BNX2X_PCI_FREE(BP_VFDB(bp)->sp_dma.addr,
+		       BP_VFDB(bp)->sp_dma.mapping,
+		       BP_VFDB(bp)->sp_dma.size);
+
+	BNX2X_PCI_FREE(BP_VF_MBX_DMA(bp)->addr,
+		       BP_VF_MBX_DMA(bp)->mapping,
+		       BP_VF_MBX_DMA(bp)->size);
+}
+
+int bnx2x_iov_alloc_mem(struct bnx2x *bp)
+{
+	size_t tot_size;
+	int i, rc = 0;
+
+	if (!IS_SRIOV(bp))
+		return rc;
+
+	/* allocate vfs hw contexts */
+	tot_size = (BP_VFDB(bp)->sriov.first_vf_in_pf + BNX2X_NR_VIRTFN(bp)) *
+		BNX2X_CIDS_PER_VF * sizeof(union cdu_context);
+
+	for (i = 0; i < BNX2X_VF_CIDS/ILT_PAGE_CIDS; i++) {
+		struct hw_dma *cxt = BP_VF_CXT_PAGE(bp, i);
+		cxt->size = min_t(size_t, tot_size, CDU_ILT_PAGE_SZ);
+
+		if (cxt->size) {
+			BNX2X_PCI_ALLOC(cxt->addr, &cxt->mapping, cxt->size);
+		} else {
+			cxt->addr = NULL;
+			cxt->mapping = 0;
+		}
+		tot_size -= cxt->size;
+	}
+
+	/* allocate vfs ramrods dma memory - client_init and set_mac */
+	tot_size = BNX2X_NR_VIRTFN(bp) * sizeof(struct bnx2x_vf_sp);
+	BNX2X_PCI_ALLOC(BP_VFDB(bp)->sp_dma.addr, &BP_VFDB(bp)->sp_dma.mapping,
+			tot_size);
+	BP_VFDB(bp)->sp_dma.size = tot_size;
+
+	/* allocate mailboxes */
+	tot_size = BNX2X_NR_VIRTFN(bp) * MBX_MSG_ALIGNED_SIZE;
+	BNX2X_PCI_ALLOC(BP_VF_MBX_DMA(bp)->addr, &BP_VF_MBX_DMA(bp)->mapping,
+			tot_size);
+	BP_VF_MBX_DMA(bp)->size = tot_size;
+
+	return 0;
+
+alloc_mem_err:
+	return -ENOMEM;
+}
+
+/* called by bnx2x_nic_load */
+int bnx2x_iov_nic_init(struct bnx2x *bp)
+{
+	int vfid, qcount, i;
+
+	if (!IS_SRIOV(bp)) {
+		DP(BNX2X_MSG_IOV, "vfdb was not allocated\n");
+		return 0;
+	}
+
+	DP(BNX2X_MSG_IOV, "num of vfs: %d\n", (bp)->vfdb->sriov.nr_virtfn);
+
+	/* initialize vf database */
+	for_each_vf(bp, vfid) {
+		struct bnx2x_virtf *vf = BP_VF(bp, vfid);
+
+		int base_vf_cid = (BP_VFDB(bp)->sriov.first_vf_in_pf + vfid) *
+			BNX2X_CIDS_PER_VF;
+
+		union cdu_context *base_cxt = (union cdu_context *)
+			BP_VF_CXT_PAGE(bp, base_vf_cid/ILT_PAGE_CIDS)->addr +
+			(base_vf_cid & (ILT_PAGE_CIDS-1));
+
+		DP(BNX2X_MSG_IOV,
+		   "VF[%d] Max IGU SBs: %d, base vf cid 0x%x, base cid 0x%x, base cxt %p\n",
+		   vf->abs_vfid, vf_sb_count(vf), base_vf_cid,
+		   BNX2X_FIRST_VF_CID + base_vf_cid, base_cxt);
+
+		/* init statically provisioned resources */
+		bnx2x_iov_static_resc(bp, &vf->alloc_resc);
+
+		/* queues are initialized during VF-ACQUIRE */
+
+		/* reserve the vf vlan credit */
+		bp->vlans_pool.get(&bp->vlans_pool, vf_vlan_rules_cnt(vf));
+
+		vf->filter_state = 0;
+		vf->sp_cl_id = bnx2x_fp(bp, 0, cl_id);
+
+		/*  init mcast object - This object will be re-initialized
+		 *  during VF-ACQUIRE with the proper cl_id and cid.
+		 *  It needs to be initialized here so that it can be safely
+		 *  handled by a subsequent FLR flow.
+		 */
+		bnx2x_init_mcast_obj(bp, &vf->mcast_obj, 0xFF,
+				     0xFF, 0xFF, 0xFF,
+				     bnx2x_vf_sp(bp, vf, mcast_rdata),
+				     bnx2x_vf_sp_map(bp, vf, mcast_rdata),
+				     BNX2X_FILTER_MCAST_PENDING,
+				     &vf->filter_state,
+				     BNX2X_OBJ_TYPE_RX_TX);
+
+		/* set the mailbox message addresses */
+		BP_VF_MBX(bp, vfid)->msg = (struct bnx2x_vf_mbx_msg *)
+			(((u8 *)BP_VF_MBX_DMA(bp)->addr) + vfid *
+			MBX_MSG_ALIGNED_SIZE);
+
+		BP_VF_MBX(bp, vfid)->msg_mapping = BP_VF_MBX_DMA(bp)->mapping +
+			vfid * MBX_MSG_ALIGNED_SIZE;
+
+		/* Enable vf mailbox */
+		bnx2x_vf_enable_mbx(bp, vf->abs_vfid);
+	}
+
+	/* Final VF init */
+	qcount = 0;
+	for_each_vf(bp, i) {
+		struct bnx2x_virtf *vf = BP_VF(bp, i);
+
+		/* fill in the BDF and bars */
+		vf->bus = bnx2x_vf_bus(bp, i);
+		vf->devfn = bnx2x_vf_devfn(bp, i);
+		bnx2x_vf_set_bars(bp, vf);
+
+		DP(BNX2X_MSG_IOV,
+		   "VF info[%d]: bus 0x%x, devfn 0x%x, bar0 [0x%x, %d], bar1 [0x%x, %d], bar2 [0x%x, %d]\n",
+		   vf->abs_vfid, vf->bus, vf->devfn,
+		   (unsigned)vf->bars[0].bar, vf->bars[0].size,
+		   (unsigned)vf->bars[1].bar, vf->bars[1].size,
+		   (unsigned)vf->bars[2].bar, vf->bars[2].size);
+
+		/* set local queue arrays */
+		vf->vfqs = &bp->vfdb->vfqs[qcount];
+		qcount += bnx2x_vf(bp, i, alloc_resc.num_sbs);
+	}
+
+	return 0;
+}
 
 /* called by bnx2x_init_hw_func, returns the next ilt line */
 int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index b9b0583..c49b6e0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -179,6 +179,25 @@ struct bnx2x_virtf {
 #define for_each_vf(bp, var) \
 		for ((var) = 0; (var) < BNX2X_NR_VIRTFN(bp); (var)++)
 
+#define HW_VF_HANDLE(bp, abs_vfid) \
+	(u16)(BP_ABS_FUNC((bp)) | (1<<3) |  ((u16)(abs_vfid) << 4))
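+/* e.g. abs func 1, abs_vfid 5: 1 | (1 << 3) | (5 << 4) = 0x59 */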
+
+#define FW_PF_MAX_HANDLE	8
+
+#define FW_VF_HANDLE(abs_vfid)	\
+	(abs_vfid + FW_PF_MAX_HANDLE)
+
+/* VF mail box (aka vf-pf channel) */
+
+/* a container for the bi-directional vf<-->pf messages.
+ *  The actual response will be placed according to the offset parameter
+ *  provided in the request
+ */
+
+#define MBX_MSG_ALIGN	8
+#define MBX_MSG_ALIGNED_SIZE	(roundup(sizeof(struct bnx2x_vf_mbx_msg), \
+				MBX_MSG_ALIGN))
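+/* e.g. a 1027-byte bnx2x_vf_mbx_msg would round up to 1032 bytes */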
+
 struct bnx2x_vf_mbx_msg {
 	union vfpf_tlvs req;
 	union pfvf_tlvs resp;
@@ -200,6 +219,29 @@ struct bnx2x_vf_mbx {
 					 */
 };
 
+struct bnx2x_vf_sp {
+	union {
+		struct eth_classify_rules_ramrod_data	e2;
+	} mac_rdata;
+
+	union {
+		struct eth_classify_rules_ramrod_data	e2;
+	} vlan_rdata;
+
+	union {
+		struct eth_filter_rules_ramrod_data	e2;
+	} rx_mode_rdata;
+
+	union {
+		struct eth_multicast_rules_ramrod_data  e2;
+	} mcast_rdata;
+
+	union {
+		struct client_init_ramrod_data  init_data;
+		struct client_update_ramrod_data update_data;
+	} q_data;
+};
+
 struct hw_dma {
 	void *addr;
 	dma_addr_t mapping;
@@ -240,11 +282,25 @@ struct bnx2x_vfdb {
 	u32 flrd_vfs[FLRD_VFS_DWORDS];
 };
 
+static inline u8 vf_igu_sb(struct bnx2x_virtf *vf, u16 sb_idx)
+{
+	return vf->igu_base_id + sb_idx;
+}
+
 /* global iov routines */
 int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line);
 int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param, int num_vfs_param);
 void bnx2x_iov_remove_one(struct bnx2x *bp);
+void bnx2x_iov_free_mem(struct bnx2x *bp);
+int bnx2x_iov_alloc_mem(struct bnx2x *bp);
+int bnx2x_iov_nic_init(struct bnx2x *bp);
+void bnx2x_iov_init_dq(struct bnx2x *bp);
+void bnx2x_iov_init_dmae(struct bnx2x *bp);
+void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid);
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
+/* VF FLR helpers */
+int bnx2x_vf_flr_clnup_epilog(struct bnx2x *bp, u8 abs_vfid);
+void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid);
 void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
 		   u16 length);
 void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index cc9b83f..52daa3bdf 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -77,3 +77,40 @@ void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list)
 	DP(BNX2X_MSG_IOV, "TLV number %d: type %d, length %d\n", i,
 	   tlv->type, tlv->length);
 }
+
+/* General service functions */
+static void storm_memset_vf_mbx_ack(struct bnx2x *bp, u16 abs_fid)
+{
+	u32 addr = BAR_CSTRORM_INTMEM +
+			CSTORM_VF_PF_CHANNEL_STATE_OFFSET(abs_fid);
+
+	REG_WR8(bp, addr, VF_PF_CHANNEL_STATE_READY);
+}
+
+static void storm_memset_vf_mbx_valid(struct bnx2x *bp, u16 abs_fid)
+{
+	u32 addr = BAR_CSTRORM_INTMEM +
+			CSTORM_VF_PF_CHANNEL_VALID_OFFSET(abs_fid);
+
+	REG_WR8(bp, addr, 1);
+}
+
+static inline void bnx2x_set_vf_mbxs_valid(struct bnx2x *bp)
+{
+	int i;
+	for_each_vf(bp, i)
+		storm_memset_vf_mbx_valid(bp, bnx2x_vf(bp, i, abs_vfid));
+}
+
+/* enable vf_pf mailbox (aka vf-pf channel) */
+void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid)
+{
+	bnx2x_vf_flr_clnup_epilog(bp, abs_vfid);
+
+	/* enable the mailbox in the FW */
+	storm_memset_vf_mbx_ack(bp, abs_vfid);
+	storm_memset_vf_mbx_valid(bp, abs_vfid);
+
+	/* enable the VF access to the mailbox */
+	bnx2x_vf_enable_access(bp, abs_vfid);
+}
-- 
1.7.9.GIT

* [PATCH net-next v3 11/22] bnx2x: Infrastructure for VF <-> PF request on PF side
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (9 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 10/22] bnx2x: Prepare device and initialize VF database Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 12/22] bnx2x: Support of PF driver of a VF acquire request Ariel Elior
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

Support the interrupt from the device which indicates that a VF has
placed a request on the VF <-> PF channel.
The PF driver issues a DMAE to retrieve the request from the VM
memory (the Guest Physical Address of the request is contained
in the interrupt; the PF driver uses this GPA in the DMAE request,
which the IOMMU translates to the correct physical address).
The arriving request is examined to identify the sending VF.
The PF driver allocates a workitem to handle the VF Operation (vfop).
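
In rough outline, the handling path is (a minimal illustrative sketch
with made-up names, not the driver code itself):

	/* sketch: service one VF->PF channel event (illustrative only) */
	void handle_vfpf_event(struct pf_ctx *pf, struct vf_pf_event *ev)
	{
		/* the event carries the GPA of the request buffer */
		u64 gpa = ((u64)ev->msg_addr_hi << 32) | ev->msg_addr_lo;
		struct vf_ctx *vf = vf_from_abs_id(pf, ev->vf_id);

		/* DMAE the request out of VM memory; the IOMMU translates
		 * the GPA to the correct host physical address
		 */
		dmae_copy_from_vf(pf, vf, gpa, vf->mbx_buf, MBX_BUF_SIZE);

		/* hand the request to a workitem (vfop) for processing */
		schedule_vfop(pf, vf);
	}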

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |    6 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |  202 +++++++++++++----
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  248 ++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |  102 +++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |  159 +++++++++++++
 5 files changed, 672 insertions(+), 45 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 14ca777..6fc3fef 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1379,6 +1379,7 @@ struct bnx2x {
 	int			mrrs;
 
 	struct delayed_work	sp_task;
+	atomic_t		interrupt_occurred;
 	struct delayed_work	sp_rtnl_task;
 
 	struct delayed_work	period_task;
@@ -1870,6 +1871,11 @@ u32 bnx2x_dmae_opcode_clr_src_reset(u32 opcode);
 u32 bnx2x_dmae_opcode(struct bnx2x *bp, u8 src_type, u8 dst_type,
 		      bool with_comp, u8 comp_type);
 
+void bnx2x_prep_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
+			       u8 src_type, u8 dst_type);
+int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae);
+void bnx2x_dp_dmae(struct bnx2x *bp, struct dmae_command *dmae, int msglvl);
+
 u8 bnx2x_is_pcie_pending(struct pci_dev *dev);
 
 void bnx2x_calc_fc_adv(struct bnx2x *bp);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 3133692..07e2963 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -345,6 +345,65 @@ static u32 bnx2x_reg_rd_ind(struct bnx2x *bp, u32 addr)
 #define DMAE_DP_DST_PCI		"pci dst_addr [%x:%08x]"
 #define DMAE_DP_DST_NONE	"dst_addr [none]"
 
+void bnx2x_dp_dmae(struct bnx2x *bp, struct dmae_command *dmae, int msglvl)
+{
+	u32 src_type = dmae->opcode & DMAE_COMMAND_SRC;
+
+	switch (dmae->opcode & DMAE_COMMAND_DST) {
+	case DMAE_CMD_DST_PCI:
+		if (src_type == DMAE_CMD_SRC_PCI)
+			DP(msglvl, "DMAE: opcode 0x%08x\n"
+			   "src [%x:%08x], len [%d*4], dst [%x:%08x]\n"
+			   "comp_addr [%x:%08x], comp_val 0x%08x\n",
+			   dmae->opcode, dmae->src_addr_hi, dmae->src_addr_lo,
+			   dmae->len, dmae->dst_addr_hi, dmae->dst_addr_lo,
+			   dmae->comp_addr_hi, dmae->comp_addr_lo,
+			   dmae->comp_val);
+		else
+			DP(msglvl, "DMAE: opcode 0x%08x\n"
+			   "src [%08x], len [%d*4], dst [%x:%08x]\n"
+			   "comp_addr [%x:%08x], comp_val 0x%08x\n",
+			   dmae->opcode, dmae->src_addr_lo >> 2,
+			   dmae->len, dmae->dst_addr_hi, dmae->dst_addr_lo,
+			   dmae->comp_addr_hi, dmae->comp_addr_lo,
+			   dmae->comp_val);
+		break;
+	case DMAE_CMD_DST_GRC:
+		if (src_type == DMAE_CMD_SRC_PCI)
+			DP(msglvl, "DMAE: opcode 0x%08x\n"
+			   "src [%x:%08x], len [%d*4], dst_addr [%08x]\n"
+			   "comp_addr [%x:%08x], comp_val 0x%08x\n",
+			   dmae->opcode, dmae->src_addr_hi, dmae->src_addr_lo,
+			   dmae->len, dmae->dst_addr_lo >> 2,
+			   dmae->comp_addr_hi, dmae->comp_addr_lo,
+			   dmae->comp_val);
+		else
+			DP(msglvl, "DMAE: opcode 0x%08x\n"
+			   "src [%08x], len [%d*4], dst [%08x]\n"
+			   "comp_addr [%x:%08x], comp_val 0x%08x\n",
+			   dmae->opcode, dmae->src_addr_lo >> 2,
+			   dmae->len, dmae->dst_addr_lo >> 2,
+			   dmae->comp_addr_hi, dmae->comp_addr_lo,
+			   dmae->comp_val);
+		break;
+	default:
+		if (src_type == DMAE_CMD_SRC_PCI)
+			DP(msglvl, "DMAE: opcode 0x%08x\n"
+			   "src_addr [%x:%08x]  len [%d * 4]  dst_addr [none]\n"
+			   "comp_addr [%x:%08x]  comp_val 0x%08x\n",
+			   dmae->opcode, dmae->src_addr_hi, dmae->src_addr_lo,
+			   dmae->len, dmae->comp_addr_hi, dmae->comp_addr_lo,
+			   dmae->comp_val);
+		else
+			DP(msglvl, "DMAE: opcode 0x%08x\n"
+			   "src_addr [%08x]  len [%d * 4]  dst_addr [none]\n"
+			   "comp_addr [%x:%08x]  comp_val 0x%08x\n",
+			   dmae->opcode, dmae->src_addr_lo >> 2,
+			   dmae->len, dmae->comp_addr_hi, dmae->comp_addr_lo,
+			   dmae->comp_val);
+		break;
+	}
+}
 
 /* copy command into DMAE command memory and set DMAE command go */
 void bnx2x_post_dmae(struct bnx2x *bp, struct dmae_command *dmae, int idx)
@@ -395,7 +454,7 @@ u32 bnx2x_dmae_opcode(struct bnx2x *bp, u8 src_type, u8 dst_type,
 	return opcode;
 }
 
-static void bnx2x_prep_dmae_with_comp(struct bnx2x *bp,
+void bnx2x_prep_dmae_with_comp(struct bnx2x *bp,
 				      struct dmae_command *dmae,
 				      u8 src_type, u8 dst_type)
 {
@@ -411,9 +470,8 @@ static void bnx2x_prep_dmae_with_comp(struct bnx2x *bp,
 	dmae->comp_val = DMAE_COMP_VAL;
 }
 
-/* issue a dmae command over the init-channel and wailt for completion */
-static int bnx2x_issue_dmae_with_comp(struct bnx2x *bp,
-				      struct dmae_command *dmae)
+/* issue a dmae command over the init-channel and wait for completion */
+int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae)
 {
 	u32 *wb_comp = bnx2x_sp(bp, wb_comp);
 	int cnt = CHIP_REV_IS_SLOW(bp) ? (400000) : 4000;
@@ -1598,6 +1656,24 @@ static bool bnx2x_trylock_leader_lock(struct bnx2x *bp)
 
 static void bnx2x_cnic_cfc_comp(struct bnx2x *bp, int cid, u8 err);
 
+/* schedule the sp task and mark that interrupt occurred (runs from ISR) */
+static int bnx2x_schedule_sp_task(struct bnx2x *bp)
+{
+	/* Set the interrupt occurred bit for the sp-task to recognize it
+	 * must ack the interrupt and transition according to the IGU
+	 * state machine.
+	 */
+	atomic_set(&bp->interrupt_occurred, 1);
+
+	/* The sp_task must execute only after this bit
+	 * is set, otherwise we will get out of sync and miss all
+	 * further interrupts. Hence, the barrier.
+	 */
+	smp_wmb();
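+	/* this pairs with the smp_rmb() in bnx2x_sp_task() */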
+
+	/* schedule sp_task to workqueue */
+	return queue_delayed_work(bnx2x_wq, &bp->sp_task, 0);
+}
 
 void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
 {
@@ -1612,6 +1688,13 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
 	   fp->index, cid, command, bp->state,
 	   rr_cqe->ramrod_cqe.ramrod_type);
 
+	/* If cid is within VF range, replace the slowpath object with the
+	 * one corresponding to this VF
+	 */
+	if (cid >= BNX2X_FIRST_VF_CID  &&
+	    cid < BNX2X_FIRST_VF_CID + BNX2X_VF_CIDS)
+		bnx2x_iov_set_queue_sp_obj(bp, cid, &q_obj);
+
 	switch (command) {
 	case (RAMROD_CMD_ID_ETH_CLIENT_UPDATE):
 		DP(BNX2X_MSG_SP, "got UPDATE ramrod. CID %d\n", cid);
@@ -1663,6 +1746,8 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
 #else
 		return;
 #endif
+	/* SRIOV: reschedule any 'in_progress' operations */
+	bnx2x_iov_sp_event(bp, cid, true);
 
 	smp_mb__before_atomic_inc();
 	atomic_inc(&bp->cq_spq_left);
@@ -1688,8 +1773,8 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
 		clear_bit(BNX2X_AFEX_FCOE_Q_UPDATE_PENDING, &bp->sp_state);
 		smp_mb__after_clear_bit();
 
-		/* schedule workqueue to send ack to MCP */
-		queue_delayed_work(bnx2x_wq, &bp->sp_task, 0);
+		/* schedule the sp task as mcp ack is required */
+		bnx2x_schedule_sp_task(bp);
 	}
 
 	return;
@@ -1749,7 +1834,11 @@ irqreturn_t bnx2x_interrupt(int irq, void *dev_instance)
 	}
 
 	if (unlikely(status & 0x1)) {
-		queue_delayed_work(bnx2x_wq, &bp->sp_task, 0);
+
+		/* schedule sp task to perform default status block work, ack
+		 * attentions and enable interrupts.
+		 */
+		bnx2x_schedule_sp_task(bp);
 
 		status &= ~0x1;
 		if (!status)
@@ -4830,7 +4919,7 @@ static void bnx2x_eq_int(struct bnx2x *bp)
 	u8 echo;
 	u32 cid;
 	u8 opcode;
-	int spqe_cnt = 0;
+	int rc, spqe_cnt = 0;
 	struct bnx2x_queue_sp_obj *q_obj;
 	struct bnx2x_func_sp_obj *f_obj = &bp->func_obj;
 	struct bnx2x_raw_obj *rss_raw = &bp->rss_conf_obj.raw;
@@ -4861,12 +4950,23 @@ static void bnx2x_eq_int(struct bnx2x *bp)
 
 		elem = &bp->eq_ring[EQ_DESC(sw_cons)];
 
+		rc = bnx2x_iov_eq_sp_event(bp, elem);
+		if (!rc) {
+			DP(BNX2X_MSG_IOV, "bnx2x_iov_eq_sp_event returned %d\n",
+			   rc);
+			goto next_spqe;
+		}
 		cid = SW_CID(elem->message.data.cfc_del_event.cid);
 		opcode = elem->message.opcode;
 
 
 		/* handle eq element */
 		switch (opcode) {
+		case EVENT_RING_OPCODE_VF_PF_CHANNEL:
+			DP(BNX2X_MSG_IOV, "vf pf channel element on eq");
+			bnx2x_vf_mbx(bp, &elem->message.data.vf_pf_event);
+			continue;
+
 		case EVENT_RING_OPCODE_STAT_QUERY:
 			DP(BNX2X_MSG_SP | BNX2X_MSG_STATS,
 			   "got statistics comp event %d\n",
@@ -5032,50 +5132,67 @@ next_spqe:
 static void bnx2x_sp_task(struct work_struct *work)
 {
 	struct bnx2x *bp = container_of(work, struct bnx2x, sp_task.work);
-	u16 status;
 
-	status = bnx2x_update_dsb_idx(bp);
-/*	if (status == 0)				     */
-/*		BNX2X_ERR("spurious slowpath interrupt!\n"); */
+	DP(BNX2X_MSG_SP, "sp task invoked\n");
 
-	DP(BNX2X_MSG_SP, "got a slowpath interrupt (status 0x%x)\n", status);
+	/* make sure the atomic interrupt_occurred has been written */
+	smp_rmb();
+	if (atomic_read(&bp->interrupt_occurred)) {
 
-	/* HW attentions */
-	if (status & BNX2X_DEF_SB_ATT_IDX) {
-		bnx2x_attn_int(bp);
-		status &= ~BNX2X_DEF_SB_ATT_IDX;
-	}
+		/* what work needs to be performed? */
+		u16 status = bnx2x_update_dsb_idx(bp);
+		DP(BNX2X_MSG_SP, "status %x\n", status);
+
+		DP(BNX2X_MSG_SP, "setting interrupt_occurred to 0");
+		atomic_set(&bp->interrupt_occurred, 0);
 
-	/* SP events: STAT_QUERY and others */
-	if (status & BNX2X_DEF_SB_IDX) {
-		struct bnx2x_fastpath *fp = bnx2x_fcoe_fp(bp);
+		/* HW attentions */
+		if (status & BNX2X_DEF_SB_ATT_IDX) {
+			bnx2x_attn_int(bp);
+			status &= ~BNX2X_DEF_SB_ATT_IDX;
+		}
+
+		/* SP events: STAT_QUERY and others */
+		if (status & BNX2X_DEF_SB_IDX) {
+			struct bnx2x_fastpath *fp = bnx2x_fcoe_fp(bp);
 
 		if (FCOE_INIT(bp) &&
-		    (bnx2x_has_rx_work(fp) || bnx2x_has_tx_work(fp))) {
-			/*
-			 * Prevent local bottom-halves from running as
-			 * we are going to change the local NAPI list.
-			 */
-			local_bh_disable();
-			napi_schedule(&bnx2x_fcoe(bp, napi));
-			local_bh_enable();
+			    (bnx2x_has_rx_work(fp) || bnx2x_has_tx_work(fp))) {
+				/*
+				 * Prevent local bottom-halves from running as
+				 * we are going to change the local NAPI list.
+				 */
+				local_bh_disable();
+				napi_schedule(&bnx2x_fcoe(bp, napi));
+				local_bh_enable();
+			}
+
+			/* Handle EQ completions */
+			bnx2x_eq_int(bp);
+
+			bnx2x_ack_sb(bp, bp->igu_dsb_id, USTORM_ID,
+				     le16_to_cpu(bp->def_idx), IGU_INT_NOP, 1);
+
+			status &= ~BNX2X_DEF_SB_IDX;
 		}
 
-		/* Handle EQ completions */
-		bnx2x_eq_int(bp);
+		/* if status is non-zero then perhaps something went wrong */
+		if (unlikely(status))
+			DP(BNX2X_MSG_SP,
+			   "got an unknown interrupt! (status 0x%x)\n", status);
 
-		bnx2x_ack_sb(bp, bp->igu_dsb_id, USTORM_ID,
-			le16_to_cpu(bp->def_idx), IGU_INT_NOP, 1);
+		/* ack status block only if something was actually handled */
+		bnx2x_ack_sb(bp, bp->igu_dsb_id, ATTENTION_ID,
+			     le16_to_cpu(bp->def_att_idx), IGU_INT_ENABLE, 1);
 
-		status &= ~BNX2X_DEF_SB_IDX;
 	}
 
-	if (unlikely(status))
-		DP(BNX2X_MSG_SP, "got an unknown interrupt! (status 0x%x)\n",
-		   status);
-
-	bnx2x_ack_sb(bp, bp->igu_dsb_id, ATTENTION_ID,
-	     le16_to_cpu(bp->def_att_idx), IGU_INT_ENABLE, 1);
+	/* must be called after the EQ processing (since eq leads to sriov
+	 * ramrod completion flows).
+	 * This flow may have been scheduled by the arrival of a ramrod
+	 * completion, or by the sriov code rescheduling itself.
+	 */
+	bnx2x_iov_sp_task(bp);
 
 	/* afex - poll to check if VIFSET_ACK should be sent to MFW */
 	if (test_and_clear_bit(BNX2X_AFEX_PENDING_VIFSET_MCP_ACK,
@@ -5108,7 +5225,10 @@ irqreturn_t bnx2x_msix_sp_int(int irq, void *dev_instance)
 		rcu_read_unlock();
 	}
 
-	queue_delayed_work(bnx2x_wq, &bp->sp_task, 0);
+	/* schedule sp task to perform default status block work, ack
+	 * attentions and enable interrupts.
+	 */
+	bnx2x_schedule_sp_task(bp);
 
 	return IRQ_HANDLED;
 }
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index bafb7fb..5a55bf6 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -516,6 +516,16 @@ static void bnx2x_vf_set_bars(struct bnx2x *bp, struct bnx2x_virtf *vf)
 	}
 }
 
+void bnx2x_iov_remove_one(struct bnx2x *bp)
+{
+	/* if SRIOV is not enabled there's nothing to do */
+	if (!IS_SRIOV(bp))
+		return;
+
+	/* free vf database */
+	__bnx2x_iov_free_vfdb(bp);
+}
+
 void bnx2x_iov_free_mem(struct bnx2x *bp)
 {
 	int i;
@@ -689,12 +699,242 @@ int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line)
 	return line + i;
 }
 
-void bnx2x_iov_remove_one(struct bnx2x *bp)
+static u8 bnx2x_iov_is_vf_cid(struct bnx2x *bp, u16 cid)
 {
-	/* if SRIOV is not enabled there's nothing to do */
+	return ((cid >= BNX2X_FIRST_VF_CID) &&
+		((cid - BNX2X_FIRST_VF_CID) < BNX2X_VF_CIDS));
+}
+
+static
+void bnx2x_vf_handle_classification_eqe(struct bnx2x *bp,
+					struct bnx2x_vf_queue *vfq,
+					union event_ring_elem *elem)
+{
+	unsigned long ramrod_flags = 0;
+	int rc = 0;
+
+	/* Always push next commands out, don't wait here */
+	set_bit(RAMROD_CONT, &ramrod_flags);
+
+	switch (elem->message.data.eth_event.echo >> BNX2X_SWCID_SHIFT) {
+	case BNX2X_FILTER_MAC_PENDING:
+		rc = vfq->mac_obj.complete(bp, &vfq->mac_obj, elem,
+					   &ramrod_flags);
+		break;
+	case BNX2X_FILTER_VLAN_PENDING:
+		rc = vfq->vlan_obj.complete(bp, &vfq->vlan_obj, elem,
+					    &ramrod_flags);
+		break;
+	default:
+		BNX2X_ERR("Unsupported classification command: %d\n",
+			  elem->message.data.eth_event.echo);
+		return;
+	}
+	if (rc < 0)
+		BNX2X_ERR("Failed to schedule new commands: %d\n", rc);
+	else if (rc > 0)
+		DP(BNX2X_MSG_IOV, "Scheduled next pending commands...\n");
+}
+
+static
+void bnx2x_vf_handle_mcast_eqe(struct bnx2x *bp,
+			       struct bnx2x_virtf *vf)
+{
+	struct bnx2x_mcast_ramrod_params rparam = {NULL};
+	int rc;
+
+	rparam.mcast_obj = &vf->mcast_obj;
+
+	vf->mcast_obj.raw.clear_pending(&vf->mcast_obj.raw);
+
+	/* If there are pending mcast commands - send them */
+	if (vf->mcast_obj.check_pending(&vf->mcast_obj)) {
+		rc = bnx2x_config_mcast(bp, &rparam, BNX2X_MCAST_CMD_CONT);
+		if (rc < 0)
+			BNX2X_ERR("Failed to send pending mcast commands: %d\n",
+				  rc);
+	}
+}
+
+static
+void bnx2x_vf_handle_filters_eqe(struct bnx2x *bp,
+				 struct bnx2x_virtf *vf)
+{
+	smp_mb__before_clear_bit();
+	clear_bit(BNX2X_FILTER_RX_MODE_PENDING, &vf->filter_state);
+	smp_mb__after_clear_bit();
+}
+
+int bnx2x_iov_eq_sp_event(struct bnx2x *bp, union event_ring_elem *elem)
+{
+	struct bnx2x_virtf *vf;
+	int qidx = 0, abs_vfid;
+	u8 opcode;
+	u16 cid = 0xffff;
+
+	if (!IS_SRIOV(bp))
+		return 1;
+
+	/* first get the cid - the only events we handle here are cfc-delete
+	 * and set-mac completion
+	 */
+	opcode = elem->message.opcode;
+
+	switch (opcode) {
+	case EVENT_RING_OPCODE_CFC_DEL:
+		cid = SW_CID((__force __le32)
+			     elem->message.data.cfc_del_event.cid);
+		DP(BNX2X_MSG_IOV, "checking cfc-del comp cid=%d\n", cid);
+		break;
+	case EVENT_RING_OPCODE_CLASSIFICATION_RULES:
+	case EVENT_RING_OPCODE_MULTICAST_RULES:
+	case EVENT_RING_OPCODE_FILTERS_RULES:
+		cid = (elem->message.data.eth_event.echo &
+		       BNX2X_SWCID_MASK);
+		DP(BNX2X_MSG_IOV, "checking filtering comp cid=%d\n", cid);
+		break;
+	case EVENT_RING_OPCODE_VF_FLR:
+		abs_vfid = elem->message.data.vf_flr_event.vf_id;
+		DP(BNX2X_MSG_IOV, "Got VF FLR notification abs_vfid=%d\n",
+		   abs_vfid);
+		goto get_vf;
+	case EVENT_RING_OPCODE_MALICIOUS_VF:
+		abs_vfid = elem->message.data.malicious_vf_event.vf_id;
+		DP(BNX2X_MSG_IOV, "Got VF MALICIOUS notification abs_vfid=%d\n",
+		   abs_vfid);
+		goto get_vf;
+	default:
+		return 1;
+	}
+
+	/* check if the cid is the VF range */
+	if (!bnx2x_iov_is_vf_cid(bp, cid)) {
+		DP(BNX2X_MSG_IOV, "cid is outside vf range: %d\n", cid);
+		return 1;
+	}
+	/* extract vf and rxq index from vf_cid - relies on the following:
+	 * 1. vfid on cid reflects the true abs_vfid
+	 * 2. the max number of VFs (per path) is 64
+	 */
+	qidx = cid & ((1 << BNX2X_VF_CID_WND)-1);
+	abs_vfid = (cid >> BNX2X_VF_CID_WND) & (BNX2X_MAX_NUM_OF_VFS-1);
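+	/* e.g. with a 2-bit window, cid 0x9 decodes to abs_vfid 2, qidx 1 */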
+get_vf:
+	vf = bnx2x_vf_by_abs_fid(bp, abs_vfid);
+
+	if (!vf) {
+		BNX2X_ERR("EQ completion for unknown VF, cid %d, abs_vfid %d\n",
+			  cid, abs_vfid);
+		return 0;
+	}
+
+	switch (opcode) {
+	case EVENT_RING_OPCODE_CFC_DEL:
+		DP(BNX2X_MSG_IOV, "got VF [%d:%d] cfc delete ramrod\n",
+		   vf->abs_vfid, qidx);
+		vfq_get(vf, qidx)->sp_obj.complete_cmd(bp,
+						       &vfq_get(vf,
+								qidx)->sp_obj,
+						       BNX2X_Q_CMD_CFC_DEL);
+		break;
+	case EVENT_RING_OPCODE_CLASSIFICATION_RULES:
+		DP(BNX2X_MSG_IOV, "got VF [%d:%d] set mac/vlan ramrod\n",
+		   vf->abs_vfid, qidx);
+		bnx2x_vf_handle_classification_eqe(bp, vfq_get(vf, qidx), elem);
+		break;
+	case EVENT_RING_OPCODE_MULTICAST_RULES:
+		DP(BNX2X_MSG_IOV, "got VF [%d:%d] set mcast ramrod\n",
+		   vf->abs_vfid, qidx);
+		bnx2x_vf_handle_mcast_eqe(bp, vf);
+		break;
+	case EVENT_RING_OPCODE_FILTERS_RULES:
+		DP(BNX2X_MSG_IOV, "got VF [%d:%d] set rx-mode ramrod\n",
+		   vf->abs_vfid, qidx);
+		bnx2x_vf_handle_filters_eqe(bp, vf);
+		break;
+	case EVENT_RING_OPCODE_VF_FLR:
+		DP(BNX2X_MSG_IOV, "got VF [%d] FLR notification\n",
+		   vf->abs_vfid);
+		/* Do nothing for now */
+		break;
+	case EVENT_RING_OPCODE_MALICIOUS_VF:
+		DP(BNX2X_MSG_IOV, "got VF [%d] MALICIOUS notification\n",
+		   vf->abs_vfid);
+		/* Do nothing for now */
+		break;
+	}
+	/* SRIOV: reschedule any 'in_progress' operations */
+	bnx2x_iov_sp_event(bp, cid, false);
+
+	return 0;
+}
+
+static struct bnx2x_virtf *bnx2x_vf_by_cid(struct bnx2x *bp, int vf_cid)
+{
+	/* extract the vf from vf_cid - relies on the following:
+	 * 1. vfid on cid reflects the true abs_vfid
+	 * 2. the max number of VFs (per path) is 64
+	 */
+	int abs_vfid = (vf_cid >> BNX2X_VF_CID_WND) & (BNX2X_MAX_NUM_OF_VFS-1);
+	return bnx2x_vf_by_abs_fid(bp, abs_vfid);
+}
+
+void bnx2x_iov_set_queue_sp_obj(struct bnx2x *bp, int vf_cid,
+				struct bnx2x_queue_sp_obj **q_obj)
+{
+	struct bnx2x_virtf *vf;
+
 	if (!IS_SRIOV(bp))
 		return;
 
-	/* free vf database */
-	__bnx2x_iov_free_vfdb(bp);
+	vf = bnx2x_vf_by_cid(bp, vf_cid);
+
+	if (vf) {
+		/* extract queue index from vf_cid - relies on the following:
+		 * 1. vfid on cid reflects the true abs_vfid
+		 * 2. the max number of VFs (per path) is 64
+		 */
+		int q_index = vf_cid & ((1 << BNX2X_VF_CID_WND)-1);
+		*q_obj = &bnx2x_vfq(vf, q_index, sp_obj);
+	} else {
+		BNX2X_ERR("No vf matching cid %d\n", vf_cid);
+	}
+}
+
+void bnx2x_iov_sp_event(struct bnx2x *bp, int vf_cid, bool queue_work)
+{
+	struct bnx2x_virtf *vf;
+
+	/* check if the cid is the VF range */
+	if (!IS_SRIOV(bp) || !bnx2x_iov_is_vf_cid(bp, vf_cid))
+		return;
+
+	vf = bnx2x_vf_by_cid(bp, vf_cid);
+	if (vf) {
+
+		/* set in_progress flag */
+		atomic_set(&vf->op_in_progress, 1);
+		if (queue_work)
+			queue_delayed_work(bnx2x_wq, &bp->sp_task, 0);
+	}
+}
+
+void bnx2x_iov_sp_task(struct bnx2x *bp)
+{
+	int i;
+
+	if (!IS_SRIOV(bp))
+		return;
+	/* Iterate over all VFs and invoke state transition for VFs with
+	 * 'in-progress' slow-path operations
+	 */
+	DP(BNX2X_MSG_IOV, "searching for pending vf operations\n");
+	for_each_vf(bp, i) {
+		struct bnx2x_virtf *vf = BP_VF(bp, i);
+
+		if (!list_empty(&vf->op_list_head) &&
+		    atomic_read(&vf->op_in_progress)) {
+			DP(BNX2X_MSG_IOV, "running pending op for vf %d\n", i);
+			bnx2x_vfop_cur(bp, vf)->transition(bp, vf);
+		}
+	}
 }
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index c49b6e0..7c46034 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -83,6 +83,84 @@ union bnx2x_vfop_params {
 
 /* forward */
 struct bnx2x_virtf;
+
+/* VFOP definitions */
+typedef void (*vfop_handler_t)(struct bnx2x *bp, struct bnx2x_virtf *vf);
+
+/* VFOP queue filters command additional arguments */
+struct bnx2x_vfop_filter {
+	struct list_head link;
+	int type;
+#define BNX2X_VFOP_FILTER_MAC	1
+#define BNX2X_VFOP_FILTER_VLAN	2
+
+	bool add;
+	u8 *mac;
+	u16 vid;
+};
+
+struct bnx2x_vfop_filters {
+	int add_cnt;
+	struct list_head head;
+	struct bnx2x_vfop_filter filters[];
+};
+
+/* transient list allocated, built and saved until its
+ * passed to the SP-VERBs layer.
+ */
+struct bnx2x_vfop_args_mcast {
+	int mc_num;
+	struct bnx2x_mcast_list_elem *mc;
+};
+
+struct bnx2x_vfop_args_qctor {
+	int	qid;
+	u16	sb_idx;
+};
+
+struct bnx2x_vfop_args_qdtor {
+	int	qid;
+	struct eth_context *cxt;
+};
+
+struct bnx2x_vfop_args_defvlan {
+	int	qid;
+	bool	enable;
+	u16	vid;
+	u8	prio;
+};
+
+struct bnx2x_vfop_args_qx {
+	int	qid;
+	bool	en_add;
+};
+
+struct bnx2x_vfop_args_filters {
+	struct bnx2x_vfop_filters *multi_filter;
+	atomic_t *credit;	/* non NULL means 'don't consume credit' */
+};
+
+union bnx2x_vfop_args {
+	struct bnx2x_vfop_args_mcast	mc_list;
+	struct bnx2x_vfop_args_qctor	qctor;
+	struct bnx2x_vfop_args_qdtor	qdtor;
+	struct bnx2x_vfop_args_defvlan	defvlan;
+	struct bnx2x_vfop_args_qx	qx;
+	struct bnx2x_vfop_args_filters	filters;
+};
+
+struct bnx2x_vfop {
+	struct list_head link;
+	int			rc;		/* return code */
+	int			state;		/* next state */
+	union bnx2x_vfop_args	args;		/* extra arguments */
+	union bnx2x_vfop_params *op_p;		/* ramrod params */
+
+	/* state machine callbacks */
+	vfop_handler_t transition;
+	vfop_handler_t done;
+};
+
 /* vf context */
 struct bnx2x_virtf {
 	u16 cfg_flags;
@@ -282,6 +360,12 @@ struct bnx2x_vfdb {
 	u32 flrd_vfs[FLRD_VFS_DWORDS];
 };
 
+/* queue access */
+static inline struct bnx2x_vf_queue *vfq_get(struct bnx2x_virtf *vf, u8 index)
+{
+	return &(vf->vfqs[index]);
+}
+
 static inline u8 vf_igu_sb(struct bnx2x_virtf *vf, u16 sb_idx)
 {
 	return vf->igu_base_id + sb_idx;
@@ -296,7 +380,22 @@ int bnx2x_iov_alloc_mem(struct bnx2x *bp);
 int bnx2x_iov_nic_init(struct bnx2x *bp);
 void bnx2x_iov_init_dq(struct bnx2x *bp);
 void bnx2x_iov_init_dmae(struct bnx2x *bp);
+void bnx2x_iov_set_queue_sp_obj(struct bnx2x *bp, int vf_cid,
+				struct bnx2x_queue_sp_obj **q_obj);
+void bnx2x_iov_sp_event(struct bnx2x *bp, int vf_cid, bool queue_work);
+int bnx2x_iov_eq_sp_event(struct bnx2x *bp, union event_ring_elem *elem);
+void bnx2x_iov_sp_task(struct bnx2x *bp);
+/* global vf mailbox routines */
+void bnx2x_vf_mbx(struct bnx2x *bp, struct vf_pf_event_data *vfpf_event);
 void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid);
+static inline struct bnx2x_vfop *bnx2x_vfop_cur(struct bnx2x *bp,
+						struct bnx2x_virtf *vf)
+{
+	WARN(!mutex_is_locked(&vf->op_mutex), "about to access vf op linked list but mutex was not locked!");
+	WARN_ON(list_empty(&vf->op_list_head));
+	return list_first_entry(&vf->op_list_head, struct bnx2x_vfop, link);
+}
+
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 /* VF FLR helpers */
 int bnx2x_vf_flr_clnup_epilog(struct bnx2x *bp, u8 abs_vfid);
@@ -306,4 +405,7 @@ void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
 void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
 		     u16 type, u16 length);
 void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list);
+
+bool bnx2x_tlv_supported(u16 tlvtype);
+
 #endif /* bnx2x_sriov.h */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 52daa3bdf..7c8e87d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -78,6 +78,24 @@ void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list)
 	   tlv->type, tlv->length);
 }
 
+/* test whether we support a tlv type */
+bool bnx2x_tlv_supported(u16 tlvtype)
+{
+	return CHANNEL_TLV_NONE < tlvtype && tlvtype < CHANNEL_TLV_MAX;
+}
+
+static inline int bnx2x_pfvf_status_codes(int rc)
+{
+	switch (rc) {
+	case 0:
+		return PFVF_STATUS_SUCCESS;
+	case -ENOMEM:
+		return PFVF_STATUS_NO_RESOURCE;
+	default:
+		return PFVF_STATUS_FAILURE;
+	}
+}
+
 /* General service functions */
 static void storm_memset_vf_mbx_ack(struct bnx2x *bp, u16 abs_fid)
 {
@@ -114,3 +132,144 @@ void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid)
 	/* enable the VF access to the mailbox */
 	bnx2x_vf_enable_access(bp, abs_vfid);
 }
+
+/* this works only on !E1x (i.e. E2 and newer) */
+static int bnx2x_copy32_vf_dmae(struct bnx2x *bp, u8 from_vf,
+				dma_addr_t pf_addr, u8 vfid, u32 vf_addr_hi,
+				u32 vf_addr_lo, u32 len32)
+{
+	struct dmae_command dmae;
+
+	if (CHIP_IS_E1x(bp)) {
+		BNX2X_ERR("Chip revision does not support VFs\n");
+		return DMAE_NOT_RDY;
+	}
+
+	if (!bp->dmae_ready) {
+		BNX2X_ERR("DMAE is not ready, can not copy\n");
+		return DMAE_NOT_RDY;
+	}
+
+	/* set opcode and fixed command fields */
+	bnx2x_prep_dmae_with_comp(bp, &dmae, DMAE_SRC_PCI, DMAE_DST_PCI);
+
+	if (from_vf) {
+		dmae.opcode_iov = (vfid << DMAE_COMMAND_SRC_VFID_SHIFT) |
+			(DMAE_SRC_VF << DMAE_COMMAND_SRC_VFPF_SHIFT) |
+			(DMAE_DST_PF << DMAE_COMMAND_DST_VFPF_SHIFT);
+
+		dmae.opcode |= (DMAE_C_DST << DMAE_COMMAND_C_FUNC_SHIFT);
+
+		dmae.src_addr_lo = vf_addr_lo;
+		dmae.src_addr_hi = vf_addr_hi;
+		dmae.dst_addr_lo = U64_LO(pf_addr);
+		dmae.dst_addr_hi = U64_HI(pf_addr);
+	} else {
+		dmae.opcode_iov = (vfid << DMAE_COMMAND_DST_VFID_SHIFT) |
+			(DMAE_DST_VF << DMAE_COMMAND_DST_VFPF_SHIFT) |
+			(DMAE_SRC_PF << DMAE_COMMAND_SRC_VFPF_SHIFT);
+
+		dmae.opcode |= (DMAE_C_SRC << DMAE_COMMAND_C_FUNC_SHIFT);
+
+		dmae.src_addr_lo = U64_LO(pf_addr);
+		dmae.src_addr_hi = U64_HI(pf_addr);
+		dmae.dst_addr_lo = vf_addr_lo;
+		dmae.dst_addr_hi = vf_addr_hi;
+	}
+	dmae.len = len32;
+
+	bnx2x_dp_dmae(bp, &dmae, BNX2X_MSG_DMAE);
+
+	/* issue the command and wait for completion */
+	return bnx2x_issue_dmae_with_comp(bp, &dmae);
+}
+
+/* dispatch request */
+static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				  struct bnx2x_vf_mbx *mbx)
+{
+	int i;
+
+	/* check if tlv type is known */
+	if (bnx2x_tlv_supported(mbx->first_tlv.tl.type)) {
+
+		/* switch on the opcode */
+		switch (mbx->first_tlv.tl.type) {
+		}
+
+	/* unknown TLV - this may belong to a VF driver from the future - a
+	 * version written after this PF driver was written, which supports
+	 * features unknown as of yet. Too bad since we don't support them.
+	 * Or this may be because someone wrote a crappy VF driver and is
+	 * sending garbage over the channel.
+	 */
+	} else {
+		BNX2X_ERR("unknown TLV. type %d length %d. first 20 bytes of mailbox buffer:\n",
+			  mbx->first_tlv.tl.type, mbx->first_tlv.tl.length);
+		for (i = 0; i < 20; i++)
+			DP_CONT(BNX2X_MSG_IOV, "%x ",
+				mbx->msg->req.tlv_buf_size.tlv_buffer[i]);
+	}
+}
+
+/* handle new vf-pf message */
+void bnx2x_vf_mbx(struct bnx2x *bp, struct vf_pf_event_data *vfpf_event)
+{
+	struct bnx2x_virtf *vf;
+	struct bnx2x_vf_mbx *mbx;
+	u8 vf_idx;
+	int rc;
+
+	DP(BNX2X_MSG_IOV,
+	   "vf pf event received: vfid %d, address_hi %x, address lo %x",
+	   vfpf_event->vf_id, vfpf_event->msg_addr_hi, vfpf_event->msg_addr_lo);
+	/* Sanity checks - consider removing later */
+
+	/* check if the vf_id is valid */
+	if (vfpf_event->vf_id - BP_VFDB(bp)->sriov.first_vf_in_pf >
+	    BNX2X_NR_VIRTFN(bp)) {
+		BNX2X_ERR("Illegal vf_id %d max allowed: %d\n",
+			  vfpf_event->vf_id, BNX2X_NR_VIRTFN(bp));
+		goto mbx_done;
+	}
+	vf_idx = bnx2x_vf_idx_by_abs_fid(bp, vfpf_event->vf_id);
+
+	mbx = BP_VF_MBX(bp, vf_idx);
+
+	/* verify an event is not currently being processed -
+	 * debug failsafe only
+	 */
+	if (mbx->flags & VF_MSG_INPROCESS) {
+		BNX2X_ERR("Previous message is still being processed, vf_id %d\n",
+			  vfpf_event->vf_id);
+		goto mbx_done;
+	}
+
+	vf = BP_VF(bp, vf_idx);
+
+	/* save the VF message address */
+	mbx->vf_addr_hi = vfpf_event->msg_addr_hi;
+	mbx->vf_addr_lo = vfpf_event->msg_addr_lo;
+	DP(BNX2X_MSG_IOV, "mailbox vf address hi 0x%x, lo 0x%x, offset 0x%x",
+	   mbx->vf_addr_hi, mbx->vf_addr_lo, mbx->first_tlv.resp_msg_offset);
+
+	/* dmae to get the VF request */
+	rc = bnx2x_copy32_vf_dmae(bp, true, mbx->msg_mapping, vf->abs_vfid,
+				  mbx->vf_addr_hi, mbx->vf_addr_lo,
+				  sizeof(union vfpf_tlvs)/4);
+	if (rc) {
+		BNX2X_ERR("Failed to copy request VF %d\n", vf->abs_vfid);
+		goto mbx_error;
+	}
+
+	/* process the VF message header */
+	mbx->first_tlv = mbx->msg->req.first_tlv;
+
+	/* dispatch the request (will prepare the response) */
+	bnx2x_vf_mbx_request(bp, vf, mbx);
+	goto mbx_done;
+
+mbx_error:
+mbx_done:
+	return;
+}
-- 
1.7.9.GIT

* [PATCH net-next v3 12/22] bnx2x: Support of PF driver of a VF acquire request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (10 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 11/22] bnx2x: Infrastructure for VF <-> PF request on PF side Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 13/22] bnx2x: Support of PF driver of a VF init request Ariel Elior
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

When a VF is probed by the VF driver, the VF driver sends an
'acquire' request over the VF <-> PF channel for the resources
it needs to operate (interrupts, queues, etc).
The PF driver either ratifies the request and allocates the resources,
responds with the maximum values it will allow the VF to acquire,
or fails the request entirely if there is a problem.
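
Schematically, the PF-side decision is (a hedged sketch with
illustrative names, not the patch code):

	/* sketch: PF handling of a VF 'acquire' request (illustrative) */
	int pf_handle_acquire(struct pf_ctx *pf, struct vf_ctx *vf,
			      struct resc_request *req)
	{
		/* ratify: the request fits the per-VF static limits */
		if (req_fits_limits(pf, vf, req)) {
			allocate_resources(pf, vf, req);
			return STATUS_SUCCESS;
		}

		/* otherwise answer with the maximum values the PF will
		 * allow, so the VF may retry with a smaller request
		 */
		if (fill_response_with_max_values(pf, vf) == 0)
			return STATUS_NO_RESOURCE;

		/* something is fundamentally wrong - fail the request */
		return STATUS_FAILURE;
	}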

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c    |   28 +++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h    |    9 +
 .../net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c    |   13 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c  |  200 +++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h  |   43 ++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c   |  201 ++++++++++++++++++++
 6 files changed, 483 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index e8a05f4..7966d7f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -87,6 +87,34 @@ static inline void bnx2x_move_fp(struct bnx2x *bp, int from, int to)
 	to_fp->txdata_ptr[0] = &bp->bnx2x_txq[new_txdata_index];
 }
 
+/**
+ * bnx2x_fill_fw_str - Fill buffer with FW version string.
+ *
+ * @bp:        driver handle
+ * @buf:       character buffer to fill with the fw name
+ * @buf_len:   length of the above buffer
+ *
+ */
+void bnx2x_fill_fw_str(struct bnx2x *bp, char *buf, size_t buf_len)
+{
+	if (IS_PF(bp)) {
+		u8 phy_fw_ver[PHY_FW_VER_LEN];
+
+		phy_fw_ver[0] = '\0';
+		bnx2x_get_ext_phy_fw_version(&bp->link_params,
+					     phy_fw_ver, PHY_FW_VER_LEN);
+		strlcpy(buf, bp->fw_ver, buf_len);
+		snprintf(buf + strlen(bp->fw_ver), buf_len - strlen(bp->fw_ver),
+			 "bc %d.%d.%d%s%s",
+			 (bp->common.bc_ver & 0xff0000) >> 16,
+			 (bp->common.bc_ver & 0xff00) >> 8,
+			 (bp->common.bc_ver & 0xff),
+			 ((phy_fw_ver[0] != '\0') ? " phy " : ""), phy_fw_ver);
+	} else {
+		strlcpy(buf, bp->acquire_resp.pfdev_info.fw_ver, buf_len);
+	}
+}
+
 int load_count[2][3] = { {0} }; /* per-path: 0-common, 1-port0, 2-port1 */
 
 /* free skb in the packet ring at pos idx
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 9ee67fa..cd1eaff 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -1401,4 +1401,13 @@ static inline bool bnx2x_is_valid_ether_addr(struct bnx2x *bp, u8 *addr)
 	return false;
 }
 
+/**
+ * bnx2x_fill_fw_str - Fill buffer with FW version string.
+ *
+ * @bp:        driver handle
+ * @buf:       character buffer to fill with the fw name
+ * @buf_len:   length of the above buffer
+ *
+ */
+void bnx2x_fill_fw_str(struct bnx2x *bp, char *buf, size_t buf_len);
 #endif /* BNX2X_CMN_H */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index b7c82f9..292634f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -817,21 +817,12 @@ static void bnx2x_get_drvinfo(struct net_device *dev,
 			      struct ethtool_drvinfo *info)
 {
 	struct bnx2x *bp = netdev_priv(dev);
-	u8 phy_fw_ver[PHY_FW_VER_LEN];
 
 	strlcpy(info->driver, DRV_MODULE_NAME, sizeof(info->driver));
 	strlcpy(info->version, DRV_MODULE_VERSION, sizeof(info->version));
 
-	phy_fw_ver[0] = '\0';
-	bnx2x_get_ext_phy_fw_version(&bp->link_params,
-				     phy_fw_ver, PHY_FW_VER_LEN);
-	strlcpy(info->fw_version, bp->fw_ver, sizeof(info->fw_version));
-	snprintf(info->fw_version + strlen(bp->fw_ver), 32 - strlen(bp->fw_ver),
-		 "bc %d.%d.%d%s%s",
-		 (bp->common.bc_ver & 0xff0000) >> 16,
-		 (bp->common.bc_ver & 0xff00) >> 8,
-		 (bp->common.bc_ver & 0xff),
-		 ((phy_fw_ver[0] != '\0') ? " phy " : ""), phy_fw_ver);
+	bnx2x_fill_fw_str(bp, info->fw_version, sizeof(info->fw_version));
+
 	strlcpy(info->bus_info, pci_name(bp->pdev), sizeof(info->bus_info));
 	info->n_stats = BNX2X_NUM_STATS;
 	info->testinfo_len = BNX2X_NUM_TESTS(bp);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 5a55bf6..0c499c8 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -591,6 +591,64 @@ alloc_mem_err:
 	return -ENOMEM;
 }
 
+static void bnx2x_vfq_init(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			   struct bnx2x_vf_queue *q)
+{
+	u8 cl_id = vfq_cl_id(vf, q);
+	u8 func_id = FW_VF_HANDLE(vf->abs_vfid);
+
+	unsigned long q_type = 0;
+	set_bit(BNX2X_Q_TYPE_HAS_TX, &q_type);
+	set_bit(BNX2X_Q_TYPE_HAS_RX, &q_type);
+
+	/* Queue State object */
+	bnx2x_init_queue_obj(bp, &q->sp_obj,
+			     cl_id, &q->cid, 1, func_id,
+			     bnx2x_vf_sp(bp, vf, q_data),
+			     bnx2x_vf_sp_map(bp, vf, q_data),
+			     q_type);
+
+	DP(BNX2X_MSG_IOV,
+	   "initialized vf %d's queue object. func id set to %d\n",
+	   vf->abs_vfid, q->sp_obj.func_id);
+
+	/* mac/vlan objects are per queue, but only those
+	 * that belong to the leading queue are initialized
+	 */
+	if (vfq_is_leading(q)) {
+
+		/* mac*/
+		bnx2x_init_mac_obj(bp, &q->mac_obj,
+				   cl_id, q->cid, func_id,
+				   bnx2x_vf_sp(bp, vf, mac_rdata),
+				   bnx2x_vf_sp_map(bp, vf, mac_rdata),
+				   BNX2X_FILTER_MAC_PENDING,
+				   &vf->filter_state,
+				   BNX2X_OBJ_TYPE_RX_TX,
+				   &bp->macs_pool);
+		/* vlan */
+		bnx2x_init_vlan_obj(bp, &q->vlan_obj,
+				    cl_id, q->cid, func_id,
+				    bnx2x_vf_sp(bp, vf, vlan_rdata),
+				    bnx2x_vf_sp_map(bp, vf, vlan_rdata),
+				    BNX2X_FILTER_VLAN_PENDING,
+				    &vf->filter_state,
+				    BNX2X_OBJ_TYPE_RX_TX,
+				    &bp->vlans_pool);
+
+		/* mcast */
+		bnx2x_init_mcast_obj(bp, &vf->mcast_obj, cl_id,
+				     q->cid, func_id, func_id,
+				     bnx2x_vf_sp(bp, vf, mcast_rdata),
+				     bnx2x_vf_sp_map(bp, vf, mcast_rdata),
+				     BNX2X_FILTER_MCAST_PENDING,
+				     &vf->filter_state,
+				     BNX2X_OBJ_TYPE_RX_TX);
+
+		vf->leading_rss = cl_id;
+	}
+}
+
 /* called by bnx2x_nic_load */
 int bnx2x_iov_nic_init(struct bnx2x *bp)
 {
@@ -938,3 +996,145 @@ void bnx2x_iov_sp_task(struct bnx2x *bp)
 		}
 	}
 }
+
+u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	return min_t(u8, min_t(u8, vf_sb_count(vf), BNX2X_CIDS_PER_VF),
+		     BNX2X_VF_MAX_QUEUES);
+}
+
+static
+int bnx2x_vf_chk_avail_resc(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			    struct vf_pf_resc_request *req_resc)
+{
+	u8 rxq_cnt = vf_rxq_count(vf) ? : bnx2x_vf_max_queue_cnt(bp, vf);
+	u8 txq_cnt = vf_txq_count(vf) ? : bnx2x_vf_max_queue_cnt(bp, vf);
+
+	return ((req_resc->num_rxqs <= rxq_cnt) &&
+		(req_resc->num_txqs <= txq_cnt) &&
+		(req_resc->num_sbs <= vf_sb_count(vf))   &&
+		(req_resc->num_mac_filters <= vf_mac_rules_cnt(vf)) &&
+		(req_resc->num_vlan_filters <= vf_vlan_rules_cnt(vf)));
+}
+
+/* CORE VF API */
+int bnx2x_vf_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
+		     struct vf_pf_resc_request *resc)
+{
+	int base_vf_cid = (BP_VFDB(bp)->sriov.first_vf_in_pf + vf->index) *
+		BNX2X_CIDS_PER_VF;
+
+	union cdu_context *base_cxt = (union cdu_context *)
+		BP_VF_CXT_PAGE(bp, base_vf_cid/ILT_PAGE_CIDS)->addr +
+		(base_vf_cid & (ILT_PAGE_CIDS-1));
+	int i;
+
+	/* if state is 'acquired' the VF was not released or FLR'd, in
+	 * this case the returned resources match the already acquired
+	 * resources. Verify that the requested numbers do not exceed
+	 * the already acquired numbers.
+	 */
+	if (vf->state == VF_ACQUIRED) {
+		DP(BNX2X_MSG_IOV, "VF[%d] Trying to re-acquire resources (VF was not released or FLR'd)\n",
+		   vf->abs_vfid);
+
+		if (!bnx2x_vf_chk_avail_resc(bp, vf, resc)) {
+			BNX2X_ERR("VF[%d] When re-acquiring resources, requested numbers must be <= then previously acquired numbers\n",
+				  vf->abs_vfid);
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	/* Otherwise vf state must be 'free' or 'reset' */
+	if (vf->state != VF_FREE && vf->state != VF_RESET) {
+		BNX2X_ERR("VF[%d] Can not acquire a VF with state %d\n",
+			  vf->abs_vfid, vf->state);
+		return -EINVAL;
+	}
+
+	/* static allocation:
+	 * the global maximum numbers are fixed per VF. Fail the request if
+	 * the requested numbers exceed these globals
+	 */
+	if (!bnx2x_vf_chk_avail_resc(bp, vf, resc)) {
+		DP(BNX2X_MSG_IOV,
+		   "cannot fulfill vf resource request. Placing maximal available values in response");
+		/* set the max resource in the vf */
+		return -ENOMEM;
+	}
+
+	/* Set resources counters - 0 request means max available */
+	vf_sb_count(vf) = resc->num_sbs;
+	vf_rxq_count(vf) = resc->num_rxqs ? : bnx2x_vf_max_queue_cnt(bp, vf);
+	vf_txq_count(vf) = resc->num_txqs ? : bnx2x_vf_max_queue_cnt(bp, vf);
+	if (resc->num_mac_filters)
+		vf_mac_rules_cnt(vf) = resc->num_mac_filters;
+	if (resc->num_vlan_filters)
+		vf_vlan_rules_cnt(vf) = resc->num_vlan_filters;
+
+	DP(BNX2X_MSG_IOV,
+	   "Fullfilling vf request: sb count %d, tx_count %d, rx_count %d, mac_rules_count %d, vlan_rules_count %d\n",
+	   vf_sb_count(vf), vf_rxq_count(vf),
+	   vf_txq_count(vf), vf_mac_rules_cnt(vf),
+	   vf_vlan_rules_cnt(vf));
+
+	/* Initialize the queues */
+	if (!vf->vfqs) {
+		DP(BNX2X_MSG_IOV, "vf->vfqs was not allocated");
+		return -EINVAL;
+	}
+
+	for_each_vfq(vf, i) {
+		struct bnx2x_vf_queue *q = vfq_get(vf, i);
+
+		if (!q) {
+			DP(BNX2X_MSG_IOV, "q number %d was not allocated", i);
+			return -EINVAL;
+		}
+
+		q->index = i;
+		q->cxt = &((base_cxt + i)->eth);
+		q->cid = BNX2X_FIRST_VF_CID + base_vf_cid + i;
+
+		DP(BNX2X_MSG_IOV, "VFQ[%d:%d]: index %d, cid 0x%x, cxt %p\n",
+		   vf->abs_vfid, i, q->index, q->cid, q->cxt);
+
+		/* init SP objects */
+		bnx2x_vfq_init(bp, vf, q);
+	}
+	vf->state = VF_ACQUIRED;
+	return 0;
+}
+
+void bnx2x_lock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			      enum channel_tlvs tlv)
+{
+	/* lock the channel */
+	mutex_lock(&vf->op_mutex);
+
+	/* record the locking op */
+	vf->op_current = tlv;
+
+	/* log the lock */
+	DP(BNX2X_MSG_IOV, "VF[%d]: vf pf channel locked by %d\n",
+	   vf->abs_vfid, tlv);
+}
+
+void bnx2x_unlock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				enum channel_tlvs expected_tlv)
+{
+	WARN(expected_tlv != vf->op_current,
+	     "lock mismatch: expected %d found %d", expected_tlv,
+	     vf->op_current);
+
+	/* lock the channel */
+	mutex_unlock(&vf->op_mutex);
+
+	/* log the unlock */
+	DP(BNX2X_MSG_IOV, "VF[%d]: vf pf channel unlocked by %d\n",
+	   vf->abs_vfid, vf->op_current);
+
+	/* record the locking op */
+	vf->op_current = CHANNEL_TLV_NONE;
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 7c46034..e494b95 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -19,9 +19,13 @@
 #ifndef BNX2X_SRIOV_H
 #define BNX2X_SRIOV_H
 
+#include "bnx2x_vfpf.h"
+#include "bnx2x_cmn.h"
+
 /* The bnx2x device structure holds vfdb structure described below.
  * The VF array is indexed by the relative vfid.
  */
+#define BNX2X_VF_MAX_QUEUES		16
 struct bnx2x_sriov {
 	u32 first_vf_in_pf;
 
@@ -257,6 +261,12 @@ struct bnx2x_virtf {
 #define for_each_vf(bp, var) \
 		for ((var) = 0; (var) < BNX2X_NR_VIRTFN(bp); (var)++)
 
+#define for_each_vfq(vf, var) \
+		for ((var) = 0; (var) < vf_rxq_count(vf); (var)++)
+
+#define for_each_vf_sb(vf, var) \
+		for ((var) = 0; (var) < vf_sb_count(vf); (var)++)
+
 #define HW_VF_HANDLE(bp, abs_vfid) \
 	(u16)(BP_ABS_FUNC((bp)) | (1<<3) |  ((u16)(abs_vfid) << 4))
 
@@ -265,6 +275,13 @@ struct bnx2x_virtf {
 #define FW_VF_HANDLE(abs_vfid)	\
 	(abs_vfid + FW_PF_MAX_HANDLE)
 
+/* locking and unlocking the channel mutex */
+void bnx2x_lock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			      enum channel_tlvs tlv);
+
+void bnx2x_unlock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				enum channel_tlvs expected_tlv);
+
 /* VF mail box (aka vf-pf channel) */
 
 /* a container for the bi-directional vf<-->pf messages.
@@ -366,11 +383,32 @@ static inline struct bnx2x_vf_queue *vfq_get(struct bnx2x_virtf *vf, u8 index)
 	return &(vf->vfqs[index]);
 }
 
+static inline bool vfq_is_leading(struct bnx2x_vf_queue *vfq)
+{
+	return (vfq->index == 0);
+}
+
+/* FW ids */
 static inline u8 vf_igu_sb(struct bnx2x_virtf *vf, u16 sb_idx)
 {
 	return vf->igu_base_id + sb_idx;
 }
 
+static inline u8 vf_hc_qzone(struct bnx2x_virtf *vf, u16 sb_idx)
+{
+	return vf_igu_sb(vf, sb_idx);
+}
+
+static inline u8 vfq_cl_id(struct bnx2x_virtf *vf, struct bnx2x_vf_queue *q)
+{
+	return vf->igu_base_id + q->index;
+}
+
+static inline u8 vfq_qzone_id(struct bnx2x_virtf *vf, struct bnx2x_vf_queue *q)
+{
+	return vfq_cl_id(vf, q);
+}
+
 /* global iov routines */
 int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line);
 int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param, int num_vfs_param);
@@ -388,6 +426,10 @@ void bnx2x_iov_sp_task(struct bnx2x *bp);
 /* global vf mailbox routines */
 void bnx2x_vf_mbx(struct bnx2x *bp, struct vf_pf_event_data *vfpf_event);
 void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid);
+/* acquire */
+int bnx2x_vf_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			  struct vf_pf_resc_request *resc);
+
 static inline struct bnx2x_vfop *bnx2x_vfop_cur(struct bnx2x *bp,
 						struct bnx2x_virtf *vf)
 {
@@ -397,6 +439,7 @@ static inline struct bnx2x_vfop *bnx2x_vfop_cur(struct bnx2x *bp,
 }
 
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
+u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf);
 /* VF FLR helpers */
 int bnx2x_vf_flr_clnup_epilog(struct bnx2x *bp, u8 abs_vfid);
 void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 7c8e87d..2f46972 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -184,6 +184,180 @@ static int bnx2x_copy32_vf_dmae(struct bnx2x *bp, u8 from_vf,
 	return bnx2x_issue_dmae_with_comp(bp, &dmae);
 }
 
+static void bnx2x_vf_mbx_resp(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vf_mbx *mbx = BP_VF_MBX(bp, vf->index);
+	u64 vf_addr;
+	dma_addr_t pf_addr;
+	u16 length, type;
+	int rc;
+
+	struct pfvf_general_resp_tlv *resp = &mbx->msg->resp.general_resp;
+
+	/* prepare response */
+	type = mbx->first_tlv.tl.type;
+	length = type == CHANNEL_TLV_ACQUIRE ?
+		sizeof(struct pfvf_acquire_resp_tlv) :
+		sizeof(struct pfvf_general_resp_tlv);
+	bnx2x_add_tlv(bp, resp, 0, type, length);
+	resp->hdr.status = bnx2x_pfvf_status_codes(vf->op_rc);
+	bnx2x_add_tlv(bp, resp, length, CHANNEL_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+	bnx2x_dp_tlv_list(bp, resp);
+
+	DP(BNX2X_MSG_IOV, "mailbox vf address hi 0x%x, lo 0x%x, offset 0x%x",
+	   mbx->vf_addr_hi, mbx->vf_addr_lo, mbx->first_tlv.resp_msg_offset);
+
+	/* send response */
+	vf_addr = HILO_U64(mbx->vf_addr_hi, mbx->vf_addr_lo) +
+		  mbx->first_tlv.resp_msg_offset;
+
+	pf_addr = mbx->msg_mapping +
+		offsetof(struct bnx2x_vf_mbx_msg, resp);
+
+	/* copy the response body, if there is one, before the header, as the vf
+	 * is sensitive to the header being written
+	 */
+	if (resp->hdr.tl.length > sizeof(u64)) {
+		length = resp->hdr.tl.length - sizeof(u64);
+		vf_addr += sizeof(u64);
+		pf_addr += sizeof(u64);
+		rc = bnx2x_copy32_vf_dmae(bp, false, pf_addr, vf->abs_vfid,
+					  U64_HI(vf_addr),
+					  U64_LO(vf_addr),
+					  length/4);
+		if (rc) {
+			BNX2X_ERR("Failed to copy response body to VF %d\n",
+				  vf->abs_vfid);
+			return;
+		}
+		vf_addr -= sizeof(u64);
+		pf_addr -= sizeof(u64);
+	}
+
+	/* ack the FW */
+	storm_memset_vf_mbx_ack(bp, vf->abs_vfid);
+	mmiowb();
+
+	/* mark the mailbox as no longer in-process before sending the response */
+	mbx->flags &= ~VF_MSG_INPROCESS;
+
+	/* copy the response header including status-done field,
+	 * must be last dmae, must be after FW is acked
+	 */
+	rc = bnx2x_copy32_vf_dmae(bp, false, pf_addr, vf->abs_vfid,
+				  U64_HI(vf_addr),
+				  U64_LO(vf_addr),
+				  sizeof(u64)/4);
+
+	/* unlock channel mutex */
+	bnx2x_unlock_vf_pf_channel(bp, vf, mbx->first_tlv.tl.type);
+
+	if (rc) {
+		BNX2X_ERR("Failed to copy response status to VF %d\n",
+			  vf->abs_vfid);
+	}
+	return;
+}
+
+static void bnx2x_vf_mbx_acquire_resp(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				      struct bnx2x_vf_mbx *mbx, int vfop_status)
+{
+	int i;
+	struct pfvf_acquire_resp_tlv *resp = &mbx->msg->resp.acquire_resp;
+	struct pf_vf_resc *resc = &resp->resc;
+	u8 status = bnx2x_pfvf_status_codes(vfop_status);
+
+	memset(resp, 0, sizeof(*resp));
+
+	/* fill in pfdev info */
+	resp->pfdev_info.chip_num = bp->common.chip_id;
+	resp->pfdev_info.db_size = (1 << BNX2X_DB_SHIFT);
+	resp->pfdev_info.indices_per_sb = HC_SB_MAX_INDICES_E2;
+	resp->pfdev_info.pf_cap = (PFVF_CAP_RSS |
+				   /* PFVF_CAP_DHC |*/ PFVF_CAP_TPA);
+	bnx2x_fill_fw_str(bp, resp->pfdev_info.fw_ver,
+			  sizeof(resp->pfdev_info.fw_ver));
+
+	if (status == PFVF_STATUS_NO_RESOURCE ||
+	    status == PFVF_STATUS_SUCCESS) {
+
+		/* set resource numbers; if status equals NO_RESOURCE these
+		 * are the maximum possible numbers
+		 */
+		resc->num_rxqs = vf_rxq_count(vf) ? :
+			bnx2x_vf_max_queue_cnt(bp, vf);
+		resc->num_txqs = vf_txq_count(vf) ? :
+			bnx2x_vf_max_queue_cnt(bp, vf);
+		resc->num_sbs = vf_sb_count(vf);
+		resc->num_mac_filters = vf_mac_rules_cnt(vf);
+		resc->num_vlan_filters = vf_vlan_rules_cnt(vf);
+		resc->num_mc_filters = 0;
+
+		if (status == PFVF_STATUS_SUCCESS) {
+			for_each_vfq(vf, i)
+				resc->hw_qid[i] =
+					vfq_qzone_id(vf, vfq_get(vf, i));
+
+			for_each_vf_sb(vf, i) {
+				resc->hw_sbs[i].hw_sb_id = vf_igu_sb(vf, i);
+				resc->hw_sbs[i].sb_qid = vf_hc_qzone(vf, i);
+			}
+		}
+	}
+
+	DP(BNX2X_MSG_IOV, "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x, db_size=%d, idx_per_sb=%d, pf_cap=0x%x\n"
+	   "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d, n_vlans-%d, n_mcs-%d, fw_ver: '%s'\n",
+	   vf->abs_vfid,
+	   resp->pfdev_info.chip_num,
+	   resp->pfdev_info.db_size,
+	   resp->pfdev_info.indices_per_sb,
+	   resp->pfdev_info.pf_cap,
+	   resc->num_rxqs,
+	   resc->num_txqs,
+	   resc->num_sbs,
+	   resc->num_mac_filters,
+	   resc->num_vlan_filters,
+	   resc->num_mc_filters,
+	   resp->pfdev_info.fw_ver);
+
+	DP_CONT(BNX2X_MSG_IOV, "hw_qids- [ ");
+	for (i = 0; i < vf_rxq_count(vf); i++)
+		DP_CONT(BNX2X_MSG_IOV, "%d ", resc->hw_qid[i]);
+	DP_CONT(BNX2X_MSG_IOV, "], sb_info- [ ");
+	for (i = 0; i < vf_sb_count(vf); i++)
+		DP_CONT(BNX2X_MSG_IOV, "%d:%d ",
+			resc->hw_sbs[i].hw_sb_id,
+			resc->hw_sbs[i].sb_qid);
+	DP_CONT(BNX2X_MSG_IOV, "]\n");
+
+	/* send the response */
+	vf->op_rc = vfop_status;
+	bnx2x_vf_mbx_resp(bp, vf);
+}
+
+static void bnx2x_vf_mbx_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				 struct bnx2x_vf_mbx *mbx)
+{
+	int rc;
+	struct vfpf_acquire_tlv *acquire = &mbx->msg->req.acquire;
+
+	/* log vfdev info */
+	DP(BNX2X_MSG_IOV,
+	   "VF[%d] ACQUIRE: vfdev_info- vf_id %d, vf_os %d resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d, n_vlans-%d, n_mcs-%d\n",
+	   vf->abs_vfid, acquire->vfdev_info.vf_id, acquire->vfdev_info.vf_os,
+	   acquire->resc_request.num_rxqs, acquire->resc_request.num_txqs,
+	   acquire->resc_request.num_sbs, acquire->resc_request.num_mac_filters,
+	   acquire->resc_request.num_vlan_filters,
+	   acquire->resc_request.num_mc_filters);
+
+	/* acquire the resources */
+	rc = bnx2x_vf_acquire(bp, vf, &acquire->resc_request);
+
+	/* response */
+	bnx2x_vf_mbx_acquire_resp(bp, vf, mbx, rc);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -193,8 +367,16 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	/* check if tlv type is known */
 	if (bnx2x_tlv_supported(mbx->first_tlv.tl.type)) {
 
+		/* Lock the per vf op mutex and note the locker's identity.
+		 * The unlock will take place in mbx response.
+		 */
+		bnx2x_lock_vf_pf_channel(bp, vf, mbx->first_tlv.tl.type);
+
 		/* switch on the opcode */
 		switch (mbx->first_tlv.tl.type) {
+		case CHANNEL_TLV_ACQUIRE:
+			bnx2x_vf_mbx_acquire(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
@@ -209,6 +391,25 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		for (i = 0; i < 20; i++)
 			DP_CONT(BNX2X_MSG_IOV, "%x ",
 				mbx->msg->req.tlv_buf_size.tlv_buffer[i]);
+
+		/* test whether we can respond to the VF (do we have an address
+		 * for it?)
+		 */
+		if (vf->state == VF_ACQUIRED) {
+
+			/* mbx_resp uses the op_rc of the VF */
+			vf->op_rc = PFVF_STATUS_NOT_SUPPORTED;
+
+			/* notify the VF that we do not support this request */
+			bnx2x_vf_mbx_resp(bp, vf);
+		} else {
+
+			/* can't send a response since this VF is unknown to
+			 * us - just unlock the channel and be done with it.
+			 */
+			bnx2x_unlock_vf_pf_channel(bp, vf,
+						   mbx->first_tlv.tl.type);
+		}
 	}
 }
 
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 13/22] bnx2x: Support of PF driver of a VF init request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (11 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 12/22] bnx2x: Support of PF driver of a VF acquire request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 14/22] bnx2x: Support statistics collection for VFs by the PF Ariel Elior
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The VF driver sends an 'init' request as part of its NIC load
flow. This message is used by the VF to publish the GPAs of its
status blocks, slow path ring and statistics buffer.
The PF driver notes all of this down in the VF database, and also
uses this message to transition the VF to the VF_INIT state
internally.
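
As a rough illustration (not part of the diff below), the init TLV
carries roughly the following fields; the struct name and field names
are taken from how bnx2x_vf_mbx_init_vf() in the diff uses them, while
the array bound PFVF_MAX_SBS_PER_VF is an assumption:

	/* sketch only -- fields inferred from bnx2x_vf_mbx_init_vf();
	 * PFVF_MAX_SBS_PER_VF is an assumed per-VF status block bound
	 */
	struct vfpf_init_tlv {
		struct vfpf_first_tlv first_tlv;  /* common request header */
		u64 sb_addr[PFVF_MAX_SBS_PER_VF]; /* GPAs of status blocks */
		u64 spq_addr;                     /* GPA of slow path ring */
		u64 stats_addr;                   /* GPA of stats buffer */
	};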

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |    2 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |    2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h   |    4 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  160 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |    6 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   20 +++
 6 files changed, 193 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index 6fc3fef..c97616c 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1852,6 +1852,8 @@ int bnx2x_del_all_macs(struct bnx2x *bp,
 
 /* Init Function API  */
 void bnx2x_func_init(struct bnx2x *bp, struct bnx2x_func_init_params *p);
+void bnx2x_init_sb(struct bnx2x *bp, dma_addr_t mapping, int vfid,
+			  u8 vf_valid, int fw_sb_id, int igu_sb_id);
 u32 bnx2x_get_pretend_reg(struct bnx2x *bp);
 int bnx2x_get_gpio(struct bnx2x *bp, int gpio_num, u8 port);
 int bnx2x_set_gpio(struct bnx2x *bp, int gpio_num, u32 mode, u8 port);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 07e2963..78514ab 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -5419,7 +5419,7 @@ static void bnx2x_map_sb_state_machines(struct hc_index_data *index_data)
 		SM_TX_ID << HC_INDEX_DATA_SM_ID_SHIFT;
 }
 
-static void bnx2x_init_sb(struct bnx2x *bp, dma_addr_t mapping, int vfid,
+void bnx2x_init_sb(struct bnx2x *bp, dma_addr_t mapping, int vfid,
 			  u8 vf_valid, int fw_sb_id, int igu_sb_id)
 {
 	int igu_seg_id;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index cefe448..1008d3f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -3726,6 +3726,10 @@
 #define PXP_REG_HST_DISCARD_INTERNAL_WRITES_STATUS		 0x10309c
 /* [WB 160] Used for initialization of the inbound interrupts memory */
 #define PXP_REG_HST_INBOUND_INT 				 0x103800
+/* [RW 7] Indirect access to the permission table. The fields are : {Valid;
+ * VFID[5:0]}
+ */
+#define PXP_REG_HST_ZONE_PERMISSION_TABLE			 0x103400
 /* [RW 32] Interrupt mask register #0 read/write */
 #define PXP_REG_PXP_INT_MASK_0					 0x103074
 #define PXP_REG_PXP_INT_MASK_1					 0x103084
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 0c499c8..99fd606 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -65,6 +65,41 @@ struct bnx2x_virtf *bnx2x_vf_by_abs_fid(struct bnx2x *bp, u16 abs_vfid)
 	return (idx < BNX2X_NR_VIRTFN(bp)) ? BP_VF(bp, idx) : NULL;
 }
 
+static void bnx2x_vf_igu_ack_sb(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				u8 igu_sb_id, u8 segment, u16 index, u8 op,
+				u8 update)
+{
+	/* acking a VF sb through the PF - use the GRC */
+	u32 ctl;
+	u32 igu_addr_data = IGU_REG_COMMAND_REG_32LSB_DATA;
+	u32 igu_addr_ctl = IGU_REG_COMMAND_REG_CTRL;
+	u32 func_encode = vf->abs_vfid;
+	u32 addr_encode = IGU_CMD_E2_PROD_UPD_BASE + igu_sb_id;
+	struct igu_regular cmd_data = {0};
+
+	cmd_data.sb_id_and_flags =
+			((index << IGU_REGULAR_SB_INDEX_SHIFT) |
+			 (segment << IGU_REGULAR_SEGMENT_ACCESS_SHIFT) |
+			 (update << IGU_REGULAR_BUPDATE_SHIFT) |
+			 (op << IGU_REGULAR_ENABLE_INT_SHIFT));
+
+	ctl = addr_encode << IGU_CTRL_REG_ADDRESS_SHIFT		|
+	      func_encode << IGU_CTRL_REG_FID_SHIFT		|
+	      IGU_CTRL_CMD_TYPE_WR << IGU_CTRL_REG_TYPE_SHIFT;
+
+	DP(NETIF_MSG_HW, "write 0x%08x to IGU(via GRC) addr 0x%x\n",
+	   cmd_data.sb_id_and_flags, igu_addr_data);
+	REG_WR(bp, igu_addr_data, cmd_data.sb_id_and_flags);
+	mmiowb();
+	barrier();
+
+	DP(NETIF_MSG_HW, "write 0x%08x to IGU(via GRC) addr 0x%x\n",
+	   ctl, igu_addr_ctl);
+	REG_WR(bp, igu_addr_ctl, ctl);
+	mmiowb();
+	barrier();
+}
+
 static int bnx2x_ari_enabled(struct pci_dev *dev)
 {
 	return dev->bus->self && dev->bus->self->ari_enabled;
@@ -364,6 +399,52 @@ static void bnx2x_vf_pglue_clear_err(struct bnx2x *bp, u8 abs_vfid)
 	REG_WR(bp, was_err_reg, 1 << (abs_vfid & 0x1f));
 }
 
+static void bnx2x_vf_igu_reset(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	int i;
+	u32 val;
+
+	/* Set VF masks and configuration - pretend */
+	bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf->abs_vfid));
+
+	REG_WR(bp, IGU_REG_SB_INT_BEFORE_MASK_LSB, 0);
+	REG_WR(bp, IGU_REG_SB_INT_BEFORE_MASK_MSB, 0);
+	REG_WR(bp, IGU_REG_SB_MASK_LSB, 0);
+	REG_WR(bp, IGU_REG_SB_MASK_MSB, 0);
+	REG_WR(bp, IGU_REG_PBA_STATUS_LSB, 0);
+	REG_WR(bp, IGU_REG_PBA_STATUS_MSB, 0);
+
+	val = REG_RD(bp, IGU_REG_VF_CONFIGURATION);
+	val |= (IGU_VF_CONF_FUNC_EN | IGU_VF_CONF_MSI_MSIX_EN);
+	if (vf->cfg_flags & VF_CFG_INT_SIMD)
+		val |= IGU_VF_CONF_SINGLE_ISR_EN;
+	val &= ~IGU_VF_CONF_PARENT_MASK;
+	val |= BP_FUNC(bp) << IGU_VF_CONF_PARENT_SHIFT;	/* parent PF */
+	REG_WR(bp, IGU_REG_VF_CONFIGURATION, val);
+
+	DP(BNX2X_MSG_IOV,
+	   "value in IGU_REG_VF_CONFIGURATION of vf %d after write %x\n",
+	   vf->abs_vfid, REG_RD(bp, IGU_REG_VF_CONFIGURATION));
+
+	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+
+	/* iterate over all queues, clear sb consumer */
+	for (i = 0; i < vf_sb_count(vf); i++) {
+		u8 igu_sb_id = vf_igu_sb(vf, i);
+
+		/* zero prod memory */
+		REG_WR(bp, IGU_REG_PROD_CONS_MEMORY + igu_sb_id * 4, 0);
+
+		/* clear sb state machine */
+		bnx2x_igu_clear_sb_gen(bp, vf->abs_vfid, igu_sb_id,
+				       false /* VF */);
+
+		/* disable + update */
+		bnx2x_vf_igu_ack_sb(bp, vf, igu_sb_id, USTORM_ID, 0,
+				    IGU_INT_DISABLE, 1);
+	}
+}
+
 void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid)
 {
 	/* set the VF-PF association in the FW */
@@ -381,6 +462,17 @@ void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid)
 	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
 }
 
+static void bnx2x_vf_enable_traffic(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	/* Reset vf in IGU - interrupts are still disabled */
+	bnx2x_vf_igu_reset(bp, vf);
+
+	/* pretend to enable the vf with the PBF */
+	bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf->abs_vfid));
+	REG_WR(bp, PBF_REG_DISABLE_VF, 0);
+	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+}
+
 static u8 bnx2x_vf_is_pcie_pending(struct bnx2x *bp, u8 abs_vfid)
 {
 	struct pci_dev *dev;
@@ -996,6 +1088,13 @@ void bnx2x_iov_sp_task(struct bnx2x *bp)
 		}
 	}
 }
+static void bnx2x_vf_qtbl_set_q(struct bnx2x *bp, u8 abs_vfid, u8 qid,
+				u8 enable)
+{
+	u32 reg = PXP_REG_HST_ZONE_PERMISSION_TABLE + qid * 4;
+	u32 val = enable ? (abs_vfid | (1 << 6)) : 0;
+	REG_WR(bp, reg, val);
+}
 
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf)
 {
@@ -1107,6 +1206,67 @@ int bnx2x_vf_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	return 0;
 }
 
+int bnx2x_vf_init(struct bnx2x *bp, struct bnx2x_virtf *vf, dma_addr_t *sb_map)
+{
+	struct bnx2x_func_init_params func_init = {0};
+	u16 flags = 0;
+	int i;
+
+	/* the sb resources are initialized at this point, do the
+	 * FW/HW initializations
+	 */
+	for_each_vf_sb(vf, i)
+		bnx2x_init_sb(bp, (dma_addr_t)sb_map[i], vf->abs_vfid, true,
+			      vf_igu_sb(vf, i), vf_igu_sb(vf, i));
+
+	/* Sanity checks */
+	if (vf->state != VF_ACQUIRED) {
+		DP(BNX2X_MSG_IOV, "VF[%d] is not in VF_ACQUIRED, but %d\n",
+		   vf->abs_vfid, vf->state);
+		return -EINVAL;
+	}
+	/* FLR cleanup epilogue */
+	if (bnx2x_vf_flr_clnup_epilog(bp, vf->abs_vfid))
+		return -EBUSY;
+
+	/* reset IGU VF statistics: MSIX */
+	REG_WR(bp, IGU_REG_STATISTIC_NUM_MESSAGE_SENT + vf->abs_vfid * 4, 0);
+
+	/* vf init */
+	if (vf->cfg_flags & VF_CFG_STATS)
+		flags |= (FUNC_FLG_STATS | FUNC_FLG_SPQ);
+
+	if (vf->cfg_flags & VF_CFG_TPA)
+		flags |= FUNC_FLG_TPA;
+
+	if (is_vf_multi(vf))
+		flags |= FUNC_FLG_RSS;
+
+	/* function setup */
+	func_init.func_flgs = flags;
+	func_init.pf_id = BP_FUNC(bp);
+	func_init.func_id = FW_VF_HANDLE(vf->abs_vfid);
+	func_init.fw_stat_map = vf->fw_stat_map;
+	func_init.spq_map = vf->spq_map;
+	func_init.spq_prod = 0;
+	bnx2x_func_init(bp, &func_init);
+
+	/* Configure RSS TBD */
+
+	/* Enable the vf */
+	bnx2x_vf_enable_access(bp, vf->abs_vfid);
+	bnx2x_vf_enable_traffic(bp, vf);
+
+	/* queue protection table */
+	for_each_vfq(vf, i)
+		bnx2x_vf_qtbl_set_q(bp, vf->abs_vfid,
+				    vfq_qzone_id(vf, vfq_get(vf, i)), true);
+
+	vf->state = VF_ENABLED;
+
+	return 0;
+}
+
 void bnx2x_lock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
 			      enum channel_tlvs tlv)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index e494b95..152a28e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -267,6 +267,8 @@ struct bnx2x_virtf {
 #define for_each_vf_sb(vf, var) \
 		for ((var) = 0; (var) < vf_sb_count(vf); (var)++)
 
+#define is_vf_multi(vf)	(vf_rxq_count(vf) > 1)
+
 #define HW_VF_HANDLE(bp, abs_vfid) \
 	(u16)(BP_ABS_FUNC((bp)) | (1<<3) |  ((u16)(abs_vfid) << 4))
 
@@ -430,6 +432,10 @@ void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid);
 int bnx2x_vf_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
 			  struct vf_pf_resc_request *resc);
 
+/* init */
+int bnx2x_vf_init(struct bnx2x *bp, struct bnx2x_virtf *vf,
+		  dma_addr_t *sb_map);
+
 static inline struct bnx2x_vfop *bnx2x_vfop_cur(struct bnx2x *bp,
 						struct bnx2x_virtf *vf)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 2f46972..ba5503e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -20,6 +20,8 @@
 /* VF MBX (aka vf-pf channel) */
 #include "bnx2x.h"
 #include "bnx2x_sriov.h"
+#include <linux/crc32.h>
+
 /* place a given tlv on the tlv buffer at a given offset */
 void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
 		   u16 length)
@@ -358,6 +360,21 @@ static void bnx2x_vf_mbx_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	bnx2x_vf_mbx_acquire_resp(bp, vf, mbx, rc);
 }
 
+static void bnx2x_vf_mbx_init_vf(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			      struct bnx2x_vf_mbx *mbx)
+{
+	struct vfpf_init_tlv *init = &mbx->msg->req.init;
+
+	/* record guest physical addresses (GPAs) from the vf message */
+	vf->spq_map = init->spq_addr;
+	vf->fw_stat_map = init->stats_addr;
+
+	vf->op_rc = bnx2x_vf_init(bp, vf, (dma_addr_t *)init->sb_addr);
+
+	/* response */
+	bnx2x_vf_mbx_resp(bp, vf);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -377,6 +394,9 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		case CHANNEL_TLV_ACQUIRE:
 			bnx2x_vf_mbx_acquire(bp, vf, mbx);
 			break;
+		case CHANNEL_TLV_INIT:
+			bnx2x_vf_mbx_init_vf(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 14/22] bnx2x: Support statistics collection for VFs by the PF
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (12 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 13/22] bnx2x: Support of PF driver of a VF init request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 15/22] bnx2x: Support of PF driver of a VF setup_q request Ariel Elior
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

Statistics are collected by the PF driver. The collection is
performed via a query sent to the device, which is essentially an
array of 3-tuples of the form (statistics client, function, DMAE
address). In this patch the PF driver adds to the query, on top of
the statistics clients it maintains for itself (RSS queues, storage,
etc.), the 3-tuples for the VFs it is managing. The addresses used
are the GPAs of the statistics buffers supplied by each VF in the
init message on the VF <-> PF channel. The function parameter
ensures that the IOMMU will translate the GPA to the correct
physical address.
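
For illustration (not part of the diff), each active VF queue
contributes one query entry mapping directly onto that 3-tuple; the
field and helper names below are taken from
bnx2x_iov_adjust_stats_req() in the diff:

	/* one per-VF-queue stats query entry:
	 * (statistics client, function, DMAE address)
	 */
	cur_query_entry->kind = STATS_TYPE_QUEUE;
	cur_query_entry->index = vfq_cl_id(vf, rxq);	/* stats client */
	cur_query_entry->funcID =
		cpu_to_le16(FW_VF_HANDLE(vf->abs_vfid));/* function */
	cur_query_entry->address.hi =
		cpu_to_le32(U64_HI(vf->fw_stat_map));	/* VF buffer GPA */
	cur_query_entry->address.lo =
		cpu_to_le32(U64_LO(vf->fw_stat_map));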

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |   77 +--------------
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c    |   21 ++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h    |    9 ++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |   92 ++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |    2 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c |  108 +++++++++++++++++----
 6 files changed, 217 insertions(+), 92 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 78514ab..3618afb 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -5250,7 +5250,8 @@ static void bnx2x_timer(unsigned long data)
 	if (!netif_running(bp->dev))
 		return;
 
-	if (!BP_NOMCP(bp)) {
+	if (IS_PF(bp) &&
+	    !BP_NOMCP(bp)) {
 		int mb_idx = BP_FW_MB_IDX(bp);
 		u32 drv_pulse;
 		u32 mcp_pulse;
@@ -7672,66 +7673,6 @@ void bnx2x_free_mem(struct bnx2x *bp)
 		       BCM_PAGE_SIZE * NUM_EQ_PAGES);
 }
 
-static int bnx2x_alloc_fw_stats_mem(struct bnx2x *bp)
-{
-	int num_groups;
-	int is_fcoe_stats = NO_FCOE(bp) ? 0 : 1;
-
-	/* number of queues for statistics is number of eth queues + FCoE */
-	u8 num_queue_stats = BNX2X_NUM_ETH_QUEUES(bp) + is_fcoe_stats;
-
-	/* Total number of FW statistics requests =
-	 * 1 for port stats + 1 for PF stats + potential 1 for FCoE stats +
-	 * num of queues
-	 */
-	bp->fw_stats_num = 2 + is_fcoe_stats + num_queue_stats;
-
-
-	/* Request is built from stats_query_header and an array of
-	 * stats_query_cmd_group each of which contains
-	 * STATS_QUERY_CMD_COUNT rules. The real number or requests is
-	 * configured in the stats_query_header.
-	 */
-	num_groups = ((bp->fw_stats_num) / STATS_QUERY_CMD_COUNT) +
-		     (((bp->fw_stats_num) % STATS_QUERY_CMD_COUNT) ? 1 : 0);
-
-	bp->fw_stats_req_sz = sizeof(struct stats_query_header) +
-			num_groups * sizeof(struct stats_query_cmd_group);
-
-	/* Data for statistics requests + stats_conter
-	 *
-	 * stats_counter holds per-STORM counters that are incremented
-	 * when STORM has finished with the current request.
-	 *
-	 * memory for FCoE offloaded statistics are counted anyway,
-	 * even if they will not be sent.
-	 */
-	bp->fw_stats_data_sz = sizeof(struct per_port_stats) +
-		sizeof(struct per_pf_stats) +
-		sizeof(struct fcoe_statistics_params) +
-		sizeof(struct per_queue_stats) * num_queue_stats +
-		sizeof(struct stats_counter);
-
-	BNX2X_PCI_ALLOC(bp->fw_stats, &bp->fw_stats_mapping,
-			bp->fw_stats_data_sz + bp->fw_stats_req_sz);
-
-	/* Set shortcuts */
-	bp->fw_stats_req = (struct bnx2x_fw_stats_req *)bp->fw_stats;
-	bp->fw_stats_req_mapping = bp->fw_stats_mapping;
-
-	bp->fw_stats_data = (struct bnx2x_fw_stats_data *)
-		((u8 *)bp->fw_stats + bp->fw_stats_req_sz);
-
-	bp->fw_stats_data_mapping = bp->fw_stats_mapping +
-				   bp->fw_stats_req_sz;
-	return 0;
-
-alloc_mem_err:
-	BNX2X_PCI_FREE(bp->fw_stats, bp->fw_stats_mapping,
-		       bp->fw_stats_data_sz + bp->fw_stats_req_sz);
-	BNX2X_ERR("Can't allocate memory\n");
-	return -ENOMEM;
-}
 
 int bnx2x_alloc_mem_cnic(struct bnx2x *bp)
 {
@@ -7778,10 +7719,6 @@ int bnx2x_alloc_mem(struct bnx2x *bp)
 	BNX2X_PCI_ALLOC(bp->slowpath, &bp->slowpath_mapping,
 			sizeof(struct bnx2x_slowpath));
 
-	/* Allocated memory for FW statistics  */
-	if (bnx2x_alloc_fw_stats_mem(bp))
-		goto alloc_mem_err;
-
 	/* Allocate memory for CDU context:
 	 * This memory is allocated separately and not in the generic ILT
 	 * functions because CDU differs in few aspects:
@@ -7810,6 +7747,9 @@ int bnx2x_alloc_mem(struct bnx2x *bp)
 	if (bnx2x_ilt_mem_op(bp, ILT_MEMOP_ALLOC))
 		goto alloc_mem_err;
 
+	if (bnx2x_iov_alloc_mem(bp))
+		goto alloc_mem_err;
+
 	/* Slow path ring */
 	BNX2X_PCI_ALLOC(bp->spq, &bp->spq_mapping, BCM_PAGE_SIZE);
 
@@ -7817,13 +7757,6 @@ int bnx2x_alloc_mem(struct bnx2x *bp)
 	BNX2X_PCI_ALLOC(bp->eq_ring, &bp->eq_mapping,
 			BCM_PAGE_SIZE * NUM_EQ_PAGES);
 
-
-	/* fastpath */
-	/* need to be done at the end, since it's self adjusting to amount
-	 * of memory available for RSS queues
-	 */
-	if (bnx2x_alloc_fp_mem(bp))
-		goto alloc_mem_err;
 	return 0;
 
 alloc_mem_err:
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
index b8b4b74..1eb92b8 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
@@ -5199,6 +5199,27 @@ void bnx2x_init_queue_obj(struct bnx2x *bp,
 	obj->set_pending = bnx2x_queue_set_pending;
 }
 
+/* return a queue object's logical state */
+int bnx2x_get_q_logical_state(struct bnx2x *bp,
+			       struct bnx2x_queue_sp_obj *obj)
+{
+	switch (obj->state) {
+	case BNX2X_Q_STATE_ACTIVE:
+	case BNX2X_Q_STATE_MULTI_COS:
+		return BNX2X_Q_LOGICAL_STATE_ACTIVE;
+	case BNX2X_Q_STATE_RESET:
+	case BNX2X_Q_STATE_INITIALIZED:
+	case BNX2X_Q_STATE_MCOS_TERMINATED:
+	case BNX2X_Q_STATE_INACTIVE:
+	case BNX2X_Q_STATE_STOPPED:
+	case BNX2X_Q_STATE_TERMINATED:
+	case BNX2X_Q_STATE_FLRED:
+		return BNX2X_Q_LOGICAL_STATE_STOPPED;
+	default:
+		return -EINVAL;
+	}
+}
+
 /********************** Function state object *********************************/
 enum bnx2x_func_state bnx2x_func_get_state(struct bnx2x *bp,
 					   struct bnx2x_func_sp_obj *o)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
index adbd91b..b304678 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
@@ -776,6 +776,12 @@ enum bnx2x_q_state {
 	BNX2X_Q_STATE_MAX,
 };
 
+/* Allowed Queue states */
+enum bnx2x_q_logical_state {
+	BNX2X_Q_LOGICAL_STATE_ACTIVE,
+	BNX2X_Q_LOGICAL_STATE_STOPPED,
+};
+
 /* Allowed commands */
 enum bnx2x_queue_cmd {
 	BNX2X_Q_CMD_INIT,
@@ -1261,6 +1267,9 @@ void bnx2x_init_queue_obj(struct bnx2x *bp,
 int bnx2x_queue_state_change(struct bnx2x *bp,
 			     struct bnx2x_queue_state_params *params);
 
+int bnx2x_get_q_logical_state(struct bnx2x *bp,
+			       struct bnx2x_queue_sp_obj *obj);
+
 /********************* VLAN-MAC ****************/
 void bnx2x_init_mac_obj(struct bnx2x *bp,
 			struct bnx2x_vlan_mac_obj *mac_obj,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 99fd606..52c3bd2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -1068,6 +1068,81 @@ void bnx2x_iov_sp_event(struct bnx2x *bp, int vf_cid, bool queue_work)
 	}
 }
 
+void bnx2x_iov_adjust_stats_req(struct bnx2x *bp)
+{
+	int i;
+	int first_queue_query_index, num_queues_req;
+	dma_addr_t cur_data_offset;
+	struct stats_query_entry *cur_query_entry;
+	u8 stats_count = 0;
+	bool is_fcoe = false;
+
+	if (!IS_SRIOV(bp))
+		return;
+
+	if (!NO_FCOE(bp))
+		is_fcoe = true;
+
+	/* fcoe adds one global request and one queue request */
+	num_queues_req = BNX2X_NUM_ETH_QUEUES(bp) + is_fcoe;
+
+	first_queue_query_index = BNX2X_FIRST_QUEUE_QUERY_IDX -
+		(is_fcoe ? 0 : 1);
+
+	DP(BNX2X_MSG_IOV,
+	   "BNX2X_NUM_ETH_QUEUES %d, is_fcoe %d, first_queue_query_index %d => determined the last non virtual statistics query index is %d. Will add queries on top of that\n",
+	   BNX2X_NUM_ETH_QUEUES(bp), is_fcoe, first_queue_query_index,
+	   first_queue_query_index + num_queues_req);
+
+	cur_data_offset = bp->fw_stats_data_mapping +
+		offsetof(struct bnx2x_fw_stats_data, queue_stats) +
+		num_queues_req * sizeof(struct per_queue_stats);
+
+	cur_query_entry = &bp->fw_stats_req->
+		query[first_queue_query_index + num_queues_req];
+
+	for_each_vf(bp, i) {
+		int j;
+		struct bnx2x_virtf *vf = BP_VF(bp, i);
+		if (vf->state != VF_ENABLED) {
+			DP(BNX2X_MSG_IOV,
+			   "vf %d not enabled so no stats for it\n",
+			   vf->abs_vfid);
+			continue;
+		}
+
+		DP(BNX2X_MSG_IOV, "add addresses for vf %d\n", vf->abs_vfid);
+
+		for_each_vfq(vf, j) {
+			struct bnx2x_vf_queue *rxq = vfq_get(vf, j);
+
+			/* collect stats for active queues only */
+			if (bnx2x_get_q_logical_state(bp, &rxq->sp_obj) ==
+			    BNX2X_Q_LOGICAL_STATE_STOPPED)
+				continue;
+
+			/* create stats query entry for this queue */
+			cur_query_entry->kind = STATS_TYPE_QUEUE;
+			cur_query_entry->index = vfq_cl_id(vf, rxq);
+			cur_query_entry->funcID =
+				cpu_to_le16(FW_VF_HANDLE(vf->abs_vfid));
+			cur_query_entry->address.hi =
+				cpu_to_le32(U64_HI(vf->fw_stat_map));
+			cur_query_entry->address.lo =
+				cpu_to_le32(U64_LO(vf->fw_stat_map));
+			DP(BNX2X_MSG_IOV,
+			   "added address %x %x for vf %d queue %d client %d\n",
+			   cur_query_entry->address.hi,
+			   cur_query_entry->address.lo, cur_query_entry->funcID,
+			   j, cur_query_entry->index);
+			cur_query_entry++;
+			cur_data_offset += sizeof(struct per_queue_stats);
+			stats_count++;
+		}
+	}
+	bp->fw_stats_req->hdr.cmd_num = bp->fw_stats_num + stats_count;
+}
+
 void bnx2x_iov_sp_task(struct bnx2x *bp)
 {
 	int i;
@@ -1088,6 +1163,23 @@ void bnx2x_iov_sp_task(struct bnx2x *bp)
 		}
 	}
 }
+
+static inline
+struct bnx2x_virtf *__vf_from_stat_id(struct bnx2x *bp, u8 stat_id)
+{
+	int i;
+	struct bnx2x_virtf *vf = NULL;
+
+	for_each_vf(bp, i) {
+		vf = BP_VF(bp, i);
+		if (stat_id >= vf->igu_base_id &&
+		    stat_id < vf->igu_base_id + vf_sb_count(vf))
+			break;
+	}
+	return vf;
+}
+
+/* VF API helpers */
 static void bnx2x_vf_qtbl_set_q(struct bnx2x *bp, u8 abs_vfid, u8 qid,
 				u8 enable)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 152a28e..d86b289 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -424,6 +424,8 @@ void bnx2x_iov_set_queue_sp_obj(struct bnx2x *bp, int vf_cid,
 				struct bnx2x_queue_sp_obj **q_obj);
 void bnx2x_iov_sp_event(struct bnx2x *bp, int vf_cid, bool queue_work);
 int bnx2x_iov_eq_sp_event(struct bnx2x *bp, union event_ring_elem *elem);
+void bnx2x_iov_adjust_stats_req(struct bnx2x *bp);
+void bnx2x_iov_storm_stats_update(struct bnx2x *bp);
 void bnx2x_iov_sp_task(struct bnx2x *bp);
 /* global vf mailbox routines */
 void bnx2x_vf_mbx(struct bnx2x *bp, struct vf_pf_event_data *vfpf_event);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
index 89ec066..c87eee0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
@@ -19,7 +19,7 @@
 
 #include "bnx2x_stats.h"
 #include "bnx2x_cmn.h"
-
+#include "bnx2x_sriov.h"
 
 /* Statistics */
 
@@ -79,6 +79,41 @@ static inline u16 bnx2x_get_port_stats_dma_len(struct bnx2x *bp)
  * Init service functions
  */
 
+static void bnx2x_dp_stats(struct bnx2x *bp)
+{
+	int i;
+	DP(BNX2X_MSG_STATS, "dumping stats:\n"
+	   "fw_stats_req\n"
+	   "    hdr\n"
+	   "        cmd_num %d\n"
+	   "        reserved0 %d\n"
+	   "        drv_stats_counter %d\n"
+	   "        reserved1 %d\n"
+	   "        stats_counters_addrs %x %x\n",
+	   bp->fw_stats_req->hdr.cmd_num,
+	   bp->fw_stats_req->hdr.reserved0,
+	   bp->fw_stats_req->hdr.drv_stats_counter,
+	   bp->fw_stats_req->hdr.reserved1,
+	   bp->fw_stats_req->hdr.stats_counters_addrs.hi,
+	   bp->fw_stats_req->hdr.stats_counters_addrs.lo);
+
+	for (i = 0; i < bp->fw_stats_req->hdr.cmd_num; i++) {
+		DP(BNX2X_MSG_STATS,
+		   "query[%d]\n"
+		   "              kind %d\n"
+		   "              index %d\n"
+		   "              funcID %d\n"
+		   "              reserved %d\n"
+		   "              address %x %x\n",
+		   i, bp->fw_stats_req->query[i].kind,
+		   bp->fw_stats_req->query[i].index,
+		   bp->fw_stats_req->query[i].funcID,
+		   bp->fw_stats_req->query[i].reserved,
+		   bp->fw_stats_req->query[i].address.hi,
+		   bp->fw_stats_req->query[i].address.lo);
+	}
+}
+
 /* Post the next statistics ramrod. Protect it with the spin in
  * order to ensure the strict order between statistics ramrods
  * (each ramrod has a sequence number passed in a
@@ -103,7 +138,10 @@ static void bnx2x_storm_stats_post(struct bnx2x *bp)
 		DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n",
 			bp->fw_stats_req->hdr.drv_stats_counter);
 
+		/* adjust the ramrod to include VF queues statistics */
+		bnx2x_iov_adjust_stats_req(bp);
 
+		bnx2x_dp_stats(bp);
 
 		/* send FW stats ramrod */
 		rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0,
@@ -482,6 +520,12 @@ static void bnx2x_func_stats_init(struct bnx2x *bp)
 
 static void bnx2x_stats_start(struct bnx2x *bp)
 {
+	/* vfs travel through here as part of the statistics FSM, but no action
+	 * is required
+	 */
+	if (IS_VF(bp))
+		return;
+
 	if (bp->port.pmf)
 		bnx2x_port_stats_init(bp);
 
@@ -501,6 +545,11 @@ static void bnx2x_stats_pmf_start(struct bnx2x *bp)
 
 static void bnx2x_stats_restart(struct bnx2x *bp)
 {
+	/* vfs travel through here as part of the statistics FSM, but no action
+	 * is required
+	 */
+	if (IS_VF(bp))
+		return;
 	bnx2x_stats_comp(bp);
 	bnx2x_stats_start(bp);
 }
@@ -832,19 +881,10 @@ static int bnx2x_hw_stats_update(struct bnx2x *bp)
 	return 0;
 }
 
-static int bnx2x_storm_stats_update(struct bnx2x *bp)
+static int bnx2x_storm_stats_validate_counters(struct bnx2x *bp)
 {
-	struct tstorm_per_port_stats *tport =
-				&bp->fw_stats_data->port.tstorm_port_statistics;
-	struct tstorm_per_pf_stats *tfunc =
-				&bp->fw_stats_data->pf.tstorm_pf_statistics;
-	struct host_func_stats *fstats = &bp->func_stats;
-	struct bnx2x_eth_stats *estats = &bp->eth_stats;
-	struct bnx2x_eth_stats_old *estats_old = &bp->eth_stats_old;
 	struct stats_counter *counters = &bp->fw_stats_data->storm_counters;
-	int i;
 	u16 cur_stats_counter;
-
 	/* Make sure we use the value of the counter
 	 * used for sending the last stats ramrod.
 	 */
@@ -880,6 +920,23 @@ static int bnx2x_storm_stats_update(struct bnx2x *bp)
 		   le16_to_cpu(counters->tstats_counter), bp->stats_counter);
 		return -EAGAIN;
 	}
+	return 0;
+}
+
+static int bnx2x_storm_stats_update(struct bnx2x *bp)
+{
+	struct tstorm_per_port_stats *tport =
+				&bp->fw_stats_data->port.tstorm_port_statistics;
+	struct tstorm_per_pf_stats *tfunc =
+				&bp->fw_stats_data->pf.tstorm_pf_statistics;
+	struct host_func_stats *fstats = &bp->func_stats;
+	struct bnx2x_eth_stats *estats = &bp->eth_stats;
+	struct bnx2x_eth_stats_old *estats_old = &bp->eth_stats_old;
+	int i;
+
+	/* the vf stats counter is managed by the pf */
+	if (IS_PF(bp) && bnx2x_storm_stats_validate_counters(bp))
+		return -EAGAIN;
 
 	estats->error_bytes_received_hi = 0;
 	estats->error_bytes_received_lo = 0;
@@ -1174,23 +1231,34 @@ static void bnx2x_stats_update(struct bnx2x *bp)
 	if (bnx2x_edebug_stats_stopped(bp))
 		return;
 
-	if (*stats_comp != DMAE_COMP_VAL)
-		return;
+	if (IS_PF(bp)) {
+		if (*stats_comp != DMAE_COMP_VAL)
+			return;
 
-	if (bp->port.pmf)
-		bnx2x_hw_stats_update(bp);
+		if (bp->port.pmf)
+			bnx2x_hw_stats_update(bp);
 
-	if (bnx2x_storm_stats_update(bp)) {
-		if (bp->stats_pending++ == 3) {
-			BNX2X_ERR("storm stats were not updated for 3 times\n");
-			bnx2x_panic();
+		if (bnx2x_storm_stats_update(bp)) {
+			if (bp->stats_pending++ == 3) {
+				BNX2X_ERR("storm stats were not updated for 3 times\n");
+				bnx2x_panic();
+			}
+			return;
 		}
-		return;
+	} else {
+		/* vf doesn't collect HW statistics and doesn't get
+		 * completions; only perform the update
+		 */
+		bnx2x_storm_stats_update(bp);
 	}
 
 	bnx2x_net_stats_update(bp);
 	bnx2x_drv_stats_update(bp);
 
+	/* vf is done */
+	if (IS_VF(bp))
+		return;
+
 	if (netif_msg_timer(bp)) {
 		struct bnx2x_eth_stats *estats = &bp->eth_stats;
 
-- 
1.7.9.GIT

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH net-next v3 15/22] bnx2x: Support of PF driver of a VF setup_q request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (13 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 14/22] bnx2x: Support statistics collection for VFs by the PF Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 16/22] bnx2x: Support of PF driver of a VF q_filters request Ariel Elior
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

Upon receiving a 'setup_q' request from the VF over the VF <-> PF
channel, the PF driver opens a corresponding queue in the device.
The PF driver configures the queue with the appropriate MAC address,
VLAN configuration, etc. supplied by the VF.
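
As a hedged sketch (not part of the diff), a PF-side mailbox handler
would be expected to drive the queue-setup state machine added here
roughly as follows; bnx2x_vfop_cmd and bnx2x_vfop_qsetup_cmd() appear
in the diff below, while 'qid' is assumed to come from the VF's
setup_q TLV:

	/* illustrative only: kick off the asynchronous queue-setup vfop
	 * and let bnx2x_vf_mbx_resp() answer the VF when it completes
	 */
	struct bnx2x_vfop_cmd cmd = {
		.done	= bnx2x_vf_mbx_resp,	/* called when the op ends */
		.block	= false,		/* don't busy-wait here */
	};
	vf->op_rc = bnx2x_vfop_qsetup_cmd(bp, vf, &cmd, qid);
	if (vf->op_rc)
		bnx2x_vf_mbx_resp(bp, vf);	/* report immediate failure */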

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |    1 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c   |   19 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  905 +++++++++++++++++----
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |  175 ++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |  146 ++++
 5 files changed, 1070 insertions(+), 176 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index c97616c..a7d9fb0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -969,6 +969,7 @@ extern struct workqueue_struct *bnx2x_wq;
 #define BNX2X_MAX_NUM_OF_VFS	64
 #define BNX2X_VF_CID_WND	0
 #define BNX2X_CIDS_PER_VF	(1 << BNX2X_VF_CID_WND)
+#define BNX2X_CLIENTS_PER_VF	1
 #define BNX2X_FIRST_VF_CID	256
 #define BNX2X_VF_CIDS		(BNX2X_MAX_NUM_OF_VFS * BNX2X_CIDS_PER_VF)
 #define BNX2X_VF_ID_INVALID	0xFF
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 7966d7f..0a00c4c 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -2015,7 +2015,7 @@ static void bnx2x_free_fw_stats_mem(struct bnx2x *bp)
 
 static int bnx2x_alloc_fw_stats_mem(struct bnx2x *bp)
 {
-	int num_groups;
+	int num_groups, vf_headroom = 0;
 	int is_fcoe_stats = NO_FCOE(bp) ? 0 : 1;
 
 	/* number of queues for statistics is number of eth queues + FCoE */
@@ -2027,18 +2027,27 @@ static int bnx2x_alloc_fw_stats_mem(struct bnx2x *bp)
 	 * for fcoe l2 queue if applicable)
 	 */
 	bp->fw_stats_num = 2 + is_fcoe_stats + num_queue_stats;
+
+	/* vf stats appear in the request list, but their data is allocated by
+	 * the VFs themselves. We don't include them in the bp->fw_stats_num as
+	 * it is used to determine where to place the vf stats queries in the
+	 * request struct
+	 */
+	if (IS_SRIOV(bp))
+		vf_headroom = bp->vfdb->sriov.nr_virtfn * BNX2X_CLIENTS_PER_VF;
+
 	/* Request is built from stats_query_header and an array of
 	 * stats_query_cmd_group each of which contains
 	 * STATS_QUERY_CMD_COUNT rules. The real number or requests is
 	 * configured in the stats_query_header.
 	 */
 	num_groups =
-		(((bp->fw_stats_num) / STATS_QUERY_CMD_COUNT) +
-		 (((bp->fw_stats_num) % STATS_QUERY_CMD_COUNT) ?
+		(((bp->fw_stats_num + vf_headroom) / STATS_QUERY_CMD_COUNT) +
+		 (((bp->fw_stats_num + vf_headroom) % STATS_QUERY_CMD_COUNT) ?
 		 1 : 0));
 
-	DP(BNX2X_MSG_SP, "stats fw_stats_num %d, num_groups %d",
-	   bp->fw_stats_num, num_groups);
+	DP(BNX2X_MSG_SP, "stats fw_stats_num %d, vf headroom %d, num_groups %d",
+	   bp->fw_stats_num, vf_headroom, num_groups);
 
 	bp->fw_stats_req_sz = sizeof(struct stats_query_header) +
 		num_groups * sizeof(struct stats_query_cmd_group);
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 52c3bd2..bde2f47 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -99,10 +99,234 @@ static void bnx2x_vf_igu_ack_sb(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	mmiowb();
 	barrier();
 }
+/* VFOP - VF slow-path operation support */
+
+/* VFOP operations states */
+enum bnx2x_vfop_qctor_state {
+	   BNX2X_VFOP_QCTOR_INIT,
+	   BNX2X_VFOP_QCTOR_SETUP,
+	   BNX2X_VFOP_QCTOR_INT_EN
+};
+
+enum bnx2x_vfop_vlan_mac_state {
+	   BNX2X_VFOP_VLAN_MAC_CONFIG_SINGLE,
+	   BNX2X_VFOP_VLAN_MAC_CLEAR,
+	   BNX2X_VFOP_VLAN_MAC_CHK_DONE,
+	   BNX2X_VFOP_MAC_CONFIG_LIST,
+	   BNX2X_VFOP_VLAN_CONFIG_LIST,
+	   BNX2X_VFOP_VLAN_CONFIG_LIST_0
+};
+
+enum bnx2x_vfop_qsetup_state {
+	   BNX2X_VFOP_QSETUP_CTOR,
+	   BNX2X_VFOP_QSETUP_VLAN0,
+	   BNX2X_VFOP_QSETUP_DONE
+};
+
+#define bnx2x_vfop_reset_wq(vf)	atomic_set(&vf->op_in_progress, 0)
+
+void bnx2x_vfop_qctor_dump_tx(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			      struct bnx2x_queue_init_params *init_params,
+			      struct bnx2x_queue_setup_params *setup_params,
+			      u16 q_idx, u16 sb_idx)
+{
+	DP(BNX2X_MSG_IOV,
+	   "VF[%d] Q_SETUP: txq[%d]-- vfsb=%d, sb-index=%d, hc-rate=%d, flags=0x%lx, traffic-type=%d",
+	   vf->abs_vfid,
+	   q_idx,
+	   sb_idx,
+	   init_params->tx.sb_cq_index,
+	   init_params->tx.hc_rate,
+	   setup_params->flags,
+	   setup_params->txq_params.traffic_type);
+}
 
-static int bnx2x_ari_enabled(struct pci_dev *dev)
+void bnx2x_vfop_qctor_dump_rx(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			    struct bnx2x_queue_init_params *init_params,
+			    struct bnx2x_queue_setup_params *setup_params,
+			    u16 q_idx, u16 sb_idx)
 {
-	return dev->bus->self && dev->bus->self->ari_enabled;
+	struct bnx2x_rxq_setup_params *rxq_params = &setup_params->rxq_params;
+
+	DP(BNX2X_MSG_IOV, "VF[%d] Q_SETUP: rxq[%d]-- vfsb=%d, sb-index=%d, hc-rate=%d, mtu=%d, buf-size=%d\n"
+	   "sge-size=%d, max_sge_pkt=%d, tpa-agg-size=%d, flags=0x%lx, drop-flags=0x%x, cache-log=%d\n",
+	   vf->abs_vfid,
+	   q_idx,
+	   sb_idx,
+	   init_params->rx.sb_cq_index,
+	   init_params->rx.hc_rate,
+	   setup_params->gen_params.mtu,
+	   rxq_params->buf_sz,
+	   rxq_params->sge_buf_sz,
+	   rxq_params->max_sges_pkt,
+	   rxq_params->tpa_agg_sz,
+	   setup_params->flags,
+	   rxq_params->drop_flags,
+	   rxq_params->cache_line_log);
+}
+
+void bnx2x_vfop_qctor_prep(struct bnx2x *bp,
+			   struct bnx2x_virtf *vf,
+			   struct bnx2x_vf_queue *q,
+			   struct bnx2x_vfop_qctor_params *p,
+			   unsigned long q_type)
+{
+	struct bnx2x_queue_init_params *init_p = &p->qstate.params.init;
+	struct bnx2x_queue_setup_params *setup_p = &p->prep_qsetup;
+
+	/* INIT */
+
+	/* Enable host coalescing in the transition to INIT state */
+	if (test_bit(BNX2X_Q_FLG_HC, &init_p->rx.flags))
+		__set_bit(BNX2X_Q_FLG_HC_EN, &init_p->rx.flags);
+
+	if (test_bit(BNX2X_Q_FLG_HC, &init_p->tx.flags))
+		__set_bit(BNX2X_Q_FLG_HC_EN, &init_p->tx.flags);
+
+	/* FW SB ID */
+	init_p->rx.fw_sb_id = vf_igu_sb(vf, q->sb_idx);
+	init_p->tx.fw_sb_id = vf_igu_sb(vf, q->sb_idx);
+
+	/* context */
+	init_p->cxts[0] = q->cxt;
+
+	/* SETUP */
+
+	/* Setup-op general parameters */
+	setup_p->gen_params.spcl_id = vf->sp_cl_id;
+	setup_p->gen_params.stat_id = vfq_stat_id(vf, q);
+
+	/* Setup-op pause params:
+	 * Nothing to do, the pause thresholds are set by default to 0 which
+	 * effectively turns off the feature for this queue. We don't want
+	 * one queue (VF) interfering with another queue (another VF)
+	 */
+	if (vf->cfg_flags & VF_CFG_FW_FC)
+		BNX2X_ERR("No support for pause to VFs (abs_vfid: %d)\n",
+			  vf->abs_vfid);
+	/* Setup-op flags:
+	 * collect statistics, zero statistics, local-switching, security,
+	 * OV for Flex10, RSS and MCAST for leading
+	 */
+	if (test_bit(BNX2X_Q_FLG_STATS, &setup_p->flags))
+		__set_bit(BNX2X_Q_FLG_ZERO_STATS, &setup_p->flags);
+
+	/* for VFs, enable tx switching, bd coherency, and mac address
+	 * anti-spoofing
+	 */
+	__set_bit(BNX2X_Q_FLG_TX_SWITCH, &setup_p->flags);
+	__set_bit(BNX2X_Q_FLG_TX_SEC, &setup_p->flags);
+	__set_bit(BNX2X_Q_FLG_ANTI_SPOOF, &setup_p->flags);
+
+	if (vfq_is_leading(q)) {
+		__set_bit(BNX2X_Q_FLG_LEADING_RSS, &setup_p->flags);
+		__set_bit(BNX2X_Q_FLG_MCAST, &setup_p->flags);
+	}
+
+	/* Setup-op rx parameters */
+	if (test_bit(BNX2X_Q_TYPE_HAS_RX, &q_type)) {
+		struct bnx2x_rxq_setup_params *rxq_p = &setup_p->rxq_params;
+
+		rxq_p->cl_qzone_id = vfq_qzone_id(vf, q);
+		rxq_p->fw_sb_id = vf_igu_sb(vf, q->sb_idx);
+		rxq_p->rss_engine_id = FW_VF_HANDLE(vf->abs_vfid);
+
+		if (test_bit(BNX2X_Q_FLG_TPA, &setup_p->flags))
+			rxq_p->max_tpa_queues = BNX2X_VF_MAX_TPA_AGG_QUEUES;
+	}
+
+	/* Setup-op tx parameters */
+	if (test_bit(BNX2X_Q_TYPE_HAS_TX, &q_type)) {
+		setup_p->txq_params.tss_leading_cl_id = vf->leading_rss;
+		setup_p->txq_params.fw_sb_id = vf_igu_sb(vf, q->sb_idx);
+	}
+}
+
+/* VFOP queue construction */
+static void bnx2x_vfop_qctor(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_vfop_args_qctor *args = &vfop->args.qctor;
+	struct bnx2x_queue_state_params *q_params = &vfop->op_p->qctor.qstate;
+	enum bnx2x_vfop_qctor_state state = vfop->state;
+
+	bnx2x_vfop_reset_wq(vf);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_QCTOR_INIT:
+
+		/* has this queue already been opened? */
+		if (bnx2x_get_q_logical_state(bp, q_params->q_obj) ==
+		    BNX2X_Q_LOGICAL_STATE_ACTIVE) {
+			DP(BNX2X_MSG_IOV,
+			   "Entered qctor but queue was already up. Aborting gracefully\n");
+			goto op_done;
+		}
+
+		/* next state */
+		vfop->state = BNX2X_VFOP_QCTOR_SETUP;
+
+		q_params->cmd = BNX2X_Q_CMD_INIT;
+		vfop->rc = bnx2x_queue_state_change(bp, q_params);
+
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_CONT);
+
+	case BNX2X_VFOP_QCTOR_SETUP:
+		/* next state */
+		vfop->state = BNX2X_VFOP_QCTOR_INT_EN;
+
+		/* copy pre-prepared setup params to the queue-state params */
+		vfop->op_p->qctor.qstate.params.setup =
+			vfop->op_p->qctor.prep_qsetup;
+
+		q_params->cmd = BNX2X_Q_CMD_SETUP;
+		vfop->rc = bnx2x_queue_state_change(bp, q_params);
+
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_CONT);
+
+	case BNX2X_VFOP_QCTOR_INT_EN:
+
+		/* enable interrupts */
+		bnx2x_vf_igu_ack_sb(bp, vf, vf_igu_sb(vf, args->sb_idx),
+				    USTORM_ID, 0, IGU_INT_ENABLE, 0);
+		goto op_done;
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_err:
+	BNX2X_ERR("QCTOR[%d:%d] error: cmd %d, rc %d\n",
+		  vf->abs_vfid, args->qid, q_params->cmd, vfop->rc);
+op_done:
+	bnx2x_vfop_end(bp, vf, vfop);
+op_pending:
+	return;
+}
+
+static int bnx2x_vfop_qctor_cmd(struct bnx2x *bp,
+				struct bnx2x_virtf *vf,
+				struct bnx2x_vfop_cmd *cmd,
+				int qid)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		vf->op_params.qctor.qstate.q_obj = &bnx2x_vfq(vf, qid, sp_obj);
+
+		vfop->args.qctor.qid = qid;
+		vfop->args.qctor.sb_idx = bnx2x_vfq(vf, qid, sb_idx);
+
+		bnx2x_vfop_opset(BNX2X_VFOP_QCTOR_INIT,
+				 bnx2x_vfop_qctor, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qctor,
+					     cmd->block);
+	}
+	return -ENOMEM;
 }
 
 static void
@@ -116,225 +340,354 @@ bnx2x_vf_set_igu_info(struct bnx2x *bp, u8 igu_sb_id, u8 abs_vfid)
 	}
 }
 
-static void
-bnx2x_get_vf_igu_cam_info(struct bnx2x *bp)
+/* VFOP MAC/VLAN helpers */
+static inline void bnx2x_vfop_credit(struct bnx2x *bp,
+				     struct bnx2x_vfop *vfop,
+				     struct bnx2x_vlan_mac_obj *obj)
 {
-	int sb_id;
-	u32 val;
-	u8 fid;
+	struct bnx2x_vfop_args_filters *args = &vfop->args.filters;
 
-	/* IGU in normal mode - read CAM */
-	for (sb_id = 0; sb_id < IGU_REG_MAPPING_MEMORY_SIZE; sb_id++) {
-		val = REG_RD(bp, IGU_REG_MAPPING_MEMORY + sb_id * 4);
-		if (!(val & IGU_REG_MAPPING_MEMORY_VALID))
-			continue;
-		fid = GET_FIELD((val), IGU_REG_MAPPING_MEMORY_FID);
-		if (!(fid & IGU_FID_ENCODE_IS_PF))
-			bnx2x_vf_set_igu_info(bp, sb_id,
-					      (fid & IGU_FID_VF_NUM_MASK));
+	/* update credit only if there is no error
+	 * and a valid credit counter
+	 */
+	if (!vfop->rc && args->credit) {
+		int cnt = 0;
+		struct list_head *pos;
 
-		DP(BNX2X_MSG_IOV, "%s[%d], igu_sb_id=%d, msix=%d\n",
-		   ((fid & IGU_FID_ENCODE_IS_PF) ? "PF" : "VF"),
-		   ((fid & IGU_FID_ENCODE_IS_PF) ? (fid & IGU_FID_PF_NUM_MASK) :
-		   (fid & IGU_FID_VF_NUM_MASK)), sb_id,
-		   GET_FIELD((val), IGU_REG_MAPPING_MEMORY_VECTOR));
+		list_for_each(pos, &obj->head)
+			cnt++;
+
+		atomic_set(args->credit, cnt);
 	}
 }
 
-static void __bnx2x_iov_free_vfdb(struct bnx2x *bp)
+static int bnx2x_vfop_set_user_req(struct bnx2x *bp,
+				    struct bnx2x_vfop_filter *pos,
+				    struct bnx2x_vlan_mac_data *user_req)
 {
-	if (bp->vfdb) {
-		kfree(bp->vfdb->vfqs);
-		kfree(bp->vfdb->vfs);
-		kfree(bp->vfdb);
+	user_req->cmd = pos->add ? BNX2X_VLAN_MAC_ADD :
+		BNX2X_VLAN_MAC_DEL;
+
+	switch (pos->type) {
+	case BNX2X_VFOP_FILTER_MAC:
+		memcpy(user_req->u.mac.mac, pos->mac, ETH_ALEN);
+		break;
+	case BNX2X_VFOP_FILTER_VLAN:
+		user_req->u.vlan.vlan = pos->vid;
+		break;
+	default:
+		BNX2X_ERR("Invalid filter type, skipping\n");
+		return 1;
 	}
-	bp->vfdb = NULL;
+	return 0;
 }
 
-static int bnx2x_sriov_pci_cfg_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
+static int
+bnx2x_vfop_config_vlan0(struct bnx2x *bp,
+			struct bnx2x_vlan_mac_ramrod_params *vlan_mac,
+			bool add)
 {
-	int pos;
-	struct pci_dev *dev = bp->pdev;
-
-	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);
-	if (!pos) {
-		BNX2X_ERR("failed to find SRIOV capability in device\n");
-		return -ENODEV;
-	}
+	int rc;
 
-	iov->pos = pos;
-	DP(BNX2X_MSG_IOV, "sriov ext pos %d\n", pos);
-	pci_read_config_word(dev, pos + PCI_SRIOV_CTRL, &iov->ctrl);
-	pci_read_config_word(dev, pos + PCI_SRIOV_TOTAL_VF, &iov->total);
-	pci_read_config_word(dev, pos + PCI_SRIOV_INITIAL_VF, &iov->initial);
-	pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &iov->offset);
-	pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &iov->stride);
-	pci_read_config_dword(dev, pos + PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
-	pci_read_config_dword(dev, pos + PCI_SRIOV_CAP, &iov->cap);
-	pci_read_config_byte(dev, pos + PCI_SRIOV_FUNC_LINK, &iov->link);
+	vlan_mac->user_req.cmd = add ? BNX2X_VLAN_MAC_ADD :
+		BNX2X_VLAN_MAC_DEL;
+	vlan_mac->user_req.u.vlan.vlan = 0;
 
-	return 0;
+	rc = bnx2x_config_vlan_mac(bp, vlan_mac);
+	if (rc == -EEXIST)
+		rc = 0;
+	return rc;
 }
 
-static int bnx2x_sriov_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
+static int bnx2x_vfop_config_list(struct bnx2x *bp,
+				  struct bnx2x_vfop_filters *filters,
+				  struct bnx2x_vlan_mac_ramrod_params *vlan_mac)
 {
-	u32 val;
+	struct bnx2x_vfop_filter *pos, *tmp;
+	struct list_head rollback_list, *filters_list = &filters->head;
+	struct bnx2x_vlan_mac_data *user_req = &vlan_mac->user_req;
+	int rc = 0, cnt = 0;
 
-	/* read the SRIOV capability structure
-	 * The fields can be read via configuration read or
-	 * directly from the device (starting at offset PCICFG_OFFSET)
-	 */
-	if (bnx2x_sriov_pci_cfg_info(bp, iov))
-		return -ENODEV;
+	INIT_LIST_HEAD(&rollback_list);
 
-	/* get the number of SRIOV bars */
-	iov->nres = 0;
-
-	/* read the first_vfid */
-	val = REG_RD(bp, PCICFG_OFFSET + GRC_CONFIG_REG_PF_INIT_VF);
-	iov->first_vf_in_pf = ((val & GRC_CR_PF_INIT_VF_PF_FIRST_VF_NUM_MASK)
-			       * 8) - (BNX2X_MAX_NUM_OF_VFS * BP_PATH(bp));
+	list_for_each_entry_safe(pos, tmp, filters_list, link) {
+		if (bnx2x_vfop_set_user_req(bp, pos, user_req))
+			continue;
 
-	DP(BNX2X_MSG_IOV,
-	   "IOV info[%d]: first vf %d, nres %d, cap 0x%x, ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d, stride %d, page size 0x%x\n",
-	   BP_FUNC(bp),
-	   iov->first_vf_in_pf, iov->nres, iov->cap, iov->ctrl, iov->total,
-	   iov->initial, iov->nr_virtfn, iov->offset, iov->stride, iov->pgsz);
+		rc = bnx2x_config_vlan_mac(bp, vlan_mac);
+		if (rc >= 0) {
+			cnt += pos->add ? 1 : -1;
+			list_del(&pos->link);
+			list_add(&pos->link, &rollback_list);
+			rc = 0;
+		} else if (rc == -EEXIST) {
+			rc = 0;
+		} else {
+			BNX2X_ERR("Failed to add a new vlan_mac command\n");
+			break;
+		}
+	}
 
-	return 0;
+	/* rollback if error or too many rules added */
+	if (rc || cnt > filters->add_cnt) {
+		BNX2X_ERR("error or too many rules added. Performing rollback\n");
+		list_for_each_entry_safe(pos, tmp, &rollback_list, link) {
+			pos->add = !pos->add;	/* reverse op */
+			bnx2x_vfop_set_user_req(bp, pos, user_req);
+			bnx2x_config_vlan_mac(bp, vlan_mac);
+			list_del(&pos->link);
+		}
+		cnt = 0;
+		if (!rc)
+			rc = -EINVAL;
+	}
+	filters->add_cnt = cnt;
+	return rc;
 }
 
-static u8 bnx2x_iov_get_max_queue_count(struct bnx2x *bp)
+/* VFOP set VLAN/MAC */
+static void bnx2x_vfop_vlan_mac(struct bnx2x *bp, struct bnx2x_virtf *vf)
 {
-	int i;
-	u8 queue_count = 0;
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_vlan_mac_ramrod_params *vlan_mac = &vfop->op_p->vlan_mac;
+	struct bnx2x_vlan_mac_obj *obj = vlan_mac->vlan_mac_obj;
+	struct bnx2x_vfop_filters *filters = vfop->args.filters.multi_filter;
 
-	if (IS_SRIOV(bp))
-		for_each_vf(bp, i)
-			queue_count += bnx2x_vf(bp, i, alloc_resc.num_sbs);
+	enum bnx2x_vfop_vlan_mac_state state = vfop->state;
 
-	return queue_count;
-}
+	if (vfop->rc < 0)
+		goto op_err;
 
-/* must be called after PF bars are mapped */
-int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
-				 int num_vfs_param)
-{
-	int err, i, qcount;
-	struct bnx2x_sriov *iov;
-	struct pci_dev *dev = bp->pdev;
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
 
-	bp->vfdb = NULL;
+	bnx2x_vfop_reset_wq(vf);
 
-	/* verify sriov capability is present in configuration space */
-	if (!pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV)) {
-		DP(BNX2X_MSG_IOV, "no sriov - capability not found");
-		return 0;
-	}
+	switch (state) {
+	case BNX2X_VFOP_VLAN_MAC_CLEAR:
+		/* next state */
+		vfop->state = BNX2X_VFOP_VLAN_MAC_CHK_DONE;
 
-	/* verify is pf */
-	if (IS_VF(bp))
-		return 0;
+		/* do delete */
+		vfop->rc = obj->delete_all(bp, obj,
+					   &vlan_mac->user_req.vlan_mac_flags,
+					   &vlan_mac->ramrod_flags);
 
-	/* verify chip revision */
-	if (CHIP_IS_E1x(bp))
-		return 0;
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
 
-	/* check if SRIOV support is turned off */
-	if (!num_vfs_param)
-		return 0;
+	case BNX2X_VFOP_VLAN_MAC_CONFIG_SINGLE:
+		/* next state */
+		vfop->state = BNX2X_VFOP_VLAN_MAC_CHK_DONE;
 
-	/* SRIOV assumes that num of PF CIDs < BNX2X_FIRST_VF_CID */
-	if (BNX2X_L2_MAX_CID(bp) >= BNX2X_FIRST_VF_CID) {
-		BNX2X_ERR("PF cids %d are overspilling into vf space (starts at %d). Abort SRIOV",
-			  BNX2X_L2_MAX_CID(bp), BNX2X_FIRST_VF_CID);
-		return 0;
-	}
+		/* do config */
+		vfop->rc = bnx2x_config_vlan_mac(bp, vlan_mac);
+		if (vfop->rc == -EEXIST)
+			vfop->rc = 0;
 
-	/* SRIOV can be enabled only with MSIX */
-	if (int_mode_param == BNX2X_INT_MODE_MSI ||
-	    int_mode_param == BNX2X_INT_MODE_INTX) {
-		BNX2X_ERR("Forced MSI/INTx mode is incompatible with SRIOV\n");
-		return 0;
-	}
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
 
-	/* verify ari is enabled */
-	if (!bnx2x_ari_enabled(bp->pdev)) {
-		BNX2X_ERR("ARI not supported, SRIOV can not be enabled\n");
-		return 0;
-	}
+	case BNX2X_VFOP_VLAN_MAC_CHK_DONE:
+		vfop->rc = !!obj->raw.check_pending(&obj->raw);
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
 
-	/* verify igu is in normal mode */
-	if (CHIP_INT_MODE_IS_BC(bp)) {
-		BNX2X_ERR("IGU not normal mode,  SRIOV can not be enabled\n");
-		return 0;
-	}
+	case BNX2X_VFOP_MAC_CONFIG_LIST:
+		/* next state */
+		vfop->state = BNX2X_VFOP_VLAN_MAC_CHK_DONE;
 
-	/* allocate the vfs database */
-	bp->vfdb = kzalloc(sizeof(*(bp->vfdb)), GFP_KERNEL);
-	if (!bp->vfdb) {
-		BNX2X_ERR("failed to allocate vf database\n");
-		err = -ENOMEM;
-		goto failed;
+		/* do list config */
+		vfop->rc = bnx2x_vfop_config_list(bp, filters, vlan_mac);
+		if (vfop->rc)
+			goto op_err;
+
+		set_bit(RAMROD_CONT, &vlan_mac->ramrod_flags);
+		vfop->rc = bnx2x_config_vlan_mac(bp, vlan_mac);
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
+
+	case BNX2X_VFOP_VLAN_CONFIG_LIST:
+		/* next state */
+		vfop->state = BNX2X_VFOP_VLAN_CONFIG_LIST_0;
+
+		/* remove vlan0 - could be no-op */
+		vfop->rc = bnx2x_vfop_config_vlan0(bp, vlan_mac, false);
+		if (vfop->rc)
+			goto op_err;
+
+		/* Do vlan list config. If this operation fails we try to
+		 * restore vlan0 to keep the queue in working order
+		 */
+		vfop->rc = bnx2x_vfop_config_list(bp, filters, vlan_mac);
+		if (!vfop->rc) {
+			set_bit(RAMROD_CONT, &vlan_mac->ramrod_flags);
+			vfop->rc = bnx2x_config_vlan_mac(bp, vlan_mac);
+		}
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_CONT); /* fall-through */
+
+	case BNX2X_VFOP_VLAN_CONFIG_LIST_0:
+		/* next state */
+		vfop->state = BNX2X_VFOP_VLAN_MAC_CHK_DONE;
+
+		if (list_empty(&obj->head))
+			/* add vlan0 */
+			vfop->rc = bnx2x_vfop_config_vlan0(bp, vlan_mac, true);
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
+
+	default:
+		bnx2x_vfop_default(state);
 	}
+op_err:
+	BNX2X_ERR("VLAN-MAC error: rc %d\n", vfop->rc);
+op_done:
+	kfree(filters);
+	bnx2x_vfop_credit(bp, vfop, obj);
+	bnx2x_vfop_end(bp, vf, vfop);
+op_pending:
+	return;
+}
 
-	/* get the sriov info - Linux already collected all the pertinent
-	 * information, however the sriov structure is for the private use
-	 * of the pci module. Also we want this information regardless
-	 *  of the hyper-visor.
-	 */
-	iov = &(bp->vfdb->sriov);
-	err = bnx2x_sriov_info(bp, iov);
-	if (err)
-		goto failed;
+struct bnx2x_vfop_vlan_mac_flags {
+	bool drv_only;
+	bool dont_consume;
+	bool single_cmd;
+	bool add;
+};
 
-	/* SR-IOV capability was enabled but there are no VFs*/
-	if (iov->total == 0)
-		goto failed;
+static void
+bnx2x_vfop_vlan_mac_prep_ramrod(struct bnx2x_vlan_mac_ramrod_params *ramrod,
+				struct bnx2x_vfop_vlan_mac_flags *flags)
+{
+	struct bnx2x_vlan_mac_data *ureq = &ramrod->user_req;
 
-	/* calcuate the actual number of VFs */
-	iov->nr_virtfn = min_t(u16, iov->total, (u16)num_vfs_param);
+	memset(ramrod, 0, sizeof(*ramrod));
 
-	/* allcate the vf array */
-	bp->vfdb->vfs = kzalloc(sizeof(struct bnx2x_virtf) *
-				BNX2X_NR_VIRTFN(bp), GFP_KERNEL);
-	if (!bp->vfdb->vfs) {
-		BNX2X_ERR("failed to allocate vf array\n");
-		err = -ENOMEM;
-		goto failed;
+	/* ramrod flags */
+	if (flags->drv_only)
+		set_bit(RAMROD_DRV_CLR_ONLY, &ramrod->ramrod_flags);
+	if (flags->single_cmd)
+		set_bit(RAMROD_EXEC, &ramrod->ramrod_flags);
+
+	/* mac_vlan flags */
+	if (flags->dont_consume)
+		set_bit(BNX2X_DONT_CONSUME_CAM_CREDIT, &ureq->vlan_mac_flags);
+
+	/* cmd */
+	ureq->cmd = flags->add ? BNX2X_VLAN_MAC_ADD : BNX2X_VLAN_MAC_DEL;
+}
+
+int bnx2x_vfop_vlan_set_cmd(struct bnx2x *bp,
+			    struct bnx2x_virtf *vf,
+			    struct bnx2x_vfop_cmd *cmd,
+			    int qid, u16 vid, bool add)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_vfop_args_filters filters = {
+			.multi_filter = NULL, /* single command */
+			.credit = &bnx2x_vfq(vf, qid, vlan_count),
+		};
+		struct bnx2x_vfop_vlan_mac_flags flags = {
+			.drv_only = false,
+			.dont_consume = (filters.credit != NULL),
+			.single_cmd = true,
+			.add = add,
+		};
+		struct bnx2x_vlan_mac_ramrod_params *ramrod =
+			&vf->op_params.vlan_mac;
+
+		/* set ramrod params */
+		bnx2x_vfop_vlan_mac_prep_ramrod(ramrod, &flags);
+		ramrod->user_req.u.vlan.vlan = vid;
+
+		/* set object */
+		ramrod->vlan_mac_obj = &bnx2x_vfq(vf, qid, vlan_obj);
+
+		/* set extra args */
+		vfop->args.filters = filters;
+
+		bnx2x_vfop_opset(BNX2X_VFOP_VLAN_MAC_CONFIG_SINGLE,
+				 bnx2x_vfop_vlan_mac, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_vlan_mac,
+					     cmd->block);
 	}
+	return -ENOMEM;
+}
 
-	/* Initial VF init - index and abs_vfid - nr_virtfn must be set */
-	for_each_vf(bp, i) {
-		bnx2x_vf(bp, i, index) = i;
-		bnx2x_vf(bp, i, abs_vfid) = iov->first_vf_in_pf + i;
-		bnx2x_vf(bp, i, state) = VF_FREE;
-		INIT_LIST_HEAD(&bnx2x_vf(bp, i, op_list_head));
-		mutex_init(&bnx2x_vf(bp, i, op_mutex));
-		bnx2x_vf(bp, i, op_current) = CHANNEL_TLV_NONE;
+/* VFOP queue setup (queue constructor + set vlan 0) */
+static void bnx2x_vfop_qsetup(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	int qid = vfop->args.qctor.qid;
+	enum bnx2x_vfop_qsetup_state state = vfop->state;
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vfop_qsetup,
+		.block = false,
+	};
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_QSETUP_CTOR:
+		/* init the queue ctor command */
+		vfop->state = BNX2X_VFOP_QSETUP_VLAN0;
+		vfop->rc = bnx2x_vfop_qctor_cmd(bp, vf, &cmd, qid);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_QSETUP_VLAN0:
+		/* skip if non-leading or FPGA/EMU */
+		if (qid || CHIP_REV_IS_SLOW(bp))
+			goto op_done;
+
+		/* init the queue set-vlan command (for vlan 0) */
+		vfop->state = BNX2X_VFOP_QSETUP_DONE;
+		vfop->rc = bnx2x_vfop_vlan_set_cmd(bp, vf, &cmd, qid, 0, true);
+		if (vfop->rc)
+			goto op_err;
+		return;
+op_err:
+	BNX2X_ERR("QSETUP[%d:%d] error: rc %d\n", vf->abs_vfid, qid, vfop->rc);
+op_done:
+	case BNX2X_VFOP_QSETUP_DONE:
+		bnx2x_vfop_end(bp, vf, vfop);
+		return;
+	default:
+		bnx2x_vfop_default(state);
 	}
+}
 
-	/* re-read the IGU CAM for VFs - index and abs_vfid must be set */
-	bnx2x_get_vf_igu_cam_info(bp);
+int bnx2x_vfop_qsetup_cmd(struct bnx2x *bp,
+			  struct bnx2x_virtf *vf,
+			  struct bnx2x_vfop_cmd *cmd,
+			  int qid)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
 
-	/* get the total queue count and allocate the global queue arrays */
-	qcount = bnx2x_iov_get_max_queue_count(bp);
+	if (vfop) {
+		vfop->args.qctor.qid = qid;
 
-	/* allocate the queue arrays for all VFs */
-	bp->vfdb->vfqs = kzalloc(qcount * sizeof(struct bnx2x_vf_queue),
-				 GFP_KERNEL);
-	if (!bp->vfdb->vfqs) {
-		BNX2X_ERR("failed to allocate vf queue array\n");
-		err = -ENOMEM;
-		goto failed;
+		bnx2x_vfop_opset(BNX2X_VFOP_QSETUP_CTOR,
+				 bnx2x_vfop_qsetup, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qsetup,
+					     cmd->block);
 	}
+	return -ENOMEM;
+}
 
-	return 0;
-failed:
-	DP(BNX2X_MSG_IOV, "Failed err=%d\n", err);
-	__bnx2x_iov_free_vfdb(bp);
-	return err;
+static u8 bnx2x_iov_get_max_queue_count(struct bnx2x *bp)
+{
+	int i;
+	u8 queue_count = 0;
+
+	if (IS_SRIOV(bp))
+		for_each_vf(bp, i)
+			queue_count += bnx2x_vf(bp, i, alloc_resc.num_sbs);
+
+	return queue_count;
 }
+
 /* VF enable primitives
  *
  * when pretend is required the caller is responsible
@@ -608,6 +961,216 @@ static void bnx2x_vf_set_bars(struct bnx2x *bp, struct bnx2x_virtf *vf)
 	}
 }
 
+static int bnx2x_ari_enabled(struct pci_dev *dev)
+{
+	return dev->bus->self && dev->bus->self->ari_enabled;
+}
+
+static void
+bnx2x_get_vf_igu_cam_info(struct bnx2x *bp)
+{
+	int sb_id;
+	u32 val;
+	u8 fid;
+
+	/* IGU in normal mode - read CAM */
+	for (sb_id = 0; sb_id < IGU_REG_MAPPING_MEMORY_SIZE; sb_id++) {
+		val = REG_RD(bp, IGU_REG_MAPPING_MEMORY + sb_id * 4);
+		if (!(val & IGU_REG_MAPPING_MEMORY_VALID))
+			continue;
+		fid = GET_FIELD((val), IGU_REG_MAPPING_MEMORY_FID);
+		if (!(fid & IGU_FID_ENCODE_IS_PF))
+			bnx2x_vf_set_igu_info(bp, sb_id,
+					      (fid & IGU_FID_VF_NUM_MASK));
+
+		DP(BNX2X_MSG_IOV, "%s[%d], igu_sb_id=%d, msix=%d\n",
+		   ((fid & IGU_FID_ENCODE_IS_PF) ? "PF" : "VF"),
+		   ((fid & IGU_FID_ENCODE_IS_PF) ? (fid & IGU_FID_PF_NUM_MASK) :
+		   (fid & IGU_FID_VF_NUM_MASK)), sb_id,
+		   GET_FIELD((val), IGU_REG_MAPPING_MEMORY_VECTOR));
+	}
+}
+
+static void __bnx2x_iov_free_vfdb(struct bnx2x *bp)
+{
+	if (bp->vfdb) {
+		kfree(bp->vfdb->vfqs);
+		kfree(bp->vfdb->vfs);
+		kfree(bp->vfdb);
+	}
+	bp->vfdb = NULL;
+}
+
+static int bnx2x_sriov_pci_cfg_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
+{
+	int pos;
+	struct pci_dev *dev = bp->pdev;
+
+	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);
+	if (!pos) {
+		BNX2X_ERR("failed to find SRIOV capability in device\n");
+		return -ENODEV;
+	}
+
+	iov->pos = pos;
+	DP(BNX2X_MSG_IOV, "sriov ext pos %d\n", pos);
+	pci_read_config_word(dev, pos + PCI_SRIOV_CTRL, &iov->ctrl);
+	pci_read_config_word(dev, pos + PCI_SRIOV_TOTAL_VF, &iov->total);
+	pci_read_config_word(dev, pos + PCI_SRIOV_INITIAL_VF, &iov->initial);
+	pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &iov->offset);
+	pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &iov->stride);
+	pci_read_config_dword(dev, pos + PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
+	pci_read_config_dword(dev, pos + PCI_SRIOV_CAP, &iov->cap);
+	pci_read_config_byte(dev, pos + PCI_SRIOV_FUNC_LINK, &iov->link);
+
+	return 0;
+}
+
+static int bnx2x_sriov_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
+{
+	u32 val;
+
+	/* read the SRIOV capability structure
+	 * The fields can be read via configuration read or
+	 * directly from the device (starting at offset PCICFG_OFFSET)
+	 */
+	if (bnx2x_sriov_pci_cfg_info(bp, iov))
+		return -ENODEV;
+
+	/* get the number of SRIOV bars */
+	iov->nres = 0;
+
+	/* read the first_vfid */
+	val = REG_RD(bp, PCICFG_OFFSET + GRC_CONFIG_REG_PF_INIT_VF);
+	iov->first_vf_in_pf = ((val & GRC_CR_PF_INIT_VF_PF_FIRST_VF_NUM_MASK)
+			       * 8) - (BNX2X_MAX_NUM_OF_VFS * BP_PATH(bp));
+
+	DP(BNX2X_MSG_IOV,
+	   "IOV info[%d]: first vf %d, nres %d, cap 0x%x, ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d, stride %d, page size 0x%x\n",
+	   BP_FUNC(bp),
+	   iov->first_vf_in_pf, iov->nres, iov->cap, iov->ctrl, iov->total,
+	   iov->initial, iov->nr_virtfn, iov->offset, iov->stride, iov->pgsz);
+
+	return 0;
+}
+
+/* must be called after PF bars are mapped */
+int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
+				 int num_vfs_param)
+{
+	int err, i, qcount;
+	struct bnx2x_sriov *iov;
+	struct pci_dev *dev = bp->pdev;
+
+	bp->vfdb = NULL;
+
+	/* verify this is a PF */
+	if (IS_VF(bp))
+		return 0;
+
+	/* verify sriov capability is present in configuration space */
+	if (!pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV))
+		return 0;
+
+	/* verify chip revision */
+	if (CHIP_IS_E1x(bp))
+		return 0;
+
+	/* check if SRIOV support is turned off */
+	if (!num_vfs_param)
+		return 0;
+
+	/* SRIOV assumes that num of PF CIDs < BNX2X_FIRST_VF_CID */
+	if (BNX2X_L2_MAX_CID(bp) >= BNX2X_FIRST_VF_CID) {
+		BNX2X_ERR("PF cids %d are overspilling into vf space (starts at %d). Abort SRIOV",
+			  BNX2X_L2_MAX_CID(bp), BNX2X_FIRST_VF_CID);
+		return 0;
+	}
+
+	/* SRIOV can be enabled only with MSIX */
+	if (int_mode_param == BNX2X_INT_MODE_MSI ||
+	    int_mode_param == BNX2X_INT_MODE_INTX)
+		BNX2X_ERR("Forced MSI/INTx mode is incompatible with SRIOV\n");
+
+	err = -EIO;
+	/* verify ari is enabled */
+	if (!bnx2x_ari_enabled(bp->pdev)) {
+		BNX2X_ERR("ARI not supported, SRIOV can not be enabled\n");
+		return err;
+	}
+
+	/* verify igu is in normal mode */
+	if (CHIP_INT_MODE_IS_BC(bp)) {
+		BNX2X_ERR("IGU not normal mode,  SRIOV can not be enabled\n");
+		return err;
+	}
+
+	/* allocate the vfs database */
+	bp->vfdb = kzalloc(sizeof(*(bp->vfdb)), GFP_KERNEL);
+	if (!bp->vfdb) {
+		BNX2X_ERR("failed to allocate vf database\n");
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	/* get the sriov info - Linux already collected all the pertinent
+	 * information, however the sriov structure is for the private use
+	 * of the pci module. Also we want this information regardless
+	 * of the hyper-visor.
+	 */
+	iov = &(bp->vfdb->sriov);
+	err = bnx2x_sriov_info(bp, iov);
+	if (err)
+		goto failed;
+
+	/* SR-IOV capability was enabled but there are no VFs */
+	if (iov->total == 0)
+		goto failed;
+
+	/* calculate the actual number of VFs */
+	iov->nr_virtfn = min_t(u16, iov->total, (u16)num_vfs_param);
+
+	/* allocate the vf array */
+	bp->vfdb->vfs = kzalloc(sizeof(struct bnx2x_virtf) *
+				BNX2X_NR_VIRTFN(bp), GFP_KERNEL);
+	if (!bp->vfdb->vfs) {
+		BNX2X_ERR("failed to allocate vf array\n");
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	/* Initial VF init - index and abs_vfid - nr_virtfn must be set */
+	for_each_vf(bp, i) {
+		bnx2x_vf(bp, i, index) = i;
+		bnx2x_vf(bp, i, abs_vfid) = iov->first_vf_in_pf + i;
+		bnx2x_vf(bp, i, state) = VF_FREE;
+		INIT_LIST_HEAD(&bnx2x_vf(bp, i, op_list_head));
+		mutex_init(&bnx2x_vf(bp, i, op_mutex));
+		bnx2x_vf(bp, i, op_current) = CHANNEL_TLV_NONE;
+	}
+
+	/* re-read the IGU CAM for VFs - index and abs_vfid must be set */
+	bnx2x_get_vf_igu_cam_info(bp);
+
+	/* get the total queue count and allocate the global queue arrays */
+	qcount = bnx2x_iov_get_max_queue_count(bp);
+
+	/* allocate the queue arrays for all VFs */
+	bp->vfdb->vfqs = kzalloc(qcount * sizeof(struct bnx2x_vf_queue),
+				 GFP_KERNEL);
+	if (!bp->vfdb->vfqs) {
+		BNX2X_ERR("failed to allocate vf queue array\n");
+		err = -ENOMEM;
+		goto failed;
+	}
+
+	return 0;
+failed:
+	DP(BNX2X_MSG_IOV, "Failed err=%d\n", err);
+	__bnx2x_iov_free_vfdb(bp);
+	return err;
+}
+
 void bnx2x_iov_remove_one(struct bnx2x *bp)
 {
 	/* if SRIOV is not enabled there's nothing to do */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index d86b289..3b758d4 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -26,6 +26,8 @@
  * The VF array is indexed by the relative vfid.
  */
 #define BNX2X_VF_MAX_QUEUES		16
+#define BNX2X_VF_MAX_TPA_AGG_QUEUES	8
+
 struct bnx2x_sriov {
 	u32 first_vf_in_pf;
 
@@ -91,6 +93,11 @@ struct bnx2x_virtf;
 /* VFOP definitions */
 typedef void (*vfop_handler_t)(struct bnx2x *bp, struct bnx2x_virtf *vf);
 
+struct bnx2x_vfop_cmd {
+	vfop_handler_t done;
+	bool block;
+};
+
 /* VFOP queue filters command additional arguments */
 struct bnx2x_vfop_filter {
 	struct list_head link;
@@ -406,6 +413,11 @@ static u8 vfq_cl_id(struct bnx2x_virtf *vf, struct bnx2x_vf_queue *q)
 	return vf->igu_base_id + q->index;
 }
 
+static inline u8 vfq_stat_id(struct bnx2x_virtf *vf, struct bnx2x_vf_queue *q)
+{
+	return vfq_cl_id(vf, q);
+}
+
 static inline u8 vfq_qzone_id(struct bnx2x_virtf *vf, struct bnx2x_vf_queue *q)
 {
 	return vfq_cl_id(vf, q);
@@ -438,6 +450,44 @@ int bnx2x_vf_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
 int bnx2x_vf_init(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		  dma_addr_t *sb_map);
 
+/* VFOP generic helpers */
+#define bnx2x_vfop_default(state) do {				\
+		BNX2X_ERR("Bad state %d\n", (state));		\
+		vfop->rc = -EINVAL;				\
+		goto op_err;					\
+	} while (0)
+
+enum {
+	VFOP_DONE,
+	VFOP_CONT,
+	VFOP_VERIFY_PEND,
+};
+
+#define bnx2x_vfop_finalize(vf, rc, next) do {				\
+		if ((rc) < 0)						\
+			goto op_err;					\
+		else if ((rc) > 0)					\
+			goto op_pending;				\
+		else if ((next) == VFOP_DONE)				\
+			goto op_done;					\
+		else if ((next) == VFOP_VERIFY_PEND)			\
+			BNX2X_ERR("expected pending");			\
+		else {							\
+			DP(BNX2X_MSG_IOV, "no ramrod. scheduling");	\
+			atomic_set(&vf->op_in_progress, 1);		\
+			queue_delayed_work(bnx2x_wq, &bp->sp_task, 0);  \
+			return;						\
+		}							\
+	} while (0)
+
+#define bnx2x_vfop_opset(first_state, trans_hndlr, done_hndlr)		\
+	do {								\
+		vfop->state = first_state;				\
+		vfop->op_p = &vf->op_params;				\
+		vfop->transition = trans_hndlr;				\
+		vfop->done = done_hndlr;				\
+	} while (0)
+
 static inline struct bnx2x_vfop *bnx2x_vfop_cur(struct bnx2x *bp,
 						struct bnx2x_virtf *vf)
 {
@@ -446,6 +496,131 @@ static inline struct bnx2x_vfop *bnx2x_vfop_cur(struct bnx2x *bp,
 	return list_first_entry(&vf->op_list_head, struct bnx2x_vfop, link);
 }
 
+static inline struct bnx2x_vfop *bnx2x_vfop_add(struct bnx2x *bp,
+						struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = kzalloc(sizeof(*vfop), GFP_KERNEL);
+
+	WARN(!mutex_is_locked(&vf->op_mutex), "about to access vf op linked list but mutex was not locked!");
+	if (vfop) {
+		INIT_LIST_HEAD(&vfop->link);
+		list_add(&vfop->link, &vf->op_list_head);
+	}
+	return vfop;
+}
+
+static inline void bnx2x_vfop_end(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				  struct bnx2x_vfop *vfop)
+{
+	/* rc < 0 - error, otherwise set to 0 */
+	DP(BNX2X_MSG_IOV, "rc was %d\n", vfop->rc);
+	if (vfop->rc >= 0)
+		vfop->rc = 0;
+	DP(BNX2X_MSG_IOV, "rc is now %d\n", vfop->rc);
+
+	/* unlink the current op context and propagate error code
+	 * must be done before invoking the 'done()' handler
+	 */
+	WARN(!mutex_is_locked(&vf->op_mutex),
+	     "about to access vf op linked list but mutex was not locked!");
+	list_del(&vfop->link);
+
+	if (list_empty(&vf->op_list_head)) {
+		DP(BNX2X_MSG_IOV, "list was empty %d\n", vfop->rc);
+		vf->op_rc = vfop->rc;
+		DP(BNX2X_MSG_IOV, "copying rc vf->op_rc %d,  vfop->rc %d\n",
+		   vf->op_rc, vfop->rc);
+	} else {
+		struct bnx2x_vfop *cur_vfop;
+		DP(BNX2X_MSG_IOV, "list not empty %d\n", vfop->rc);
+		cur_vfop = bnx2x_vfop_cur(bp, vf);
+		cur_vfop->rc = vfop->rc;
+		DP(BNX2X_MSG_IOV, "copying rc vf->op_rc %d, vfop->rc %d\n",
+		   vf->op_rc, vfop->rc);
+	}
+
+	/* invoke done handler */
+	if (vfop->done) {
+		DP(BNX2X_MSG_IOV, "calling done handler\n");
+		vfop->done(bp, vf);
+	}
+
+	DP(BNX2X_MSG_IOV, "done handler complete. vf->op_rc %d, vfop->rc %d\n",
+	   vf->op_rc, vfop->rc);
+
+	/* if this is the last nested op reset the wait_blocking flag
+	 * to release any blocking wrappers, only after 'done()' is invoked
+	 */
+	if (list_empty(&vf->op_list_head)) {
+		DP(BNX2X_MSG_IOV, "list was empty after done %d\n", vfop->rc);
+		vf->op_wait_blocking = false;
+	}
+
+	kfree(vfop);
+}
+
+static inline int bnx2x_vfop_wait_blocking(struct bnx2x *bp,
+					   struct bnx2x_virtf *vf)
+{
+	/* can take a while if any port is running */
+	int cnt = 5000;
+
+	might_sleep();
+	while (cnt--) {
+		if (!vf->op_wait_blocking) {
+#ifdef BNX2X_STOP_ON_ERROR
+			DP(BNX2X_MSG_IOV, "exit  (cnt %d)\n", 5000 - cnt);
+#endif
+			return 0;
+		}
+		usleep_range(1000, 2000);
+
+		if (bp->panic)
+			return -EIO;
+	}
+
+	/* timeout! */
+#ifdef BNX2X_STOP_ON_ERROR
+	bnx2x_panic();
+#endif
+
+	return -EBUSY;
+}
+
+static inline int bnx2x_vfop_transition(struct bnx2x *bp,
+					struct bnx2x_virtf *vf,
+					vfop_handler_t transition,
+					bool block)
+{
+	if (block)
+		vf->op_wait_blocking = true;
+	transition(bp, vf);
+	if (block)
+		return bnx2x_vfop_wait_blocking(bp, vf);
+	return 0;
+}
+
+/* VFOP queue construction helpers */
+void bnx2x_vfop_qctor_dump_tx(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			    struct bnx2x_queue_init_params *init_params,
+			    struct bnx2x_queue_setup_params *setup_params,
+			    u16 q_idx, u16 sb_idx);
+
+void bnx2x_vfop_qctor_dump_rx(struct bnx2x *bp, struct bnx2x_virtf *vf,
+			    struct bnx2x_queue_init_params *init_params,
+			    struct bnx2x_queue_setup_params *setup_params,
+			    u16 q_idx, u16 sb_idx);
+
+void bnx2x_vfop_qctor_prep(struct bnx2x *bp,
+			   struct bnx2x_virtf *vf,
+			   struct bnx2x_vf_queue *q,
+			   struct bnx2x_vfop_qctor_params *p,
+			   unsigned long q_type);
+int bnx2x_vfop_qsetup_cmd(struct bnx2x *bp,
+			  struct bnx2x_virtf *vf,
+			  struct bnx2x_vfop_cmd *cmd,
+			  int qid);
+
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf);
 /* VF FLR helpers */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index ba5503e..eb10378 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -375,6 +375,149 @@ static void bnx2x_vf_mbx_init_vf(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	bnx2x_vf_mbx_resp(bp, vf);
 }
 
+/* convert MBX queue-flags to standard SP queue-flags */
+static void bnx2x_vf_mbx_set_q_flags(u32 mbx_q_flags,
+				     unsigned long *sp_q_flags)
+{
+	if (mbx_q_flags & VFPF_QUEUE_FLG_TPA)
+		__set_bit(BNX2X_Q_FLG_TPA, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_TPA_IPV6)
+		__set_bit(BNX2X_Q_FLG_TPA_IPV6, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_TPA_GRO)
+		__set_bit(BNX2X_Q_FLG_TPA_GRO, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_STATS)
+		__set_bit(BNX2X_Q_FLG_STATS, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_OV)
+		__set_bit(BNX2X_Q_FLG_OV, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_VLAN)
+		__set_bit(BNX2X_Q_FLG_VLAN, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_COS)
+		__set_bit(BNX2X_Q_FLG_COS, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_HC)
+		__set_bit(BNX2X_Q_FLG_HC, sp_q_flags);
+	if (mbx_q_flags & VFPF_QUEUE_FLG_DHC)
+		__set_bit(BNX2X_Q_FLG_DHC, sp_q_flags);
+}
+
+static void bnx2x_vf_mbx_setup_q(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				 struct bnx2x_vf_mbx *mbx)
+{
+	struct vfpf_setup_q_tlv *setup_q = &mbx->msg->req.setup_q;
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vf_mbx_resp,
+		.block = false,
+	};
+
+	/* verify vf_qid */
+	if (setup_q->vf_qid >= vf_rxq_count(vf)) {
+		BNX2X_ERR("vf_qid %d invalid, max queue count is %d\n",
+			  setup_q->vf_qid, vf_rxq_count(vf));
+		vf->op_rc = -EINVAL;
+		goto response;
+	}
+
+	/* tx queues must be set up alongside rx queues, thus if the rx queue
+	 * is not marked as valid there's nothing to do.
+	 */
+	if (setup_q->param_valid & (VFPF_RXQ_VALID|VFPF_TXQ_VALID)) {
+		struct bnx2x_vf_queue *q = vfq_get(vf, setup_q->vf_qid);
+		unsigned long q_type = 0;
+
+		struct bnx2x_queue_init_params *init_p;
+		struct bnx2x_queue_setup_params *setup_p;
+
+		/* reinit the VF operation context */
+		memset(&vf->op_params.qctor, 0, sizeof(vf->op_params.qctor));
+		setup_p = &vf->op_params.qctor.prep_qsetup;
+		init_p =  &vf->op_params.qctor.qstate.params.init;
+
+		/* activate immediately */
+		__set_bit(BNX2X_Q_FLG_ACTIVE, &setup_p->flags);
+
+		if (setup_q->param_valid & VFPF_TXQ_VALID) {
+			struct bnx2x_txq_setup_params *txq_params =
+				&setup_p->txq_params;
+
+			__set_bit(BNX2X_Q_TYPE_HAS_TX, &q_type);
+
+			/* save sb resource index */
+			q->sb_idx = setup_q->txq.vf_sb;
+
+			/* tx init */
+			init_p->tx.hc_rate = setup_q->txq.hc_rate;
+			init_p->tx.sb_cq_index = setup_q->txq.sb_index;
+
+			bnx2x_vf_mbx_set_q_flags(setup_q->txq.flags,
+						 &init_p->tx.flags);
+
+			/* tx setup - flags */
+			bnx2x_vf_mbx_set_q_flags(setup_q->txq.flags,
+						 &setup_p->flags);
+
+			/* tx setup - general, nothing */
+
+			/* tx setup - tx */
+			txq_params->dscr_map = setup_q->txq.txq_addr;
+			txq_params->sb_cq_index = setup_q->txq.sb_index;
+			txq_params->traffic_type = setup_q->txq.traffic_type;
+
+			bnx2x_vfop_qctor_dump_tx(bp, vf, init_p, setup_p,
+						 q->index, q->sb_idx);
+		}
+
+		if (setup_q->param_valid & VFPF_RXQ_VALID) {
+			struct bnx2x_rxq_setup_params *rxq_params =
+							&setup_p->rxq_params;
+
+			__set_bit(BNX2X_Q_TYPE_HAS_RX, &q_type);
+
+			/* Note: there is no support for different SBs
+			 * for TX and RX
+			 */
+			q->sb_idx = setup_q->rxq.vf_sb;
+
+			/* rx init */
+			init_p->rx.hc_rate = setup_q->rxq.hc_rate;
+			init_p->rx.sb_cq_index = setup_q->rxq.sb_index;
+			bnx2x_vf_mbx_set_q_flags(setup_q->rxq.flags,
+						 &init_p->rx.flags);
+
+			/* rx setup - flags */
+			bnx2x_vf_mbx_set_q_flags(setup_q->rxq.flags,
+						 &setup_p->flags);
+
+			/* rx setup - general */
+			setup_p->gen_params.mtu = setup_q->rxq.mtu;
+
+			/* rx setup - rx */
+			rxq_params->drop_flags = setup_q->rxq.drop_flags;
+			rxq_params->dscr_map = setup_q->rxq.rxq_addr;
+			rxq_params->sge_map = setup_q->rxq.sge_addr;
+			rxq_params->rcq_map = setup_q->rxq.rcq_addr;
+			rxq_params->rcq_np_map = setup_q->rxq.rcq_np_addr;
+			rxq_params->buf_sz = setup_q->rxq.buf_sz;
+			rxq_params->tpa_agg_sz = setup_q->rxq.tpa_agg_sz;
+			rxq_params->max_sges_pkt = setup_q->rxq.max_sge_pkt;
+			rxq_params->sge_buf_sz = setup_q->rxq.sge_buf_sz;
+			rxq_params->cache_line_log =
+				setup_q->rxq.cache_line_log;
+			rxq_params->sb_cq_index = setup_q->rxq.sb_index;
+
+			bnx2x_vfop_qctor_dump_rx(bp, vf, init_p, setup_p,
+						 q->index, q->sb_idx);
+		}
+		/* complete the preparations */
+		bnx2x_vfop_qctor_prep(bp, vf, q, &vf->op_params.qctor, q_type);
+
+		vf->op_rc = bnx2x_vfop_qsetup_cmd(bp, vf, &cmd, q->index);
+		if (vf->op_rc)
+			goto response;
+		return;
+	}
+response:
+	bnx2x_vf_mbx_resp(bp, vf);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -397,6 +540,9 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		case CHANNEL_TLV_INIT:
 			bnx2x_vf_mbx_init_vf(bp, vf, mbx);
 			break;
+		case CHANNEL_TLV_SETUP_Q:
+			bnx2x_vf_mbx_setup_q(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
-- 
1.7.9.GIT


* [PATCH net-next v3 16/22] bnx2x: Support of PF driver of a VF q_filters request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (14 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 15/22] bnx2x: Support of PF driver of a VF setup_q request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 17/22] bnx2x: Support of PF driver of a VF q_teardown request Ariel Elior
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The VF driver uses the 'q_filters' message on the VF <-> PF channel
for configuring an open queue, for example when the rxmode changes.
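
As an illustration only (not part of this patch), here is a minimal
sketch of how a VF driver might fill such a request for an rxmode
change. The field and flag names are those consumed by the PF-side
handler below; the helper name is hypothetical and the mailbox send
step is omitted:

	/* hypothetical helper, for illustration only */
	static void example_vf_rxmode_request(struct vfpf_set_q_filters_tlv *req,
					      int qid)
	{
		req->vf_qid = qid;
		req->flags = VFPF_SET_Q_FILTERS_RX_MASK_CHANGED;
		req->rx_mask = VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST |
			       VFPF_RX_MASK_ACCEPT_MATCHED_MULTICAST |
			       VFPF_RX_MASK_ACCEPT_BROADCAST;
		/* the request is then posted on the VF <-> PF channel */
	}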

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  294 ++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |   32 +++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |  277 +++++++++++++++++++
 3 files changed, 597 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index bde2f47..472e8ca 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -101,6 +101,8 @@ static void bnx2x_vf_igu_ack_sb(struct bnx2x *bp, struct bnx2x_virtf *vf,
 }
 /* VFOP - VF slow-path operation support */
 
+#define BNX2X_VFOP_FILTER_ADD_CNT_MAX		0x10000
+
 /* VFOP operations states */
 enum bnx2x_vfop_qctor_state {
 	   BNX2X_VFOP_QCTOR_INIT,
@@ -123,6 +125,17 @@ enum bnx2x_vfop_qsetup_state {
 	   BNX2X_VFOP_QSETUP_DONE
 };
 
+enum bnx2x_vfop_mcast_state {
+	   BNX2X_VFOP_MCAST_DEL,
+	   BNX2X_VFOP_MCAST_ADD,
+	   BNX2X_VFOP_MCAST_CHK_DONE
+};
+
+enum bnx2x_vfop_rxmode_state {
+	   BNX2X_VFOP_RXMODE_CONFIG,
+	   BNX2X_VFOP_RXMODE_DONE
+};
+
 #define bnx2x_vfop_reset_wq(vf)	atomic_set(&vf->op_in_progress, 0)
 
 void bnx2x_vfop_qctor_dump_tx(struct bnx2x *bp, struct bnx2x_virtf *vf,
@@ -572,6 +585,57 @@ bnx2x_vfop_vlan_mac_prep_ramrod(struct bnx2x_vlan_mac_ramrod_params *ramrod,
 	ureq->cmd = flags->add ? BNX2X_VLAN_MAC_ADD : BNX2X_VLAN_MAC_DEL;
 }
 
+static inline void
+bnx2x_vfop_mac_prep_ramrod(struct bnx2x_vlan_mac_ramrod_params *ramrod,
+			   struct bnx2x_vfop_vlan_mac_flags *flags)
+{
+	bnx2x_vfop_vlan_mac_prep_ramrod(ramrod, flags);
+	set_bit(BNX2X_ETH_MAC, &ramrod->user_req.vlan_mac_flags);
+}
+
+int bnx2x_vfop_mac_list_cmd(struct bnx2x *bp,
+			    struct bnx2x_virtf *vf,
+			    struct bnx2x_vfop_cmd *cmd,
+			    struct bnx2x_vfop_filters *macs,
+			    int qid, bool drv_only)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_vfop_args_filters filters = {
+			.multi_filter = macs,
+			.credit = NULL,		/* consume credit */
+		};
+		struct bnx2x_vfop_vlan_mac_flags flags = {
+			.drv_only = drv_only,
+			.dont_consume = (filters.credit != NULL),
+			.single_cmd = false,
+			.add = false, /* don't care since only the items in the
+				       * filters list affect the sp operation,
+				       * not the list itself
+				       */
+		};
+		struct bnx2x_vlan_mac_ramrod_params *ramrod =
+			&vf->op_params.vlan_mac;
+
+		/* set ramrod params */
+		bnx2x_vfop_mac_prep_ramrod(ramrod, &flags);
+
+		/* set object */
+		ramrod->vlan_mac_obj = &bnx2x_vfq(vf, qid, mac_obj);
+
+		/* set extra args */
+		filters.multi_filter->add_cnt = BNX2X_VFOP_FILTER_ADD_CNT_MAX;
+		vfop->args.filters = filters;
+
+		bnx2x_vfop_opset(BNX2X_VFOP_MAC_CONFIG_LIST,
+				 bnx2x_vfop_vlan_mac, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_vlan_mac,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
 int bnx2x_vfop_vlan_set_cmd(struct bnx2x *bp,
 			    struct bnx2x_virtf *vf,
 			    struct bnx2x_vfop_cmd *cmd,
@@ -611,6 +675,48 @@ int bnx2x_vfop_vlan_set_cmd(struct bnx2x *bp,
 	return -ENOMEM;
 }
 
+int bnx2x_vfop_vlan_list_cmd(struct bnx2x *bp,
+			     struct bnx2x_virtf *vf,
+			     struct bnx2x_vfop_cmd *cmd,
+			     struct bnx2x_vfop_filters *vlans,
+			     int qid, bool drv_only)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_vfop_args_filters filters = {
+			.multi_filter = vlans,
+			.credit = &bnx2x_vfq(vf, qid, vlan_count),
+		};
+		struct bnx2x_vfop_vlan_mac_flags flags = {
+			.drv_only = drv_only,
+			.dont_consume = (filters.credit != NULL),
+			.single_cmd = false,
+			.add = false, /* don't care */
+		};
+		struct bnx2x_vlan_mac_ramrod_params *ramrod =
+			&vf->op_params.vlan_mac;
+
+		/* set ramrod params */
+		bnx2x_vfop_vlan_mac_prep_ramrod(ramrod, &flags);
+
+		/* set object */
+		ramrod->vlan_mac_obj = &bnx2x_vfq(vf, qid, vlan_obj);
+
+		/* set extra args */
+		filters.multi_filter->add_cnt = vf_vlan_rules_cnt(vf) -
+			atomic_read(filters.credit);
+
+		vfop->args.filters = filters;
+
+		bnx2x_vfop_opset(BNX2X_VFOP_VLAN_CONFIG_LIST,
+				 bnx2x_vfop_vlan_mac, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_vlan_mac,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
 /* VFOP queue setup (queue constructor + set vlan 0) */
 static void bnx2x_vfop_qsetup(struct bnx2x *bp, struct bnx2x_virtf *vf)
 {
@@ -676,16 +782,180 @@ int bnx2x_vfop_qsetup_cmd(struct bnx2x *bp,
 	return -ENOMEM;
 }
 
-static u8 bnx2x_iov_get_max_queue_count(struct bnx2x *bp)
+/* VFOP multi-casts */
+static void bnx2x_vfop_mcast(struct bnx2x *bp, struct bnx2x_virtf *vf)
 {
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_mcast_ramrod_params *mcast = &vfop->op_p->mcast;
+	struct bnx2x_raw_obj *raw = &mcast->mcast_obj->raw;
+	struct bnx2x_vfop_args_mcast *args = &vfop->args.mc_list;
+	enum bnx2x_vfop_mcast_state state = vfop->state;
 	int i;
-	u8 queue_count = 0;
 
-	if (IS_SRIOV(bp))
-		for_each_vf(bp, i)
-			queue_count += bnx2x_vf(bp, i, alloc_resc.num_sbs);
+	bnx2x_vfop_reset_wq(vf);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_MCAST_DEL:
+		/* clear existing mcasts */
+		vfop->state = BNX2X_VFOP_MCAST_ADD;
+		vfop->rc = bnx2x_config_mcast(bp, mcast, BNX2X_MCAST_CMD_DEL);
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_CONT);
+
+	case BNX2X_VFOP_MCAST_ADD:
+		if (raw->check_pending(raw))
+			goto op_pending;
+
+		if (args->mc_num) {
+			/* update mcast list on the ramrod params */
+			INIT_LIST_HEAD(&mcast->mcast_list);
+			for (i = 0; i < args->mc_num; i++)
+				list_add_tail(&(args->mc[i].link),
+					      &mcast->mcast_list);
+			/* add new mcasts */
+			vfop->state = BNX2X_VFOP_MCAST_CHK_DONE;
+			vfop->rc = bnx2x_config_mcast(bp, mcast,
+						      BNX2X_MCAST_CMD_ADD);
+		}
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
+
+	case BNX2X_VFOP_MCAST_CHK_DONE:
+		vfop->rc = raw->check_pending(raw) ? 1 : 0;
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_err:
+	BNX2X_ERR("MCAST CONFIG error: rc %d\n", vfop->rc);
+op_done:
+	kfree(args->mc);
+	bnx2x_vfop_end(bp, vf, vfop);
+op_pending:
+	return;
+}
+
+int bnx2x_vfop_mcast_cmd(struct bnx2x *bp,
+			 struct bnx2x_virtf *vf,
+			 struct bnx2x_vfop_cmd *cmd,
+			 bnx2x_mac_addr_t *mcasts,
+			 int mcast_num, bool drv_only)
+{
+	struct bnx2x_vfop *vfop = NULL;
+	size_t mc_sz = mcast_num * sizeof(struct bnx2x_mcast_list_elem);
+	struct bnx2x_mcast_list_elem *mc = mc_sz ? kzalloc(mc_sz, GFP_KERNEL) :
+		NULL;
+
+	if (!mc_sz || mc) {
+		vfop = bnx2x_vfop_add(bp, vf);
+		if (vfop) {
+			int i;
+			struct bnx2x_mcast_ramrod_params *ramrod =
+				&vf->op_params.mcast;
+
+			/* set ramrod params */
+			memset(ramrod, 0, sizeof(*ramrod));
+			ramrod->mcast_obj = &vf->mcast_obj;
+			if (drv_only)
+				set_bit(RAMROD_DRV_CLR_ONLY,
+					&ramrod->ramrod_flags);
+
+			/* copy mcasts pointers */
+			vfop->args.mc_list.mc_num = mcast_num;
+			vfop->args.mc_list.mc = mc;
+			for (i = 0; i < mcast_num; i++)
+				mc[i].mac = mcasts[i];
+
+			bnx2x_vfop_opset(BNX2X_VFOP_MCAST_DEL,
+					 bnx2x_vfop_mcast, cmd->done);
+			return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_mcast,
+						     cmd->block);
+		} else {
+			kfree(mc);
+		}
+	}
+	return -ENOMEM;
+}
+
+/* VFOP rx-mode */
+static void bnx2x_vfop_rxmode(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_rx_mode_ramrod_params *ramrod = &vfop->op_p->rx_mode;
+	enum bnx2x_vfop_rxmode_state state = vfop->state;
+
+	bnx2x_vfop_reset_wq(vf);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_RXMODE_CONFIG:
+		/* next state */
+		vfop->state = BNX2X_VFOP_RXMODE_DONE;
+
+		vfop->rc = bnx2x_config_rx_mode(bp, ramrod);
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
+op_err:
+	BNX2X_ERR("RXMODE error: rc %d\n", vfop->rc);
+op_done:
+	case BNX2X_VFOP_RXMODE_DONE:
+		bnx2x_vfop_end(bp, vf, vfop);
+		return;
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_pending:
+	return;
+}
+
+int bnx2x_vfop_rxmode_cmd(struct bnx2x *bp,
+			  struct bnx2x_virtf *vf,
+			  struct bnx2x_vfop_cmd *cmd,
+			  int qid, unsigned long accept_flags)
+{
+
+	struct bnx2x_vf_queue *vfq = vfq_get(vf, qid);
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_rx_mode_ramrod_params *ramrod =
+			&vf->op_params.rx_mode;
+
+		memset(ramrod, 0, sizeof(*ramrod));
+
+		/* Prepare ramrod parameters */
+		ramrod->cid = vfq->cid;
+		ramrod->cl_id = vfq_cl_id(vf, vfq);
+		ramrod->rx_mode_obj = &bp->rx_mode_obj;
+		ramrod->func_id = FW_VF_HANDLE(vf->abs_vfid);
+
+		ramrod->rx_accept_flags = accept_flags;
+		ramrod->tx_accept_flags = accept_flags;
+		ramrod->pstate = &vf->filter_state;
+		ramrod->state = BNX2X_FILTER_RX_MODE_PENDING;
+
+		set_bit(BNX2X_FILTER_RX_MODE_PENDING, &vf->filter_state);
+		set_bit(RAMROD_RX, &ramrod->ramrod_flags);
+		set_bit(RAMROD_TX, &ramrod->ramrod_flags);
+
+		ramrod->rdata =
+			bnx2x_vf_sp(bp, vf, rx_mode_rdata.e2);
+		ramrod->rdata_mapping =
+			bnx2x_vf_sp_map(bp, vf, rx_mode_rdata.e2);
+
+		bnx2x_vfop_opset(BNX2X_VFOP_RXMODE_CONFIG,
+				 bnx2x_vfop_rxmode, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_rxmode,
+					     cmd->block);
+	}
+	return -ENOMEM;
 
-	return queue_count;
 }
 
 /* VF enable primitives
@@ -1054,6 +1324,18 @@ static int bnx2x_sriov_info(struct bnx2x *bp, struct bnx2x_sriov *iov)
 	return 0;
 }
 
+static u8 bnx2x_iov_get_max_queue_count(struct bnx2x *bp)
+{
+	int i;
+	u8 queue_count = 0;
+
+	if (IS_SRIOV(bp))
+		for_each_vf(bp, i)
+			queue_count += bnx2x_vf(bp, i, alloc_resc.num_sbs);
+
+	return queue_count;
+}
+
 /* must be called after PF bars are mapped */
 int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
 				 int num_vfs_param)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 3b758d4..776aa75 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -442,6 +442,10 @@ void bnx2x_iov_sp_task(struct bnx2x *bp);
 /* global vf mailbox routines */
 void bnx2x_vf_mbx(struct bnx2x *bp, struct vf_pf_event_data *vfpf_event);
 void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid);
+
+/* CORE VF API */
+typedef u8 bnx2x_mac_addr_t[ETH_ALEN];
+
 /* acquire */
 int bnx2x_vf_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
 			  struct vf_pf_resc_request *resc);
@@ -616,11 +620,39 @@ void bnx2x_vfop_qctor_prep(struct bnx2x *bp,
 			   struct bnx2x_vf_queue *q,
 			   struct bnx2x_vfop_qctor_params *p,
 			   unsigned long q_type);
+int bnx2x_vfop_mac_list_cmd(struct bnx2x *bp,
+			    struct bnx2x_virtf *vf,
+			    struct bnx2x_vfop_cmd *cmd,
+			    struct bnx2x_vfop_filters *macs,
+			    int qid, bool drv_only);
+
+int bnx2x_vfop_vlan_set_cmd(struct bnx2x *bp,
+			    struct bnx2x_virtf *vf,
+			    struct bnx2x_vfop_cmd *cmd,
+			    int qid, u16 vid, bool add);
+
+int bnx2x_vfop_vlan_list_cmd(struct bnx2x *bp,
+			     struct bnx2x_virtf *vf,
+			     struct bnx2x_vfop_cmd *cmd,
+			     struct bnx2x_vfop_filters *vlans,
+			     int qid, bool drv_only);
+
 int bnx2x_vfop_qsetup_cmd(struct bnx2x *bp,
 			  struct bnx2x_virtf *vf,
 			  struct bnx2x_vfop_cmd *cmd,
 			  int qid);
 
+int bnx2x_vfop_mcast_cmd(struct bnx2x *bp,
+			 struct bnx2x_virtf *vf,
+			 struct bnx2x_vfop_cmd *cmd,
+			 bnx2x_mac_addr_t *mcasts,
+			 int mcast_num, bool drv_only);
+
+int bnx2x_vfop_rxmode_cmd(struct bnx2x *bp,
+			  struct bnx2x_virtf *vf,
+			  struct bnx2x_vfop_cmd *cmd,
+			  int qid, unsigned long accept_flags);
+
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf);
 /* VF FLR helpers */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index eb10378..06598ec 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -518,6 +518,280 @@ response:
 	bnx2x_vf_mbx_resp(bp, vf);
 }
 
+enum bnx2x_vfop_filters_state {
+	   BNX2X_VFOP_MBX_Q_FILTERS_MACS,
+	   BNX2X_VFOP_MBX_Q_FILTERS_VLANS,
+	   BNX2X_VFOP_MBX_Q_FILTERS_RXMODE,
+	   BNX2X_VFOP_MBX_Q_FILTERS_MCAST,
+	   BNX2X_VFOP_MBX_Q_FILTERS_DONE
+};
+
+static int bnx2x_vf_mbx_macvlan_list(struct bnx2x *bp,
+				     struct bnx2x_virtf *vf,
+				     struct vfpf_set_q_filters_tlv *tlv,
+				     struct bnx2x_vfop_filters **pfl,
+				     u32 type_flag)
+{
+	int i, j;
+	struct bnx2x_vfop_filters *fl = NULL;
+	size_t fsz;
+
+	fsz = tlv->n_mac_vlan_filters * sizeof(struct bnx2x_vfop_filter) +
+		sizeof(struct bnx2x_vfop_filters);
+
+	fl = kzalloc(fsz, GFP_KERNEL);
+	if (!fl)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&fl->head);
+
+	for (i = 0, j = 0; i < tlv->n_mac_vlan_filters; i++) {
+		struct vfpf_q_mac_vlan_filter *msg_filter = &tlv->filters[i];
+
+		if ((msg_filter->flags & type_flag) != type_flag)
+			continue;
+		if (type_flag == VFPF_Q_FILTER_DEST_MAC_VALID) {
+			fl->filters[j].mac = msg_filter->mac;
+			fl->filters[j].type = BNX2X_VFOP_FILTER_MAC;
+		} else {
+			fl->filters[j].vid = msg_filter->vlan_tag;
+			fl->filters[j].type = BNX2X_VFOP_FILTER_VLAN;
+		}
+		fl->filters[j].add =
+			(msg_filter->flags & VFPF_Q_FILTER_SET_MAC) ?
+			true : false;
+		list_add_tail(&fl->filters[j++].link, &fl->head);
+	}
+	if (list_empty(&fl->head))
+		kfree(fl);
+	else
+		*pfl = fl;
+
+	return 0;
+}
+
+static void bnx2x_vf_mbx_dp_q_filter(struct bnx2x *bp, int msglvl, int idx,
+				       struct vfpf_q_mac_vlan_filter *filter)
+{
+	DP(msglvl, "MAC-VLAN[%d] -- flags=0x%x", idx, filter->flags);
+	if (filter->flags & VFPF_Q_FILTER_VLAN_TAG_VALID)
+		DP_CONT(msglvl, ", vlan=%d", filter->vlan_tag);
+	if (filter->flags & VFPF_Q_FILTER_DEST_MAC_VALID)
+		DP_CONT(msglvl, ", MAC=%pM", filter->mac);
+	DP_CONT(msglvl, "\n");
+}
+
+static void bnx2x_vf_mbx_dp_q_filters(struct bnx2x *bp, int msglvl,
+				       struct vfpf_set_q_filters_tlv *filters)
+{
+	int i;
+
+	if (filters->flags & VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED)
+		for (i = 0; i < filters->n_mac_vlan_filters; i++)
+			bnx2x_vf_mbx_dp_q_filter(bp, msglvl, i,
+						 &filters->filters[i]);
+
+	if (filters->flags & VFPF_SET_Q_FILTERS_RX_MASK_CHANGED)
+		DP(msglvl, "RX-MASK=0x%x\n", filters->rx_mask);
+
+	if (filters->flags & VFPF_SET_Q_FILTERS_MULTICAST_CHANGED)
+		for (i = 0; i < filters->n_multicast; i++)
+			DP(msglvl, "MULTICAST=%pM\n", filters->multicast[i]);
+}
+
+#define VFPF_MAC_FILTER		VFPF_Q_FILTER_DEST_MAC_VALID
+#define VFPF_VLAN_FILTER	VFPF_Q_FILTER_VLAN_TAG_VALID
+
+static void bnx2x_vfop_mbx_qfilters(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	int rc;
+
+	struct vfpf_set_q_filters_tlv *msg =
+		&BP_VF_MBX(bp, vf->index)->msg->req.set_q_filters;
+
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	enum bnx2x_vfop_filters_state state = vfop->state;
+
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vfop_mbx_qfilters,
+		.block = false,
+	};
+
+	DP(BNX2X_MSG_IOV, "STATE: %d\n", state);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	switch (state) {
+	case BNX2X_VFOP_MBX_Q_FILTERS_MACS:
+		/* next state */
+		vfop->state = BNX2X_VFOP_MBX_Q_FILTERS_VLANS;
+
+		/* check for any vlan/mac changes */
+		if (msg->flags & VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED) {
+
+			/* build mac list */
+			struct bnx2x_vfop_filters *fl = NULL;
+			vfop->rc = bnx2x_vf_mbx_macvlan_list(bp, vf, msg, &fl,
+							     VFPF_MAC_FILTER);
+			if (vfop->rc)
+				goto op_err;
+
+			if (fl) {
+				/* set mac list */
+				rc = bnx2x_vfop_mac_list_cmd(bp, vf, &cmd, fl,
+							     msg->vf_qid,
+							     false);
+				if (rc) {
+					vfop->rc = rc;
+					goto op_err;
+				}
+				return;
+			}
+		}
+		/* fall through */
+
+	case BNX2X_VFOP_MBX_Q_FILTERS_VLANS:
+		/* next state */
+		vfop->state = BNX2X_VFOP_MBX_Q_FILTERS_RXMODE;
+
+		/* check for any vlan/mac changes */
+		if (msg->flags & VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED) {
+
+			/* build vlan list */
+			struct bnx2x_vfop_filters *fl = NULL;
+			vfop->rc = bnx2x_vf_mbx_macvlan_list(bp, vf, msg, &fl,
+							     VFPF_VLAN_FILTER);
+			if (vfop->rc)
+				goto op_err;
+
+			if (fl) {
+				/* set vlan list */
+				rc = bnx2x_vfop_vlan_list_cmd(bp, vf, &cmd, fl,
+							      msg->vf_qid,
+							      false);
+				if (rc) {
+					vfop->rc = rc;
+					goto op_err;
+				}
+				return;
+			}
+		}
+		/* fall through */
+
+	case BNX2X_VFOP_MBX_Q_FILTERS_RXMODE:
+		/* next state */
+		vfop->state = BNX2X_VFOP_MBX_Q_FILTERS_MCAST;
+
+		if (msg->flags & VFPF_SET_Q_FILTERS_RX_MASK_CHANGED) {
+			unsigned long accept = 0;
+
+			/* convert VF-PF interface mask to bnx2x accept flags */
+			if (msg->rx_mask & VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST)
+				__set_bit(BNX2X_ACCEPT_UNICAST, &accept);
+
+			if (msg->rx_mask &
+					VFPF_RX_MASK_ACCEPT_MATCHED_MULTICAST)
+				__set_bit(BNX2X_ACCEPT_MULTICAST, &accept);
+
+			if (msg->rx_mask & VFPF_RX_MASK_ACCEPT_ALL_UNICAST)
+				__set_bit(BNX2X_ACCEPT_ALL_UNICAST, &accept);
+
+			if (msg->rx_mask & VFPF_RX_MASK_ACCEPT_ALL_MULTICAST)
+				__set_bit(BNX2X_ACCEPT_ALL_MULTICAST, &accept);
+
+			if (msg->rx_mask & VFPF_RX_MASK_ACCEPT_BROADCAST)
+				__set_bit(BNX2X_ACCEPT_BROADCAST, &accept);
+
+			/* A packet arriving at the vf's mac should be accepted
+			 * with any vlan
+			 */
+			__set_bit(BNX2X_ACCEPT_ANY_VLAN, &accept);
+
+			/* set rx-mode */
+			rc = bnx2x_vfop_rxmode_cmd(bp, vf, &cmd,
+						   msg->vf_qid, accept);
+			if (rc) {
+				vfop->rc = rc;
+				goto op_err;
+			}
+			return;
+		}
+		/* fall through */
+
+	case BNX2X_VFOP_MBX_Q_FILTERS_MCAST:
+		/* next state */
+		vfop->state = BNX2X_VFOP_MBX_Q_FILTERS_DONE;
+
+		if (msg->flags & VFPF_SET_Q_FILTERS_MULTICAST_CHANGED) {
+			/* set mcasts */
+			rc = bnx2x_vfop_mcast_cmd(bp, vf, &cmd, msg->multicast,
+						  msg->n_multicast, false);
+			if (rc) {
+				vfop->rc = rc;
+				goto op_err;
+			}
+			return;
+		}
+		/* fall through */
+op_done:
+	case BNX2X_VFOP_MBX_Q_FILTERS_DONE:
+		bnx2x_vfop_end(bp, vf, vfop);
+		return;
+op_err:
+	BNX2X_ERR("QFILTERS[%d:%d] error: rc %d\n",
+		  vf->abs_vfid, msg->vf_qid, vfop->rc);
+	goto op_done;
+
+	default:
+		bnx2x_vfop_default(state);
+	}
+}
+
+static int bnx2x_vfop_mbx_qfilters_cmd(struct bnx2x *bp,
+					struct bnx2x_virtf *vf,
+					struct bnx2x_vfop_cmd *cmd)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+	if (vfop) {
+		bnx2x_vfop_opset(BNX2X_VFOP_MBX_Q_FILTERS_MACS,
+				 bnx2x_vfop_mbx_qfilters, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_mbx_qfilters,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
+static void bnx2x_vf_mbx_set_q_filters(struct bnx2x *bp,
+				       struct bnx2x_virtf *vf,
+				       struct bnx2x_vf_mbx *mbx)
+{
+	struct vfpf_set_q_filters_tlv *filters = &mbx->msg->req.set_q_filters;
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vf_mbx_resp,
+		.block = false,
+	};
+
+	/* verify vf_qid */
+	if (filters->vf_qid > vf_rxq_count(vf))
+		goto response;
+
+	DP(BNX2X_MSG_IOV, "VF[%d] Q_FILTERS: queue[%d]\n",
+	   vf->abs_vfid,
+	   filters->vf_qid);
+
+	/* print q_filter message */
+	bnx2x_vf_mbx_dp_q_filters(bp, BNX2X_MSG_IOV, filters);
+
+	vf->op_rc = bnx2x_vfop_mbx_qfilters_cmd(bp, vf, &cmd);
+	if (vf->op_rc)
+		goto response;
+	return;
+
+response:
+	bnx2x_vf_mbx_resp(bp, vf);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -543,6 +817,9 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		case CHANNEL_TLV_SETUP_Q:
 			bnx2x_vf_mbx_setup_q(bp, vf, mbx);
 			break;
+		case CHANNEL_TLV_SET_Q_FILTERS:
+			bnx2x_vf_mbx_set_q_filters(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
-- 
1.7.9.GIT


* [PATCH net-next v3 17/22] bnx2x: Support of PF driver of a VF q_teardown request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (15 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 16/22] bnx2x: Support of PF driver of a VF q_filters request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 18/22] bnx2x: Support of PF driver of a VF close request Ariel Elior
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The 'q_teardown' request is essentially the opposite of 'setup_q'.
Here the PF driver removes from the device the queue it opened
against the VF fastpath ring at the 'setup_q' stage, along with all
related rx_mode information.
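
As a hedged sketch (not part of the patch), a caller might invoke the
new teardown command as follows; with .block = true the call returns
only once the whole vfop chain has completed. The caller name is
hypothetical:

	/* hypothetical caller, for illustration only */
	static int example_teardown_queue(struct bnx2x *bp,
					  struct bnx2x_virtf *vf, int qid)
	{
		struct bnx2x_vfop_cmd cmd = {
			.done = NULL,	/* no continuation handler */
			.block = true,	/* wait for the op chain to finish */
		};

		return bnx2x_vfop_qdown_cmd(bp, vf, &cmd, qid);
	}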

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  270 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |    5 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   20 ++
 3 files changed, 295 insertions(+), 0 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 472e8ca..fa76c4e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -110,6 +110,13 @@ enum bnx2x_vfop_qctor_state {
 	   BNX2X_VFOP_QCTOR_INT_EN
 };
 
+enum bnx2x_vfop_qdtor_state {
+	   BNX2X_VFOP_QDTOR_HALT,
+	   BNX2X_VFOP_QDTOR_TERMINATE,
+	   BNX2X_VFOP_QDTOR_CFCDEL,
+	   BNX2X_VFOP_QDTOR_DONE
+};
+
 enum bnx2x_vfop_vlan_mac_state {
 	   BNX2X_VFOP_VLAN_MAC_CONFIG_SINGLE,
 	   BNX2X_VFOP_VLAN_MAC_CLEAR,
@@ -136,6 +143,14 @@ enum bnx2x_vfop_rxmode_state {
 	   BNX2X_VFOP_RXMODE_DONE
 };
 
+enum bnx2x_vfop_qteardown_state {
+	   BNX2X_VFOP_QTEARDOWN_RXMODE,
+	   BNX2X_VFOP_QTEARDOWN_CLR_VLAN,
+	   BNX2X_VFOP_QTEARDOWN_CLR_MAC,
+	   BNX2X_VFOP_QTEARDOWN_QDTOR,
+	   BNX2X_VFOP_QTEARDOWN_DONE
+};
+
 #define bnx2x_vfop_reset_wq(vf)	atomic_set(&vf->op_in_progress, 0)
 
 void bnx2x_vfop_qctor_dump_tx(struct bnx2x *bp, struct bnx2x_virtf *vf,
@@ -342,6 +357,101 @@ static int bnx2x_vfop_qctor_cmd(struct bnx2x *bp,
 	return -ENOMEM;
 }
 
+/* VFOP queue destruction */
+static void bnx2x_vfop_qdtor(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_vfop_args_qdtor *qdtor = &vfop->args.qdtor;
+	struct bnx2x_queue_state_params *q_params = &vfop->op_p->qctor.qstate;
+	enum bnx2x_vfop_qdtor_state state = vfop->state;
+
+	bnx2x_vfop_reset_wq(vf);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_QDTOR_HALT:
+
+		/* has this queue already been stopped? */
+		if (bnx2x_get_q_logical_state(bp, q_params->q_obj) ==
+		    BNX2X_Q_LOGICAL_STATE_STOPPED) {
+			DP(BNX2X_MSG_IOV,
+			   "Entered qdtor but queue was already stopped. Aborting gracefully\n");
+			goto op_done;
+		}
+
+		/* next state */
+		vfop->state = BNX2X_VFOP_QDTOR_TERMINATE;
+
+		q_params->cmd = BNX2X_Q_CMD_HALT;
+		vfop->rc = bnx2x_queue_state_change(bp, q_params);
+
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_CONT);
+
+	case BNX2X_VFOP_QDTOR_TERMINATE:
+		/* next state */
+		vfop->state = BNX2X_VFOP_QDTOR_CFCDEL;
+
+		q_params->cmd = BNX2X_Q_CMD_TERMINATE;
+		vfop->rc = bnx2x_queue_state_change(bp, q_params);
+
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_CONT);
+
+	case BNX2X_VFOP_QDTOR_CFCDEL:
+		/* next state */
+		vfop->state = BNX2X_VFOP_QDTOR_DONE;
+
+		q_params->cmd = BNX2X_Q_CMD_CFC_DEL;
+		vfop->rc = bnx2x_queue_state_change(bp, q_params);
+
+		bnx2x_vfop_finalize(vf, vfop->rc, VFOP_DONE);
+op_err:
+	BNX2X_ERR("QDTOR[%d:%d] error: cmd %d, rc %d\n",
+		  vf->abs_vfid, qdtor->qid, q_params->cmd, vfop->rc);
+op_done:
+	case BNX2X_VFOP_QDTOR_DONE:
+		/* invalidate the context */
+		qdtor->cxt->ustorm_ag_context.cdu_usage = 0;
+		qdtor->cxt->xstorm_ag_context.cdu_reserved = 0;
+		bnx2x_vfop_end(bp, vf, vfop);
+		return;
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_pending:
+	return;
+}
+
+static int bnx2x_vfop_qdtor_cmd(struct bnx2x *bp,
+				struct bnx2x_virtf *vf,
+				struct bnx2x_vfop_cmd *cmd,
+				int qid)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_queue_state_params *qstate =
+			&vf->op_params.qctor.qstate;
+
+		memset(qstate, 0, sizeof(*qstate));
+		qstate->q_obj = &bnx2x_vfq(vf, qid, sp_obj);
+
+		vfop->args.qdtor.qid = qid;
+		vfop->args.qdtor.cxt = bnx2x_vfq(vf, qid, cxt);
+
+		bnx2x_vfop_opset(BNX2X_VFOP_QDTOR_HALT,
+				 bnx2x_vfop_qdtor, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qdtor,
+					     cmd->block);
+	}
+	DP(BNX2X_MSG_IOV, "VF[%d] failed to add a vfop. rc %d\n",
+	   vf->abs_vfid, vfop->rc);
+	return -ENOMEM;
+}
+
 static void
 bnx2x_vf_set_igu_info(struct bnx2x *bp, u8 igu_sb_id, u8 abs_vfid)
 {
@@ -593,6 +703,45 @@ bnx2x_vfop_mac_prep_ramrod(struct bnx2x_vlan_mac_ramrod_params *ramrod,
 	set_bit(BNX2X_ETH_MAC, &ramrod->user_req.vlan_mac_flags);
 }
 
+static int bnx2x_vfop_mac_delall_cmd(struct bnx2x *bp,
+				     struct bnx2x_virtf *vf,
+				     struct bnx2x_vfop_cmd *cmd,
+				     int qid, bool drv_only)
+{
+
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_vfop_args_filters filters = {
+			.multi_filter = NULL,	/* single */
+			.credit = NULL,		/* consume credit */
+		};
+		struct bnx2x_vfop_vlan_mac_flags flags = {
+			.drv_only = drv_only,
+			.dont_consume = (filters.credit != NULL),
+			.single_cmd = true,
+			.add = false /* don't care */,
+		};
+		struct bnx2x_vlan_mac_ramrod_params *ramrod =
+			&vf->op_params.vlan_mac;
+
+		/* set ramrod params */
+		bnx2x_vfop_mac_prep_ramrod(ramrod, &flags);
+
+		/* set object */
+		ramrod->vlan_mac_obj = &bnx2x_vfq(vf, qid, mac_obj);
+
+		/* set extra args */
+		vfop->args.filters = filters;
+
+		bnx2x_vfop_opset(BNX2X_VFOP_VLAN_MAC_CLEAR,
+				 bnx2x_vfop_vlan_mac, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_vlan_mac,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
 int bnx2x_vfop_mac_list_cmd(struct bnx2x *bp,
 			    struct bnx2x_virtf *vf,
 			    struct bnx2x_vfop_cmd *cmd,
@@ -675,6 +824,44 @@ int bnx2x_vfop_vlan_set_cmd(struct bnx2x *bp,
 	return -ENOMEM;
 }
 
+static int bnx2x_vfop_vlan_delall_cmd(struct bnx2x *bp,
+			       struct bnx2x_virtf *vf,
+			       struct bnx2x_vfop_cmd *cmd,
+			       int qid, bool drv_only)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		struct bnx2x_vfop_args_filters filters = {
+			.multi_filter = NULL, /* single command */
+			.credit = &bnx2x_vfq(vf, qid, vlan_count),
+		};
+		struct bnx2x_vfop_vlan_mac_flags flags = {
+			.drv_only = drv_only,
+			.dont_consume = (filters.credit != NULL),
+			.single_cmd = true,
+			.add = false, /* don't care */
+		};
+		struct bnx2x_vlan_mac_ramrod_params *ramrod =
+			&vf->op_params.vlan_mac;
+
+		/* set ramrod params */
+		bnx2x_vfop_vlan_mac_prep_ramrod(ramrod, &flags);
+
+		/* set object */
+		ramrod->vlan_mac_obj = &bnx2x_vfq(vf, qid, vlan_obj);
+
+		/* set extra args */
+		vfop->args.filters = filters;
+
+		bnx2x_vfop_opset(BNX2X_VFOP_VLAN_MAC_CLEAR,
+				 bnx2x_vfop_vlan_mac, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_vlan_mac,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
 int bnx2x_vfop_vlan_list_cmd(struct bnx2x *bp,
 			     struct bnx2x_virtf *vf,
 			     struct bnx2x_vfop_cmd *cmd,
@@ -958,6 +1145,89 @@ int bnx2x_vfop_rxmode_cmd(struct bnx2x *bp,
 
 }
 
+/* VFOP queue tear-down ('drop all' rx-mode, clear vlans, clear macs,
+ * queue destructor)
+ */
+static void bnx2x_vfop_qdown(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	int qid = vfop->args.qx.qid;
+	enum bnx2x_vfop_qteardown_state state = vfop->state;
+	struct bnx2x_vfop_cmd cmd;
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	cmd.done = bnx2x_vfop_qdown;
+	cmd.block = false;
+
+	switch (state) {
+	case BNX2X_VFOP_QTEARDOWN_RXMODE:
+		/* Drop all */
+		vfop->state = BNX2X_VFOP_QTEARDOWN_CLR_VLAN;
+		vfop->rc = bnx2x_vfop_rxmode_cmd(bp, vf, &cmd, qid, 0);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_QTEARDOWN_CLR_VLAN:
+		/* vlan-clear-all: don't consume credit */
+		vfop->state = BNX2X_VFOP_QTEARDOWN_CLR_MAC;
+		vfop->rc = bnx2x_vfop_vlan_delall_cmd(bp, vf, &cmd, qid, false);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_QTEARDOWN_CLR_MAC:
+		/* mac-clear-all: consume credit */
+		vfop->state = BNX2X_VFOP_QTEARDOWN_QDTOR;
+		vfop->rc = bnx2x_vfop_mac_delall_cmd(bp, vf, &cmd, qid, false);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_QTEARDOWN_QDTOR:
+		/* run the queue destruction flow */
+		DP(BNX2X_MSG_IOV, "case: BNX2X_VFOP_QTEARDOWN_QDTOR\n");
+		vfop->state = BNX2X_VFOP_QTEARDOWN_DONE;
+		DP(BNX2X_MSG_IOV, "new state: BNX2X_VFOP_QTEARDOWN_DONE\n");
+		vfop->rc = bnx2x_vfop_qdtor_cmd(bp, vf, &cmd, qid);
+		DP(BNX2X_MSG_IOV, "returned from cmd");
+		if (vfop->rc)
+			goto op_err;
+		return;
+op_err:
+	BNX2X_ERR("QTEARDOWN[%d:%d] error: rc %d\n",
+		  vf->abs_vfid, qid, vfop->rc);
+
+	case BNX2X_VFOP_QTEARDOWN_DONE:
+		bnx2x_vfop_end(bp, vf, vfop);
+		return;
+	default:
+		bnx2x_vfop_default(state);
+	}
+}
+
+int bnx2x_vfop_qdown_cmd(struct bnx2x *bp,
+			 struct bnx2x_virtf *vf,
+			 struct bnx2x_vfop_cmd *cmd,
+			 int qid)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		vfop->args.qx.qid = qid;
+		bnx2x_vfop_opset(BNX2X_VFOP_QTEARDOWN_RXMODE,
+				 bnx2x_vfop_qdown, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qdown,
+					     cmd->block);
+	}
+
+	return -ENOMEM;
+}
+
 /* VF enable primitives
  *
  * when pretend is required the caller is responsible
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 776aa75..09b3394 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -642,6 +642,11 @@ int bnx2x_vfop_qsetup_cmd(struct bnx2x *bp,
 			  struct bnx2x_vfop_cmd *cmd,
 			  int qid);
 
+int bnx2x_vfop_qdown_cmd(struct bnx2x *bp,
+			 struct bnx2x_virtf *vf,
+			 struct bnx2x_vfop_cmd *cmd,
+			 int qid);
+
 int bnx2x_vfop_mcast_cmd(struct bnx2x *bp,
 			 struct bnx2x_virtf *vf,
 			 struct bnx2x_vfop_cmd *cmd,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 06598ec..ad92a4d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -792,6 +792,23 @@ response:
 	bnx2x_vf_mbx_resp(bp, vf);
 }
 
+static void bnx2x_vf_mbx_teardown_q(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				    struct bnx2x_vf_mbx *mbx)
+{
+	int qid = mbx->msg->req.q_op.vf_qid;
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vf_mbx_resp,
+		.block = false,
+	};
+
+	DP(BNX2X_MSG_IOV, "VF[%d] Q_TEARDOWN: vf_qid=%d\n",
+	   vf->abs_vfid, qid);
+
+	vf->op_rc = bnx2x_vfop_qdown_cmd(bp, vf, &cmd, qid);
+	if (vf->op_rc)
+		bnx2x_vf_mbx_resp(bp, vf);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -820,6 +837,9 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		case CHANNEL_TLV_SET_Q_FILTERS:
 			bnx2x_vf_mbx_set_q_filters(bp, vf, mbx);
 			break;
+		case CHANNEL_TLV_TEARDOWN_Q:
+			bnx2x_vf_mbx_teardown_q(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
-- 
1.7.9.GIT


* [PATCH net-next v3 18/22] bnx2x: Support of PF driver of a VF close request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (16 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 17/22] bnx2x: Support of PF driver of a VF q_teardown request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 19/22] bnx2x: Support of PF driver of a VF release request Ariel Elior
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The 'close' command is the opposite of the 'init' request. Here the
VF's queues are closed (if any are open) and released by applying
the 'q_teardown' flow to each of them. The request changes the VF's
state, and the VF's interrupts are disabled once it is closed.
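
As a hedged sketch (not part of the patch), a synchronous caller of
the new close command might look like this; every rx queue is torn
down in turn, multicasts are removed, and the VF's IGU interrupts and
queue-table entries are then disabled. The caller name is
hypothetical:

	/* hypothetical caller, for illustration only */
	static int example_close_vf(struct bnx2x *bp, struct bnx2x_virtf *vf)
	{
		struct bnx2x_vfop_cmd cmd = {
			.done = NULL,	/* no continuation handler */
			.block = true,	/* synchronous close */
		};

		return bnx2x_vfop_close_cmd(bp, vf, &cmd);
	}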

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |   97 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |    4 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   27 ++++++
 3 files changed, 128 insertions(+), 0 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index fa76c4e..c5d471e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -138,6 +138,11 @@ enum bnx2x_vfop_mcast_state {
 	   BNX2X_VFOP_MCAST_CHK_DONE
 };
 
+enum bnx2x_vfop_close_state {
+	   BNX2X_VFOP_CLOSE_QUEUES,
+	   BNX2X_VFOP_CLOSE_HW
+};
+
 enum bnx2x_vfop_rxmode_state {
 	   BNX2X_VFOP_RXMODE_CONFIG,
 	   BNX2X_VFOP_RXMODE_DONE
@@ -2303,6 +2308,28 @@ static void bnx2x_vf_qtbl_set_q(struct bnx2x *bp, u8 abs_vfid, u8 qid,
 	REG_WR(bp, reg, val);
 }
 
+static void bnx2x_vf_clr_qtbl(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	int i;
+
+	for_each_vfq(vf, i)
+		bnx2x_vf_qtbl_set_q(bp, vf->abs_vfid,
+				    vfq_qzone_id(vf, vfq_get(vf, i)), false);
+}
+
+static void bnx2x_vf_igu_disable(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	u32 val;
+
+	/* clear the VF configuration - pretend */
+	bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf->abs_vfid));
+	val = REG_RD(bp, IGU_REG_VF_CONFIGURATION);
+	val &= ~(IGU_VF_CONF_MSI_MSIX_EN | IGU_VF_CONF_SINGLE_ISR_EN |
+		 IGU_VF_CONF_FUNC_EN | IGU_VF_CONF_PARENT_MASK);
+	REG_WR(bp, IGU_REG_VF_CONFIGURATION, val);
+	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+}
+
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf)
 {
 	return min_t(u8, min_t(u8, vf_sb_count(vf), BNX2X_CIDS_PER_VF),
@@ -2474,6 +2501,76 @@ int bnx2x_vf_init(struct bnx2x *bp, struct bnx2x_virtf *vf, dma_addr_t *sb_map)
 	return 0;
 }
 
+/* VFOP close (teardown the queues, delete mcasts and close HW) */
+static void bnx2x_vfop_close(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_vfop_args_qx *qx = &vfop->args.qx;
+	enum bnx2x_vfop_close_state state = vfop->state;
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vfop_close,
+		.block = false,
+	};
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_CLOSE_QUEUES:
+
+		if (++(qx->qid) < vf_rxq_count(vf)) {
+			vfop->rc = bnx2x_vfop_qdown_cmd(bp, vf, &cmd, qx->qid);
+			if (vfop->rc)
+				goto op_err;
+			return;
+		}
+
+		/* remove multicasts */
+		vfop->state = BNX2X_VFOP_CLOSE_HW;
+		vfop->rc = bnx2x_vfop_mcast_cmd(bp, vf, &cmd, NULL, 0, false);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_CLOSE_HW:
+
+		/* disable the interrupts */
+		DP(BNX2X_MSG_IOV, "disabling igu\n");
+		bnx2x_vf_igu_disable(bp, vf);
+
+		/* disable the VF */
+		DP(BNX2X_MSG_IOV, "clearing qtbl\n");
+		bnx2x_vf_clr_qtbl(bp, vf);
+
+		goto op_done;
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_err:
+	BNX2X_ERR("VF[%d] CLOSE error: rc %d\n", vf->abs_vfid, vfop->rc);
+op_done:
+	vf->state = VF_ACQUIRED;
+	DP(BNX2X_MSG_IOV, "set state to acquired\n");
+	bnx2x_vfop_end(bp, vf, vfop);
+}
+
+int bnx2x_vfop_close_cmd(struct bnx2x *bp,
+			 struct bnx2x_virtf *vf,
+			 struct bnx2x_vfop_cmd *cmd)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+	if (vfop) {
+		vfop->args.qx.qid = -1; /* loop */
+		bnx2x_vfop_opset(BNX2X_VFOP_CLOSE_QUEUES,
+				 bnx2x_vfop_close, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_close,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
 void bnx2x_lock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
 			      enum channel_tlvs tlv)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 09b3394..2e2c571 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -658,6 +658,10 @@ int bnx2x_vfop_rxmode_cmd(struct bnx2x *bp,
 			  struct bnx2x_vfop_cmd *cmd,
 			  int qid, unsigned long accept_flags);
 
+int bnx2x_vfop_close_cmd(struct bnx2x *bp,
+			 struct bnx2x_virtf *vf,
+			 struct bnx2x_vfop_cmd *cmd);
+
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf);
 /* VF FLR helpers */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index ad92a4d..fef4564 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -809,6 +809,30 @@ static void bnx2x_vf_mbx_teardown_q(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		bnx2x_vf_mbx_resp(bp, vf);
 }
 
+/* Done handler for 'close' operation: send response and reopen the
+ * channel.
+ */
+static void bnx2x_vf_mbx_close_done(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	bnx2x_vf_mbx_resp(bp, vf);
+	bnx2x_vf_enable_mbx(bp, vf->abs_vfid);
+}
+
+static void bnx2x_vf_mbx_close_vf(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				  struct bnx2x_vf_mbx *mbx)
+{
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vf_mbx_close_done,
+		.block = false,
+	};
+
+	DP(BNX2X_MSG_IOV, "VF[%d] VF_CLOSE\n", vf->abs_vfid);
+
+	vf->op_rc = bnx2x_vfop_close_cmd(bp, vf, &cmd);
+	if (vf->op_rc)
+		bnx2x_vf_mbx_close_done(bp, vf);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -840,6 +864,9 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		case CHANNEL_TLV_TEARDOWN_Q:
 			bnx2x_vf_mbx_teardown_q(bp, vf, mbx);
 			break;
+		case CHANNEL_TLV_CLOSE:
+			bnx2x_vf_mbx_close_vf(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
-- 
1.7.9.GIT

* [PATCH net-next v3 19/22] bnx2x: Support of PF driver of a VF release request
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (17 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 18/22] bnx2x: Support of PF driver of a VF close request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 20/22] bnx2x: Support VF FLR Ariel Elior
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The 'release' request is the opposite of the 'acquire' request.
At release, all the resources allocated to the VF are reclaimed.
The release flow applies the close flow first, if applicable.
Note that there are actually two types of release:
1. The VF has been removed, and so issues a 'release' request
over the VF <-> PF channel.
2. The PF is going down and so has to release all of its VFs.
A condensed sketch of the state dispatch follows.
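
Below is a condensed, stand-alone sketch (illustrative names only)
of how the release operation keys off the VF's current state; in
the driver each step completes asynchronously:

#include <stdio.h>

enum vf_state { VF_FREE, VF_ACQUIRED, VF_ENABLED, VF_RESET };

/* stand-ins for the close flow and the resource bookkeeping */
static enum vf_state run_close_flow(void)
{
	puts("close queues, disable IGU");
	return VF_ACQUIRED;
}

static enum vf_state free_resources(void)
{
	puts("reclaim interrupts, queues, filters");
	return VF_FREE;
}

static void release(enum vf_state state)
{
	/* a VF that was only acquired skips the close flow entirely;
	 * a free or resetting VF needs no work at all */
	for (;;) {
		switch (state) {
		case VF_ENABLED:
			state = run_close_flow();
			break;
		case VF_ACQUIRED:
			state = free_resources();
			break;
		case VF_FREE:
		case VF_RESET:
			return;
		}
	}
}

int main(void)
{
	release(VF_ENABLED);
	return 0;
}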

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |    1 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  121 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |   21 ++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   25 ++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h  |    1 +
 5 files changed, 168 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 3618afb..384f303 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -8628,6 +8628,7 @@ void bnx2x_chip_cleanup(struct bnx2x *bp, int unload_mode, bool keep_link)
 
 	netif_addr_unlock_bh(bp->dev);
 
+	bnx2x_iov_chip_cleanup(bp);
 
 
 	/*
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index c5d471e..658915f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -1425,6 +1425,14 @@ bnx2x_iov_static_resc(struct bnx2x *bp, struct vf_pf_resc_request *resc)
 	/* num_sbs already set */
 }
 
+/* FLR routines: */
+static void bnx2x_vf_free_resc(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	/* reset the state variables */
+	bnx2x_iov_static_resc(bp, &vf->alloc_resc);
+	vf->state = VF_FREE;
+}
+
 /* IOV global initialization routines  */
 void bnx2x_iov_init_dq(struct bnx2x *bp)
 {
@@ -1950,6 +1958,21 @@ int bnx2x_iov_nic_init(struct bnx2x *bp)
 	return 0;
 }
 
+/* called by bnx2x_chip_cleanup */
+int bnx2x_iov_chip_cleanup(struct bnx2x *bp)
+{
+	int i;
+
+	if (!IS_SRIOV(bp))
+		return 0;
+
+	/* release all the VFs */
+	for_each_vf(bp, i)
+		bnx2x_vf_release(bp, BP_VF(bp, i), true); /* blocking */
+
+	return 0;
+}
+
 /* called by bnx2x_init_hw_func, returns the next ilt line */
 int bnx2x_iov_init_ilt(struct bnx2x *bp, u16 line)
 {
@@ -2571,6 +2594,104 @@ int bnx2x_vfop_close_cmd(struct bnx2x *bp,
 	return -ENOMEM;
 }
 
+/* VF release can be called in two cases: 1. the VF was acquired
+ * but not enabled; 2. the VF was enabled or is in the process of
+ * being enabled
+ */
+static void bnx2x_vfop_release(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vfop_release,
+		.block = false,
+	};
+
+	DP(BNX2X_MSG_IOV, "vfop->rc %d\n", vfop->rc);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "VF[%d] STATE: %s\n", vf->abs_vfid,
+	   vf->state == VF_FREE ? "Free" :
+	   vf->state == VF_ACQUIRED ? "Acquired" :
+	   vf->state == VF_ENABLED ? "Enabled" :
+	   vf->state == VF_RESET ? "Reset" :
+	   "Unknown");
+
+	switch (vf->state) {
+	case VF_ENABLED:
+		vfop->rc = bnx2x_vfop_close_cmd(bp, vf, &cmd);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case VF_ACQUIRED:
+		DP(BNX2X_MSG_IOV, "about to free resources\n");
+		bnx2x_vf_free_resc(bp, vf);
+		DP(BNX2X_MSG_IOV, "vfop->rc %d\n", vfop->rc);
+		goto op_done;
+
+	case VF_FREE:
+	case VF_RESET:
+		/* do nothing */
+		goto op_done;
+	default:
+		bnx2x_vfop_default(vf->state);
+	}
+op_err:
+	BNX2X_ERR("VF[%d] RELEASE error: rc %d\n", vf->abs_vfid, vfop->rc);
+op_done:
+	bnx2x_vfop_end(bp, vf, vfop);
+}
+
+int bnx2x_vfop_release_cmd(struct bnx2x *bp,
+			   struct bnx2x_virtf *vf,
+			   struct bnx2x_vfop_cmd *cmd)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+	if (vfop) {
+		bnx2x_vfop_opset(-1, /* use vf->state */
+				 bnx2x_vfop_release, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_release,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
+/* VF release ~ VF close + VF release-resources
+ * Release is the ultimate SW shutdown and is called whenever an
+ * irrecoverable error is encountered.
+ */
+void bnx2x_vf_release(struct bnx2x *bp, struct bnx2x_virtf *vf, bool block)
+{
+	struct bnx2x_vfop_cmd cmd = {
+		.done = NULL,
+		.block = block,
+	};
+	int rc;
+	bnx2x_lock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_RELEASE_VF);
+	rc = bnx2x_vfop_release_cmd(bp, vf, &cmd);
+	if (rc)
+		WARN(rc,
+		     "VF[%d] Failed to allocate resources for release op- rc=%d\n",
+		     vf->abs_vfid, rc);
+}
+
+static inline void bnx2x_vf_get_sbdf(struct bnx2x *bp,
+			      struct bnx2x_virtf *vf, u32 *sbdf)
+{
+	*sbdf = vf->devfn | (vf->bus << 8);
+}
+
+static inline void bnx2x_vf_get_bars(struct bnx2x *bp, struct bnx2x_virtf *vf,
+		       struct bnx2x_vf_bar_info *bar_info)
+{
+	int n;
+	bar_info->nr_bars = bp->vfdb->sriov.nres;
+	for (n = 0; n < bar_info->nr_bars; n++)
+		bar_info->bars[n] = vf->bars[n];
+}
+
 void bnx2x_lock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
 			      enum channel_tlvs tlv)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 2e2c571..8e65b29 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -51,6 +51,11 @@ struct bnx2x_vf_bar {
 	u32 size;
 };
 
+struct bnx2x_vf_bar_info {
+	struct bnx2x_vf_bar bars[PCI_SRIOV_NUM_BARS];
+	u8 nr_bars;
+};
+
 /* vf queue (used both for rx or tx) */
 struct bnx2x_vf_queue {
 	struct eth_context		*cxt;
@@ -430,6 +435,7 @@ void bnx2x_iov_remove_one(struct bnx2x *bp);
 void bnx2x_iov_free_mem(struct bnx2x *bp);
 int bnx2x_iov_alloc_mem(struct bnx2x *bp);
 int bnx2x_iov_nic_init(struct bnx2x *bp);
+int bnx2x_iov_chip_cleanup(struct bnx2x *bp);
 void bnx2x_iov_init_dq(struct bnx2x *bp);
 void bnx2x_iov_init_dmae(struct bnx2x *bp);
 void bnx2x_iov_set_queue_sp_obj(struct bnx2x *bp, int vf_cid,
@@ -547,6 +553,11 @@ static inline void bnx2x_vfop_end(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	if (vfop->done) {
 		DP(BNX2X_MSG_IOV, "calling done handler\n");
 		vfop->done(bp, vf);
+	} else {
+		/* no done handler to unlock the mutex for us; we must have
+		 * gotten here from a PF initiated VF RELEASE, so unlock here
+		 */
+		bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_RELEASE_VF);
 	}
 
 	DP(BNX2X_MSG_IOV, "done handler complete. vf->op_rc %d, vfop->rc %d\n",
@@ -662,6 +673,16 @@ int bnx2x_vfop_close_cmd(struct bnx2x *bp,
 			 struct bnx2x_virtf *vf,
 			 struct bnx2x_vfop_cmd *cmd);
 
+int bnx2x_vfop_release_cmd(struct bnx2x *bp,
+			   struct bnx2x_virtf *vf,
+			   struct bnx2x_vfop_cmd *cmd);
+
+/* VF release ~ VF close + VF release-resources
+ *
+ * Release is the ultimate SW shutdown and is called whenever an
+ * irrecoverable error is encountered.
+ */
+void bnx2x_vf_release(struct bnx2x *bp, struct bnx2x_virtf *vf, bool block);
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf);
 /* VF FLR helpers */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index fef4564..65a3723 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -231,7 +231,7 @@ static void bnx2x_vf_mbx_resp(struct bnx2x *bp, struct bnx2x_virtf *vf)
 		if (rc) {
 			BNX2X_ERR("Failed to copy response body to VF %d\n",
 				  vf->abs_vfid);
-			return;
+			goto mbx_error;
 		}
 		vf_addr -= sizeof(u64);
 		pf_addr -= sizeof(u64);
@@ -258,8 +258,12 @@ static void bnx2x_vf_mbx_resp(struct bnx2x *bp, struct bnx2x_virtf *vf)
 	if (rc) {
 		BNX2X_ERR("Failed to copy response status to VF %d\n",
 			  vf->abs_vfid);
+		goto mbx_error;
 	}
 	return;
+
+mbx_error:
+	bnx2x_vf_release(bp, vf, false); /* non blocking */
 }
 
 static void bnx2x_vf_mbx_acquire_resp(struct bnx2x *bp, struct bnx2x_virtf *vf,
@@ -833,6 +837,21 @@ static void bnx2x_vf_mbx_close_vf(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		bnx2x_vf_mbx_close_done(bp, vf);
 }
 
+static void bnx2x_vf_mbx_release_vf(struct bnx2x *bp, struct bnx2x_virtf *vf,
+				    struct bnx2x_vf_mbx *mbx)
+{
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vf_mbx_resp,
+		.block = false,
+	};
+
+	DP(BNX2X_MSG_IOV, "VF[%d] VF_RELEASE\n", vf->abs_vfid);
+
+	vf->op_rc = bnx2x_vfop_release_cmd(bp, vf, &cmd);
+	if (vf->op_rc)
+		bnx2x_vf_mbx_resp(bp, vf);
+}
+
 /* dispatch request */
 static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				  struct bnx2x_vf_mbx *mbx)
@@ -867,6 +886,9 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		case CHANNEL_TLV_CLOSE:
 			bnx2x_vf_mbx_close_vf(bp, vf, mbx);
 			break;
+		case CHANNEL_TLV_RELEASE:
+			bnx2x_vf_mbx_release_vf(bp, vf, mbx);
+			break;
 		}
 
 	/* unknown TLV - this may belong to a VF driver from the future - a
@@ -961,6 +983,7 @@ void bnx2x_vf_mbx(struct bnx2x *bp, struct vf_pf_event_data *vfpf_event)
 	goto mbx_done;
 
 mbx_error:
+	bnx2x_vf_release(bp, vf, false); /* non blocking */
 mbx_done:
 	return;
 }
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index a289e59..c4d4891 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -309,6 +309,7 @@ enum channel_tlvs {
 	   CHANNEL_TLV_TEARDOWN_Q,
 	   CHANNEL_TLV_CLOSE,
 	   CHANNEL_TLV_RELEASE,
+	   CHANNEL_TLV_PF_RELEASE_VF,
 	   CHANNEL_TLV_LIST_END,
 	   CHANNEL_TLV_MAX
 };
-- 
1.7.9.GIT

* [PATCH net-next v3 20/22] bnx2x: Support VF FLR
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (18 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 19/22] bnx2x: Support of PF driver of a VF release request Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 21/22] bnx2x: Support PF <-> VF Bulletin Board Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 22/22] bnx2x: Add VF device ids and enable feature Ariel Elior
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The FLR indication arrives as an attention from the management processor.
Upon VF FLR, all the FLRed functions in the indication have already been
released by the firmware, so the driver now needs to free the resources
allocated to those VFs and clean any remainders from the device
(FLR final cleanup). A sketch of the bitmap scan follows.
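
Below is a small, stand-alone sketch of scanning the disabled-VF
indication, which arrives as two 32-bit words; the names, the word
count and the sample values are illustrative only:

#include <stdint.h>
#include <stdio.h>

#define FLRD_WORDS 2	/* enough bits for 64 VFs */

static void handle_flr_words(const uint32_t flrd[FLRD_WORDS], int num_vfs)
{
	int vfid;

	for (vfid = 0; vfid < num_vfs; vfid++) {
		/* bit (N % 32) of word N/32 marks VF N as FLRed */
		if (flrd[vfid / 32] & (1u << (vfid % 32)))
			printf("VF %d: mark for FLR cleanup\n", vfid);
	}
}

int main(void)
{
	uint32_t flrd[FLRD_WORDS] = { 0x5, 0x0 };	/* VFs 0 and 2 */

	handle_flr_words(flrd, 64);
	return 0;
}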

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |    6 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |   19 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h   |    1 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |  294 +++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |    7 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h  |    1 +
 6 files changed, 321 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index a7d9fb0..a2b8ffa 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1879,7 +1879,13 @@ void bnx2x_prep_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
 int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae);
 void bnx2x_dp_dmae(struct bnx2x *bp, struct dmae_command *dmae, int msglvl);
 
+/* FLR related routines */
+u32 bnx2x_flr_clnup_poll_count(struct bnx2x *bp);
+void bnx2x_tx_hw_flushed(struct bnx2x *bp, u32 poll_count);
+int bnx2x_send_final_clnup(struct bnx2x *bp, u8 clnup_func, u32 poll_cnt);
 u8 bnx2x_is_pcie_pending(struct pci_dev *dev);
+int bnx2x_flr_clnup_poll_hw_counter(struct bnx2x *bp, u32 reg,
+				    char *msg, u32 poll_cnt);
 
 void bnx2x_calc_fc_adv(struct bnx2x *bp);
 int bnx2x_sp_post(struct bnx2x *bp, int command, int cid,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 384f303..91a2eb5 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -1095,8 +1095,8 @@ static u32 bnx2x_flr_clnup_reg_poll(struct bnx2x *bp, u32 reg,
 	return val;
 }
 
-static int bnx2x_flr_clnup_poll_hw_counter(struct bnx2x *bp, u32 reg,
-					   char *msg, u32 poll_cnt)
+int bnx2x_flr_clnup_poll_hw_counter(struct bnx2x *bp, u32 reg,
+				    char *msg, u32 poll_cnt)
 {
 	u32 val = bnx2x_flr_clnup_reg_poll(bp, reg, 0, poll_cnt);
 	if (val != 0) {
@@ -1106,7 +1106,8 @@ static int bnx2x_flr_clnup_poll_hw_counter(struct bnx2x *bp, u32 reg,
 	return 0;
 }
 
-static u32 bnx2x_flr_clnup_poll_count(struct bnx2x *bp)
+/* Routines shared with the VF FLR cleanup */
+u32 bnx2x_flr_clnup_poll_count(struct bnx2x *bp)
 {
 	/* adjust polling timeout */
 	if (CHIP_REV_IS_EMUL(bp))
@@ -1118,7 +1119,7 @@ static u32 bnx2x_flr_clnup_poll_count(struct bnx2x *bp)
 	return FLR_POLL_CNT;
 }
 
-static void bnx2x_tx_hw_flushed(struct bnx2x *bp, u32 poll_count)
+void bnx2x_tx_hw_flushed(struct bnx2x *bp, u32 poll_count)
 {
 	struct pbf_pN_cmd_regs cmd_regs[] = {
 		{0, (CHIP_IS_E3B0(bp)) ?
@@ -1193,8 +1194,7 @@ static void bnx2x_tx_hw_flushed(struct bnx2x *bp, u32 poll_count)
 	(((index) << SDM_OP_GEN_AGG_VECT_IDX_SHIFT) & SDM_OP_GEN_AGG_VECT_IDX)
 
 
-static int bnx2x_send_final_clnup(struct bnx2x *bp, u8 clnup_func,
-					 u32 poll_cnt)
+int bnx2x_send_final_clnup(struct bnx2x *bp, u8 clnup_func, u32 poll_cnt)
 {
 	struct sdm_op_gen op_gen = {0};
 
@@ -1219,7 +1219,8 @@ static int bnx2x_send_final_clnup(struct bnx2x *bp, u8 clnup_func,
 		BNX2X_ERR("FW final cleanup did not succeed\n");
 		DP(BNX2X_MSG_SP, "At timeout completion address contained %x\n",
 		   (REG_RD(bp, comp_addr)));
-		ret = 1;
+		bnx2x_panic();
+		return 1;
 	}
 	/* Zero completion for nxt FLR */
 	REG_WR(bp, comp_addr, 0);
@@ -3901,6 +3902,10 @@ static void bnx2x_attn_int_deasserted3(struct bnx2x *bp, u32 attn)
 
 			if (val & DRV_STATUS_DRV_INFO_REQ)
 				bnx2x_handle_drv_info_req(bp);
+
+			if (val & DRV_STATUS_VF_DISABLED)
+				bnx2x_vf_handle_flr_event(bp);
+
 			if ((bp->port.pmf == 0) && (val & DRV_STATUS_PMF))
 				bnx2x_pmf_update(bp);
 
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 1008d3f..a015965 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -876,6 +876,7 @@
 #define HC_CONFIG_0_REG_MSI_MSIX_INT_EN_0			 (0x1<<2)
 #define HC_CONFIG_0_REG_SINGLE_ISR_EN_0				 (0x1<<1)
 #define HC_CONFIG_1_REG_BLOCK_DISABLE_1				 (0x1<<0)
+#define DORQ_REG_VF_USAGE_CNT					 0x170320
 #define HC_REG_AGG_INT_0					 0x108050
 #define HC_REG_AGG_INT_1					 0x108054
 #define HC_REG_ATTN_BIT 					 0x108120
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 658915f..a0a019c0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -137,6 +137,17 @@ enum bnx2x_vfop_mcast_state {
 	   BNX2X_VFOP_MCAST_ADD,
 	   BNX2X_VFOP_MCAST_CHK_DONE
 };
+enum bnx2x_vfop_qflr_state {
+	   BNX2X_VFOP_QFLR_CLR_VLAN,
+	   BNX2X_VFOP_QFLR_CLR_MAC,
+	   BNX2X_VFOP_QFLR_TERMINATE,
+	   BNX2X_VFOP_QFLR_DONE
+};
+
+enum bnx2x_vfop_flr_state {
+	   BNX2X_VFOP_FLR_QUEUES,
+	   BNX2X_VFOP_FLR_HW
+};
 
 enum bnx2x_vfop_close_state {
 	   BNX2X_VFOP_CLOSE_QUEUES,
@@ -974,6 +985,94 @@ int bnx2x_vfop_qsetup_cmd(struct bnx2x *bp,
 	return -ENOMEM;
 }
 
+/* VFOP queue FLR handling (clear vlans, clear macs, queue destructor) */
+static void bnx2x_vfop_qflr(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	int qid = vfop->args.qx.qid;
+	enum bnx2x_vfop_qflr_state state = vfop->state;
+	struct bnx2x_queue_state_params *qstate;
+	struct bnx2x_vfop_cmd cmd;
+
+	bnx2x_vfop_reset_wq(vf);
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "VF[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	cmd.done = bnx2x_vfop_qflr;
+	cmd.block = false;
+
+	switch (state) {
+	case BNX2X_VFOP_QFLR_CLR_VLAN:
+		/* vlan-clear-all: driver-only, don't consume credit */
+		vfop->state = BNX2X_VFOP_QFLR_CLR_MAC;
+		vfop->rc = bnx2x_vfop_vlan_delall_cmd(bp, vf, &cmd, qid, true);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_QFLR_CLR_MAC:
+		/* mac-clear-all: driver only, don't consume credit */
+		vfop->state = BNX2X_VFOP_QFLR_TERMINATE;
+		vfop->rc = bnx2x_vfop_mac_delall_cmd(bp, vf, &cmd, qid, true);
+		DP(BNX2X_MSG_IOV,
+		   "VF[%d] vfop->rc after bnx2x_vfop_mac_delall_cmd was %d",
+		   vf->abs_vfid, vfop->rc);
+		if (vfop->rc)
+			goto op_err;
+		return;
+
+	case BNX2X_VFOP_QFLR_TERMINATE:
+		qstate = &vfop->op_p->qctor.qstate;
+		memset(qstate , 0, sizeof(*qstate));
+		qstate->q_obj = &bnx2x_vfq(vf, qid, sp_obj);
+		vfop->state = BNX2X_VFOP_QFLR_DONE;
+
+		DP(BNX2X_MSG_IOV, "VF[%d] qstate during flr was %d",
+		   vf->abs_vfid, qstate->q_obj->state);
+
+		if (qstate->q_obj->state != BNX2X_Q_STATE_RESET) {
+			qstate->q_obj->state = BNX2X_Q_STATE_STOPPED;
+			qstate->cmd = BNX2X_Q_CMD_TERMINATE;
+			vfop->rc = bnx2x_queue_state_change(bp, qstate);
+			bnx2x_vfop_finalize(vf, vfop->rc, VFOP_VERIFY_PEND);
+		} else {
+			goto op_done;
+		}
+
+op_err:
+	BNX2X_ERR("QFLR[%d:%d] error: rc %d\n",
+		  vf->abs_vfid, qid, vfop->rc);
+op_done:
+	case BNX2X_VFOP_QFLR_DONE:
+		bnx2x_vfop_end(bp, vf, vfop);
+		return;
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_pending:
+	return;
+}
+
+static int bnx2x_vfop_qflr_cmd(struct bnx2x *bp,
+			       struct bnx2x_virtf *vf,
+			       struct bnx2x_vfop_cmd *cmd,
+			       int qid)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+
+	if (vfop) {
+		vfop->args.qx.qid = qid;
+		bnx2x_vfop_opset(BNX2X_VFOP_QFLR_CLR_VLAN,
+				 bnx2x_vfop_qflr, cmd->done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qflr,
+					     cmd->block);
+	}
+	return -ENOMEM;
+}
+
 /* VFOP multi-casts */
 static void bnx2x_vfop_mcast(struct bnx2x *bp, struct bnx2x_virtf *vf)
 {
@@ -1433,6 +1532,201 @@ static void bnx2x_vf_free_resc(struct bnx2x *bp, struct bnx2x_virtf *vf)
 	vf->state = VF_FREE;
 }
 
+static void bnx2x_vf_flr_clnup_hw(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	u32 poll_cnt = bnx2x_flr_clnup_poll_count(bp);
+
+	/* DQ usage counter */
+	bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf->abs_vfid));
+	bnx2x_flr_clnup_poll_hw_counter(bp, DORQ_REG_VF_USAGE_CNT,
+					"DQ VF usage counter timed out",
+					poll_cnt);
+	bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
+
+	/* FW cleanup command - poll for the results */
+	if (bnx2x_send_final_clnup(bp, (u8)FW_VF_HANDLE(vf->abs_vfid),
+				   poll_cnt))
+		BNX2X_ERR("VF[%d] Final cleanup timed-out\n", vf->abs_vfid);
+
+	/* verify TX hw is flushed */
+	bnx2x_tx_hw_flushed(bp, poll_cnt);
+}
+
+static void bnx2x_vfop_flr(struct bnx2x *bp, struct bnx2x_virtf *vf)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_cur(bp, vf);
+	struct bnx2x_vfop_args_qx *qx = &vfop->args.qx;
+	enum bnx2x_vfop_flr_state state = vfop->state;
+	struct bnx2x_vfop_cmd cmd = {
+		.done = bnx2x_vfop_flr,
+		.block = false,
+	};
+
+	if (vfop->rc < 0)
+		goto op_err;
+
+	DP(BNX2X_MSG_IOV, "vf[%d] STATE: %d\n", vf->abs_vfid, state);
+
+	switch (state) {
+	case BNX2X_VFOP_FLR_QUEUES:
+		/* the cleanup operations are valid if and only if the VF
+		 * was first acquired.
+		 */
+		if (++(qx->qid) < vf_rxq_count(vf)) {
+			vfop->rc = bnx2x_vfop_qflr_cmd(bp, vf, &cmd,
+						       qx->qid);
+			if (vfop->rc)
+				goto op_err;
+			return;
+		}
+		/* remove multicasts */
+		vfop->state = BNX2X_VFOP_FLR_HW;
+		vfop->rc = bnx2x_vfop_mcast_cmd(bp, vf, &cmd, NULL,
+						0, true);
+		if (vfop->rc)
+			goto op_err;
+		return;
+	case BNX2X_VFOP_FLR_HW:
+
+		/* dispatch final cleanup and wait for HW queues to flush */
+		bnx2x_vf_flr_clnup_hw(bp, vf);
+
+		/* release VF resources */
+		bnx2x_vf_free_resc(bp, vf);
+
+		/* re-open the mailbox */
+		bnx2x_vf_enable_mbx(bp, vf->abs_vfid);
+
+		goto op_done;
+	default:
+		bnx2x_vfop_default(state);
+	}
+op_err:
+	BNX2X_ERR("VF[%d] FLR error: rc %d\n", vf->abs_vfid, vfop->rc);
+op_done:
+	vf->flr_clnup_stage = VF_FLR_ACK;
+	bnx2x_vfop_end(bp, vf, vfop);
+	bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_FLR);
+}
+
+static int bnx2x_vfop_flr_cmd(struct bnx2x *bp,
+			      struct bnx2x_virtf *vf,
+			      vfop_handler_t done)
+{
+	struct bnx2x_vfop *vfop = bnx2x_vfop_add(bp, vf);
+	if (vfop) {
+		vfop->args.qx.qid = -1; /* loop */
+		bnx2x_vfop_opset(BNX2X_VFOP_FLR_QUEUES,
+				 bnx2x_vfop_flr, done);
+		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_flr, false);
+	}
+	return -ENOMEM;
+}
+
+static void bnx2x_vf_flr_clnup(struct bnx2x *bp, struct bnx2x_virtf *prev_vf)
+{
+	int i = prev_vf ? prev_vf->index + 1 : 0;
+	struct bnx2x_virtf *vf;
+
+	/* find next VF to cleanup */
+next_vf_to_clean:
+	for (;
+	     i < BNX2X_NR_VIRTFN(bp) &&
+	     (bnx2x_vf(bp, i, state) != VF_RESET ||
+	      bnx2x_vf(bp, i, flr_clnup_stage) != VF_FLR_CLN);
+	     i++)
+		;
+
+	DP(BNX2X_MSG_IOV, "next vf to cleanup: %d. num of vfs: %d\n", i,
+	   BNX2X_NR_VIRTFN(bp));
+
+	if (i < BNX2X_NR_VIRTFN(bp)) {
+		vf = BP_VF(bp, i);
+
+		/* lock the vf pf channel */
+		bnx2x_lock_vf_pf_channel(bp, vf, CHANNEL_TLV_FLR);
+
+		/* invoke the VF FLR SM */
+		if (bnx2x_vfop_flr_cmd(bp, vf, bnx2x_vf_flr_clnup)) {
+			BNX2X_ERR("VF[%d]: FLR cleanup failed -ENOMEM\n",
+				  vf->abs_vfid);
+
+			/* mark the VF to be ACKED and continue */
+			vf->flr_clnup_stage = VF_FLR_ACK;
+			goto next_vf_to_clean;
+		}
+		return;
+	}
+
+	/* we are done, update vf records */
+	for_each_vf(bp, i) {
+		vf = BP_VF(bp, i);
+
+		if (vf->flr_clnup_stage != VF_FLR_ACK)
+			continue;
+
+		vf->flr_clnup_stage = VF_FLR_EPILOG;
+	}
+
+	/* Acknowledge the handled VFs.
+	 * We acknowledge all the VFs for which an FLR was requested, even
+	 * those that we never opened, since the MCP will interrupt us
+	 * immediately again if we only ack some of the bits, resulting in
+	 * an endless loop. This can happen, for example, in KVM where an
+	 * 'all ones' FLR request is sometimes given by the hypervisor
+	 */
+	DP(BNX2X_MSG_MCP, "DRV_STATUS_VF_DISABLED ACK for vfs 0x%x 0x%x\n",
+	   bp->vfdb->flrd_vfs[0], bp->vfdb->flrd_vfs[1]);
+	for (i = 0; i < FLRD_VFS_DWORDS; i++)
+		SHMEM2_WR(bp, drv_ack_vf_disabled[BP_FW_MB_IDX(bp)][i],
+			  bp->vfdb->flrd_vfs[i]);
+
+	bnx2x_fw_command(bp, DRV_MSG_CODE_VF_DISABLED_DONE, 0);
+
+	/* clear the acked bits - better yet if the MCP implemented
+	 * write to clear semantics
+	 */
+	for (i = 0; i < FLRD_VFS_DWORDS; i++)
+		SHMEM2_WR(bp, drv_ack_vf_disabled[BP_FW_MB_IDX(bp)][i], 0);
+}
+
+void bnx2x_vf_handle_flr_event(struct bnx2x *bp)
+{
+	int i;
+
+	/* Read FLR'd VFs */
+	for (i = 0; i < FLRD_VFS_DWORDS; i++)
+		bp->vfdb->flrd_vfs[i] = SHMEM2_RD(bp, mcp_vf_disabled[i]);
+
+	DP(BNX2X_MSG_MCP,
+	   "DRV_STATUS_VF_DISABLED received for vfs 0x%x 0x%x\n",
+	   bp->vfdb->flrd_vfs[0], bp->vfdb->flrd_vfs[1]);
+
+	for_each_vf(bp, i) {
+		struct bnx2x_virtf *vf = BP_VF(bp, i);
+		u32 reset = 0;
+
+		if (vf->abs_vfid < 32)
+			reset = bp->vfdb->flrd_vfs[0] & (1 << vf->abs_vfid);
+		else
+			reset = bp->vfdb->flrd_vfs[1] &
+				(1 << (vf->abs_vfid - 32));
+
+		if (reset) {
+			/* set as reset and ready for cleanup */
+			vf->state = VF_RESET;
+			vf->flr_clnup_stage = VF_FLR_CLN;
+
+			DP(BNX2X_MSG_IOV,
+			   "Initiating Final cleanup for VF %d\n",
+			   vf->abs_vfid);
+		}
+	}
+
+	/* do the FLR cleanup for all marked VFs */
+	bnx2x_vf_flr_clnup(bp, NULL);
+}
+
 /* IOV global initialization routines  */
 void bnx2x_iov_init_dq(struct bnx2x *bp)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 8e65b29..8ffef4e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -685,9 +685,16 @@ int bnx2x_vfop_release_cmd(struct bnx2x *bp,
 void bnx2x_vf_release(struct bnx2x *bp, struct bnx2x_virtf *vf, bool block);
 int bnx2x_vf_idx_by_abs_fid(struct bnx2x *bp, u16 abs_vfid);
 u8 bnx2x_vf_max_queue_cnt(struct bnx2x *bp, struct bnx2x_virtf *vf);
+
+/* FLR routines */
+
 /* VF FLR helpers */
 int bnx2x_vf_flr_clnup_epilog(struct bnx2x *bp, u8 abs_vfid);
 void bnx2x_vf_enable_access(struct bnx2x *bp, u8 abs_vfid);
+
+/* Handles an FLR (or VF_DISABLE) notification from the MCP */
+void bnx2x_vf_handle_flr_event(struct bnx2x *bp);
+
 void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
 		   u16 length);
 void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index c4d4891..4ab176b 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -311,6 +311,7 @@ enum channel_tlvs {
 	   CHANNEL_TLV_RELEASE,
 	   CHANNEL_TLV_PF_RELEASE_VF,
 	   CHANNEL_TLV_LIST_END,
+	   CHANNEL_TLV_FLR,
 	   CHANNEL_TLV_MAX
 };
 
-- 
1.7.9.GIT

* [PATCH net-next v3 21/22] bnx2x: Support PF <-> VF Bulletin Board
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (19 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 20/22] bnx2x: Support VF FLR Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  2012-12-10 15:46 ` [PATCH net-next v3 22/22] bnx2x: Add VF device ids and enable feature Ariel Elior
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

The PF <-> VF Bulletin Board is a simple interface between the
PF and the VF. The main reason for the Bulletin Board is to allow
the PF to be the initiator. At 'acquire' stage the VF publishes
the GPA of a Bulletin Board structure it has allocated, and the PF
notes this GPA in the VF database. The VF samples the Bulletin Board
periodically for new messages. The latest version of the BB is always
used. A sketch of the VF-side sampling follows.
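
Below is a stand-alone sketch of the VF-side sampling; the names are
illustrative, a toy checksum stands in for the real crc32 over the
content, and the retry bound is arbitrary:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SAMPLE_ATTEMPTS 5

struct bulletin {
	uint32_t crc;		/* covers everything after this field */
	uint32_t version;	/* bumped by the PF on every post */
	uint8_t  mac[6];
};

/* toy checksum standing in for crc32 over the content */
static uint32_t toy_crc(const struct bulletin *b)
{
	const uint8_t *p = (const uint8_t *)b + sizeof(b->crc);
	uint32_t crc = 0;
	size_t i;

	for (i = 0; i < sizeof(*b) - sizeof(b->crc); i++)
		crc = crc * 31 + p[i];
	return crc;
}

/* returns 1 on a new valid post, 0 if unchanged, -1 on crc failure */
static int sample(const struct bulletin *shared, struct bulletin *last)
{
	struct bulletin snap;
	int i;

	if (shared->version == last->version)
		return 0;

	/* the PF may be mid-post; retry a bounded number of times */
	for (i = 0; i < SAMPLE_ATTEMPTS; i++) {
		memcpy(&snap, shared, sizeof(snap));
		if (snap.crc == toy_crc(&snap))
			break;
	}
	if (i == SAMPLE_ATTEMPTS)
		return -1;

	*last = snap;	/* accept the new post */
	return 1;
}

int main(void)
{
	struct bulletin shared = { 0, 1, { 0 } }, last = { 0, 0, { 0 } };

	shared.crc = toy_crc(&shared);
	return sample(&shared, &last) == 1 ? 0 : 1;
}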

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |    6 ++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c   |   89 +++++++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h   |    2 +
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |   96 ++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |   13 +++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h |   18 ++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c  |   67 ++++++++++++++
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h  |   37 ++++++++
 8 files changed, 327 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index a2b8ffa..d6c89d6 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -1259,6 +1259,12 @@ struct bnx2x {
 	/* we set aside a copy of the acquire response */
 	struct pfvf_acquire_resp_tlv acquire_resp;
 
+	/* bulletin board for messages from pf to vf */
+	union pf_vf_bulletin   *pf2vf_bulletin;
+	dma_addr_t		pf2vf_bulletin_mapping;
+
+	struct pf_vf_bulletin_content	old_bulletin;
+
 	struct net_device	*dev;
 	struct pci_dev		*pdev;
 
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 0a00c4c..67f7334 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -3787,6 +3787,95 @@ int bnx2x_setup_tc(struct net_device *dev, u8 num_tc)
 	return 0;
 }
 
+/* New mac for VF. Consider these cases:
+ * 1. VF hasn't been acquired yet - save the mac in local bulletin board and
+ *    supply at acquire.
+ * 2. VF has already been acquired but has not yet initialized - store in local
+ *    bulletin board. mac will be posted on VF bulletin board after VF init. VF
+ *    will configure this mac when it is ready.
+ * 3. VF has already initialized but has not yet setup a queue - post the new
+ *    mac on VF's bulletin board right now. VF will configure this mac when it
+ *    is ready.
+ * 4. VF has already set a queue - delete any macs already configured for this
+ *    queue and manually config the new mac.
+ * In any event, once this function has been called refuse any attempts by the
+ * VF to configure any mac for itself except for this mac. In case of a race
+ * where the VF fails to see the new post on its bulletin board before sending a
+ * mac configuration request, the PF will simply fail the request and VF can try
+ * again after consulting its bulletin board
+ */
+int bnx2x_set_vf_mac(struct net_device *dev, int queue, u8 *mac)
+{
+
+	struct bnx2x *bp = netdev_priv(dev);
+	int rc, q_logical_state, vfidx = queue;
+	struct bnx2x_virtf *vf = BP_VF(bp, vfidx);
+	struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vfidx);
+
+	/* if SRIOV is disabled there is nothing to do (and somewhere, someone
+	 * has erred).
+	 */
+	if (!IS_SRIOV(bp)) {
+		BNX2X_ERR("bnx2x_set_vf_mac called though sriov is disabled\n");
+		return -EINVAL;
+	}
+
+	if (!is_valid_ether_addr(mac)) {
+		BNX2X_ERR("mac address invalid\n");
+		return -EINVAL;
+	}
+
+	/* update the PF's copy of the VF's bulletin; mac configuration
+	 * requests from the VF will be refused unless they match this mac
+	 */
+	bulletin->valid_bitmap |= 1 << MAC_ADDR_VALID;
+	memcpy(bulletin->mac, mac, ETH_ALEN);
+
+	/* Post update on VF's bulletin board */
+	rc = bnx2x_post_vf_bulletin(bp, vfidx);
+	if (rc) {
+		BNX2X_ERR("failed to update VF[%d] bulletin", vfidx);
+		return rc;
+	}
+
+	/* is vf initialized and queue set up? */
+	q_logical_state =
+		bnx2x_get_q_logical_state(bp, &bnx2x_vfq(vf, 0, sp_obj));
+	if (vf->state == VF_ENABLED &&
+	    q_logical_state == BNX2X_Q_LOGICAL_STATE_ACTIVE) {
+
+		/* configure the mac in device on this vf's queue */
+		unsigned long flags = 0;
+		struct bnx2x_vlan_mac_obj *mac_obj = &bnx2x_vfq(vf, 0, mac_obj);
+
+		/* must lock vfpf channel to protect against vf flows */
+		bnx2x_lock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC);
+
+		/* remove existing eth macs */
+		rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_ETH_MAC, true);
+		if (rc) {
+			BNX2X_ERR("failed to delete eth macs\n");
+			return -EINVAL;
+		}
+
+		/* remove existing uc list macs */
+		rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_UC_LIST_MAC, true);
+		if (rc) {
+			BNX2X_ERR("failed to delete uc_list macs\n");
+			return -EINVAL;
+		}
+
+		/* configure the new mac to device */
+		__set_bit(RAMROD_COMP_WAIT, &flags);
+		bnx2x_set_mac_one(bp, (u8 *)&bulletin->mac, mac_obj, true,
+				  BNX2X_ETH_MAC, &flags);
+
+		bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC);
+	}
+
+	return rc;
+}
+
 /* called with rtnl_lock */
 int bnx2x_change_mac_addr(struct net_device *dev, void *p)
 {
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index cd1eaff..23a1fa9 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -496,6 +496,8 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev);
 /* setup_tc callback */
 int bnx2x_setup_tc(struct net_device *dev, u8 num_tc);
 
+int bnx2x_set_vf_mac(struct net_device *dev, int queue, u8 *mac);
+
 /* select_queue callback */
 u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb);
 
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 91a2eb5..b44ef79 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -5247,6 +5247,63 @@ void bnx2x_drv_pulse(struct bnx2x *bp)
 		 bp->fw_drv_pulse_wr_seq);
 }
 
+/* crc is the first field in the bulletin board. compute the crc over the
+ * entire bulletin board excluding the crc field itself
+ */
+u32 bnx2x_crc_vf_bulletin(struct bnx2x *bp,
+			  struct pf_vf_bulletin_content *bulletin)
+{
+	return crc32(BULLETIN_CRC_SEED,
+		 ((u8 *)bulletin) + sizeof(bulletin->crc),
+		 BULLETIN_CONTENT_SIZE - sizeof(bulletin->crc));
+}
+
+/* Check for new posts on the bulletin board */
+enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp)
+{
+	struct pf_vf_bulletin_content bulletin = bp->pf2vf_bulletin->content;
+	int attempts;
+
+	/* bulletin board hasn't changed since last sample */
+	if (bp->old_bulletin.version == bulletin.version)
+		return PFVF_BULLETIN_UNCHANGED;
+
+	/* validate crc of new bulletin board */
+	if (bp->old_bulletin.version != bp->pf2vf_bulletin->content.version) {
+
+		/* sampling the structure mid-post may result in corrupted
+		 * data; validate the crc to ensure coherency.
+		 */
+		for (attempts = 0; attempts < BULLETIN_ATTEMPTS; attempts++) {
+			bulletin = bp->pf2vf_bulletin->content;
+			if (bulletin.crc == bnx2x_crc_vf_bulletin(bp,
+								  &bulletin))
+				break;
+
+			BNX2X_ERR("bad crc on bulletin board. contained %x computed %x\n",
+				  bulletin.crc,
+				  bnx2x_crc_vf_bulletin(bp, &bulletin));
+		}
+		if (attempts >= BULLETIN_ATTEMPTS) {
+			BNX2X_ERR("pf to vf bulletin board crc was wrong %d consecutive times. Aborting\n",
+				  attempts);
+			return PFVF_BULLETIN_CRC_ERR;
+		}
+	}
+
+	/* the mac address in bulletin board is valid and is new */
+	if (bulletin.valid_bitmap & 1 << MAC_ADDR_VALID &&
+	    memcmp(bulletin.mac, bp->old_bulletin.mac, ETH_ALEN)) {
+
+		/* update new mac to net device */
+		memcpy(bp->dev->dev_addr, bulletin.mac, ETH_ALEN);
+	}
+
+	/* copy new bulletin board to bp */
+	bp->old_bulletin = bulletin;
+
+	return PFVF_BULLETIN_UPDATED;
+}
 
 static void bnx2x_timer(unsigned long data)
 {
@@ -5283,6 +5340,10 @@ static void bnx2x_timer(unsigned long data)
 	if (bp->state == BNX2X_STATE_OPEN)
 		bnx2x_stats_handle(bp, STATS_EVENT_UPDATE);
 
+	/* sample pf vf bulletin board for new posts from pf */
+	if (IS_VF(bp))
+		bnx2x_sample_bulletin(bp);
+
 	mod_timer(&bp->timer, jiffies + bp->current_interval);
 }
 
@@ -11665,7 +11726,7 @@ static const struct net_device_ops bnx2x_netdev_ops = {
 	.ndo_poll_controller	= poll_bnx2x,
 #endif
 	.ndo_setup_tc		= bnx2x_setup_tc,
-
+	.ndo_set_vf_mac		= bnx2x_set_vf_mac,
 #ifdef NETDEV_FCOE_WWNN
 	.ndo_fcoe_get_wwn	= bnx2x_fcoe_get_wwn,
 #endif
@@ -12327,6 +12388,11 @@ static int bnx2x_init_one(struct pci_dev *pdev,
 		/* allocate vf2pf mailbox for vf to pf channel */
 		BNX2X_PCI_ALLOC(bp->vf2pf_mbox, &bp->vf2pf_mbox_mapping,
 				sizeof(struct bnx2x_vf_mbx_msg));
+
+		/* allocate pf 2 vf bulletin board */
+		BNX2X_PCI_ALLOC(bp->pf2vf_bulletin, &bp->pf2vf_bulletin_mapping,
+				sizeof(union pf_vf_bulletin));
+
 	} else {
 		doorbell_size = BNX2X_L2_MAX_CID(bp) * (1 << BNX2X_DB_SHIFT);
 		if (doorbell_size > pci_resource_len(pdev, 2)) {
@@ -13385,6 +13451,9 @@ int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
 	req->resc_request.num_mac_filters = VF_ACQUIRE_MAC_FILTERS;
 	req->resc_request.num_mc_filters = VF_ACQUIRE_MC_FILTERS;
 
+	/* pf 2 vf bulletin board address */
+	req->bulletin_addr = bp->pf2vf_bulletin_mapping;
+
 	/* add list termination tlv */
 	bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
 		      sizeof(struct channel_list_end_tlv));
@@ -13741,6 +13810,9 @@ int bnx2x_vfpf_set_mac(struct bnx2x *bp)
 	req->filters[0].flags =
 		VFPF_Q_FILTER_DEST_MAC_VALID | VFPF_Q_FILTER_SET_MAC;
 
+	/* sample bulletin board for new mac */
+	bnx2x_sample_bulletin(bp);
+
 	/* copy mac from device to request */
 	memcpy(req->filters[0].mac, bp->dev->dev_addr, ETH_ALEN);
 
@@ -13758,6 +13830,28 @@ int bnx2x_vfpf_set_mac(struct bnx2x *bp)
 		return rc;
 	}
 
+	/* failure may mean PF was configured with a new mac for us */
+	while (resp->hdr.status == PFVF_STATUS_FAILURE) {
+
+		DP(BNX2X_MSG_IOV,
+		   "vfpf SET MAC failed. Check bulletin board for new posts\n");
+
+		/* check if bulletin board was updated */
+		if (bnx2x_sample_bulletin(bp) == PFVF_BULLETIN_UPDATED) {
+
+			/* copy mac from device to request */
+			memcpy(req->filters[0].mac, bp->dev->dev_addr,
+			       ETH_ALEN);
+
+			/* send message to pf */
+			rc = bnx2x_send_msg2pf(bp, &resp->hdr.status,
+					       bp->vf2pf_mbox_mapping);
+		} else {
+			/* no new info in bulletin */
+			break;
+		}
+	}
+
 	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
 		BNX2X_ERR("vfpf SET MAC failed: %d\n", resp->hdr.status);
 		return -EINVAL;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index a0a019c0..11d603f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -2060,6 +2060,10 @@ void bnx2x_iov_free_mem(struct bnx2x *bp)
 	BNX2X_PCI_FREE(BP_VF_MBX_DMA(bp)->addr,
 		       BP_VF_MBX_DMA(bp)->mapping,
 		       BP_VF_MBX_DMA(bp)->size);
+
+	BNX2X_PCI_FREE(BP_VF_BULLETIN_DMA(bp)->addr,
+		       BP_VF_BULLETIN_DMA(bp)->mapping,
+		       BP_VF_BULLETIN_DMA(bp)->size);
 }
 
 int bnx2x_iov_alloc_mem(struct bnx2x *bp)
@@ -2099,6 +2103,12 @@ int bnx2x_iov_alloc_mem(struct bnx2x *bp)
 			tot_size);
 	BP_VF_MBX_DMA(bp)->size = tot_size;
 
+	/* allocate local bulletin boards */
+	tot_size = BNX2X_NR_VIRTFN(bp) * BULLETIN_CONTENT_SIZE;
+	BNX2X_PCI_ALLOC(BP_VF_BULLETIN_DMA(bp)->addr,
+			&BP_VF_BULLETIN_DMA(bp)->mapping, tot_size);
+	BP_VF_BULLETIN_DMA(bp)->size = tot_size;
+
 	return 0;
 
 alloc_mem_err:
@@ -2815,6 +2825,9 @@ int bnx2x_vf_init(struct bnx2x *bp, struct bnx2x_virtf *vf, dma_addr_t *sb_map)
 
 	vf->state = VF_ENABLED;
 
+	/* update vf bulletin board */
+	bnx2x_post_vf_bulletin(bp, vf->index);
+
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index 8ffef4e..5c8aac1 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -379,6 +379,12 @@ struct bnx2x_vfdb {
 	struct bnx2x_vf_mbx	mbxs[BNX2X_MAX_NUM_OF_VFS];
 #define BP_VF_MBX(bp, vfid)	(&((bp)->vfdb->mbxs[(vfid)]))
 
+	struct hw_dma		bulletin_dma;
+#define BP_VF_BULLETIN_DMA(bp)	(&((bp)->vfdb->bulletin_dma))
+#define	BP_VF_BULLETIN(bp, vf) \
+	(((struct pf_vf_bulletin_content *)(BP_VF_BULLETIN_DMA(bp)->addr)) \
+	 + (vf))
+
 	struct hw_dma		sp_dma;
 #define bnx2x_vf_sp(bp, vf, field) ((bp)->vfdb->sp_dma.addr +		\
 		(vf)->index * sizeof(struct bnx2x_vf_sp) +		\
@@ -703,4 +709,16 @@ void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list);
 
 bool bnx2x_tlv_supported(u16 tlvtype);
 
+u32 bnx2x_crc_vf_bulletin(struct bnx2x *bp,
+			  struct pf_vf_bulletin_content *bulletin);
+int bnx2x_post_vf_bulletin(struct bnx2x *bp, int vf);
+
+enum sample_bulletin_result {
+	   PFVF_BULLETIN_UNCHANGED,
+	   PFVF_BULLETIN_UPDATED,
+	   PFVF_BULLETIN_CRC_ERR
+};
+
+enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp);
+
 #endif /* bnx2x_sriov.h */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 65a3723..f246c63 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -301,6 +301,10 @@ static void bnx2x_vf_mbx_acquire_resp(struct bnx2x *bp, struct bnx2x_virtf *vf,
 		resc->num_mc_filters = 0;
 
 		if (status == PFVF_STATUS_SUCCESS) {
+			/* fill in the allocated resources */
+			struct pf_vf_bulletin_content *bulletin =
+				BP_VF_BULLETIN(bp, vf->index);
+
 			for_each_vfq(vf, i)
 				resc->hw_qid[i] =
 					vfq_qzone_id(vf, vfq_get(vf, i));
@@ -309,6 +313,12 @@ static void bnx2x_vf_mbx_acquire_resp(struct bnx2x *bp, struct bnx2x_virtf *vf,
 				resc->hw_sbs[i].hw_sb_id = vf_igu_sb(vf, i);
 				resc->hw_sbs[i].sb_qid = vf_hc_qzone(vf, i);
 			}
+
+			/* if a mac has been set for this vf, supply it */
+			if (bulletin->valid_bitmap & 1 << MAC_ADDR_VALID) {
+				memcpy(resc->current_mac_addr, bulletin->mac,
+				       ETH_ALEN);
+			}
 		}
 	}
 
@@ -360,6 +370,9 @@ static void bnx2x_vf_mbx_acquire(struct bnx2x *bp, struct bnx2x_virtf *vf,
 	/* acquire the resources */
 	rc = bnx2x_vf_acquire(bp, vf, &acquire->resc_request);
 
+	/* store address of vf's bulletin board */
+	vf->bulletin_map = acquire->bulletin_addr;
+
 	/* response */
 	bnx2x_vf_mbx_acquire_resp(bp, vf, mbx, rc);
 }
@@ -771,11 +784,39 @@ static void bnx2x_vf_mbx_set_q_filters(struct bnx2x *bp,
 				       struct bnx2x_vf_mbx *mbx)
 {
 	struct vfpf_set_q_filters_tlv *filters = &mbx->msg->req.set_q_filters;
+	struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vf->index);
 	struct bnx2x_vfop_cmd cmd = {
 		.done = bnx2x_vf_mbx_resp,
 		.block = false,
 	};
 
+	/* if a mac was already set for this VF via the set vf mac ndo, we only
+	 * accept mac configurations of that mac. Why accept them at all?
+	 * Because the PF may have been unable to configure the mac at the
+	 * time, since the queue was not yet set up.
+	 */
+	if (bulletin->valid_bitmap & 1 << MAC_ADDR_VALID) {
+
+		/* once a mac was set by ndo can only accept a single mac... */
+		if (filters->n_mac_vlan_filters > 1) {
+			BNX2X_ERR("VF[%d] requested the addition of multiple macs after set_vf_mac ndo was called\n",
+				  vf->abs_vfid);
+			vf->op_rc = -EPERM;
+			goto response;
+		}
+
+		/* ...and only the mac set by the ndo */
+		if (filters->n_mac_vlan_filters == 1 &&
+		    memcmp(filters->filters->mac, bulletin->mac, ETH_ALEN)) {
+
+			BNX2X_ERR("VF[%d] requested the addition of a mac address not matching the one configured by set_vf_mac ndo\n",
+				  vf->abs_vfid);
+
+			vf->op_rc = -EPERM;
+			goto response;
+		}
+	}
+
 	/* verify vf_qid */
 	if (filters->vf_qid > vf_rxq_count(vf))
 		goto response;
@@ -987,3 +1028,29 @@ mbx_error:
 mbx_done:
 	return;
 }
+
+/* propagate local bulletin board to vf */
+int bnx2x_post_vf_bulletin(struct bnx2x *bp, int vf)
+{
+	struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vf);
+	dma_addr_t pf_addr = BP_VF_BULLETIN_DMA(bp)->mapping +
+		vf * BULLETIN_CONTENT_SIZE;
+	dma_addr_t vf_addr = bnx2x_vf(bp, vf, bulletin_map);
+	u32 len = BULLETIN_CONTENT_SIZE;
+	int rc;
+
+	/* can only update vf after init took place */
+	if (bnx2x_vf(bp, vf, state) != VF_ENABLED &&
+	    bnx2x_vf(bp, vf, state) != VF_ACQUIRED)
+		return 0;
+
+	/* increment bulletin board version and compute crc */
+	bulletin->version++;
+	bulletin->crc = bnx2x_crc_vf_bulletin(bp, bulletin);
+
+	/* propagate bulletin board via dmae to vm memory */
+	rc = bnx2x_copy32_vf_dmae(bp, false, pf_addr,
+				  bnx2x_vf(bp, vf, abs_vfid), U64_HI(vf_addr),
+				  U64_LO(vf_addr), len/4);
+	return rc;
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index 4ab176b..74d2831 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -37,6 +37,7 @@ struct hw_sb_info {
  * A.K.A VF-PF mailbox
  */
 #define TLV_BUFFER_SIZE			1024
+#define PF_VF_BULLETIN_SIZE		512
 
 #define VFPF_QUEUE_FLG_TPA		0x0001
 #define VFPF_QUEUE_FLG_TPA_IPV6		0x0002
@@ -60,6 +61,9 @@ struct hw_sb_info {
 #define VFPF_RX_MASK_ACCEPT_ALL_UNICAST		0x00000004
 #define VFPF_RX_MASK_ACCEPT_ALL_MULTICAST	0x00000008
 #define VFPF_RX_MASK_ACCEPT_BROADCAST		0x00000010
+#define BULLETIN_CONTENT_SIZE		(sizeof(struct pf_vf_bulletin_content))
+#define BULLETIN_ATTEMPTS	5 /* crc failures before throwing in the towel */
+#define BULLETIN_CRC_SEED	0
 
 enum {
 	PFVF_STATUS_WAITING = 0,
@@ -298,6 +302,38 @@ union pfvf_tlvs {
 	struct tlv_buffer_size		tlv_buf_size;
 };
 
+/* This structure is allocated in the VF, and the PF may update it
+ * when it deems it necessary to do so. The bulletin board is sampled
+ * periodically by the VF. A copy per VF is maintained in the PF (to prevent
+ * loss of data upon multiple updates (or the need for read-modify-write)).
+ */
+struct pf_vf_bulletin_size {
+	u8 size[PF_VF_BULLETIN_SIZE];
+};
+
+struct pf_vf_bulletin_content {
+	u32 crc;			/* crc of structure to ensure is not in
+					 * mid-update
+					 */
+	u32 version;
+
+	aligned_u64 valid_bitmap;	/* bitmap indicating which fields
+					 * hold valid values
+					 */
+
+#define MAC_ADDR_VALID		0	/* alert the vf that a new mac address
+					 * is available for it
+					 */
+
+	u8 mac[ETH_ALEN];
+	u8 padding[2];
+};
+
+union pf_vf_bulletin {
+	struct pf_vf_bulletin_content content;
+	struct pf_vf_bulletin_size size;
+};
+
 #define MAX_TLVS_IN_LIST 50
 
 enum channel_tlvs {
@@ -312,6 +348,7 @@ enum channel_tlvs {
 	   CHANNEL_TLV_PF_RELEASE_VF,
 	   CHANNEL_TLV_LIST_END,
 	   CHANNEL_TLV_FLR,
+	   CHANNEL_TLV_PF_SET_MAC,
 	   CHANNEL_TLV_MAX
 };
 
-- 
1.7.9.GIT

* [PATCH net-next v3 22/22] bnx2x: Add VF device ids and enable feature
  2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
                   ` (20 preceding siblings ...)
  2012-12-10 15:46 ` [PATCH net-next v3 21/22] bnx2x: Support PF <-> VF Bulletin Board Ariel Elior
@ 2012-12-10 15:46 ` Ariel Elior
  21 siblings, 0 replies; 27+ messages in thread
From: Ariel Elior @ 2012-12-10 15:46 UTC (permalink / raw)
  To: David Miller; +Cc: netdev, Ariel Elior, Eilon Greenstein

Add the various VF device ids (of all supported hardware).
Add the calls to enable_sriov and disable_sriov to enable the
SR-IOV feature. This patch also advances the version and release
date of the bnx2x module. A sketch of the enable/disable calls
follows.
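
For reference, below is a hedged sketch of the general shape these
calls take in a PF driver; the function name and the bare-bones flow
are illustrative, while pci_enable_sriov() and pci_disable_sriov()
are the standard kernel APIs:

#include <linux/pci.h>

static int sketch_sriov_configure(struct pci_dev *pdev, int req_vfs)
{
	int rc;

	if (req_vfs == 0) {
		/* tear down the VF PCI functions */
		pci_disable_sriov(pdev);
		return 0;
	}

	/* create req_vfs VF functions under this PF */
	rc = pci_enable_sriov(pdev, req_vfs);
	if (rc)
		return rc;

	return req_vfs;
}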

Signed-off-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x.h       |   22 +++++-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c  |   85 +++++++++++++++++++--
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c |    4 +
 3 files changed, 101 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index d6c89d6..08afced 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -26,8 +26,8 @@
  * (you will need to reboot afterwards) */
 /* #define BNX2X_STOP_ON_ERROR */
 
-#define DRV_MODULE_VERSION      "1.78.00-0"
-#define DRV_MODULE_RELDATE      "2012/09/27"
+#define DRV_MODULE_VERSION      "1.78.01-0"
+#define DRV_MODULE_RELDATE      "2012/10/30"
 #define BNX2X_BC_VER            0x040200
 
 #if defined(CONFIG_DCB)
@@ -802,36 +802,46 @@ struct bnx2x_common {
 #define CHIP_NUM_57711E			0x1650
 #define CHIP_NUM_57712			0x1662
 #define CHIP_NUM_57712_MF		0x1663
+#define CHIP_NUM_57712_VF		0x166f
 #define CHIP_NUM_57713			0x1651
 #define CHIP_NUM_57713E			0x1652
 #define CHIP_NUM_57800			0x168a
 #define CHIP_NUM_57800_MF		0x16a5
+#define CHIP_NUM_57800_VF		0x16a9
 #define CHIP_NUM_57810			0x168e
 #define CHIP_NUM_57810_MF		0x16ae
+#define CHIP_NUM_57810_VF		0x16af
 #define CHIP_NUM_57811			0x163d
 #define CHIP_NUM_57811_MF		0x163e
+#define CHIP_NUM_57811_VF		0x163f
 #define CHIP_NUM_57840_OBSOLETE	0x168d
 #define CHIP_NUM_57840_MF_OBSOLETE	0x16ab
 #define CHIP_NUM_57840_4_10		0x16a1
 #define CHIP_NUM_57840_2_20		0x16a2
 #define CHIP_NUM_57840_MF		0x16a4
+#define CHIP_NUM_57840_VF		0x16ad
 #define CHIP_IS_E1(bp)			(CHIP_NUM(bp) == CHIP_NUM_57710)
 #define CHIP_IS_57711(bp)		(CHIP_NUM(bp) == CHIP_NUM_57711)
 #define CHIP_IS_57711E(bp)		(CHIP_NUM(bp) == CHIP_NUM_57711E)
 #define CHIP_IS_57712(bp)		(CHIP_NUM(bp) == CHIP_NUM_57712)
+#define CHIP_IS_57712_VF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57712_VF)
 #define CHIP_IS_57712_MF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57712_MF)
 #define CHIP_IS_57800(bp)		(CHIP_NUM(bp) == CHIP_NUM_57800)
 #define CHIP_IS_57800_MF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57800_MF)
+#define CHIP_IS_57800_VF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57800_VF)
 #define CHIP_IS_57810(bp)		(CHIP_NUM(bp) == CHIP_NUM_57810)
 #define CHIP_IS_57810_MF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57810_MF)
+#define CHIP_IS_57810_VF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57810_VF)
 #define CHIP_IS_57811(bp)		(CHIP_NUM(bp) == CHIP_NUM_57811)
 #define CHIP_IS_57811_MF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57811_MF)
+#define CHIP_IS_57811_VF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57811_VF)
 #define CHIP_IS_57840(bp)		\
 		((CHIP_NUM(bp) == CHIP_NUM_57840_4_10) || \
 		 (CHIP_NUM(bp) == CHIP_NUM_57840_2_20) || \
 		 (CHIP_NUM(bp) == CHIP_NUM_57840_OBSOLETE))
 #define CHIP_IS_57840_MF(bp)	((CHIP_NUM(bp) == CHIP_NUM_57840_MF) || \
 				 (CHIP_NUM(bp) == CHIP_NUM_57840_MF_OBSOLETE))
+#define CHIP_IS_57840_VF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57840_VF)
 #define CHIP_IS_E1H(bp)			(CHIP_IS_57711(bp) || \
 					 CHIP_IS_57711E(bp))
 #define CHIP_IS_E2(bp)			(CHIP_IS_57712(bp) || \
@@ -840,10 +850,13 @@ struct bnx2x_common {
 					 CHIP_IS_57800_MF(bp) || \
 					 CHIP_IS_57810(bp) || \
 					 CHIP_IS_57810_MF(bp) || \
+					 CHIP_IS_57810_VF(bp) || \
 					 CHIP_IS_57811(bp) || \
 					 CHIP_IS_57811_MF(bp) || \
+					 CHIP_IS_57811_VF(bp) || \
 					 CHIP_IS_57840(bp) || \
-					 CHIP_IS_57840_MF(bp))
+					 CHIP_IS_57840_MF(bp) || \
+					 CHIP_IS_57840_VF(bp))
 #define CHIP_IS_E1x(bp)			(CHIP_IS_E1((bp)) || CHIP_IS_E1H((bp)))
 #define USES_WARPCORE(bp)		(CHIP_IS_E3(bp))
 #define IS_E1H_OFFSET			(!CHIP_IS_E1(bp))
@@ -1195,8 +1208,9 @@ struct bnx2x_fw_stats_data {
 enum {
 	BNX2X_SP_RTNL_SETUP_TC,
 	BNX2X_SP_RTNL_TX_TIMEOUT,
-	BNX2X_SP_RTNL_AFEX_F_UPDATE,
 	BNX2X_SP_RTNL_FAN_FAILURE,
+	BNX2X_SP_RTNL_AFEX_F_UPDATE,
+	BNX2X_SP_RTNL_ENABLE_SRIOV,
 	BNX2X_SP_RTNL_VFPF_MCAST,
 	BNX2X_SP_RTNL_VFPF_STORM_RX_MODE,
 };
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index b44ef79..2d19b04 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -193,12 +193,18 @@ static struct {
 #ifndef PCI_DEVICE_ID_NX2_57712_MF
 #define PCI_DEVICE_ID_NX2_57712_MF	CHIP_NUM_57712_MF
 #endif
+#ifndef PCI_DEVICE_ID_NX2_57712_VF
+#define PCI_DEVICE_ID_NX2_57712_VF	CHIP_NUM_57712_VF
+#endif
 #ifndef PCI_DEVICE_ID_NX2_57800
 #define PCI_DEVICE_ID_NX2_57800		CHIP_NUM_57800
 #endif
 #ifndef PCI_DEVICE_ID_NX2_57800_MF
 #define PCI_DEVICE_ID_NX2_57800_MF	CHIP_NUM_57800_MF
 #endif
+#ifndef PCI_DEVICE_ID_NX2_57800_VF
+#define PCI_DEVICE_ID_NX2_57800_VF	CHIP_NUM_57800_VF
+#endif
 #ifndef PCI_DEVICE_ID_NX2_57810
 #define PCI_DEVICE_ID_NX2_57810		CHIP_NUM_57810
 #endif
@@ -208,6 +214,9 @@ static struct {
 #ifndef PCI_DEVICE_ID_NX2_57840_O
 #define PCI_DEVICE_ID_NX2_57840_O	CHIP_NUM_57840_OBSOLETE
 #endif
+#ifndef PCI_DEVICE_ID_NX2_57810_VF
+#define PCI_DEVICE_ID_NX2_57810_VF	CHIP_NUM_57810_VF
+#endif
 #ifndef PCI_DEVICE_ID_NX2_57840_4_10
 #define PCI_DEVICE_ID_NX2_57840_4_10	CHIP_NUM_57840_4_10
 #endif
@@ -220,29 +229,41 @@ static struct {
 #ifndef PCI_DEVICE_ID_NX2_57840_MF
 #define PCI_DEVICE_ID_NX2_57840_MF	CHIP_NUM_57840_MF
 #endif
+#ifndef PCI_DEVICE_ID_NX2_57840_VF
+#define PCI_DEVICE_ID_NX2_57840_VF	CHIP_NUM_57840_VF
+#endif
 #ifndef PCI_DEVICE_ID_NX2_57811
 #define PCI_DEVICE_ID_NX2_57811		CHIP_NUM_57811
 #endif
 #ifndef PCI_DEVICE_ID_NX2_57811_MF
 #define PCI_DEVICE_ID_NX2_57811_MF	CHIP_NUM_57811_MF
 #endif
+#ifndef PCI_DEVICE_ID_NX2_57811_VF
+#define PCI_DEVICE_ID_NX2_57811_VF	CHIP_NUM_57811_VF
+#endif
+
 static DEFINE_PCI_DEVICE_TABLE(bnx2x_pci_tbl) = {
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57710), BCM57710 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57711), BCM57711 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57711E), BCM57711E },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57712), BCM57712 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57712_MF), BCM57712_MF },
+	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57712_VF), BCM57712_VF },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57800), BCM57800 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57800_MF), BCM57800_MF },
+	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57800_VF), BCM57800_VF },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57810), BCM57810 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57810_MF), BCM57810_MF },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_O), BCM57840_O },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_4_10), BCM57840_4_10 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_2_20), BCM57840_2_20 },
+	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57810_VF), BCM57810_VF },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_MFO), BCM57840_MFO },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_MF), BCM57840_MF },
+	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57840_VF), BCM57840_VF },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57811), BCM57811 },
 	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57811_MF), BCM57811_MF },
+	{ PCI_VDEVICE(BROADCOM, PCI_DEVICE_ID_NX2_57811_VF), BCM57811_VF },
 	{ 0 }
 };
 
@@ -9432,8 +9453,10 @@ static void bnx2x_sp_rtnl_task(struct work_struct *work)
 
 	rtnl_lock();
 
-	if (!netif_running(bp->dev))
-		goto sp_rtnl_exit;
+	if (!netif_running(bp->dev)) {
+		rtnl_unlock();
+		return;
+	}
 
 	/* if stop on error is defined no recovery flows should be executed */
 #ifdef BNX2X_STOP_ON_ERROR
@@ -9452,7 +9475,8 @@ static void bnx2x_sp_rtnl_task(struct work_struct *work)
 
 		bnx2x_parity_recover(bp);
 
-		goto sp_rtnl_exit;
+		rtnl_unlock();
+		return;
 	}
 
 	if (test_and_clear_bit(BNX2X_SP_RTNL_TX_TIMEOUT, &bp->sp_rtnl_state)) {
@@ -9466,7 +9490,8 @@ static void bnx2x_sp_rtnl_task(struct work_struct *work)
 		bnx2x_nic_unload(bp, UNLOAD_NORMAL, true);
 		bnx2x_nic_load(bp, LOAD_NORMAL);
 
-		goto sp_rtnl_exit;
+		rtnl_unlock();
+		return;
 	}
 #ifdef BNX2X_STOP_ON_ERROR
 sp_rtnl_not_reset:
@@ -9484,6 +9509,8 @@ sp_rtnl_not_reset:
 		DP(NETIF_MSG_HW, "fan failure detected. Unloading driver\n");
 		netif_device_detach(bp->dev);
 		bnx2x_close(bp->dev);
+		rtnl_unlock();
+		return;
 	}
 
 	if (test_and_clear_bit(BNX2X_SP_RTNL_VFPF_MCAST, &bp->sp_rtnl_state)) {
@@ -9499,8 +9526,28 @@ sp_rtnl_not_reset:
 		bnx2x_vfpf_storm_rx_mode(bp);
 	}
 
-sp_rtnl_exit:
+	/* work which must run with the rtnl lock released (it takes the
+	 * lock itself and may also be invoked from other contexts)
+	 */
+
 	rtnl_unlock();
+
+	if (IS_SRIOV(bp) && test_and_clear_bit(BNX2X_SP_RTNL_ENABLE_SRIOV,
+					       &bp->sp_rtnl_state)) {
+		int rc = 0;
+
+		/* disable sriov in case it is still enabled */
+		pci_disable_sriov(bp->pdev);
+		DP(BNX2X_MSG_IOV, "sriov disabled\n");
+
+		/* enable sriov */
+		DP(BNX2X_MSG_IOV, "vf num (%d)\n", (bp->vfdb->sriov.nr_virtfn));
+		rc = pci_enable_sriov(bp->pdev, (bp->vfdb->sriov.nr_virtfn));
+		if (rc)
+			BNX2X_ERR("pci_enable_sriov failed with %d\n", rc);
+		else
+			DP(BNX2X_MSG_IOV, "sriov enabled\n");
+	}
 }
 
 /* end of nic load/unload */
@@ -11359,6 +11406,26 @@ static int bnx2x_init_bp(struct bnx2x *bp)
  * net_device service functions
  */
 
+static int bnx2x_open_epilog(struct bnx2x *bp)
+{
+	/* Enable sriov via delayed work. This must be done via delayed work
+	 * because it causes the probe of the vf devices to be run, which
+	 * invokes register_netdevice, which must have the rtnl lock taken.
+	 * As we are holding the lock right now, that could only work if the
+	 * probe did not take the lock. However, as the probe of the vf may
+	 * be called from other contexts as well (such as passthrough to a vm
+	 * failing) it can't assume the lock is being held for it. Using
+	 * delayed work here allows the probe code to simply take the lock
+	 * (i.e. wait for it to be released if it is being held).
+	 */
+	smp_mb__before_clear_bit();
+	set_bit(BNX2X_SP_RTNL_ENABLE_SRIOV, &bp->sp_rtnl_state);
+	smp_mb__after_clear_bit();
+	schedule_delayed_work(&bp->sp_rtnl_task, 0);
+
+	return 0;
+}
+
 /* called with rtnl_lock */
 static int bnx2x_open(struct net_device *dev)
 {
@@ -11366,6 +11433,7 @@ static int bnx2x_open(struct net_device *dev)
 	bool global = false;
 	int other_engine = BP_PATH(bp) ? 0 : 1;
 	bool other_load_status, load_status;
+	int rc;
 
 	bp->stats_init = true;
 
@@ -11423,7 +11491,12 @@ static int bnx2x_open(struct net_device *dev)
 		} while (0);
 
 	bp->recovery_state = BNX2X_RECOVERY_DONE;
-	return bnx2x_nic_load(bp, LOAD_OPEN);
+
+	rc = bnx2x_nic_load(bp, LOAD_OPEN);
+	if (rc)
+		return rc;
+
+	return bnx2x_open_epilog(bp);
 }
 
 /* called with rtnl_lock */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 11d603f..4d76b6c 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -2036,6 +2036,10 @@ void bnx2x_iov_remove_one(struct bnx2x *bp)
 	if (!IS_SRIOV(bp))
 		return;
 
+	DP(BNX2X_MSG_IOV, "about to call disable sriov");
+	pci_disable_sriov(bp->pdev);
+	DP(BNX2X_MSG_IOV, "sriov disabled");
+
 	/* free vf database */
 	__bnx2x_iov_free_vfdb(bp);
 }
-- 
1.7.9.GIT

* Re: [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device
  2012-12-10 15:46 ` [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device Ariel Elior
@ 2012-12-10 18:35   ` David Miller
  2012-12-10 18:50     ` Joe Perches
  0 siblings, 1 reply; 27+ messages in thread
From: David Miller @ 2012-12-10 18:35 UTC (permalink / raw)
  To: ariele; +Cc: netdev, eilong

From: "Ariel Elior" <ariele@broadcom.com>
Date: Mon, 10 Dec 2012 17:46:25 +0200

>  static void bnx2x_get_pcie_width_speed(struct bnx2x *bp, int *width, int *speed)
>  {
> -	u32 val = REG_RD(bp, PCICFG_OFFSET + PCICFG_LINK_CONTROL);
> +	u32 val = 0;
> +	pci_read_config_dword(bp->pdev, PCICFG_LINK_CONTROL, &val);

Please put an empty line between function local variable declarations
and actual code.
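
For illustration, the requested layout is simply (a sketch based on the
quoted hunk, assuming the existing PCICFG_LINK_* masks; the rest of the
function is unchanged):

	static void bnx2x_get_pcie_width_speed(struct bnx2x *bp, int *width, int *speed)
	{
		u32 val = 0;

		/* blank line above separates the declaration from the code */
		pci_read_config_dword(bp->pdev, PCICFG_LINK_CONTROL, &val);

		*width = (val & PCICFG_LINK_WIDTH) >> PCICFG_LINK_WIDTH_SHIFT;
		*speed = (val & PCICFG_LINK_SPEED) >> PCICFG_LINK_SPEED_SHIFT;
	}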

> @@ -12023,85 +12057,115 @@ static int bnx2x_get_num_non_def_sbs(struct pci_dev *pdev,
>  	 * If MSI-X is not supported - return number of SBs needed to support
>  	 * one fast path queue: one FP queue + SB for CNIC
>  	 */
> -	if (!pos)
> +	if (!pos) {
> +		pr_info("no msix capability found");
>  		return 1 + cnic_cnt;
> +	}
> +
> +	pr_info("msix capability found");
>  

Use dev_info(), netdev_info(), or similar, rather than plain
pr_info().
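
Something along these lines, e.g. (a sketch; dev_info() keys off the
struct device embedded in the pdev this function already receives, with
the terminating newlines included):

	if (!pos) {
		dev_info(&pdev->dev, "no MSI-X capability found\n");
		return 1 + cnic_cnt;
	}

	dev_info(&pdev->dev, "MSI-X capability found\n");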

* Re: [PATCH net-next v3 02/22] bnx2x: VF <-> PF channel 'acquire' at vf probe
  2012-12-10 15:46 ` [PATCH net-next v3 02/22] bnx2x: VF <-> PF channel 'acquire' at vf probe Ariel Elior
@ 2012-12-10 18:37   ` David Miller
  0 siblings, 0 replies; 27+ messages in thread
From: David Miller @ 2012-12-10 18:37 UTC (permalink / raw)
  To: ariele; +Cc: netdev, eilong

From: "Ariel Elior" <ariele@broadcom.com>
Date: Mon, 10 Dec 2012 17:46:26 +0200

> +/* VF MBX (aka vf-pf channel) */
> +#include "bnx2x.h"
> +#include "bnx2x_sriov.h"
> +/* place a given tlv on the tlv buffer at a given offset */
> +void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
> +		   u16 length)

Put an empty line between include directives and the rest of the
file.
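
I.e. something like:

	/* VF MBX (aka vf-pf channel) */
	#include "bnx2x.h"
	#include "bnx2x_sriov.h"

	/* place a given tlv on the tlv buffer at a given offset */
	void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
			   u16 length)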

> +	while (tlv->type != CHANNEL_TLV_LIST_END) {
> +
> +		/* output tlv */
> +		DP(BNX2X_MSG_IOV, "TLV number %d: type %d, length %d\n", i,
> +		   tlv->type, tlv->length);

But this is a place where an empty line is not appropriate; please remove
the empty line after the line with the while().
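
I.e. the quoted fragment with the empty line dropped:

	while (tlv->type != CHANNEL_TLV_LIST_END) {
		/* output tlv */
		DP(BNX2X_MSG_IOV, "TLV number %d: type %d, length %d\n", i,
		   tlv->type, tlv->length);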

> +/* acquire response tlv - carries the allocated resources */
> +struct pfvf_acquire_resp_tlv {
> +	struct pfvf_tlv hdr;
> +	struct pf_vf_pfdev_info {
> +		u32 chip_num;
> +		u32 pf_cap;
> +		#define PFVF_CAP_RSS		0x00000001
> +		#define PFVF_CAP_DHC		0x00000002
> +		#define PFVF_CAP_TPA		0x00000004

Please don't indent CPP defines excessively like this; they should be
at or near the initial column so that CPP directives can be easily
spotted by someone reading this code.
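
I.e. with the directives pulled back toward the margin, something like:

	struct pf_vf_pfdev_info {
		u32 chip_num;
		u32 pf_cap;
	#define PFVF_CAP_RSS		0x00000001
	#define PFVF_CAP_DHC		0x00000002
	#define PFVF_CAP_TPA		0x00000004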

* Re: [PATCH net-next v3 03/22] bnx2x: Add to VF <-> PF channel the release request
  2012-12-10 15:46 ` [PATCH net-next v3 03/22] bnx2x: Add to VF <-> PF channel the release request Ariel Elior
@ 2012-12-10 18:39   ` David Miller
  0 siblings, 0 replies; 27+ messages in thread
From: David Miller @ 2012-12-10 18:39 UTC (permalink / raw)
  To: ariele; +Cc: netdev, eilong

From: "Ariel Elior" <ariele@broadcom.com>
Date: Mon, 10 Dec 2012 17:46:27 +0200

> +	/* PF released us */
> +	if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
> +		DP(BNX2X_MSG_SP, "vf released\n");
> +
> +	/* PF reports error */
> +	} else {

Seriously?  Do I even need to tell you how awful this looks?

Remove the empty line and move the comment that precedes the "else"
somewhere its indentation matches the basic block it belongs to.
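
One way to restructure it (a sketch; the else body is elided in the
quote above, so only the comment placement is shown):

	if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
		/* PF released us */
		DP(BNX2X_MSG_SP, "vf released\n");
	} else {
		/* PF reports error */
	}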

I think I've reviewed enough of this series; please audit the rest
for coding style problems yourself.

* Re: [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device
  2012-12-10 18:35   ` David Miller
@ 2012-12-10 18:50     ` Joe Perches
  0 siblings, 0 replies; 27+ messages in thread
From: Joe Perches @ 2012-12-10 18:50 UTC (permalink / raw)
  To: David Miller; +Cc: ariele, netdev, eilong

On Mon, 2012-12-10 at 13:35 -0500, David Miller wrote:
> From: "Ariel Elior" <ariele@broadcom.com>
> Date: Mon, 10 Dec 2012 17:46:25 +0200
[]
> > @@ -12023,85 +12057,115 @@ static int bnx2x_get_num_non_def_sbs(struct pci_dev *pdev,
> >        * If MSI-X is not supported - return number of SBs needed to support
> >        * one fast path queue: one FP queue + SB for CNIC
> >        */
> > -     if (!pos)
> > +     if (!pos) {
> > +             pr_info("no msix capability found");
[]
> Use dev_info(), netdev_info(), or similar, rather than plain
> pr_info().

Also, please add terminating newlines to avoid
dmesg message interleaving.
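
I.e. the format string itself should end with the newline, e.g.:

	dev_info(&pdev->dev, "no MSI-X capability found\n");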

end of thread

Thread overview: 27+ messages
2012-12-10 15:46 [PATCH v3 net-next 00/22] bnx2x: support SR-IOV Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 01/22] bnx2x: Support probing and removing of VF device Ariel Elior
2012-12-10 18:35   ` David Miller
2012-12-10 18:50     ` Joe Perches
2012-12-10 15:46 ` [PATCH net-next v3 02/22] bnx2x: VF <-> PF channel 'acquire' at vf probe Ariel Elior
2012-12-10 18:37   ` David Miller
2012-12-10 15:46 ` [PATCH net-next v3 03/22] bnx2x: Add to VF <-> PF channel the release request Ariel Elior
2012-12-10 18:39   ` David Miller
2012-12-10 15:46 ` [PATCH net-next v3 04/22] bnx2x: Separate VF and PF logic Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 05/22] bnx2x: Add init, setup_q, set_mac to VF <-> PF channel Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 06/22] bnx2x: Add teardown_q and close " Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 07/22] bnx2x: Support ndo_set_rxmode in VF driver Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 08/22] bnx2x: VF fastpath Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 09/22] bnx2x: Allocate VF database in PF when VFs are present Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 10/22] bnx2x: Prepare device and initialize VF database Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 11/22] bnx2x: Infrastructure for VF <-> PF request on PF side Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 12/22] bnx2x: Support of PF driver of a VF acquire request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 13/22] bnx2x: Support of PF driver of a VF init request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 14/22] bnx2x: Support statistics collection for VFs by the PF Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 15/22] bnx2x: Support of PF driver of a VF setup_q request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 16/22] bnx2x: Support of PF driver of a VF q_filters request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 17/22] bnx2x: Support of PF driver of a VF q_teardown request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 18/22] bnx2x: Support of PF driver of a VF close request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 19/22] bnx2x: Support of PF driver of a VF release request Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 20/22] bnx2x: Support VF FLR Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 21/22] bnx2x: Support PF <-> VF Bulletin Board Ariel Elior
2012-12-10 15:46 ` [PATCH net-next v3 22/22] bnx2x: Add VF device ids and enable feature Ariel Elior
